SOTA for Different Device Types: What Changes

Written by Hatem Rabeh, MD, MSc Ing

You have mastered SOTA for one device type. Then you move to a different project and everything feels different. Implants are not like diagnostics. Software is not like implants. The fundamentals remain, but the emphasis shifts. Understanding these shifts prevents costly rework and ensures your SOTA addresses what reviewers expect for each device category.

The five-step SOTA method applies to all device types, but the content, emphasis, and depth required vary significantly. Implants need long-term safety data. Diagnostics need accuracy benchmarks. Software needs algorithmic transparency. Understanding these differences helps you allocate effort appropriately and avoid deficiencies.

Implantable Devices

For implants, SOTA must emphasize long-term outcomes. Reviewers want to see survival data, revision rates, and complication rates over years, not months. Short-term efficacy data alone is insufficient.

Key SOTA elements for implants include material biocompatibility history, long-term clinical registry data, known failure modes and their timing, and surgical technique influences on outcomes. Your benchmark table should include 5-year and 10-year data where available.
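The survival figures in such a benchmark table are typically Kaplan-Meier estimates. As an illustrative sketch only (the follow-up data below are hypothetical, not from any real registry), the product-limit estimate of implant survival at a given time point can be computed as:

```python
def km_survival(times, events, t):
    """Kaplan-Meier product-limit estimate of survival at time t.

    times:  follow-up time per subject (e.g. years to revision or censoring)
    events: 1 if revision/failure was observed, 0 if the subject was censored
    t:      time point at which to estimate survival
    """
    # Distinct event times up to t, in chronological order
    event_times = sorted({ti for ti, e in zip(times, events) if e == 1 and ti <= t})
    s = 1.0
    for et in event_times:
        at_risk = sum(1 for ti in times if ti >= et)  # still under follow-up at et
        failed = sum(1 for ti, e in zip(times, events) if ti == et and e == 1)
        s *= 1 - failed / at_risk
    return s

# Hypothetical follow-up for 10 implants (years; 1 = revision, 0 = censored)
times  = [1.2, 2.5, 3.0, 4.1, 5.0, 5.0, 6.3, 7.8, 9.0, 10.0]
events = [1,   0,   1,   0,   1,   0,   0,   1,   0,   0]
five_year_survival = km_survival(times, events, 5.0)  # ≈ 0.656
```

The point of the sketch: censored subjects (lost to follow-up, or not yet at 5 years) are not counted as failures, which is exactly why raw "revision counts" and proper survival estimates can diverge in a benchmark table.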

Gap documentation for implants often focuses on duration of follow-up. If your evidence base lacks long-term data, acknowledge this explicitly and describe how PMCF will generate it. Reviewers accept that novel implants lack long-term data. They do not accept pretending this gap does not exist.

Key Insight
For implants, time is a critical variable. SOTA must address not just what happens, but when it happens and how outcomes evolve over the device lifetime.

Diagnostic Devices

For diagnostics, SOTA centers on accuracy metrics: sensitivity, specificity, positive and negative predictive values, area under the ROC curve. Your benchmark table must include these metrics with confidence intervals.
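For the confidence intervals themselves, the Wilson score interval is a common choice for proportions such as sensitivity, since it behaves better than the naive normal approximation near 0% and 100%. A minimal sketch with hypothetical numbers:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion (e.g. sensitivity = TP / (TP + FN))."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 90 true positives among 100 diseased subjects
lo, hi = wilson_ci(90, 100)  # sensitivity 90%, 95% CI ≈ (0.826, 0.945)
```

A benchmark table reporting "sensitivity 90% (95% CI 82.6–94.5%)" immediately tells a reviewer how much the sample size constrains the claim; a bare point estimate does not.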

Key SOTA elements include reference standard definitions and their limitations, prevalence effects on predictive values, subgroup performance variations, and clinical consequences of false positives and negatives. The clinical context section must explain what happens when the test is wrong.
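The prevalence effect can be made concrete with Bayes' theorem: predictive values follow directly from sensitivity, specificity, and disease prevalence. A short sketch with illustrative numbers (not from any real device):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from test characteristics via Bayes' theorem."""
    tp = sensitivity * prevalence                  # true positive fraction
    fp = (1 - specificity) * (1 - prevalence)      # false positive fraction
    fn = (1 - sensitivity) * prevalence            # false negative fraction
    tn = specificity * (1 - prevalence)            # true negative fraction
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Same test (95% sensitivity, 90% specificity) in two populations:
ppv_high, npv_high = predictive_values(0.95, 0.90, 0.30)  # enriched study cohort
ppv_low, npv_low = predictive_values(0.95, 0.90, 0.02)    # screening population
```

Here PPV drops from about 80% in the enriched cohort to about 16% in the screening population, with sensitivity and specificity unchanged. This is exactly the kind of context a SOTA benchmark table should make explicit when study populations differ from the intended-use population.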

Gap documentation for diagnostics often addresses spectrum bias, verification bias, or limited external validation. If your evidence comes primarily from controlled settings with selected patients, acknowledge how real-world performance might differ.

SOTA Emphasis by Device Type

– Implants: long-term data (95%)
– Diagnostics: accuracy metrics (90%)
– Software: three pillars (85%)
– High-risk: comprehensive analysis (90%)
– Novel: clinical reasoning (80%)

Software as a Medical Device

For software devices, SOTA requires the three-pillar framework from MDCG 2020-1: valid clinical association, analytical performance, and clinical performance in intended settings.

Key SOTA elements include the clinical rationale for why the software’s output is meaningful, technical performance against specifications, and real-world effectiveness with actual users. For AI-based software, add dataset representativeness, training-test separation, and drift monitoring considerations.

Gap documentation for software often addresses external validation, subgroup performance, and version control. How does performance change across different patient populations, clinical sites, or software updates?

Common Deficiency
Software SOTA that only addresses technical performance without clinical performance. Analytical accuracy in the lab does not demonstrate clinical utility in practice.

High-Risk Class III Devices

For Class III devices, SOTA depth must match the risk level. Superficial analysis is immediately flagged. Reviewers expect comprehensive literature coverage, detailed competitor analysis, and rigorous benchmark establishment.

Key SOTA elements include pivotal study designs used for similar devices, regulatory precedents and their evidence requirements, clinical guideline positioning, and expert consensus documents. Your SOTA should demonstrate that you understand what evidence has historically been required for approval.

Gap documentation for high-risk devices must be especially thorough. Any limitation in your evidence base will be scrutinized. Better to acknowledge and address gaps proactively than to have them discovered during review.

Novel Devices Without Predicates

For truly novel devices, SOTA cannot rely on device-specific benchmarks because they do not exist. Instead, focus on how the clinical need is currently addressed without devices.

Key SOTA elements include current clinical practice without your device category, outcomes achieved through alternative approaches, unmet needs and their clinical consequences, and clinical thresholds that would represent meaningful improvement.

Gap documentation for novel devices acknowledges the fundamental limitation: no directly comparable evidence exists. Your PMCF strategy becomes critical because most evidence will be generated post-market.

Key Insight
Novel devices require more extensive clinical reasoning in SOTA. Without benchmarks, you must argue from first principles why your device’s performance targets are clinically appropriate.

Combination Products

For drug-device combinations, SOTA must address both components and their interaction. This often requires separate literature streams that are then synthesized.

Key SOTA elements include the drug’s established efficacy and safety profile, the device’s delivery characteristics compared to alternatives, combination-specific outcomes data if available, and regulatory precedents for similar combinations.

Gap documentation for combinations often addresses interaction effects. Even if both components are well-characterized separately, combined performance may differ.

Practical Recommendations

Before starting SOTA, identify your device type and its specific regulatory pathway. Review MDCG guidance relevant to your category. Study successful submissions for similar devices to understand reviewer expectations.

Allocate more effort to the areas that matter most for your device type. Implants need long-term data. Diagnostics need accuracy metrics. Software needs the three-pillar framework. High-risk devices need comprehensive analysis.

This completes our series on State of the Art Mastery. Apply these principles consistently, and your SOTA sections will demonstrate the understanding and analysis that reviewers require.

Peace,
Hatem
Your Clinical Evaluation Partner

Frequently Asked Questions

How do I know which device type category applies?

Your device classification under MDR determines the baseline. Then consider specific guidance: MDCG 2020-1 for software, MDCG 2020-6 on sufficient clinical evidence, and relevant harmonized standards for your device category. When multiple categories apply (e.g., software in an implant), address requirements for each.

What if my device spans multiple categories?

Address requirements for each applicable category. A diagnostic software device needs both software three-pillar evidence and diagnostic accuracy benchmarks. Combination requirements are additive, not alternative.

How detailed should competitor analysis be?

Detailed enough to establish meaningful benchmarks. For high-risk devices, this may mean analyzing multiple competitors across several metrics. For lower-risk devices, a representative sample may suffice. The key is demonstrating that your benchmarks reflect current market reality.

Series: State of the Art Mastery

Part 5 of 5

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


Follow me for more insights and practical advice.

References:
– MDCG 2020-1: Guidance on clinical evaluation (MDR) / performance evaluation (IVDR) of medical device software
– MDCG 2020-6: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC
– Regulation (EU) 2017/745 (MDR), Annex XIV