Class III Clinical Evaluation: Why Most Fail Before Review
I’ve seen Class III clinical evaluation reports returned at first review more times than I can count. The manufacturer had clinical data. They had literature. They had a plan. But the structure failed before the Notified Body even assessed the science. The requirements for Class III devices are not simply stricter versions of Class IIa rules—they follow a different logic entirely.
Most manufacturers approach Class III clinical evaluation as if it were a Class IIb file with more pages. That assumption costs months in submission delays and creates deficiencies that cascade through every section of the technical documentation.
The MDR does not explicitly separate Class III requirements into a different article. But the interpretation, the depth, and the evidential standard demanded by Notified Bodies for Class III devices differ fundamentally from lower-risk classes. Understanding that difference is not optional.
The Foundational Difference: Equivalence Is Rarely Sufficient
For Class IIa and IIb devices, equivalence can often serve as the primary clinical evidence route—if demonstrated properly. For Class III devices, equivalence alone is almost never accepted as sufficient.
Article 61(5) of the MDR states clearly that demonstration of equivalence requires clinical data from the equivalent device. But for Class III, even when equivalence is technically possible, Notified Bodies expect the manufacturer to generate clinical data on their own device unless the equivalence is so direct that no safety or performance difference could reasonably exist.
This is not written as a hard rule. It emerges from review practice. Class III devices carry higher risk. The tolerance for assumptions drops. The burden shifts to the manufacturer to prove, not argue, that their device performs safely in the intended patient population.
A typical failure pattern: a manufacturer claims equivalence for a Class III device based on similar materials and design, then relies entirely on literature from other devices. The Notified Body rejects the equivalence demonstration and requests clinical investigation data. The manufacturer has no study planned, no protocol ready, and no timeline that accounts for this.
The moment you classify your device as Class III, assume you will need device-specific clinical data. Plan for it early. Equivalence may reduce the scope of that data, but it will not eliminate the requirement.
Clinical Investigation: When It Becomes Mandatory
Article 61(4) of the MDR lists situations where a clinical investigation is required. For implantable and Class III devices, the threshold is lower. If you cannot demonstrate sufficient clinical evidence through other means, the investigation is not a choice—it is an obligation.
Sufficient clinical evidence means data that addresses all intended uses, all patient populations, all claimed benefits, and all identified risks. For Class III devices, that standard is high.
If your device is a modification of an existing device, you may argue that prior data remains valid. But the argument must be supported by a structured gap analysis. You must show exactly which aspects of performance and safety are covered by existing data and which gaps remain.
If gaps remain—especially gaps related to long-term safety or new patient populations—Notified Bodies will request additional data. Often, that means a clinical investigation.
The gap analysis is not a formality. It is the document that determines whether you can proceed without a clinical study. If the gap analysis is weak, vague, or incomplete, the Notified Body will default to requiring an investigation. Write it with precision.
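To make that precision concrete, here is a minimal sketch in Python of how a gap analysis can be structured as data rather than prose: each safety or performance aspect maps to the evidence that covers it, and anything without coverage is flagged as a gap. The aspect names and evidence sources below are hypothetical, not drawn from any real file.

```python
from dataclasses import dataclass, field

@dataclass
class Aspect:
    """One safety or performance aspect the clinical evaluation must cover."""
    name: str
    evidence: list[str] = field(default_factory=list)  # sources covering this aspect

    @property
    def is_gap(self) -> bool:
        return not self.evidence

# Hypothetical aspects for an implantable device; real entries come from the
# intended purpose, the risk analysis, and the claimed clinical benefits.
aspects = [
    Aspect("Primary effectiveness at 12 months", ["Pivotal study (prior generation)"]),
    Aspect("Long-term safety beyond 5 years"),            # no data -> gap
    Aspect("Performance in pediatric population"),        # new population -> gap
    Aspect("Device-related adverse event rate", ["Registry data 2018-2023"]),
]

for a in aspects:
    status = "GAP - consider clinical investigation" if a.is_gap else "covered"
    print(f"{a.name}: {status}")
```

A table built this way leaves the reviewer nothing to guess: every aspect is either covered by named evidence or explicitly flagged.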
State of the Art: Depth and Currency Matter
The SOTA section in a Class III clinical evaluation report must reflect the current medical consensus, published evidence, and real-world clinical practice. For Class IIa devices, a general overview may suffice. For Class III devices, the SOTA must be comprehensive and current.
Notified Bodies assess whether your SOTA section demonstrates awareness of competing therapies, alternative treatments, complications, and long-term outcomes. If your literature review stops at studies published three years ago, the reviewers will notice. If you exclude studies that report complications, they will ask why.
The SOTA is not background material. It is the foundation against which your device’s safety and performance will be judged. A weak SOTA undermines every claim you make later in the report.
I have reviewed reports where the SOTA section cited only studies favorable to the device technology. The negative or inconclusive studies were omitted. The Notified Body identified this immediately and requested a full re-analysis. The submission was delayed by four months.
How to Structure the SOTA for Class III Devices
Start with the medical condition. Describe prevalence, severity, patient populations, and current treatment pathways. Use recent epidemiological data.
Then address existing treatment options. Include surgical, pharmacological, and device-based therapies. For each option, summarize clinical outcomes, complication rates, and long-term data. Cite high-quality sources—preferably systematic reviews, clinical guidelines, or large cohort studies.
Next, introduce your device category. Show where your device fits within the treatment landscape. Identify the comparators—what your device replaces or improves upon.
Finally, summarize the state of the art for your specific device type. Include published studies, registry data, post-market surveillance findings, and any relevant safety alerts or field actions.
This structure makes it clear that you understand the clinical context and that your device will be evaluated against real alternatives, not in isolation.
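If it helps to keep that structure auditable, the four steps can be tracked as a simple checklist. The sketch below is a hypothetical Python illustration; the section names mirror the steps above, and the required elements are examples, not a prescribed list.

```python
# Hypothetical checklist mirroring the four-step SOTA structure above.
SOTA_SECTIONS = {
    "Medical condition": ["prevalence", "severity", "patient populations", "treatment pathways"],
    "Existing treatment options": ["clinical outcomes", "complication rates", "long-term data"],
    "Device category": ["position in treatment landscape", "comparators"],
    "Device type state of the art": ["published studies", "registry data", "PMS findings", "safety alerts"],
}

def missing_elements(drafted: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return required SOTA elements not yet addressed in a draft."""
    return {
        section: [e for e in required if e not in drafted.get(section, set())]
        for section, required in SOTA_SECTIONS.items()
    }

draft = {"Medical condition": {"prevalence", "severity"}}
for section, gaps in missing_elements(draft).items():
    if gaps:
        print(f"{section}: missing {', '.join(gaps)}")
```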
A related failure pattern: manufacturers write the SOTA as a product brochure, highlighting only the benefits of their technology and ignoring competing therapies. This signals to the reviewer that the analysis is biased, and the credibility of the entire report drops.
Clinical Data Appraisal: Rigor and Transparency
For Class III devices, the appraisal of clinical data must be systematic, transparent, and traceable. The Notified Body needs to see how you selected studies, how you assessed their quality, and how you resolved conflicting results.
If you include literature data, explain your search strategy. Document the databases, the search terms, the inclusion and exclusion criteria, and the number of studies screened versus included. This does not need to be a formal systematic review, but it must be reproducible.
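As an illustration of what reproducible means in practice, the search record can be captured as structured data rather than loose narrative. Every value below (databases, queries, counts) is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SearchRecord:
    """Reproducible record of one literature search; all values are examples."""
    database: str
    query: str
    date: str
    hits: int           # raw results returned
    after_dedup: int    # after duplicate removal
    screened_in: int    # passed title/abstract screening
    included: int       # met full inclusion criteria

searches = [
    SearchRecord("PubMed", '("device type X") AND (safety OR performance)',
                 "2024-05-01", hits=412, after_dedup=398, screened_in=54, included=17),
    SearchRecord("Embase", "'device type X'/exp AND 'clinical outcome'",
                 "2024-05-01", hits=367, after_dedup=120, screened_in=22, included=6),
]

for s in searches:
    print(f"{s.database}: {s.hits} hits -> {s.included} included "
          f"(excluded at screening: {s.after_dedup - s.screened_in})")
```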
For each included study, appraise the quality. Was it randomized? Was it prospective? What was the sample size? How closely does the study population match your intended users? Were there conflicts of interest or funding biases?
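Those appraisal questions can be recorded per study in the same structured way. The sketch below is hypothetical, with a deliberately crude weighting; any real appraisal must follow criteria you have pre-specified and documented.

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    """Quality appraisal for one included study; fields mirror the questions above."""
    study_id: str
    randomized: bool
    prospective: bool
    sample_size: int
    population_matches_intended_use: bool
    conflicts_of_interest_declared: bool

    def weight(self) -> str:
        """Crude illustrative weighting -- real criteria must be pre-specified."""
        score = sum([self.randomized, self.prospective,
                     self.sample_size >= 100, self.population_matches_intended_use])
        return {4: "high", 3: "moderate"}.get(score, "low")

print(Appraisal("Smith 2021", randomized=True, prospective=True,
                sample_size=240, population_matches_intended_use=True,
                conflicts_of_interest_declared=True).weight())  # -> high
```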
Then synthesize the findings. Do not simply list study results. Identify patterns, highlight consistencies, and address inconsistencies. If two studies report different complication rates, explain why. Was it due to patient selection, follow-up duration, or device generation?
Notified Bodies are trained to spot selective reporting. If you cite only the positive outcomes from a study and ignore the reported complications, the reviewer will go back to the original publication and find what you omitted. That destroys trust.
Handling Conflicting Data
Conflicting data is common, especially in device-based interventions. Do not hide it. Address it directly.
Explain the source of the conflict. Different study designs, different patient populations, different follow-up periods—all of these can produce different results. Your job is to interpret the conflict in context, not to eliminate it from the record.
If the conflict raises a safety concern, acknowledge it. Describe how your device design, your instructions for use, or your risk mitigation measures address that concern. Transparency here builds credibility.
Benefit-Risk Analysis: Quantitative When Possible
For Class III devices, the benefit-risk analysis cannot rely on qualitative statements alone. Notified Bodies expect quantitative data wherever possible—complication rates, success rates, time to recovery, quality of life improvements.
The analysis must compare your device against the relevant alternatives identified in the SOTA. If your device is an implant, compare it to the standard surgical option. If it is a therapeutic device, compare it to pharmacological treatment.
For each comparator, list the expected benefits and the known risks. Then position your device within that landscape. Is your device safer but less effective? More effective but with higher complication risk? Equivalent in outcomes but less invasive?
This comparison must be supported by data, not opinion. If you claim lower complication rates, cite the studies that support that claim. If you claim faster recovery, show the evidence.
The benefit-risk conclusion must be proportionate to the evidence quality. If your evidence comes from case series and retrospective data, you cannot claim definitive superiority. State the conclusion that the data supports—no more, no less.
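To show what quantitative where possible can look like, here is a minimal sketch that places pooled outcome rates for the device and its comparators side by side. All numbers are invented for illustration; in a real report, each figure must trace back to the appraised evidence.

```python
# Hypothetical pooled outcome rates; real values come from the appraised evidence.
outcomes = {
    "Device under evaluation": {"success_rate": 0.91, "complication_rate": 0.06, "n": 310},
    "Standard surgical option": {"success_rate": 0.88, "complication_rate": 0.11, "n": 1450},
    "Pharmacological therapy":  {"success_rate": 0.74, "complication_rate": 0.03, "n": 2200},
}

print(f"{'Comparator':<28}{'Success':>9}{'Complic.':>10}{'N':>7}")
for name, o in outcomes.items():
    print(f"{name:<28}{o['success_rate']:>9.0%}{o['complication_rate']:>10.0%}{o['n']:>7}")
```

A table like this does not settle the benefit-risk question by itself, but it forces the comparison to be explicit and keeps the conclusion tied to the numbers.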
Post-Market Clinical Follow-Up: Not Optional
Article 61(11) of the MDR requires manufacturers to conduct post-market clinical follow-up (PMCF) unless justified otherwise. For Class III devices, justification for not conducting PMCF is almost never accepted.
Your PMCF plan must address the residual uncertainties identified in your clinical evaluation report. If your pre-market data comes from short-term studies, your PMCF must capture long-term outcomes. If your data comes from a limited patient population, your PMCF must monitor real-world use across diverse patients.
The plan must include specific methods—registries, surveys, retrospective chart reviews, prospective follow-up studies. It must define the data to be collected, the frequency of analysis, and the criteria that would trigger a safety review.
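As an example of that specificity, the quantitative elements of a PMCF plan can be written down as explicit parameters with trigger thresholds. The methods, endpoints, and thresholds below are hypothetical; real thresholds must come from your risk analysis.

```python
# Hypothetical PMCF plan parameters; thresholds must come from your risk analysis.
PMCF_PLAN = {
    "methods": ["national implant registry", "prospective 5-year follow-up cohort"],
    "data_collected": ["revision rate", "device-related adverse events", "quality of life score"],
    "analysis_frequency_months": 12,
    "safety_review_triggers": {
        "revision_rate": 0.05,               # trigger a safety review if exceeded
        "serious_adverse_event_rate": 0.02,
    },
}

def check_triggers(observed: dict[str, float]) -> list[str]:
    """Return trigger names whose observed rate meets or exceeds its threshold."""
    limits = PMCF_PLAN["safety_review_triggers"]
    return [k for k, limit in limits.items() if observed.get(k, 0.0) >= limit]

print(check_triggers({"revision_rate": 0.07}))  # -> ['revision_rate']
```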
Vague PMCF plans are a top cause of deficiency letters.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.