When sufficient evidence is not the same as available evidence

Hatem Rabeh

Written by Hatem Rabeh, MD, MSc Ing


I see this pattern in almost every clinical evaluation that lands on my desk for review. The manufacturer presents all the data they have collected. They call it sufficient. But when you ask whether it actually demonstrates safety and performance for the intended use, the file goes quiet. Having data is not the same as having the right data.

This confusion sits at the center of most major findings from Notified Bodies and competent authorities. Teams confuse two very different questions. One question is: what clinical data do we have? The other question is: what clinical data do we need to demonstrate conformity?

The first question produces an inventory. The second question defines a target. And between these two questions lies the gap that determines whether your clinical evaluation report will pass scrutiny or trigger a cascade of deficiencies.

What the regulation actually requires

MDR Article 61 and Annex XIV are clear. The manufacturer must demonstrate conformity to the general safety and performance requirements through clinical evidence. The regulation does not say “provide the evidence you have.” It says demonstrate conformity.

This is a target-driven requirement, not an availability-driven requirement.

MDCG 2020-6 on sufficient clinical evidence elaborates this further. The clinical evaluation must be sufficient to demonstrate that the device is safe and performs as intended. Sufficient means adequate to reach that conclusion with confidence. It does not mean “as much as we could find” or “what was feasible to gather.”

Yet in practice, I see files structured around availability. The appraisal section lists what studies exist. The analysis section summarizes those studies. The conclusion states that based on available data, the device appears acceptable. The word sufficient appears, but it refers to the data available, not the data needed.

Common Deficiency
“The available clinical evidence is considered sufficient to support the safety and performance of the device.” This sentence appears in hundreds of reports. It confuses the volume of data with the adequacy of demonstration. Reviewers will ask: sufficient to demonstrate what, specifically, and by what criteria?

How the gap forms

The gap between available and sufficient forms in predictable ways.

Sometimes the available data addresses the wrong clinical questions. A manufacturer inherits literature from a predicate device. That literature focuses on technical performance or procedural success. But the current device has different indications, a different patient population, or a different risk profile. The questions that matter now are not the questions that literature answered.

The data is available. But it does not demonstrate what needs to be demonstrated.

Sometimes the available data is too sparse to reach conclusions with confidence. A handful of case series exist. Each has small sample sizes. The outcomes vary. The follow-up is short. No single study establishes performance. The combined data still does not provide statistical strength or clinical certainty.

You have data. But you cannot make claims based on it.

Sometimes the available data is not representative. The studies enrolled younger patients, lower-risk cases, or experienced operators. Your intended use includes older patients, higher complexity, and general practitioners. The available evidence does not reflect your actual use scenario.

The question is not whether evidence exists. The question is whether that evidence allows you to claim conformity for your specific intended purpose and patient population.

Key Insight
Sufficient evidence is defined by the residual uncertainty after appraisal and analysis. If significant clinical questions remain unanswered, the evidence is not sufficient, no matter how much data you have reviewed.

What sufficiency actually means in regulatory terms

Sufficiency is a conclusion about uncertainty, not a statement about volume.

After you appraise and analyze all available clinical data, you must be able to state that the benefit-risk profile is favorable, that the device performs as intended, and that the residual risks are acceptable. If you cannot state that with confidence, the evidence is not sufficient.

MDCG 2020-6 frames this as the need to demonstrate that the device performs as intended by the manufacturer and is safe for patients when used under the conditions and for the purposes intended. This is not ambiguous. You must demonstrate performance and safety. Demonstration requires evidence strong enough to support that claim.

In my reviews, I ask a simple question at the end of the clinical evaluation report: If a competent authority asked whether this device is safe and performs as intended, could you answer yes based on this evidence alone?

If the answer depends on assumptions, on favorable interpretations, or on the absence of contradictory data, then the evidence is not sufficient. You are filling gaps with reasoning instead of data.

Notified Bodies look for the same thing. They do not ask how much literature you reviewed. They ask whether the evidence supports the claims made in the intended use, the instructions for use, and the risk management file. If those claims exceed what the evidence demonstrates, you have a gap.

The planning problem

This is where the confusion creates real problems in practice. Clinical evaluation planning often begins with the wrong question.

Teams ask: what data can we access? What studies are published? What can we collect from post-market sources?

They should ask: what do we need to demonstrate? What clinical questions must be answered to show conformity? What level of evidence is required given the risk class, the novelty, and the intended use?

The clinical evaluation plan should define sufficiency criteria before data collection. It should specify which endpoints matter, what level of evidence is acceptable, and which gaps would be considered critical. This is the appraisal and analysis planning that Annex XIV Part A requires.

But most plans I review do not define sufficiency. They define search strategies. They outline data sources. They do not state what threshold of evidence will allow the conclusion that the device is safe and performs as intended.

Without that definition, sufficiency becomes subjective. It becomes a matter of opinion whether the available data is enough. And when opinions differ between the manufacturer and the reviewer, the file stalls.

Common Deficiency
The clinical evaluation plan lists data sources but does not define acceptance criteria. When do you have enough data? What endpoints are mandatory? What evidence quality is acceptable? Without answers, you cannot plan the evaluation. You can only react to what you find.
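One way to make those acceptance criteria concrete is to record them as structured data in the plan itself, so that "enough" is defined before any searching begins. The sketch below is purely illustrative: the clinical questions, endpoints, and evidence-level scale are hypothetical examples, not requirements drawn from the MDR or any MDCG guidance.

```python
# Illustrative sketch: sufficiency criteria captured as structured
# acceptance criteria in a clinical evaluation plan.
# All names and thresholds below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class SufficiencyCriterion:
    clinical_question: str   # what must be demonstrated
    endpoint: str            # the outcome that answers it
    min_evidence_level: int  # e.g. 1 = RCT ... 5 = expert opinion
    mandatory: bool          # blocking criterion for the conclusion?

# Example plan: criteria defined BEFORE any data is collected.
plan = [
    SufficiencyCriterion("Device achieves intended performance",
                         "procedural success rate",
                         min_evidence_level=3, mandatory=True),
    SufficiencyCriterion("Safety in the intended population",
                         "serious adverse event rate",
                         min_evidence_level=3, mandatory=True),
    SufficiencyCriterion("Durability of clinical benefit",
                         "24-month outcome",
                         min_evidence_level=4, mandatory=False),
]

# Mandatory questions are the ones that must be answered before
# the report can conclude that the evidence is sufficient.
mandatory_questions = [c.clinical_question for c in plan if c.mandatory]
print(mandatory_questions)
```

Written this way, the plan states a measurable target, and the later appraisal can be checked against it instead of against opinion.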

Equivalence claims and the sufficiency trap

Equivalence claims make this problem worse.

A manufacturer claims equivalence to a device with substantial clinical evidence. They argue that equivalence allows them to rely on that evidence to demonstrate their own device’s safety and performance. This route is permitted under Annex XIV Part A, Section 3.

But here is the trap. The available evidence for the equivalent device may not be sufficient for your device.

Equivalence requires technical, biological, and clinical similarity. Even when equivalence is valid, the clinical evidence must still address your specific intended use, your patient population, and your risk profile. If the equivalent device was used differently, in a different population, or with different claims, the evidence may not transfer.

I see this repeatedly. The equivalence is technically sound. But the clinical evidence for the equivalent device does not answer the clinical questions relevant to the new device. The manufacturer has access to substantial data. But that data is not sufficient to support their specific claims.

Notified Bodies will challenge this. They will ask which clinical questions remain unanswered after equivalence. If significant questions remain, you have a gap. Access to data through equivalence does not eliminate the obligation to demonstrate conformity.

How to bridge the gap

Bridging the gap requires explicit planning and honest appraisal.

First, define what sufficient means for your device before you begin the evaluation. Specify the clinical questions that must be answered. Define what level of evidence is acceptable. Make this definition part of the clinical evaluation plan. This gives you a measurable target.

Second, appraise the available data against that target. Do not summarize data and assume sufficiency. Map each piece of evidence to the clinical questions it addresses. Identify which questions remain unanswered or inadequately answered.

Third, state the gaps explicitly. If the available evidence does not meet the sufficiency criteria, say so. Do not rationalize the gap. Acknowledge it. Then address it.
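The mapping step above can be sketched as a simple set operation: tag each appraised evidence item with the clinical questions it actually answers, and whatever remains unmapped is a gap to state explicitly. The study names and questions below are hypothetical placeholders, not real data.

```python
# Illustrative gap-analysis sketch: questions defined by the plan,
# evidence tagged with the questions it answers, gaps = the remainder.
# All study names and questions are hypothetical.

questions = {
    "performance in intended population",
    "safety in intended population",
    "long-term durability",
}

# Evidence inventory after appraisal: item -> questions it answers.
evidence = {
    "Smith 2019 case series": {"performance in intended population"},
    "Predicate registry data": {"performance in intended population",
                                "safety in intended population"},
}

# Union of everything the evidence answers, then subtract from the target.
answered = set().union(*evidence.values())
gaps = questions - answered

print(sorted(gaps))  # the questions that still need data
```

The point of the exercise is that the gaps fall out of the mapping mechanically; they are not a matter of interpretation once the target questions are fixed.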

Addressing the gap means generating additional data. This may require a clinical investigation under MDR Article 62 and Annex XV. It may require a PMCF study designed to answer specific clinical questions. It may require additional literature review with refined search criteria.

What it does not mean is lowering the sufficiency threshold to match what is available. Sufficiency is defined by the regulation and the risk profile, not by convenience or feasibility.

Key Insight
PMCF is not a backup plan when you lack sufficient evidence. It is a continuous requirement. But when pre-market evidence is insufficient, PMCF must be designed to close specific gaps. The PMCF plan must state what questions it will answer and when sufficiency will be achieved.

What reviewers look for

When I review a clinical evaluation report for a manufacturer or when I see findings from Notified Bodies, the pattern is consistent.

Reviewers look for a clear statement of what needed to be demonstrated. They look for explicit sufficiency criteria. They look for an honest appraisal that maps evidence to clinical questions. They look for a conclusion that is justified by the evidence presented, not by assumptions or favorable interpretations.

If the report presents available data without defining what was needed, the reviewer will ask: how do you know this is sufficient?

If the report concludes that evidence is sufficient without addressing obvious gaps, the reviewer will list those gaps and ask how they were resolved.

If the report relies on equivalence without demonstrating that the equivalent device’s evidence addresses the relevant clinical questions, the reviewer will challenge the equivalence or request additional data.

The review is not adversarial. It is target-driven. The target is conformity. The evidence must demonstrate it. If it does not, the evaluation is incomplete.

The consequence of confusing the two

Confusing available with sufficient leads to submission delays, major findings, and sometimes market withdrawal.

A manufacturer submits a technical file believing the clinical evaluation is complete. The Notified Body issues a finding: insufficient clinical evidence to support the intended use. The manufacturer argues that they reviewed all available literature. The Notified Body responds that the available literature does not answer the relevant clinical questions.

The submission stalls. The manufacturer must now design a clinical investigation or PMCF study. The timeline extends by months or years. If the device was already on the market as a legacy device under the MDR transitional provisions, the lack of sufficient evidence may trigger restrictions or suspension.

This is not a paperwork issue. It is a conformity issue. The device may be safe and perform as intended. But without sufficient evidence, conformity is not demonstrated. The regulatory system requires demonstration, not belief.

Conclusion

Sufficient clinical evidence is not the same as available clinical evidence.

Available evidence is what you can gather. Sufficient evidence is what you need to demonstrate conformity. The gap between them defines your clinical evaluation strategy and your PMCF obligations.

When you plan a clinical evaluation, define sufficiency first. State what must be demonstrated. Specify the clinical questions and the level of evidence required. Then gather and appraise the data. If the available data is not sufficient, acknowledge the gap and generate the missing evidence.

This approach aligns with MDR requirements, satisfies Notified Body expectations, and reflects the reality of clinical evaluation work. It treats sufficiency as a target, not as a flexible opinion.

The next time you write or review a clinical evaluation report, ask the question: is this evidence sufficient to demonstrate conformity, or is it simply all the evidence we have?

The answer will determine whether your file moves forward or becomes another delayed submission.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. For class III and implantable devices, updates are expected at least annually in line with the SSCP and PSUR cycle; for lower-risk classes, update at the frequency justified in the post-market surveillance plan.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for the claims made, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

Peace, Hatem

Your Clinical Evaluation Partner

Follow me for more insights and practical advice.

References:
– Regulation (EU) 2017/745 (MDR), Article 61 – Clinical evaluation
– Regulation (EU) 2017/745 (MDR), Annex XIV – Clinical evaluation and post-market clinical follow-up
– Regulation (EU) 2017/745 (MDR), Annex XV – Clinical investigations
– MDCG 2020-5 – Clinical evaluation: Equivalence
– MDCG 2020-6 – Sufficient clinical evidence for legacy devices

Deepen Your Knowledge

Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.