When ‘sufficient evidence’ becomes the reason for your rejection
You submitted a clinical evaluation with thirty studies, two meta-analyses, and a detailed PMCF plan. The Notified Body comes back: ‘Insufficient clinical evidence.’ You read it again. How can thirty studies be insufficient? The answer reveals one of the most misunderstood concepts in MDR compliance.
In This Article
- The MDR Framework for Sufficient Evidence
- What Sufficiency Actually Requires
- The Role of Appraisal in Determining Sufficiency
- The Gap Between Data and Demonstration
- What Happens When Evidence is Genuinely Insufficient
- How PMCF Relates to Sufficiency
- Practical Steps to Demonstrate Sufficiency
- What Reviewers Actually Look For
- When You Receive an Insufficiency Objection
- The Long View on Sufficiency
The problem is not the volume of evidence. The problem is that sufficiency is not a counting exercise.
I see this confusion in almost every rejected file I review. Manufacturers collect studies. They add more references. They expand literature tables. Then they wonder why reviewers still find gaps.
The reason is structural. Sufficiency under MDR Article 61 is not about having enough studies. It is about demonstrating that the evidence addresses all relevant aspects of safety and performance for the specific intended use, patient population, and risk profile of your device.
This is not a semantic quibble. This is the difference between approval and rejection.
The MDR Framework for Sufficient Evidence
MDR Annex XIV Part A defines clinical evidence as the data supporting the safety and performance of a device under normal conditions of use. Sufficiency means that this data is adequate to demonstrate conformity with the relevant general safety and performance requirements in Annex I.
MDCG 2020-5 and MDCG 2020-6 clarify what adequacy means in practice. Evidence must be:
- Current
- Relevant to the intended purpose and target population
- Sufficient in scope and depth
- Methodologically sound
- Appraised critically
This sounds clear. But in real submissions, manufacturers often miss the structural logic that connects these criteria.
They treat sufficiency as a threshold. Reach a certain number of patients or studies, and you are done. This is the first mistake.
Sufficiency is not a numerical target. It is a demonstration that the evidence base covers all clinically relevant questions for your specific device.
What Sufficiency Actually Requires
When a reviewer assesses sufficiency, they are not counting studies. They are checking whether the evidence answers the clinical questions raised by the device.
These questions come from three sources:
The intended use. What clinical benefit does the device claim? What is the target condition? What alternative treatments exist?
The risk profile. What are the residual risks after mitigation? What are the known and foreseeable hazards? What severity and probability classifications apply?
The patient population. Who will use this device? What are their baseline characteristics? Are there vulnerable groups or off-label risks?
Sufficient evidence means you have data that addresses each of these questions with appropriate depth.
If your device is indicated for elderly patients with comorbidities, but your clinical data comes from young healthy volunteers, your evidence is not sufficient. Not because the study is bad. Because it does not address the relevant clinical question.
If your device has a residual risk of vascular injury, but your literature search excludes studies reporting complications, your evidence is not sufficient. Not because you need more studies. Because you avoided the question.
This is where most gaps appear.
Manufacturers present evidence that is abundant but irrelevant. The studies exist. The data is published. But none of it addresses the actual clinical questions raised by the device’s specific use and risk profile.
The Role of Appraisal in Determining Sufficiency
Even when the evidence is relevant, sufficiency depends on its quality. This is where critical appraisal becomes essential.
A weak study that barely addresses a key clinical question does not constitute sufficient evidence. A strong study that addresses a tangential question does not either.
Appraisal under MDCG 2020-13 evaluates:
- Study design appropriateness
- Risk of bias
- Generalizability to the target population
- Consistency with other evidence
- Statistical robustness
When appraisal reveals limitations, sufficiency decreases. If the only available study on a critical safety endpoint has high bias and small sample size, you do not have sufficient evidence for that endpoint. You have a signal that requires further investigation.
Many manufacturers skip this reasoning. They appraise studies in isolation. They note limitations but do not aggregate the impact on sufficiency.
Reviewers do aggregate. They look at the totality of evidence and ask: given the limitations identified in appraisal, can we still conclude that the device is safe and performs as intended?
If the answer is no, the evidence is insufficient.
The Gap Between Data and Demonstration
Here is where it gets practical. You can have relevant, well-appraised data and still fail to demonstrate sufficiency.
Because demonstration requires explicit reasoning.
You must show how the evidence answers the clinical questions. You must show how the data supports each claim in your intended use. You must show how the evidence addresses each residual risk.
This is not implied. This is not obvious to the reviewer. This must be written out.
In Annex XIV Part A Section 1, MDR requires the clinical evaluation report to contain a demonstration that the device meets the relevant general safety and performance requirements. Demonstration means explicit connection between evidence and requirement.
When this connection is missing, reviewers cannot determine sufficiency. They see data. They see claims. They do not see the logical bridge.
So they reject.
Sufficiency is not proven by the existence of data. It is demonstrated by showing how that data systematically addresses every relevant clinical question for your device.
What Happens When Evidence is Genuinely Insufficient
Sometimes the evidence base is genuinely limited. The device is novel. The indication is narrow. The population is rare. Published data does not exist or does not apply.
In these cases, sufficiency cannot be reached through literature alone. MDR anticipates this in Annex XIV Part A Section 1. When equivalence is not possible and literature is insufficient, clinical investigations become necessary.
But even here, manufacturers make a mistake. They assume that running a clinical investigation automatically solves the sufficiency problem.
It does not.
The investigation must be designed to generate the missing evidence. It must address the unanswered clinical questions. It must include the relevant population and endpoints.
A poorly designed investigation that collects data on secondary outcomes while ignoring the primary safety concern does not generate sufficient evidence. It generates more data. But data is not the same as evidence.
This is why the clinical evaluation report must identify specific evidence gaps and explain how the clinical investigation plan addresses them. This connection is mandatory under Annex XIV Part A Section 4.
Reviewers check this. When the CIP does not align with the identified gaps, they conclude that even post-investigation, evidence will remain insufficient.
How PMCF Relates to Sufficiency
Post-market clinical follow-up is not a substitute for pre-market sufficiency. It is a mechanism for maintaining sufficiency over time.
Under MDR Article 61(11) and Annex XIV Part B, PMCF generates ongoing evidence about safety and performance in real-world conditions. This evidence confirms assumptions, detects emerging risks, and updates the clinical evaluation.
But PMCF cannot fix insufficient pre-market evidence. If you cannot demonstrate conformity at the time of assessment, PMCF will not change that.
Reviewers see this confusion regularly. Manufacturers propose extensive PMCF plans as a way to compensate for weak pre-market data. They assume that committing to future data collection will satisfy current evidence requirements.
It does not work that way.
PMCF is evaluated based on what it will measure and how those measurements maintain evidence sufficiency. If the starting evidence is insufficient, PMCF cannot be properly designed because the baseline is unclear.
Using PMCF to defer answering clinical questions that should be addressed pre-market is a common mistake. Reviewers interpret it as an acknowledgment that current evidence is insufficient, which often leads to rejection or major objections.
Practical Steps to Demonstrate Sufficiency
When you structure your clinical evaluation, the demonstration of sufficiency should be explicit and systematic.
First, define the clinical questions. What must be proven for this specific device, indication, population, and risk profile?
Second, map your evidence to those questions. For each question, identify which studies or data sources provide answers. Be specific. Reference exact sections of studies, not entire papers.
Third, appraise the quality of each answer. Does the study design support the conclusion? Are there biases? Is the population generalizable?
Fourth, assess whether the totality of answers is sufficient. Are all critical questions addressed? Are the answers robust enough given the risk level?
Fifth, document gaps. If questions remain unanswered, state this explicitly. Explain whether gaps are acceptable given the device classification and risk, or whether additional evidence is required.
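The five steps above amount to a traceability exercise: every clinical question must be linked to appraised evidence, and every unanswered question must surface as a documented gap. The sketch below illustrates that logic in Python. It is an illustrative model only, not a regulatory tool; the appraisal grades, the "moderate-or-better" rule, and the example study names are hypothetical assumptions, not requirements from MDR or MDCG guidance.

```python
# Illustrative sketch of an evidence-to-question traceability map.
# Grades, the sufficiency rule, and study names are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str      # e.g. "Smith 2021, Table 3" -- cite exact sections, not whole papers
    appraisal: str   # hypothetical quality grade: "high", "moderate", or "low"
    relevant: bool   # does it address this question's population and endpoint?

@dataclass
class ClinicalQuestion:
    text: str
    critical: bool   # tied to a residual risk or a core clinical claim?
    evidence: list = field(default_factory=list)

    def answered(self) -> bool:
        # Hypothetical rule: at least one relevant source of
        # moderate-or-better methodological quality answers the question.
        return any(e.relevant and e.appraisal in ("high", "moderate")
                   for e in self.evidence)

def gap_report(questions):
    """Return (all unanswered questions, the critical subset).

    Any critical gap blocks a conclusion of sufficiency."""
    gaps = [q for q in questions if not q.answered()]
    return gaps, [q for q in gaps if q.critical]

# Example: abundant but irrelevant evidence still leaves a critical gap.
q1 = ClinicalQuestion("Vascular injury rate in the elderly population?", critical=True)
q1.evidence.append(Evidence("Young-volunteer study, Sec. 4", "high", relevant=False))
gaps, critical = gap_report([q1])
print(len(critical))  # 1 -> data exists, but sufficiency is not demonstrated
```

The point of the structure is the article's point: a high-quality study attached to the wrong question still shows up as a gap, because relevance and appraisal are evaluated per question, not per file.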
This process is not optional. It is the structure that MDCG 2020-13 describes for clinical evaluation.
When you follow it, reviewers can trace your reasoning. They can verify that your conclusion of sufficiency is justified. They can challenge specific points without rejecting the entire file.
When you skip it, reviewers cannot verify anything. They see a pile of studies and a conclusion. The connection is invisible. So they reject.
What Reviewers Actually Look For
From the reviewer’s perspective, determining sufficiency is a verification exercise. They are not trying to find more studies. They are checking whether your demonstration holds.
They look for:
- Explicit identification of clinical questions
- Clear mapping of evidence to questions
- Transparent appraisal of evidence quality
- Logical reasoning from evidence to conclusions
- Honest acknowledgment of limitations and gaps
When these elements are present, even limited evidence can be sufficient if the reasoning is sound.
When these elements are missing, even abundant evidence appears insufficient because the demonstration is incomplete.
This explains why some files with ten studies get approved while others with fifty studies get rejected. The number does not matter. The demonstration does.
Reviewers assess sufficiency by evaluating your demonstration, not by counting your references. A complete demonstration with limited evidence is stronger than an incomplete demonstration with abundant evidence.
When You Receive an Insufficiency Objection
If a reviewer raises an insufficiency objection, the response must address the structural issue, not add more studies.
First, identify what question the reviewer believes is unanswered. Read the objection carefully. What specific aspect of safety or performance is in doubt?
Second, check whether your existing evidence actually addresses that question. Often it does, but the connection was not made explicit in the report.
Third, if evidence exists, revise the demonstration to show the connection clearly. Add reasoning. Reference specific data points. Explain why the evidence is adequate for that question.
Fourth, if evidence does not exist, acknowledge the gap. Explain whether the gap is acceptable given risk-benefit analysis, or propose how additional evidence will be generated.
Do not respond by dumping more studies into the literature review. That does not address the objection. It confirms that you did not understand it.
Reviewers are not asking for more data. They are asking for better demonstration.
The Long View on Sufficiency
Sufficiency is not static. As the state of the art evolves, as new evidence emerges, as real-world use reveals unexpected patterns, what was sufficient yesterday may not be sufficient tomorrow.
This is why clinical evaluation is continuous under MDR Article 61(12). You must update your evaluation as new information becomes available. You must reassess sufficiency in light of new evidence.
PMCF feeds this process. Post-market surveillance feeds this process. Literature monitoring feeds this process.
The goal is not to reach sufficiency once and declare victory. The goal is to maintain sufficiency throughout the device lifecycle.
Manufacturers who understand this build living clinical evaluation systems. They do not treat sufficiency as a one-time hurdle. They treat it as an ongoing verification.
Manufacturers who misunderstand this treat clinical evaluation as a document to be completed and filed. When reviewers or auditors raise questions, they are surprised. The file was approved. Why are we revisiting sufficiency?
Because the regulatory framework assumes that your understanding of safety and performance deepens over time. If it does not, something is wrong.
Sufficiency is not a finish line. It is a standard that your evidence must meet at every point in the device lifecycle. The evidence base grows. The demonstration evolves. The conclusion of sufficiency is re-confirmed.
When you see it this way, the initial question changes. It is not about collecting enough studies to pass review. It is about building a clinical evidence base that continuously supports your claims.
That mindset shift is what separates files that get approved from files that get rejected repeatedly. It is not the volume of data. It is the quality of reasoning.
Reviewers can see the difference immediately. So can auditors. So can anyone who reads the clinical evaluation report with a critical eye.
Sufficient evidence is evidence that answers the right questions with appropriate depth and transparency. Everything else is noise.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR) Article 61, Annex XIV, Annex I
– MDCG 2020-5: Clinical evaluation – Equivalence
– MDCG 2020-6: Sufficient clinical evidence for legacy devices
– MDCG 2020-13: Clinical evaluation assessment report template





