The gap analysis chapter that saves your CER from rejection
Most CER rejections I review trace back to one missing element: a gap analysis that actually guides the rest of the clinical evaluation. Reviewers look at the gap analysis first because it reveals whether the manufacturer understands what evidence is needed. When that chapter is weak, everything downstream collapses.
The gap analysis is where most manufacturers reveal whether they truly understand their regulatory obligations or are simply going through the motions. It is not a formality. It is the strategic core of the CER.
When a Notified Body opens your CER, the gap analysis tells them if you know what you are doing. It shows whether you identified the right evidence gaps, understood the clinical questions that matter, and planned how to address them. If this chapter is vague or ritualistic, the reviewer knows immediately that the rest of the file will be problematic.
What the gap analysis must actually accomplish
Under MDR Annex XIV Part A, the clinical evaluation must demonstrate sufficient clinical evidence for safety and performance. The gap analysis is the mechanism that shows you understand what “sufficient” means for your device.
It starts with the clinical questions derived from your risk analysis and intended purpose. These are not generic questions like “Is the device safe?” They are specific questions about particular risks, patient populations, clinical outcomes, and use conditions.
The gap analysis then compares the available data against these questions. It identifies what is missing, what is incomplete, what is outdated, and what requires further investigation. This is not a yes/no checklist. It is a reasoned assessment of adequacy.
The gap analysis is not a summary of what data you have. It is a critical assessment of what data you need and whether what you have is sufficient to answer the clinical questions that matter.
When this is done correctly, the gap analysis becomes the foundation for your entire clinical strategy. It justifies your equivalence claim if you make one. It defines your PMCF plan. It explains why certain studies were conducted and what questions remain open.
Where manufacturers go wrong
I see the same structural errors repeatedly. Manufacturers list data sources, summarize what they found, and declare the evidence sufficient. But they skip the critical step: explaining why the available data addresses the specific clinical questions raised by their device.
This happens because the gap analysis is often written backwards. Instead of starting with clinical questions and assessing data against those questions, manufacturers start with the data they have and retrofit questions around it.
The result is a chapter that feels like a justification exercise rather than a genuine evaluation. Reviewers notice this immediately because the logic does not hold.
A typical failure mode: a gap analysis that lists data availability by category but never explains how that data addresses the clinical questions derived from the risk analysis. The connection between risk, clinical question, and evidence is missing.
Another frequent problem is the treatment of literature data. Manufacturers conduct a literature search, find some relevant publications, and conclude the gap is closed. But they do not assess whether the study populations, endpoints, follow-up durations, or use conditions match their own device context.
This is where equivalence claims often break down. The gap analysis assumes data from the equivalent device can be transferred without critically examining whether the clinical questions for both devices are truly the same.
The equivalence trap
If you claim equivalence, the gap analysis must demonstrate that the equivalent device has sufficient data to answer your clinical questions. This requires two steps that are often conflated.
First, you must show that the clinical questions for your device and the equivalent device are the same. This depends on demonstrating technical, biological, and clinical equivalence as required by MDCG 2020-5.
Second, you must show that the equivalent device has adequate clinical data to answer those questions. If the equivalent device itself has data gaps, those gaps transfer to your device. Equivalence does not create evidence. It only allows you to reference existing evidence if that evidence is sufficient.
Many CERs fail here because the gap analysis for the equivalent device is never conducted. The manufacturer assumes that because the equivalent device is on the market, its data must be adequate. That assumption is not defensible.
How to structure the gap analysis chapter
The structure should follow a clear logical sequence that a reviewer can audit step by step.
Start by listing the clinical questions derived from your risk analysis and intended purpose. These should be specific, prioritized, and traceable back to your risk management file. If you cannot show where each clinical question came from, the reviewer will question whether you actually performed a risk-based evaluation.
Next, for each clinical question, identify the data sources that could address it. This includes your own clinical data, equivalence data, literature data, and post-market data. Be explicit about the type and quality of each data source.
Then assess the adequacy of the available data for each question. This is the critical step. You must explain not just what data exists, but whether it is sufficient in terms of study design, patient population, endpoints, follow-up duration, statistical power, and relevance to your intended use.
Adequacy is not binary. Data can be partially adequate, requiring additional evidence. The gap analysis must explain the degree of adequacy and what remains uncertain.
Finally, identify the gaps. Be honest about what is missing or insufficient. Explain how you plan to address each gap through PMCF, additional studies, or ongoing literature surveillance. This is where the gap analysis connects to your PMCF plan.
The connection to PMCF
The PMCF plan should be a direct response to the gaps identified in your gap analysis. Every PMCF objective should trace back to a specific gap. If your PMCF plan includes activities that do not address identified gaps, reviewers will question why those activities are necessary.
Conversely, if you identify significant gaps but your PMCF plan does not address them, the reviewer will issue a deficiency. The gap analysis and PMCF plan must be logically consistent.
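The bidirectional check described above, that every identified gap maps to a PMCF objective and every objective traces back to a gap, can be sketched as a simple set comparison. The IDs and records below are hypothetical examples, assuming each gap and each PMCF objective carries a traceability identifier; nothing here is a prescribed MDR format.

```python
# Minimal sketch of the gap-to-PMCF consistency check described above.
# Gap IDs and objective records are hypothetical, assuming each PMCF
# objective records which gap it is meant to close.

gaps = {"GAP-01", "GAP-02", "GAP-03"}   # from the gap analysis chapter

pmcf_objectives = {                     # from the PMCF plan
    "OBJ-A": "GAP-01",
    "OBJ-B": "GAP-02",
    "OBJ-C": None,                      # objective with no identified gap
}

# Gaps with no PMCF objective: a likely deficiency.
addressed = {g for g in pmcf_objectives.values() if g is not None}
unaddressed_gaps = gaps - addressed

# Objectives that trace to no gap: reviewers will ask why they exist.
untraced_objectives = [o for o, g in pmcf_objectives.items() if g is None]

print("Gaps without a PMCF objective:", sorted(unaddressed_gaps))
print("PMCF objectives without a gap:", untraced_objectives)
```

Both directions matter: the first set flags deficiencies waiting to happen, and the second flags generic PMCF activities that were never anchored in the gap analysis.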
This is also where manufacturers often reveal that their PMCF plan is generic. If the PMCF objectives could apply to any device in the same class, it suggests the gap analysis was not specific enough.
What reviewers look for in a gap analysis
Reviewers assess the gap analysis for three things: completeness, logic, and honesty.
Completeness means you addressed all clinical questions relevant to your device. If your risk analysis identified a particular use condition or patient population, the gap analysis must address it. Skipping difficult questions is a common deficiency.
Logic means the reasoning is transparent and defensible. A reviewer should be able to follow your thought process from clinical question to data assessment to gap identification. If the logic has jumps or unsupported conclusions, the chapter fails.
Honesty means you acknowledge genuine gaps. Claiming that all clinical questions are fully answered when the evidence is thin signals to the reviewer that the evaluation is not critical. It is better to identify gaps and explain how you will address them than to pretend they do not exist.
A common red flag: a gap analysis that concludes no significant gaps exist despite limited clinical data or reliance on literature from different patient populations. This undermines the credibility of the entire CER.
Reviewers also look for consistency across the CER. If the gap analysis concludes that certain data is adequate, but later chapters raise uncertainties or limitations, the file is internally contradictory. This often happens when different consultants write different chapters without coordinating.
Practical approach to writing the gap analysis
Write the gap analysis before you finalize the rest of the CER. It should guide the appraisal of clinical data, not summarize it. If you write the gap analysis last, it becomes a retrospective justification rather than a strategic tool.
Use a structured table format that maps clinical questions to data sources, adequacy assessments, identified gaps, and planned actions. This makes the logic visible and auditable. Reviewers appreciate tables because they can quickly verify completeness and logic.
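As one illustration of such a mapping table, the sketch below models a single row as a small Python dataclass. The field names and example content are hypothetical, not a format prescribed by the MDR or any MDCG template; the point is that each row carries the full chain from clinical question to planned action.

```python
from dataclasses import dataclass

# Illustrative sketch of one row in a gap analysis mapping table.
# Field names and example content are hypothetical, not a prescribed
# MDR or MDCG format.

@dataclass
class GapRow:
    clinical_question: str      # traceable to the risk management file
    data_sources: list[str]     # own clinical data, equivalence, literature, PMS
    adequacy: str               # "adequate" / "partially adequate" / "inadequate"
    identified_gap: str         # what remains missing or uncertain
    planned_action: str         # PMCF activity, study, literature surveillance

row = GapRow(
    clinical_question="Does device migration occur in patients over 75 "
                      "within 24 months of implantation?",
    data_sources=["Literature search 2024", "PMS complaint data"],
    adequacy="partially adequate",
    identified_gap="No follow-up data beyond 12 months in this population",
    planned_action="PMCF registry study with 24-month follow-up",
)

print(row.adequacy)  # prints "partially adequate"
```

Note that adequacy is expressed as a graded value rather than a yes/no flag, which mirrors the point above that adequacy is not binary, and that the planned action field is what later connects each row to the PMCF plan.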
Be specific about what constitutes adequate evidence for each clinical question. For safety questions, this might require long-term follow-up data. For performance questions, it might require data from representative use conditions. Do not use vague terms like “sufficient evidence exists.” Explain what sufficient means in context.
If you claim equivalence, include a separate section that assesses the adequacy of data for the equivalent device. Do not assume the equivalent device has complete data. Demonstrate it.
Documentation and traceability
Every statement in the gap analysis should be traceable to a source. Clinical questions trace to the risk analysis. Data sources trace to literature searches, clinical study reports, or PMCF data. Adequacy assessments trace to appraisal chapters.
This traceability is essential during audits. If a reviewer questions your gap assessment, you must be able to show exactly how you reached that conclusion. If the reasoning is not documented, the assessment is not defensible.
Why this chapter determines CER acceptance
The gap analysis is the first place where a reviewer sees whether the manufacturer understands clinical evaluation as a regulatory discipline or treats it as a documentation exercise.
A strong gap analysis shows that you identified the right clinical questions, critically assessed the available evidence, and have a plan to address what is missing. It signals competence and regulatory maturity.
A weak gap analysis shows that you are going through the motions. It suggests that the rest of the CER is likely to have similar problems. Reviewers know from experience that if the gap analysis is superficial, the appraisal chapters will lack critical thinking, the PMCF plan will be generic, and the conclusions will be unjustified.
This is why deficiencies in the gap analysis often cascade into deficiencies across the entire file. Fixing the gap analysis retrospectively requires reworking multiple chapters because the logic is interconnected.
The gap analysis is not just another chapter in the CER. It is the strategic framework that determines whether the clinical evaluation is coherent, complete, and credible.
I have seen manufacturers invest significant resources in literature reviews and clinical studies, only to have the CER rejected because the gap analysis did not demonstrate why that evidence was relevant or sufficient. The data was there, but the reasoning was missing.
Moving forward
If your CER has been rejected or challenged on the gap analysis, the correction is not a minor revision. It requires rethinking the entire clinical evaluation strategy. You must go back to the clinical questions, reassess the data critically, and rebuild the logic.
This takes time, but it is the only path to a defensible CER. Shortcuts here do not save time. They create rework cycles that are far more expensive and delay market access.
For manufacturers preparing a new CER, invest the time to get the gap analysis right from the beginning. It is the foundation. Everything else depends on it.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. As a baseline, MEDDEV 2.7/1 rev. 4 expects updates at least annually for devices that carry significant risks or are not yet well established, and at least every two to five years otherwise, aligned with the post-market surveillance cycle.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– MDR 2017/745, Annex XIV Part A
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDCG 2020-6: Sufficient Clinical Evidence for Legacy Devices