Why Your CEAR Gets Rejected Before Chapter 3
Most Clinical Evaluation Assessment Reports fail at the Notified Body not because of poor data analysis, but because the template structure itself was misunderstood from the start. The MDCG 2020-13 template is not a form to fill. It is a framework for demonstrating regulatory reasoning.
In This Article
- What MDCG 2020-13 Actually Demands
- Chapter 1: Device Description and Intended Purpose
- Chapter 2: Clinical Background and State of the Art
- Chapter 3: Scope of the Clinical Evaluation
- Chapter 4: Methods of Literature Search
- Chapter 5: Appraisal of Clinical Data from Literature
- Chapter 6: Demonstration of Equivalence
- Chapter 7: Appraisal of Clinical Data from Other Sources
- Chapter 8: Analysis and Conclusions
- Chapter 9: PMCF Plan
- Why the Template Structure Matters More Than You Think
- What Happens Next
I see CEAR submissions every month that contain solid clinical data, proper literature searches, and genuine effort. Yet they come back with major deficiencies before the reviewer even reaches the equivalence discussion or PMCF section.
The reason is always the same. The manufacturer treated MDCG 2020-13 as a checklist instead of understanding what the template is designed to reveal.
What MDCG 2020-13 Actually Demands
The MDCG 2020-13 template was published to harmonize how manufacturers present clinical evaluation outcomes to Notified Bodies. It follows the logic of MDR Annex XIV Part A and translates it into a structured report format.
But here is what most teams miss. The template is not neutral. It forces you to state claims early, defend them continuously, and prove them systematically. Every chapter builds on the previous one. If your intended purpose in Chapter 1 does not align with your claims of equivalence in Chapter 6, the entire report collapses under its own contradictions.
This is not a theoretical risk. I have reviewed CERs where the device description in Chapter 1 included features that were later ignored in the equivalence assessment. The Notified Body flagged it immediately. Not because the data was wrong, but because the logic was inconsistent.
The MDCG 2020-13 template is designed to expose gaps in reasoning. If your narrative is not coherent from Chapter 1 to Chapter 9, the structure itself will reveal that.
Chapter 1: Device Description and Intended Purpose
This is where most problems begin. Manufacturers write device descriptions that are either too vague or too detailed in the wrong areas.
The Notified Body does not need a marketing brochure. They need a precise technical and clinical description that defines what the device does, how it does it, and what patient population it serves. This description must match the intended purpose exactly.
I have seen reports where the intended purpose states the device is for short-term use, but the device description includes features designed for long-term implantation. The inconsistency is immediate. The reviewer does not continue.
Here is what works. State the intended purpose first. Then describe the device in terms that support that purpose. Include technical characteristics only if they are clinically relevant. If a feature does not affect clinical outcomes or safety, it does not belong in this chapter.
A common deficiency: device descriptions that include technical specifications without explaining their clinical relevance. Reviewers cannot assess clinical performance if they do not understand why a specification matters clinically.
Chapter 2: Clinical Background and State of the Art
This chapter is not a literature review. It is a demonstration that you understand the clinical problem your device addresses and the current standard of care.
The SOTA section must establish what is already known, what treatment options exist, and where your device fits within that landscape. If your device is equivalent to an existing device, the SOTA must explain the clinical role of that type of device. If your device is novel, the SOTA must define the unmet need.
Most deficiencies here come from generic summaries copied from textbooks or clinical guidelines without connecting them to the specific device under evaluation. The Notified Body needs to see that you understand not just the disease, but the clinical context in which your device will be used.
What does that mean practically? If your device is a surgical mesh, the SOTA must cover hernia repair techniques, mesh classifications, material considerations, and complication rates. Then it must explain where your specific mesh type sits within that context. Without that connection, the SOTA is just background noise.
Chapter 3: Scope of the Clinical Evaluation
This is the chapter where most reports either succeed or fail structurally. Chapter 3 defines what clinical questions you will answer and how you will answer them.
The scope is not a summary. It is a declaration of your evaluation strategy. Will you rely on equivalence? Will you generate clinical data? Will you combine both? Each choice has implications for the chapters that follow.
If you declare equivalence as your primary route, Chapter 6 becomes the critical section. If you generate your own data, Chapter 7 carries the weight. The scope must align with the evidence you actually have.
I have reviewed CERs where the scope promised a full equivalence demonstration, but the manufacturer had only limited technical similarity and no clinical data from the equivalent device. The gap was obvious. The scope overpromised what the evidence could deliver.
Your scope must match your evidence. If your data cannot support the strategy you define in Chapter 3, the rest of the report will fail no matter how well it is written.
Chapter 4: Methods of Literature Search
This chapter is procedural, but that does not make it simple. The Notified Body expects a systematic, reproducible search strategy that follows recognized standards.
MDCG 2020-13 does not mandate a specific search protocol, but it implicitly requires a level of rigor consistent with systematic reviews. That means defined databases, explicit search terms, inclusion and exclusion criteria, and a transparent selection process.
Most deficiencies here are about traceability. The reviewer must be able to replicate your search and reach the same set of articles. If your search strategy is described in vague terms or your selection criteria are subjective, the entire literature appraisal becomes questionable.
What works is specificity. State the exact databases used. List the search strings verbatim. Define inclusion and exclusion criteria before screening. Document the number of articles at each stage. Provide a PRISMA-style flow diagram if possible.
When the Notified Body sees a clear method, they trust the results. When the method is unclear, they question everything that follows.
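The documentation discipline described above can be sketched as a simple search log. This is an illustrative example only; the database name, search string, and article counts are hypothetical, not from a real search.

```python
from dataclasses import dataclass

# Hypothetical sketch of a reproducible literature-search log.
# All values below are illustrative assumptions, not real search results.

@dataclass
class SearchLog:
    database: str
    search_string: str            # recorded verbatim so the search can be rerun
    retrieved: int                # hits returned by the database
    excluded_title_abstract: int  # removed at title/abstract screening
    excluded_full_text: int       # removed at full-text screening

    def included(self) -> int:
        """Articles remaining after both screening stages."""
        return self.retrieved - self.excluded_title_abstract - self.excluded_full_text

log = SearchLog(
    database="PubMed",
    search_string='("surgical mesh") AND ("hernia repair") AND ("complications")',
    retrieved=412,
    excluded_title_abstract=350,
    excluded_full_text=38,
)

# Every article must be accounted for at each stage -- this arithmetic
# is the traceability a reviewer needs to replicate your selection.
print(log.included())  # 24 articles carried into the appraisal
```

Recording the counts at each stage is exactly what makes a PRISMA-style flow diagram possible later.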
Chapter 5: Appraisal of Clinical Data from Literature
This is where you demonstrate clinical reasoning. The appraisal is not a summary of articles. It is a critical evaluation of the relevance, quality, and weight of each source.
MDCG 2020-13 expects you to assess each study for its contribution to the clinical evaluation. Is the patient population comparable? Are the outcomes measured clinically relevant? Is the study design appropriate for the conclusions drawn?
Most manufacturers fail here by listing studies without evaluating them. They present abstracts or conclusions without assessing whether the data is applicable to their device.
I have seen appraisals where the manufacturer cited a study on a different device class, with a different mechanism of action, in a different patient population, and treated it as equivalent evidence. The Notified Body rejected it immediately. Not because the study was flawed, but because the relevance was not justified.
A common deficiency: appraisals that describe studies without assessing their relevance or limitations. Simply listing studies is not an appraisal. The reviewer needs to see your clinical judgment.
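The appraisal questions above can be made explicit per study. The sketch below uses the three criteria named in the text (population comparability, outcome relevance, design suitability); the scoring scheme and the example study are hypothetical assumptions, not part of MDCG 2020-13.

```python
# Illustrative per-study appraisal record. The classification rules are
# an assumed example of documenting clinical judgment, not a mandated scheme.

APPRAISAL_CRITERIA = ("comparable_population", "relevant_outcomes", "suitable_design")

def appraise(study: dict) -> str:
    """Classify a study's contribution based on yes/no appraisal criteria."""
    met = sum(study[c] for c in APPRAISAL_CRITERIA)
    if met == len(APPRAISAL_CRITERIA):
        return "pivotal"        # directly supports safety and performance claims
    if met >= 2:
        return "supportive"     # contributes, but limitations must be documented
    return "excluded"           # relevance cannot be justified

study = {
    "citation": "Example et al. 2021 (hypothetical)",
    "comparable_population": True,
    "relevant_outcomes": True,
    "suitable_design": False,   # e.g. retrospective, uncontrolled
}
print(appraise(study))  # supportive -- usable, with stated limitations
```

The point is not the scoring itself but that every study carries an explicit, documented judgment rather than a bare citation.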
Chapter 6: Demonstration of Equivalence
If your clinical evaluation relies on equivalence, this chapter is the foundation of your entire submission. MDR Annex XIV Part A, Article 61, and MDCG 2020-5 define what equivalence means and how to demonstrate it.
Equivalence requires three elements: technical similarity, biological similarity, and clinical similarity. All three must be demonstrated with objective evidence. If any one fails, equivalence fails.
The problem is that most manufacturers present technical comparisons without addressing biological or clinical aspects. They show that materials are similar or that dimensions are comparable, but they do not explain why those similarities translate into equivalent clinical performance.
Here is what the Notified Body expects. For each claimed similarity, you must provide evidence. If you claim similar materials, provide material specifications and biocompatibility data. If you claim similar clinical outcomes, provide comparative clinical data or a robust justification for why existing data from the equivalent device applies to yours.
I have reviewed equivalence claims where the manufacturer stated that devices were “substantially similar” without defining what substantial meant or providing measurable criteria. The Notified Body issued a major non-conformity. Not because equivalence was impossible, but because it was not demonstrated.
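The all-or-nothing logic of the three equivalence dimensions can be sketched as follows. The field names and example claims are illustrative assumptions; the only point taken from the guidance is that each dimension needs objective evidence and a single failure breaks the claim.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: equivalence holds only if technical, biological,
# and clinical similarity are each backed by evidence. Example values
# below are assumptions for illustration, not a real device comparison.

@dataclass
class EquivalenceDimension:
    name: str
    claimed_similarity: str
    supporting_evidence: Optional[str]  # None = claim asserted without evidence

    def demonstrated(self) -> bool:
        return self.supporting_evidence is not None

dimensions = [
    EquivalenceDimension("technical", "same mesh geometry and pore size",
                         "dimensional comparison report (hypothetical TR-001)"),
    EquivalenceDimension("biological", "same polypropylene material",
                         "biocompatibility data per ISO 10993"),
    EquivalenceDimension("clinical", "same intended hernia repair use",
                         None),  # asserted but not evidenced
]

# One undemonstrated dimension sinks the whole equivalence claim.
equivalence_holds = all(d.demonstrated() for d in dimensions)
print(equivalence_holds)  # False
```

"Substantially similar" with no measurable criteria is exactly the `None` case above: a claim without evidence.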
Chapter 7: Appraisal of Clinical Data from Other Sources
This chapter covers clinical investigations, registries, post-market data, and any evidence not captured in published literature. For many devices, this is the most important evidence available.
The appraisal must follow the same rigor as Chapter 5. Each source must be evaluated for relevance, quality, and contribution to the clinical evaluation. If you include data from your own clinical investigation, you must present it transparently and address its limitations.
Most deficiencies here come from incomplete reporting. The manufacturer references a clinical study but does not provide the full study report or does not explain how the results support the safety and performance claims.
If you cite post-market surveillance data, the Notified Body expects details. How many devices were implanted? What was the follow-up period? What adverse events were reported? How were they analyzed? Vague references to “favorable clinical experience” are not acceptable under MDR.
Chapter 8: Analysis and Conclusions
This is where you synthesize everything. Chapter 8 must integrate all previous chapters into a coherent clinical evaluation outcome.
The analysis is not a repetition of findings. It is a demonstration that the totality of evidence supports the safety and performance claims. You must address each intended purpose claim, each identified risk, and each clinical benefit.
If gaps exist in the data, Chapter 8 must acknowledge them and explain how they will be addressed through PMCF. If uncertainties remain, they must be stated clearly.
The Notified Body expects honesty here. A report that concludes “all risks are acceptable” without justification is less credible than one that acknowledges residual uncertainties and proposes a monitoring strategy.
Chapter 8 is your clinical conclusion. It must reflect the actual evidence, not the desired outcome. If the data is strong, the conclusion will be strong. If the data has gaps, the conclusion must address them transparently.
Chapter 9: PMCF Plan
The PMCF plan is not an afterthought. It is a structured commitment to continue monitoring the device in real-world use and updating the clinical evaluation as new data emerges.
MDCG 2020-7 provides the PMCF plan template and detailed guidance on its required content. The plan must define specific objectives, methods, and timelines. It must address residual risks and data gaps identified in the clinical evaluation.
Most manufacturers fail here by writing generic PMCF plans that could apply to any device. The Notified Body expects a plan tailored to the specific device, its intended use, and its risk profile.
If your clinical evaluation identified uncertainty about long-term performance, your PMCF plan must include methods to collect long-term data. If specific adverse events were flagged as potential risks, your PMCF must include surveillance for those events.
The plan must also define criteria for updating the CER. How often will you review PMCF data? What findings would trigger a CER update? What thresholds would require corrective action?
Without these details, the PMCF plan is just a procedural placeholder.
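The update criteria described above can be encoded as pre-defined triggers. The thresholds, parameter names, and trigger rules below are hypothetical examples of what "defining criteria" looks like, not values from any MDCG guidance.

```python
# Hedged sketch of pre-defined CER update triggers. All thresholds and
# rules are illustrative assumptions -- each manufacturer must define
# their own, justified by the device's risk profile.

def cer_update_triggered(
    new_serious_incidents: int,
    complaint_rate: float,                 # complaints per 1000 devices in period
    complaint_rate_threshold: float = 5.0, # hypothetical threshold
    months_since_last_update: int = 0,
    max_interval_months: int = 12,         # hypothetical scheduled-review interval
) -> bool:
    """Return True if any pre-defined trigger for a CER update has fired."""
    if new_serious_incidents > 0:          # any serious incident triggers review
        return True
    if complaint_rate > complaint_rate_threshold:
        return True
    if months_since_last_update >= max_interval_months:  # scheduled review due
        return True
    return False

# Quiet surveillance period, well inside the review interval: no update due.
print(cer_update_triggered(new_serious_incidents=0, complaint_rate=1.2,
                           months_since_last_update=6))               # False
# A single serious incident fires the trigger regardless of other metrics.
print(cer_update_triggered(new_serious_incidents=1, complaint_rate=0.0))  # True
```

Writing the triggers down this explicitly is what separates a real PMCF commitment from a procedural placeholder.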
Why the Template Structure Matters More Than You Think
The MDCG 2020-13 template is not arbitrary. Each chapter builds on the previous one. Each section forces you to defend a claim, justify a choice, or address a gap.
When you complete the template correctly, the Notified Body can follow your reasoning from device description to clinical conclusion. When you treat it as a form to fill, the logic breaks down and the deficiencies appear.
This is not about writing skill. It is about clinical evaluation discipline. The template works when you understand what it is designed to reveal.
Most manufacturers complete the CEAR after the clinical evaluation is done. That is backwards. The CEAR structure should guide the clinical evaluation from the beginning. If you know what questions Chapter 8 will ask, you know what evidence to gather in Chapters 5 and 7.
The manufacturers who succeed with MDCG 2020-13 are the ones who use it as a planning tool, not a reporting tool.
What Happens Next
The CEAR is reviewed by the Notified Body as part of the technical documentation. If the structure is sound and the evidence is coherent, the review proceeds. If gaps exist, deficiencies are issued.
Most deficiencies are structural. They arise because the template was not understood or because the chapters were completed in isolation without ensuring consistency.
The manufacturers who avoid those deficiencies are the ones who treat the CEAR as a single narrative, not nine separate sections.
That narrative is your clinical evaluation story. Make sure it holds together from the first sentence to the last.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or intended purpose changes. As a baseline, expect updates at least annually for class III and implantable devices, and every two to five years for lower-risk devices, aligned with your post-market surveillance cycle.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– Regulation (EU) 2017/745 (MDR), Annex XIV Part A
– MDCG 2020-13 Clinical Evaluation Assessment Report Template
– MDCG 2020-5 Clinical Evaluation – Equivalence
– MDCG 2020-7 Post-Market Clinical Follow-up Plan Template
– MDCG 2020-8 Post-Market Clinical Follow-up Evaluation Report Template





