MDCG 2020-13: When the CEAR Template Becomes Your Interrogation
You fill out the Clinical Evaluation Assessment Report template thinking you are documenting your work. The Notified Body reviewer reads it as a confession. Every box you complete gives them a structured path to challenge your clinical evaluation. This is not a documentation tool. It is a review protocol.
Most manufacturers treat MDCG 2020-13 like a checklist. They assign someone to fill in the sections. They paste summaries from the CER. They attach the final page to the technical file and move on.
Then the Notified Body comes back with ten pages of questions. Every section you thought was clear becomes a starting point for deeper interrogation. The manufacturer feels blindsided.
But this is not a surprise attack. This is the template working exactly as designed.
What MDCG 2020-13 Actually Is
MDCG 2020-13 provides the harmonized template for the Clinical Evaluation Assessment Report. This is the document the Notified Body creates to assess whether your clinical evaluation meets MDR requirements.
The guidance requires you, the manufacturer, to complete Part A of the template. You document the scope of your evaluation, your methodology, your conclusions. You explain your equivalence claims. You summarize your clinical data.
Then the Notified Body completes Part B. They assess whether your work is sufficient. They identify gaps. They document deficiencies. They decide if your clinical evidence supports your intended use and claims.
On paper, this looks like collaboration. In practice, it is an adversarial review.
The CEAR template is not neutral documentation. It is a structured interrogation tool. Every question you answer gives the reviewer a specific angle to challenge your clinical evaluation.
The Structure Works Against Incomplete Thinking
The template forces you to be explicit about things you might prefer to leave vague. Section 1.3 asks you to define the clinical benefits. Section 2.2 asks you to justify your literature search strategy. Section 3.1 requires you to declare equivalence and state the technical, biological, and clinical basis.
If your CER is generic, the CEAR exposes it. If your equivalence claim rests on weak reasoning, the template reveals it. If your PMCF plan is disconnected from your gaps, the structure makes that obvious.
I see manufacturers copy text from the CER into the CEAR without rethinking it. They treat it like an administrative requirement. Then the Notified Body responds with specific challenges to the exact statements they copied.
Why does this happen?
Because the CER allows you to control the narrative. You write it in paragraphs. You frame the data your way. You choose what to emphasize and what to bury in appendices.
The CEAR removes that control. It breaks your narrative into discrete claims. Each claim becomes a target for scrutiny.
Section 1: Scope and Context
Section 1 looks straightforward. Device description. Intended purpose. Claims. Most manufacturers repeat what is in the IFU.
But the reviewer uses this section to lock you into your claims. If you state that your device is for long-term use, they will ask for long-term clinical data. If you claim superiority over existing treatments, they will ask for comparative evidence.
Every word you write here becomes a reference point for Part B. The reviewer will return to these statements repeatedly to check if your evidence actually supports them.
Manufacturers list broad claims in Section 1 but provide narrow evidence in Section 4. The disconnect is obvious in the CEAR structure, even if it was hidden in the CER narrative.
Section 2: Clinical Evaluation Methodology
This section asks you to describe your approach. Literature search. Appraisal criteria. Selection process. Data synthesis.
Most manufacturers describe what they did. The reviewer evaluates whether what you did is sufficient.
If you searched only one database, they will ask why you excluded others. If you limited your search to the last five years, they will question whether older foundational studies were considered. If you excluded studies without full text access, they will challenge whether your evidence base is complete.
The template does not ask you to justify your choices. But by documenting your choices, you create the obligation to justify them.
I have seen manufacturers write that they conducted a literature search according to MDCG 2020-13. This tells the reviewer nothing. The guidance describes what a search should achieve, not how you achieved it. When you write this, the reviewer assumes you did not understand the requirement.
Section 3: Demonstration of Equivalence
This is where most deficiencies concentrate.
Section 3.1 requires a clear declaration: is equivalence claimed, yes or no? If yes, you must identify the equivalent device and state the basis for equivalence.
Many manufacturers write long explanations here. They describe similarities. They mention the same intended use. They reference shared standards.
But the reviewer is looking for three specific elements: technical equivalence, biological equivalence, and clinical equivalence. If you do not address all three explicitly, your explanation is incomplete.
Then Section 3.2 asks whether the equivalent device is legally on the market. This is not a technicality. If your equivalent device was withdrawn, or if its legal status is uncertain, your entire equivalence claim collapses.
I see manufacturers reference devices that are still under MDD certification. They assume that because the device is currently sold, it qualifies as equivalent. But MDR Article 61(5) requires that equivalence is demonstrated to a device that complies with MDR. If your equivalent device has not yet transitioned, your equivalence claim may not be valid.
Manufacturers declare equivalence in Section 3.1 but fail to confirm the MDR compliance status of the equivalent device in Section 3.2. The Notified Body will reject the equivalence claim immediately.
Section 4: Clinical Data Analysis
Section 4 requires you to summarize the clinical data. Device-specific data. Equivalent device data. General literature.
Most manufacturers list studies here. They include publication details. They summarize outcomes.
But the reviewer wants to see critical appraisal. Not just what the studies showed, but whether the studies are relevant, reliable, and sufficient.
If you include a study with 15 patients and no control group, you need to explain why this study contributes to your clinical evidence. If you exclude studies that report adverse events, you need to justify the exclusion.
The CEAR structure makes this visible. In the CER, you might bury weak studies in a table. In the CEAR, every study you include becomes a specific target for questioning.
The Weight of Data
Section 4.3 asks you to explain how you weighed the different data sources. Did you prioritize device-specific data over general literature? Did you assign higher weight to randomized controlled trials?
This is where many manufacturers write nothing. They list the data but do not explain the reasoning.
The reviewer interprets silence as absence of reasoning. If you did not explicitly weigh the data, the reviewer assumes you treated all data equally. This leads to questions about whether weaker studies were given undue influence.
The CEAR asks not just what data you used, but how you used it. If you cannot explain your reasoning in Section 4.3, your analysis will be challenged.
Section 5: Benefit-Risk Analysis
Section 5.1 asks whether the clinical benefits outweigh the risks. Most manufacturers write yes.
But the reviewer reads the rest of Section 5 to see if this conclusion is justified.
Section 5.2 requires you to demonstrate that residual risks are acceptable when weighed against benefits. If your risk analysis identifies a 2% risk of device migration, and your clinical data shows no functional benefit over existing options, the reviewer will question whether the risk is justified.
Many manufacturers write that residual risks are mitigated through warnings. This is not a benefit-risk conclusion. This is risk management. The reviewer wants to see that the clinical benefit is large enough to justify accepting the residual risk, even after mitigation.
Risk Acceptability and State of the Art
Section 5.3 asks whether the benefit-risk profile is favorable compared to the state of the art. This ties your clinical evaluation to your SOTA analysis.
If your SOTA analysis says that current devices achieve 95% success rates, and your clinical data shows 85%, you have a problem. The CEAR structure makes this comparison explicit.
I see manufacturers write that their device meets the state of the art without comparing outcomes. They reference standards compliance. They mention design improvements.
But clinical performance is not the same as technical compliance. The reviewer wants evidence that your device performs at least as well as what is currently available. If you cannot demonstrate this, your benefit-risk balance is questioned.
Manufacturers claim favorable benefit-risk in Section 5.1 but provide no comparative analysis in Section 5.3. The conclusion appears unsupported when read in sequence.
Section 6: PMCF and Clinical Evidence Gaps
Section 6 links your clinical evaluation to your PMCF plan. The template asks whether you identified gaps in clinical evidence and whether your PMCF plan addresses those gaps.
Most manufacturers list generic PMCF activities. Complaint monitoring. Literature review. Survey distribution.
The reviewer looks for specificity. If Section 4 identified that long-term data is missing, Section 6 must explain how the PMCF plan will generate that data. If Section 5 noted uncertainty about use in a specific population, the PMCF plan must include targeted data collection for that population.
When the link is missing, the reviewer concludes that the PMCF plan is not fit for purpose. The plan exists to check a regulatory box, not to address actual evidence gaps.
Endpoints and Timelines
Section 6.2 asks for the expected timeline for gap closure. Many manufacturers write that gaps will be addressed in the next CER update.
This is not a timeline. This is postponement.
The reviewer wants to see when data collection will be complete, when analysis will occur, and when conclusions will be drawn. If you cannot provide this, it signals that your PMCF is reactive, not planned.
Section 6 tests whether your PMCF plan is a real data generation strategy or just a regulatory placeholder. The CEAR structure makes the difference visible.
How Reviewers Use the Template
Notified Body reviewers do not read the CEAR in isolation. They read it alongside your CER, your risk management file, and your PMCF plan.
The CEAR is their roadmap. It tells them where to look for inconsistencies.
If Section 1 of the CEAR lists claims that are not supported in Section 4, the reviewer goes back to the CER to check if the evidence was omitted or if it never existed. If Section 3 declares equivalence but the risk analysis shows different failure modes, the reviewer questions the equivalence basis.
The template creates a logical flow. Each section depends on the previous ones. When one section does not align, the entire structure becomes suspect.
The Assessment Report (Part B)
After you complete Part A, the Notified Body completes Part B. This is where they document their findings.
Part B mirrors Part A. For every section you completed, they write whether your documentation is sufficient, whether additional data is needed, and whether your conclusions are justified.
If they write that additional data is required in Part B, you receive a nonconformity. The certification process stops until you resolve the gap.
The CEAR makes this process predictable. If you completed Part A without critically reviewing your own work, Part B will reflect that weakness.
What This Means for Your Work
The CEAR is not a summary document. It is a self-assessment tool that the Notified Body uses as a review protocol.
When you complete Part A, you are not just documenting your evaluation. You are creating the structure through which your evaluation will be challenged.
This means you must complete the CEAR as if you are the reviewer. For every claim you make, ask how it will be questioned. For every gap you acknowledge, explain how you will address it. For every conclusion you state, ensure the evidence in the preceding sections supports it.
If you cannot defend a statement in the CEAR, do not write it. If you cannot explain a methodological choice, reconsider whether that choice was appropriate.
The template is adversarial by design. It works best when you treat it as your own interrogation before the Notified Body does.
Completing the CEAR is not documentation. It is preparation for cross-examination. Write every section assuming the reviewer will challenge every sentence.
The Template Exposes What the CER Hides
The difference between the CER and the CEAR is narrative control.
In the CER, you write in flowing paragraphs. You control the emphasis. You integrate weak points into broader discussions. You guide the reader through your reasoning.
The CEAR removes that control. It isolates each element of your evaluation. It forces you to state your claims, your methods, your conclusions in discrete sections. It makes every weak point visible.
This is why manufacturers who write strong CERs still struggle with the CEAR. The CER looked convincing because the narrative held together. The CEAR reveals that the narrative was covering gaps.
If your clinical evaluation is solid, the CEAR confirms it. If your evaluation has weaknesses, the CEAR exposes them.
There is no way to hide behind structure when the structure itself is the interrogation.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-7 (PMCF Plan), MDCG 2020-8 (PMCF Evaluation Report), and MDCG 2020-13 (CEAR Template).
✌
Peace, Hatem
Your Clinical Evaluation Partner
– Regulation (EU) 2017/745 (MDR) Articles 61, 83, Annex XIV
– MDCG 2020-13 Clinical Evaluation Assessment Report Template
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under Regulation (EU) 2017/745.