Why your CER template is the first thing reviewers reject
Every week, I review clinical evaluation reports that fail not because of poor clinical data, but because the template structure itself prevents coherent argumentation. The company thinks it has filled in the blanks. The reviewer sees a document that cannot be followed.
The problem is not that you use a template. The problem is that most templates are designed around comfort, not logic.
They group information by source type rather than clinical reasoning. They separate safety from performance. They treat equivalence as an isolated chapter instead of an argument woven through the entire report.
When a Notified Body reviewer opens your CER, they are not looking for compliance with a checklist. They are looking for a clinical argument that can be followed from question to conclusion.
If your template does not support that flow, no amount of data will save the submission.
What MDR Actually Requires From Structure
Annex XIV of MDR 2017/745 does not prescribe a template. It describes the elements that must be present and the logic that must be demonstrated.
MDCG 2020-13 goes further. It defines a structure based on clinical questions, not document sections.
The clinical evaluation must answer:
- What are the intended clinical benefits?
- What are the residual risks after risk management?
- Is there sufficient clinical evidence to support safety and performance claims?
- Is post-market surveillance confirming what you predicted?
These are not four chapters. These are four reasoning threads that must run through the entire report.
A CER that passes review is structured as a progressive clinical argument, not as a collection of sections filled with data.
Most templates fail this test because they separate clinical data from safety analysis, and both from performance claims. The reviewer has to jump between chapters to reconstruct the argument themselves.
That is when the deficiency list arrives.
The Hidden Fractures in Standard Templates
I see the same structural deficiencies across companies, consultancies, and internal teams. They come from templates that prioritize ease of writing over clarity of reasoning.
Fracture One: The Equivalence Chapter That Stands Alone
Many templates include a dedicated section titled “Equivalence Demonstration” or “Substantial Equivalence Analysis.”
It appears early in the document. The team fills it with technical comparisons, material lists, and test reports. Then they move on to clinical data as if equivalence were already settled.
But equivalence is not a section. It is a claim that must be continuously validated through clinical evidence, risk analysis, and performance data.
When reviewers read an equivalence chapter in isolation, they see tables and assertions. They do not see the ongoing confirmation that equivalence holds when clinical use varies, when patient populations differ, or when adverse events appear.
The typical deficiency looks like this: equivalence claimed in Chapter 3, then contradicted by clinical data in Chapter 7, with no reconciliation or update to the equivalence argument.
The template created this fracture. It separated the equivalence claim from the evidence that should either support or challenge it.
Fracture Two: Safety and Performance Treated as Parallel Tracks
Another common structure presents safety analysis in one chapter and performance analysis in another.
This makes sense administratively. You assign one person to adverse events and another to clinical outcomes.
But clinically, safety and performance are inseparable. An adverse event impacts clinical benefit. A performance limitation creates a safety concern.
When your CER separates them structurally, the reviewer sees contradictions you did not intend.
You report low complication rates in the safety chapter, but in the performance chapter, you describe limited efficacy in a subgroup without explaining whether that limitation creates harm.
The template forced you to write as if these were independent analyses.
Fracture Three: PMCF as a Checklist Afterthought
Most templates place post-market clinical follow-up at the end of the CER, often as a summary of the PMCF plan.
This reflects the old mindset: clinical evaluation happens before market access, and PMCF happens after.
Under MDR, this is wrong. PMCF is not a compliance activity. It is the mechanism that confirms or refutes your pre-market clinical conclusions.
If your CER structure treats PMCF as a final section, the reviewer sees a gap in reasoning. How does post-market data feed back into your safety and performance conclusions? How will you update the risk-benefit assessment when real-world evidence arrives?
The template gave you no place to connect these elements.
The Structure That Actually Works
After years of reviewing CERs and preparing submissions that pass, I have learned that structure must follow clinical logic, not document convenience.
Here is what passes review:
Section 1: Clinical Context and Scope
Start with the clinical need, the intended patient population, and the specific clinical claims you are making.
This is not the “device description” section. This is where you define what success looks like clinically.
What benefit does the patient receive? What risks remain after mitigation? What alternatives exist?
Every sentence in this section should prepare the reader for the clinical argument that follows.
Section 2: Clinical Development Plan and Evidence Sources
Describe how you identified, selected, and weighted the clinical evidence.
If you are using equivalence, explain the equivalence strategy here, not as a conclusion, but as a method.
Equivalence is how you generate clinical evidence, not a fact you assert. The reader needs to understand your reasoning before they see the data.
Equivalence should be introduced as a methodology in the evidence plan, then validated continuously through the clinical data sections.
This section also defines inclusion and exclusion criteria for literature, explains weighting decisions, and describes how you handled conflicting data.
Transparency here prevents deficiencies later.
Section 3: Integrated Clinical Evidence Analysis
This is where most templates fail, and where your CER must succeed.
Do not separate safety from performance. Do not divide evidence by source type.
Instead, organize evidence by clinical question:
- Does the device achieve the intended clinical benefit in the target population?
- What adverse events occur, at what frequency, and with what severity?
- How does the benefit-risk profile compare to alternatives?
- Are there subgroups where the profile changes?
For each question, integrate all evidence sources: clinical studies, literature, equivalence data, complaint trends, real-world use.
The reviewer should see a single clinical argument, not a collection of data summaries.
Section 4: Risk-Benefit Analysis and Clinical Conclusions
Synthesize the evidence into a clear risk-benefit determination.
This is not a restatement of what you already presented. This is where you make the clinical judgment.
Given the benefits observed, the risks identified, and the alternatives available, is the device acceptable for the intended use?
Be explicit. Be quantitative where possible. Show the reasoning.
Section 5: PMCF and Evidence Continuity
Describe how post-market data will confirm or challenge your conclusions.
What hypotheses are you testing? What evidence gaps remain? What would trigger a reassessment?
Connect this directly to the clinical questions in Section 3. The reviewer should see PMCF as the continuation of the same clinical reasoning, not as a separate activity.
The typical deficiency here: a PMCF plan listed without connection to the specific evidence gaps or clinical uncertainties identified in the CER.
Why This Structure Prevents Deficiencies
When your CER follows clinical logic instead of administrative convenience, several problems disappear.
First, equivalence contradictions vanish. You introduce equivalence as a method, then validate it with every piece of clinical evidence. If the evidence challenges equivalence, you address it immediately in the same section.
Second, safety-performance gaps close. You analyze both together for each clinical question, so the reviewer sees the full picture at once.
Third, PMCF connects naturally. You are not adding a plan at the end. You are showing how post-market evidence completes the clinical evaluation cycle.
Fourth, the reviewer can follow your reasoning. They do not have to reconstruct your argument by jumping between chapters. The logic flows from question to evidence to conclusion.
This does not mean your CER will have no deficiencies. But the deficiencies will be about evidence quality or clinical interpretation, not about structural confusion.
The Template You Should Build
If you are creating or revising a CER template, resist the temptation to organize by convenience.
Ask instead: what clinical questions must this report answer?
Structure the template around those questions. Let the evidence serve the argument, not the other way around.
Train your team to think in clinical logic, not section filling.
When someone asks, “Where do I put this data?” the answer should be, “What clinical question does it address?”
That is the only question that matters.
A good template guides clinical reasoning. A bad template fragments it. Choose structure that serves logic, not workflow.
Your CER will never be perfect. But if the structure supports coherent argumentation, the reviewer can engage with your clinical conclusions instead of getting lost in document archaeology.
That is what passes review.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. For class III and implantable devices, updates are expected at least annually in line with the PSUR cycle; for lower-risk classes, the manufacturer defines and justifies the update interval as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence for Legacy Devices), MDCG 2020-13 (Clinical Evaluation Assessment Report Template), MDCG 2020-7 (PMCF Plan Template), and MDCG 2020-8 (PMCF Evaluation Report Template).
References
– Regulation (EU) 2017/745 (MDR), Annex XIV
– MDCG 2020-13: Clinical Evaluation Assessment Report Template