Your CER passes internal review but fails audit. Why?

Written by HATEM RABEH, MD, MSc Ing


I’ve seen perfectly complete CERs—every section filled, every template box checked—get torn apart in the first review cycle. The content was there. The data was solid. But the reviewer couldn’t follow the logic. The structure didn’t support the conclusion. The traceability was broken. A CER isn’t just a collection of sections. It’s a legal and scientific argument that must hold up under scrutiny.

The Notified Body reviewer opens your CER with one question in mind: Can I trace the clinical evidence to the safety and performance claims? If they can’t follow your reasoning within the first 30 minutes, you’re already in trouble.

Most manufacturers focus on content completeness. They fill every section. They include all the studies. They write the conclusions. But they miss the fundamental requirement: the document must guide the reviewer from device description to clinical conclusion without forcing them to jump back and forth or reconstruct your logic.

This isn’t about making the document “easy to read.” It’s about building a structure that supports regulatory scrutiny.

Why Structure Matters More Than You Think

When a Notified Body reviewer opens your CER, they’re not reading it like a novel. They’re auditing it. They have a checklist derived from MDR Annex XIV and MDCG 2020-13. They’re looking for specific elements in specific places. They’re checking dependencies between sections.

If your device scope in Section 1 doesn’t match the scope of your literature search in Section 4, they stop. If your equivalence claim in Section 3 isn’t traceable to specific clinical data in Section 5, they stop. If your risk-benefit conclusion in Section 7 references safety data that isn’t clearly presented in Section 6, they stop.

Each stop is a deficiency. Each deficiency is a delay. Some delays cost months.

Common Deficiency
“The clinical data presented in Section 5 does not correspond to the device specifications and intended use described in Section 1. Clarify which clinical evidence applies to the specific device under evaluation.”

This deficiency doesn’t mean your clinical data is wrong. It means the reviewer couldn’t connect it to your device. The structure failed.

How Reviewers Navigate Your CER

Reviewers don’t read linearly. They jump. They cross-reference. They verify consistency across sections. Understanding their navigation pattern helps you structure the document correctly.

First Pass: The Scope and Claims Check

The reviewer starts with your device description and intended use. They note every claim you make about safety, performance, and clinical benefit. They identify the patient population, the clinical condition, and the intended user.

Then they jump to your conclusions. They want to see if your final clinical evaluation supports every claim you made at the beginning. If you claim “reduced infection risk,” they expect to see infection data in your analysis and a clear statement about infection outcomes in your conclusion.

Most gaps are found here. The device description includes claims that are never substantiated. Or the conclusions address outcomes that were never part of the intended use.

Second Pass: The Evidence Trail

Now they go to your clinical data sections. They check whether your literature search strategy actually targeted the claims you made. They verify that your appraisal criteria align with the device characteristics. They trace your equivalence demonstration—if applicable—to specific data points.

This is where section dependencies become critical. If you claim equivalence in Section 3, your literature search in Section 4 must be scoped accordingly. Your appraisal in Section 5 must address whether the equivalent device data is transferable. Your analysis in Section 6 must quantify the clinical outcomes for the equivalent device.

If any link in this chain is weak, the entire equivalence argument collapses.

Key Insight
Every claim in Section 1 must have a traceable path through your evidence sections to your conclusion in Section 7. If the reviewer has to search for this path, your structure is inadequate.
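One way to make this path concrete is to maintain a claim-to-evidence matrix alongside the CER. The sketch below is a minimal, hypothetical illustration — the claim texts, section numbers, and mapping structure are assumptions for the example, not taken from any real CER:

```python
# Minimal traceability-matrix sketch: each intended-use claim from Section 1
# is mapped to the analysis (Section 6) that quantifies it and the conclusion
# statement (Section 7) that covers it. All entries are illustrative.

claims = {
    "reduced infection risk": {
        "evidence": ["Section 6.2.3"],   # analysis quantifying the outcome
        "conclusion": "Section 7.1",     # conclusion statement covering it
    },
    "safe for long-term use": {
        "evidence": [],                  # no long-term data was analysed
        "conclusion": "Section 7.2",
    },
}

def untraced(claims):
    """Return claims whose path from Section 1 to Section 7 is broken."""
    return [c for c, path in claims.items()
            if not path["evidence"] or not path["conclusion"]]

print(untraced(claims))  # → ['safe for long-term use']
```

Any claim this check surfaces is a claim the reviewer will have to hunt for — and flag.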

Third Pass: The Risk-Benefit Verification

The reviewer checks your risk analysis. They compare the risks you identified in your risk management file against the risks discussed in the CER. They verify that every residual risk is addressed with clinical evidence or a justification.

Then they look at your benefit claims. They check whether your clinical data actually demonstrates those benefits in the target population. They assess whether your risk-benefit balance is reasonable given the clinical data you presented.

This is where many CERs fail. The risk section is copied from the risk management file without integration. The benefit claims are stated but not quantified. The balance is asserted but not demonstrated with data.

Section Dependencies You Cannot Ignore

A CER is not a modular document. Each section depends on the sections before it. Break one dependency and the entire argument weakens.

Section 1 → All Sections

Your device description and intended use define the scope of everything that follows. If you describe a surgical device for laparoscopic procedures, your literature search must target laparoscopic use. Your clinical data must come from laparoscopic settings. Your risk-benefit analysis must address laparoscopic risks.

If you later include data from open surgery, the reviewer will question why. You’ll need to justify the relevance. This creates unnecessary back-and-forth.

Section 3 → Sections 4, 5, 6

Your equivalence claim—if you make one—dictates your evidence strategy. If you claim full equivalence, your literature search should focus on the equivalent device. Your appraisal should validate the equivalence. Your analysis should demonstrate that the equivalent device data supports your device’s safety and performance.

If your equivalence is partial, you must clearly state what is equivalent and what is not. Then your literature search must address the non-equivalent aspects with additional data. Your analysis must integrate both sets of evidence.

I’ve reviewed CERs where equivalence is claimed in Section 3, but the literature search in Section 4 ignores the equivalent device entirely and searches for general device category data. The reviewer asks: Why are you claiming equivalence if you’re not using equivalent device data?

There’s no good answer to that question.

Sections 4, 5 → Section 6

Your literature search defines the evidence pool. Your appraisal determines which studies are valid. Your analysis must be limited to the appraised studies.

If you appraise 15 studies and only 8 pass your quality criteria, you cannot include data from the other 7 in your analysis. If you do, the reviewer will flag it. They’ll question your appraisal process or your analysis integrity.

Common Deficiency
“The analysis includes data from studies marked as ‘not adequate’ in the appraisal table. Justify the inclusion or remove the data.”
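This dependency between the appraisal table and the analysis can be enforced mechanically before submission. A minimal sketch, assuming the appraisal table maps study IDs to verdicts and the analysis cites studies by ID — all identifiers here are hypothetical:

```python
# Sketch: restrict the analysis to studies that passed appraisal, and flag
# any analysis citation of a study marked "not adequate". IDs illustrative.

appraisal = {
    "S01": "adequate", "S02": "adequate", "S03": "not adequate",
    "S04": "adequate", "S05": "not adequate",
}
analysis_citations = ["S01", "S03", "S04"]  # studies cited in Section 6

passed = {sid for sid, verdict in appraisal.items() if verdict == "adequate"}
violations = [sid for sid in analysis_citations if sid not in passed]

print(violations)  # → ['S03']
```

A non-empty result reproduces exactly the deficiency quoted above: data in the analysis from a study the appraisal rejected.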

Section 6 → Section 7

Your data analysis must directly support your conclusion. Every claim in Section 7 must be traceable to a data point in Section 6. Every risk addressed in Section 7 must have been quantified in Section 6.

This sounds obvious, but it’s frequently violated. Conclusions include statements like “the device is safe for long-term use” when the analysis only presented short-term data. Or “the device reduces complication rates” when no comparative analysis was performed.

The reviewer sees this immediately.

Building Traceability Into the Structure

Traceability means the reviewer can verify your reasoning without hunting through the document. You build this into the structure intentionally.

Use Consistent Identifiers

When you reference a clinical study, give it an identifier. Use that same identifier in your appraisal table, your analysis section, and your reference list. Don’t call it “Smith 2020” in one place and “Smith et al.” in another. The reviewer shouldn’t have to guess if these are the same study.
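This kind of naming drift is easy to catch automatically. A minimal sketch — the section contents and study names are hypothetical — that compares the identifier sets used in each part of the document against the reference list:

```python
# Sketch: verify that every section uses the same set of study identifiers.
# "Smith 2020" vs "Smith et al." surfaces here as a mismatch. All names
# are illustrative.

sections = {
    "appraisal": {"Smith 2020", "Lee 2019", "Chen 2021"},
    "analysis": {"Smith et al.", "Lee 2019", "Chen 2021"},
    "references": {"Smith 2020", "Lee 2019", "Chen 2021"},
}

baseline = sections["references"]
mismatches = {name: ids ^ baseline  # symmetric difference vs. reference list
              for name, ids in sections.items() if ids != baseline}

print(mismatches)  # non-empty means identifiers drift between sections
```

If the reviewer has to guess whether two labels point at the same study, this check failed before they ever opened the document.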

Cross-Reference Explicitly

When you make a claim in Section 7, reference the specific analysis in Section 6 that supports it. Don’t make the reviewer search. Write: “As shown in Section 6.2.3, infection rates were 2.1% in the device group versus 5.4% in the control group.”

This isn’t hand-holding. This is demonstrating traceability.

Summarize Before You Conclude

Before your conclusion, include a summary of key findings. List the safety outcomes, the performance outcomes, and the benefit outcomes. Show the data. Then write your conclusion based on that summary.

This gives the reviewer a checkpoint. They can verify that your conclusion aligns with your data before they finish reading.

Key Insight
A well-structured CER allows the reviewer to audit your reasoning, not reconstruct it. Every claim is traceable. Every dependency is clear. Every conclusion is supported.

Logical Flow vs. Template Compliance

MDCG 2020-13 provides a structure. Many manufacturers treat it as a rigid template. Fill the boxes, check the sections, submit the CER. But the guidance is a framework, not a formula.

A good CER follows the framework but adapts it to the device and the evidence. If you’re claiming equivalence, your document flow should emphasize the equivalence demonstration. If you’re relying on clinical investigation data, your flow should integrate that data clearly.

The reviewer isn’t checking if you followed the template. They’re checking if your argument is coherent, logical, and defensible.

I’ve seen CERs that rigidly follow the MDCG structure but fail to build a logical narrative. Section 5 appraises studies, but Section 6 doesn’t clearly explain how those studies support the device claims. The sections exist. The logic doesn’t.

The opposite problem also occurs. Some manufacturers create custom structures that ignore the MDCG framework. The reviewer has to work harder to find the required elements. This irritates them and increases scrutiny.

The right approach: follow the MDCG structure but ensure every section flows into the next. Each section should prepare the reader for what’s coming. Each transition should feel natural.

What Happens When Structure Fails

A poorly structured CER doesn’t just create deficiencies. It undermines confidence. The reviewer starts questioning your clinical evaluation process. If you can’t structure the document logically, can you structure your evidence assessment logically?

I’ve been in audit meetings where the Notified Body reviewer says: “We can’t follow your reasoning. We need a complete revision.” Not more data. Not additional studies. A complete structural revision.

This isn’t a content problem. This is a communication problem. And it costs time, money, and certification delays.

The manufacturers who avoid this understand one thing: the CER is a legal document that must withstand regulatory scrutiny. It’s not an internal report. It’s not a technical summary. It’s a structured argument that demonstrates compliance with MDR Annex XIV.

Structure it accordingly.

Key Insight
Your CER structure is part of your regulatory strategy. It determines how quickly the reviewer can verify your compliance and how easily they can identify gaps. Treat it as seriously as the data itself.

Final Consideration

The next time you prepare a CER, open it as if you’re the Notified Body reviewer. Can you trace every claim to supporting data? Can you verify every conclusion without searching? Can you understand the device scope and see how the evidence applies to it?

If you have to hunt for answers, so will the reviewer. And they won’t be as patient as you are.

Structure your CER to guide the reviewer from device description to clinical conclusion. Build in traceability. Respect section dependencies. Ensure logical flow.

The content is important. But if the structure fails, the content won’t save you.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Annex XIV
– MDCG 2020-13: Clinical Evaluation Assessment Report Template

Deepen Your Knowledge

Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.