What notified body clinical experts see that you missed

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

You submitted a clinical evaluation report that passed internal review. Legal signed off. Quality approved it. Two months later, the notified body clinical expert sends back seventeen deficiencies. Most of them reference gaps you never anticipated. This happens more often than manufacturers admit.

The disconnect is not about competence. It is about perspective.

Internal reviewers work within the framework you built. They check boxes against templates. They verify references exist. They confirm sections are present.

Notified body clinical experts work differently. They reconstruct your clinical reasoning from scratch. They question every logical step. They test whether your conclusions hold under regulatory scrutiny.

Understanding this difference changes how you prepare submissions.

The First Thing They Check Is Not What You Think

Most manufacturers expect clinical experts to start with the literature review. They assume the depth of the search matters most.

That comes later.

The first thing a clinical expert evaluates is whether you defined the device correctly. Not the intended purpose from your IFU. The actual clinical claim embedded in your documentation.

They extract this from multiple sources: the scope of the CER, the GSPR checklist, the risk management file, the clinical investigation protocol if present. Then they compare these definitions.

If inconsistencies appear, everything that follows is questioned. Because if you cannot define what the device does clinically, you cannot evaluate it.

Common Deficiency
The intended purpose states the device “monitors cardiac rhythm” but the clinical evaluation focuses on arrhythmia detection and classification. The expert sees two different clinical functions. Which one is being evaluated? Which one needs clinical evidence?

This is not pedantic. Article 61(1) of the MDR requires that clinical evaluation demonstrates safety and performance for the intended purpose. If the intended purpose shifts between documents, the evaluation is invalid.

I have seen manufacturers spend six months reworking literature searches when the real issue was a misaligned device definition that took two paragraphs to fix.

How They Read Your Literature Review

Clinical experts do not count papers. They trace your clinical reasoning through the evidence.

They start by checking whether your search strategy could have found contradictory evidence. If your search terms are too narrow, they note it. If you excluded databases where negative results are published, they question why.

Then they look at what you did with the results.

A common pattern: manufacturers include studies that support their device and briefly mention limitations. The expert reads those same studies and sees methodological flaws that invalidate the conclusions. The manufacturer cited the abstract. The expert read the full paper.

When this happens, the deficiency is not that you missed a study. The deficiency is that your appraisal lacked critical thinking.

Key Insight
Clinical experts expect you to argue against your own device. They want to see that you identified the weakest points in your evidence and addressed them directly. Defensive documentation raises more questions than it answers.

This is where MDCG 2020-6 becomes critical. The guidance on sufficient clinical evidence is not a checklist. It is a framework for reasoning. Experts trained in this framework can immediately identify when a manufacturer followed the form but missed the substance.

They look for three things in every appraisal:

First, did you assess relevance correctly? Not just whether the study population matches your intended users, but whether the clinical outcomes measured in the study correspond to the clinical benefits you claim.

Second, did you assess reliability? This means study design, bias, statistical power. If you cited a study with thirty patients and no control group to support a safety claim, the expert will ask why you considered that sufficient.

Third, did you synthesize across studies or just list them? Synthesis means identifying patterns, contradictions, and gaps. Listing means you copied abstracts into a table.
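The first two checks, relevance and reliability, can be thought of as a joint filter: a study only supports your claims if it passes both. As a minimal illustration, with entirely hypothetical study names and flags:

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    study_id: str
    relevant: bool   # measured outcomes match the claimed clinical benefits
    reliable: bool   # adequate design, statistical power, control of bias

# Hypothetical appraisal results, for illustration only
appraisals = [
    Appraisal("Smith 2021", relevant=True, reliable=True),
    Appraisal("Lee 2019", relevant=True, reliable=False),   # e.g. n=30, no control group
    Appraisal("Park 2020", relevant=False, reliable=True),  # e.g. surrogate endpoint only
]

# Only studies passing BOTH criteria can carry weight in the conclusion
usable = [a.study_id for a in appraisals if a.relevant and a.reliable]
print(usable)  # → ['Smith 2021']
```

The point of the sketch is the conjunction: a methodologically sound study on the wrong outcome contributes as little as a relevant but underpowered one. Synthesis then happens across the studies that survive this filter.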

The Equivalence Trap

If you claim equivalence under Article 61(5) and Annex XIV, the clinical expert will reverse-engineer your entire logic chain.

They start at the end. You claim your device is equivalent to a device already on the market. Therefore, you rely primarily on that device’s clinical data.

Now they work backward.

They verify the equivalent device is legally on the market under the MDR. Not the MDD. Not grandfathered. Actually compliant.

They check the technical comparison. Not just the summary table. They want to see raw specifications, materials, design drawings, manufacturing processes. They look for any difference that could affect clinical performance.

Then they examine the clinical comparison. Do both devices have the same intended purpose? Same clinical claims? Same patient population? Same use conditions?

Any gap in this chain invalidates the equivalence claim. And I do mean any gap.

Common Deficiency
The manufacturer claims equivalence but the equivalent device has three published clinical studies while their device has none. The expert asks: if the devices are equivalent, why does one require clinical investigation and the other does not? What is the clinical justification for this difference?

This is not theoretical. Notified bodies have been rejecting equivalence claims at increasing rates since 2021. The reason is not stricter interpretation. The reason is that many equivalence claims were never technically valid.

Clinical experts trained in MDR requirements can identify these invalid claims on the first read. They do not need to dig through references. The logical structure reveals the problem.

What They Expect From Your Risk-Benefit Analysis

Article 61(1) requires that clinical evaluation demonstrates a favorable benefit-risk ratio. Most manufacturers interpret this as a section in the CER where you state that benefits outweigh risks.

Clinical experts expect actual analysis.

This means quantification where possible. This means comparison against alternative treatments or devices. This means acknowledging residual risks that cannot be mitigated and explaining why the clinical benefit justifies accepting them.

They look at your risk management file first. They identify the residual risks after mitigation. Then they return to the CER and check whether you addressed each one clinically.

If your risk file lists a residual risk of infection and your CER does not include infection data from clinical evidence, you have a deficiency. The expert does not need to tell you what data to provide. You should have anticipated this.
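This traceability check is mechanical enough to sketch. A trivial set difference, with hypothetical risk names, shows the kind of gap the expert finds in minutes:

```python
# Illustrative cross-check: residual risks listed in the risk management
# file versus risks addressed with clinical evidence in the CER.
# Risk names are hypothetical examples.
residual_risks = {"infection", "device migration", "allergic reaction"}
cer_addressed = {"infection", "device migration"}

# Any residual risk with no corresponding clinical evidence is a deficiency
gaps = residual_risks - cer_addressed
print(sorted(gaps))  # → ['allergic reaction']
```

Running this comparison yourself before submission costs an afternoon. Letting the notified body run it for you costs a review cycle.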

Key Insight
The benefit-risk analysis is not a standalone section. It is the thread that connects your risk management file, your clinical evidence, and your GSPR compliance. If these three documents tell different stories about the same risks, the expert will reject the entire evaluation.

I have reviewed CERs where the benefit-risk section was well-written, referenced multiple studies, and concluded favorably. But the risk management file listed hazards that were never mentioned in the clinical evaluation. The notified body clinical expert identified this in the first review cycle.

The deficiency was not missing data. The deficiency was missing integration.

How They Assess Your PMCF Plan

Article 61(11) requires ongoing clinical evaluation through post-market clinical follow-up. Every manufacturer knows this. Most submit PMCF plans.

Clinical experts evaluate whether your plan can actually answer clinical questions.

They check whether the plan is specific to your device or generic. If you can replace your device name with any other device and the plan still reads the same, it is generic. Generic plans get rejected.

They verify the clinical questions are linked to evidence gaps identified in the CER. If your CER concludes you have sufficient evidence but your PMCF plan investigates the same endpoints, the expert asks why you are investigating what you already know.

They assess whether the methods can produce usable data. A PMCF plan that states “we will collect complaint data and conduct literature surveillance” is not a plan. It is a restatement of post-market surveillance obligations you already have under Article 83.

Common Deficiency
The PMCF plan proposes a survey to collect patient satisfaction data but does not specify validated instruments, sample size calculation, or analysis methods. The expert cannot determine if the survey will produce evidence that supports continued conformity.

MDCG 2020-7 provides the framework clinical experts use. If your plan does not address the elements in that guidance, you will receive deficiencies. Not because the expert is being difficult. Because the plan is incomplete.
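To make the sample-size point concrete: even a simple satisfaction survey should state how its size was derived. A minimal sketch using the standard formula for estimating a proportion (n = z²p(1−p)/e²), with illustrative numbers that are not from any specific guidance:

```python
import math

def survey_sample_size(p: float, margin: float, z: float = 1.96) -> int:
    """Minimum sample size to estimate a proportion p within +/- margin
    at the confidence level implied by z (1.96 ≈ 95%)."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

# Example: expected 90% satisfaction, +/-5% precision, 95% confidence
print(survey_sample_size(0.90, 0.05))  # → 139
```

A PMCF plan that states the expected proportion, precision, and confidence level, and shows the resulting number, answers the expert's question before it is asked. One that says “we will survey some users” does not.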

The State of the Art Problem

Annex I, Chapter I requires that devices meet the state of the art. The clinical evaluation must demonstrate this.

Clinical experts look for comparative analysis. Not just references to standards. Actual comparison between your device and what else exists clinically.

This creates tension with equivalence claims. If you claim equivalence, you are arguing your device performs similarly to an existing device. But state of the art requires you to consider whether better solutions exist.

Both can be true, but you must address both.

I see manufacturers cite the equivalent device as proof they meet state of the art. The logic is: our device is equivalent to that device, that device is on the market, therefore we meet state of the art.

The expert asks: what about devices that perform better? What about clinical practices that have evolved? What about new evidence published after the equivalent device was evaluated?

State of the art is not static. Your evaluation must reflect current clinical knowledge, not the knowledge available when the equivalent device was approved.

Key Insight
The state of the art section is where clinical experts assess whether you understand the clinical environment your device enters. If your references are five years old and you claim nothing has changed, the expert will question your surveillance process.

What Happens During the Review Meeting

After the written review, many notified bodies schedule a discussion with the clinical expert. This is not an interrogation. It is an opportunity to clarify ambiguities.

But manufacturers often misunderstand what can be clarified.

You can clarify your reasoning. You can explain why you interpreted evidence a certain way. You can point to supporting information that exists but was not emphasized in the report.

You cannot introduce new evidence that should have been in the original submission. You cannot change your clinical claims to avoid deficiencies. You cannot argue that the expert is being too strict.

Clinical experts appreciate when you engage with their questions directly. They do not appreciate when you defend a position without addressing the underlying concern.

The most productive meetings I have participated in were those where the manufacturer listened to the expert’s reasoning first, then responded to the actual question being asked. Not the question they assumed was being asked.

The Pattern They See Across Submissions

Clinical experts review hundreds of CERs annually. They see patterns.

They see when a manufacturer worked with a consultant who uses the same template for every device. The structure is identical. The phrasing is identical. Only the device name changes.

They see when a manufacturer copied sections from a previous submission without updating references. Publication dates cluster around one year. Nothing recent appears.

They see when a clinical evaluation was written to satisfy an internal quality requirement rather than to answer clinical questions. The report checks every box but provides no insight.

These patterns do not automatically cause rejection. But they increase scrutiny. Because if the form is present but the thinking is absent, deeper problems usually exist.

Clinical experts are not trying to fail submissions. They are trying to verify that devices entering the market have been properly evaluated. When documentation demonstrates rigorous clinical thinking, the review proceeds efficiently.

When documentation demonstrates process compliance without clinical substance, the review expands.

What This Means For Your Next Submission

Before you submit, read your clinical evaluation as if you were trying to disprove it.

Identify the weakest claim. Find the thinnest evidence. Locate the logical gap between your data and your conclusion.

If you can see these vulnerabilities, the clinical expert will definitely see them.

The question is whether you addressed them proactively or left them for the expert to discover.

One approach leads to targeted questions you can answer. The other approach leads to deficiencies that require resubmission.

The difference is not luck. The difference is anticipating the perspective of someone who questions clinical reasoning for a living.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, a poorly structured state-of-the-art analysis, missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61, Annex I, Annex XIV
– MDCG 2020-5: Clinical evaluation – Equivalence: A guide for manufacturers and notified bodies
– MDCG 2020-6: Sufficient clinical evidence for legacy devices
– MDCG 2020-7: Post-market clinical follow-up (PMCF) plan template
– MDCG 2020-8: Post-market clinical follow-up (PMCF) evaluation report template
– MDCG 2020-13: Clinical evaluation assessment report template

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under Regulation (EU) 2017/745.