Most CERs Fail at the Gap Analysis. Here’s Why.


Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner


You finish the literature search. You compile the data. You write the clinical evaluation report. Then the Notified Body stops at Section 6 and asks: “What about the gaps?” Most manufacturers discover at that moment they never actually performed a gap analysis. They just listed what they found.

The gap analysis is not an afterthought. It is not a summary of what your literature search revealed. It is the systematic identification of what your evidence does not cover—and what that means for your device’s clinical safety and performance.

Yet in most CER reviews I conduct, the gap analysis section is either missing, superficial, or confused with the data appraisal. The manufacturer describes what the studies show. But they never answer what the studies don’t show.

This creates a regulatory blind spot. Because if you don’t identify the gaps, you can’t justify them. And if you can’t justify them, the Notified Body will.

What the MDR Actually Requires

The gap analysis is embedded in the clinical evaluation framework under MDR Article 61 and Annex XIV Part A. The regulation does not use the term “gap analysis” explicitly. But it requires manufacturers to demonstrate that the clinical evidence is sufficient to support the safety and performance claims.

Sufficiency is not a binary state. It is a reasoned judgment. And that judgment must address what is known, what is unknown, and whether the unknowns are acceptable given the device’s risk profile and intended use.

MDCG 2020-6 reinforces this. It expects the clinical evaluation to identify residual risks and uncertainties and to explain how they will be addressed. The gap analysis is where this identification happens.

Key Insight
The gap analysis is not about describing your evidence. It is about describing the absence of evidence—and whether that absence creates a regulatory or clinical problem.

Where Most Gap Analyses Go Wrong

I see the same patterns in almost every deficient CER.

The first mistake is treating the gap analysis as a summary. The manufacturer lists the studies, restates the conclusions, and moves on. There is no structured comparison between what the evidence covers and what the device actually does.

The second mistake is confusing data quality with data coverage. A study can be high quality and still irrelevant to a specific claim. The appraisal tells you if the study is reliable. The gap analysis tells you if the study is applicable.

The third mistake is assuming that if you found studies, there are no gaps. This is the most dangerous assumption. Evidence exists on a spectrum. You may have data on short-term safety but none on long-term outcomes. You may have data on one population but none on pediatric use. These are gaps.

Common Deficiency
Manufacturers often write: “No gaps were identified.” This is almost never true. What they mean is: “We didn’t look for gaps.” Every device has evidence limitations. The question is whether those limitations are justified.

What a Real Gap Analysis Looks Like

A functional gap analysis starts with a reference point. You need to define what constitutes sufficient evidence for your device. This means mapping your claims, your risk profile, your intended use, and your clinical context.

Then you compare that map to the evidence you actually collected.

For each clinical claim, ask: Do I have direct evidence? Do I have equivalent evidence? Do I have surrogate markers? Do I have nothing?

For each risk, ask: Is this risk addressed in clinical data? Is the follow-up duration adequate? Is the population representative? Are the endpoints relevant?

For each intended use scenario, ask: Is there evidence in this specific population? In this indication? With this frequency of use?

The gaps emerge from these comparisons. And once you identify them, you classify them.

Classifying the Gaps

Not all gaps are equal. Some are acceptable. Some require action. The classification depends on the severity of the gap and the risk it represents.

Acceptable gaps: These are evidence limitations that do not compromise the benefit-risk balance. For example, you may have limited long-term data on a low-risk device where short-term safety is well established and the mechanism of harm is understood. You document the gap, explain why it is acceptable, and move on.

Gaps requiring PMCF: These are uncertainties that cannot be resolved pre-market but must be monitored post-market. For example, real-world performance in a broader population, rare adverse events, or long-term durability. These gaps feed directly into your PMCF plan.

Gaps requiring additional evidence: These are critical evidence deficits that must be addressed before the device can be considered safe. For example, no data on a key safety endpoint, no evidence in the target population, or insufficient data to support a clinical claim. These gaps require clinical investigations or additional literature.

The classification is not arbitrary. It is based on the risk-benefit analysis and the clinical context. And it must be justified.

Key Insight
The gap analysis is the bridge between your evidence and your PMCF plan. If you identify gaps correctly, your PMCF plan writes itself. If you miss the gaps, your PMCF plan will be generic and unconvincing.

The Relationship Between Gaps and Equivalence

For devices relying on equivalence, the gap analysis becomes even more critical. Because equivalence is never perfect. There are always differences between your device and the equivalent device. The question is whether those differences matter clinically.

The gap analysis must explicitly address these differences. If your device has a different coating, a different size range, or a different delivery mechanism, you need evidence that these differences do not affect safety or performance. If you don’t have that evidence, you have a gap.

Many manufacturers assume that demonstrating technical equivalence is enough. It is not. Technical equivalence shows similarity. Clinical equivalence shows that the similarity is clinically meaningful. The gap analysis reveals where that clinical meaning is missing.

I have reviewed equivalence-based CERs where the gap analysis never mentioned the differences between devices. The manufacturer simply stated that the devices were equivalent and referenced the equivalent device data. The Notified Body rejected the file. Because the differences were never addressed.

How Notified Bodies Evaluate the Gap Analysis

Notified Bodies do not expect perfection. They expect transparency. They want to see that you looked for gaps, that you understood what you found, and that you made a reasoned judgment about what to do.

When I review a CER, I look for three things in the gap analysis:

Structure: Is there a clear method for identifying gaps? Or is it just a narrative?

Honesty: Does the manufacturer acknowledge limitations? Or do they claim the evidence is complete?

Justification: Are the gaps explained and classified? Or are they ignored?

If these elements are missing, the gap analysis is deficient. And the entire CER becomes questionable. Because if you cannot identify what your evidence is missing, how can you claim that your evidence is sufficient?

Common Deficiency
Manufacturers often write: “All claims are supported by the literature.” But they never explain how the literature supports the claims. The gap analysis should map each claim to specific evidence—and identify where the mapping breaks down.

Practical Steps to Perform a Gap Analysis

Start by listing your clinical claims. These come from your instructions for use, your labeling, and your intended use statement. Every claim must be supported by evidence.

Next, list your significant risks. These come from your risk management file. Every risk must be addressed by clinical data.

Then, create a matrix. On one axis, list your claims and risks. On the other axis, list your evidence sources. Mark where the evidence addresses each claim or risk. The empty cells are your gaps.

For each gap, ask:

– Is this gap a regulatory requirement? (e.g., MDR Annex XIV requires demonstration of safety and performance)

– Is this gap a clinical concern? (e.g., no data in a vulnerable population)

– Is this gap acceptable given the device risk class and the available evidence?

Document your reasoning. The gap analysis is not just a list. It is a justification.

Finally, link your gaps to your PMCF plan. Every gap that requires post-market monitoring should appear in your PMCF objectives. If it doesn’t, the gap is either resolved or ignored. And ignored gaps become major non-conformities.
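The matrix workflow above can be sketched in a few lines of Python. This is purely an illustrative sketch of the method, not a regulatory tool; every claim, risk, evidence source, and classification label below is a hypothetical placeholder, not content from any real CER.

```python
# Illustrative gap-analysis matrix. All claims, risks, and evidence
# entries are hypothetical examples.

# Rows: clinical claims (from the IFU/labeling) and significant risks
# (from the risk management file) that need evidence coverage.
items = [
    "Claim: reduces procedure time",
    "Claim: safe for long-term implantation",
    "Risk: device migration",
    "Risk: infection in immunocompromised patients",
]

# Columns: evidence sources, each mapped to the items it actually covers.
evidence = {
    "Pivotal study (12-month follow-up)": {
        "Claim: reduces procedure time",
        "Risk: device migration",
    },
    "Literature review (short-term safety)": {
        "Claim: reduces procedure time",
    },
}

# An item covered by no evidence source is an empty row in the
# matrix -- that is, a gap.
gaps = [
    item for item in items
    if not any(item in covered for covered in evidence.values())
]

# Each gap is then classified (acceptable / PMCF / additional evidence)
# and the classification justified in the CER text.
classification = {
    "Claim: safe for long-term implantation": "PMCF",
    "Risk: infection in immunocompromised patients": "additional evidence",
}

for gap in gaps:
    print(f"GAP: {gap} -> {classification.get(gap, 'unclassified')}")
```

Every gap classified as "PMCF" should then reappear verbatim as an objective in the PMCF plan, which keeps the two documents traceably linked.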

What Happens When the Gap Analysis Is Missing

The absence of a gap analysis does not go unnoticed. Notified Bodies flag it immediately. Because without a gap analysis, there is no way to assess whether the clinical evidence is sufficient.

The result is predictable. The Notified Body issues a major non-conformity. The manufacturer must revise the CER. The timeline extends. Approval is delayed.

But the regulatory consequence is only part of the problem. The clinical consequence is worse. Because if you don’t know what your evidence is missing, you don’t know where your device might fail. You don’t know what to monitor. You don’t know where the risks are.

The gap analysis is not a regulatory formality. It is a clinical safety tool. It forces you to confront the limits of your knowledge. And those limits define your post-market obligations.

Key Insight
A good gap analysis makes the Notified Body’s job easier. It shows that you understand your evidence, that you are transparent about its limitations, and that you have a plan to address them. This builds confidence.

Final Thought

The gap analysis is where clinical evaluation stops being a document exercise and becomes a critical thinking exercise. It is where you admit what you don’t know. And where you decide what to do about it.

Most manufacturers avoid this step because it feels uncomfortable. They prefer to focus on what the evidence shows. But the evidence gaps are just as important as the evidence itself. Because they define the boundaries of your clinical knowledge.

If you don’t define those boundaries, the Notified Body will. And they will be less generous.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– MDR 2017/745 Article 61 and Annex XIV Part A
– MDCG 2020-5: Clinical Evaluation – Equivalence: A Guide for Manufacturers and Notified Bodies
– MDCG 2020-6: Regulation (EU) 2017/745: Sufficient Clinical Evidence for Legacy Devices