Why most gap analyses miss the actual gaps

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner

I’ve seen gap analyses that list 47 separate items, yet miss the core deficiency that leads to rejection. The document claims completeness. The manufacturer believes they have a roadmap. Then the Notified Body stops at page 12 of the CER and asks a question that invalidates the entire equivalence claim. The gap analysis didn’t find it because it wasn’t designed to find it.

Gap analysis is treated as a checkbox exercise. Manufacturers run a compliance matrix. They map MEDDEV 2.7/1 sections to CER chapters. They confirm that Annex XIV requirements are covered. The document looks thorough. It gets approved internally.

Then it reaches someone who actually evaluates clinical data.

That’s when the real gaps emerge. Not formatting issues. Not missing headers. The fundamental deficiencies that make the clinical evaluation insufficient under MDR Article 61.

What Gap Analysis Actually Means Under MDR

MDR requires clinical evaluation to demonstrate safety and performance through clinical data. Article 61(1) states this clearly. MDCG 2020-6 provides the methodology. But neither document tells you how to identify when your current evaluation fails to meet that standard.

That’s what gap analysis should do. It should assess whether the clinical evidence currently available can answer the questions the regulation requires you to answer.

Most gap analyses don’t do this. They assess structure. They assess documentation. They verify that procedures exist. But they don’t assess sufficiency of evidence against the actual burden of proof.

Common Deficiency
Gap analysis that checks whether CER sections exist, but doesn’t evaluate whether the clinical data in those sections can actually support the claims made in the IFU and technical documentation.

This is why manufacturers are surprised by Notified Body questions. The gap wasn’t in the document. The gap was in the reasoning.

The Questions That Reveal Real Gaps

A proper gap analysis starts with the claims. What does the device claim to do? What patient population? What clinical condition? What outcomes?

Then it asks: What evidence would a qualified reviewer need to accept those claims as demonstrated?

Not what evidence exists. What evidence is necessary.

I run this exercise with manufacturers before we even open the existing CER. I ask them to articulate what would convince an independent expert that their device is safe and performs as intended for the specified use.

The gap becomes visible immediately.

They realize the literature search was never designed to answer that question. The equivalence comparison was based on technical similarity, not clinical similarity. The PMCF plan collects data, but not the data that would address the uncertainty identified in the appraisal.

The Core Questions Every Gap Analysis Must Address

Does the current evidence base allow you to characterize the residual risk profile for your device in your intended population?

Can you demonstrate performance for each intended purpose stated in the IFU based on clinical data, not just bench testing?

If you’re claiming equivalence, can you demonstrate that the clinical performance and safety data of the equivalent device are transferable to your device?

If you’re using clinical data from different populations, indications, or use conditions, have you justified the applicability?

These are the questions Notified Bodies ask. These are the questions reviewers think through when they read your CER.

If your gap analysis didn’t ask them, it didn’t find the gaps that matter.

Key Insight
The gap analysis methodology must be driven by the burden of proof, not by the structure of the document. You are identifying what is missing from the reasoning, not what is missing from the table of contents.

How to Structure a Gap Analysis That Finds Real Deficiencies

I use a three-layer approach. Each layer addresses a different type of deficiency.

Layer 1: Claim-Evidence Alignment

Start with the intended purpose. List every claim made in the IFU, the technical documentation, and any promotional material.

For each claim, identify what clinical evidence would be needed to support it under MDR. Not what you have. What you need.

Then assess whether the current evidence base meets that standard. If not, that’s a gap.

This is where you discover that the literature search retrieved papers on similar devices but not on your specific indication. Or that the PMCF data tracks implant survival but not functional outcomes. Or that equivalence was claimed based on materials and design, but clinical performance was never compared.

Layer 2: Appraisal Completeness

The second layer asks whether the appraisal process was sufficient. Did you identify all relevant data? Did you assess it using appropriate quality criteria? Did you extract the right endpoints?

I see gap analyses that confirm a literature search was conducted. But they don’t assess whether the search strategy could have found the relevant evidence. They don’t check whether the appraisal criteria align with the hierarchy of evidence in MDCG 2020-6. They don’t verify that the data extraction captured all safety signals.

This is where you discover that the search excluded case reports, so you missed rare adverse events. Or that the appraisal didn’t consider study design, so you gave equal weight to a case series and an RCT. Or that you extracted mean values but not the variability, so you can’t assess the clinical significance.

Layer 3: Synthesis and Justification

The third layer is the hardest. It asks whether the conclusions in your CER are actually supported by the evidence presented.

This is not about whether you followed MEDDEV structure. It’s about whether a reasonable expert, reading your analysis, would reach the same conclusions.

I’ve reviewed CERs where the appraisal section lists ten studies with mixed results. Then the synthesis section concludes that safety is demonstrated. No discussion of the conflicting findings. No explanation of why certain studies were given more weight. No acknowledgment of limitations.

That’s a gap. And it’s the kind of gap that stops a submission.

Common Deficiency
Conclusions in the CER that are not explicitly linked to the evidence presented in the appraisal. The synthesis section reads like a summary of the intended use, not an evidence-based demonstration of safety and performance.
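The traceability check behind Layer 3 can be sketched as a toy script: every conclusion in the synthesis should be linked to at least one appraised study. The study IDs and conclusion statements below are hypothetical, not taken from any real CER.

```python
# Toy Layer 3 check: flag synthesis conclusions that cite no appraised study.
# Study IDs and conclusion statements are illustrative placeholders only.

appraised_studies = {"S1", "S2", "S3"}  # IDs of studies retained after appraisal

conclusions = [
    {"statement": "Device is safe in the intended population",
     "supported_by": {"S1", "S2"}},
    {"statement": "Performance equivalent to predicate",
     "supported_by": set()},  # no evidence link -- the deficiency described above
]

# A conclusion is unsupported if it shares no study ID with the appraisal.
unsupported = [c["statement"] for c in conclusions
               if not (c["supported_by"] & appraised_studies)]

for statement in unsupported:
    print(f"UNSUPPORTED CONCLUSION: {statement}")
```

The point is not the code itself but the discipline it encodes: if a conclusion cannot name the evidence it rests on, it is a gap.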

What Happens When You Miss the Real Gaps

Manufacturers submit thinking the clinical evaluation is complete. The Notified Body begins document review. Questions come back.

The questions are not about formatting. They’re about sufficiency.

Why did you conclude equivalence when the predicate device was used in a different patient population?

How do you justify performance claims when the clinical data shows high variability?

What is your evidence for the claimed reduction in adverse events compared to previous generation devices?

These questions reveal that the gap analysis failed. It didn’t identify that the evidence base was insufficient for the claims being made.

Now the manufacturer is in reactive mode. They have to conduct additional literature reviews. They have to revise equivalence justifications. They have to expand PMCF to collect missing data.

All of this could have been identified before submission if the gap analysis had asked the right questions.

The Practical Method I Use

I don’t start with a template. I start with the regulation and the claims.

Step 1: I list every claim in the IFU. Every intended purpose. Every stated benefit. Every comparison to other treatments.

Step 2: For each claim, I write down what a Notified Body reviewer would need to see to accept that claim as demonstrated.

Step 3: I assess whether the current CER provides that evidence. Not whether it mentions the topic. Whether it provides sufficient evidence.

Step 4: I identify the gaps. These are specific evidence deficiencies, not general statements like “more data needed.”

Step 5: I assess whether those gaps can be closed through additional appraisal, through PMCF, or whether they require a change in claims.

This process is uncomfortable. It often reveals that the clinical evaluation strategy was flawed from the start. It shows that literature can’t support certain claims. It exposes equivalence arguments that don’t hold up under scrutiny.

But this is exactly what needs to happen before submission, not after.
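The five steps above amount to a claim-by-claim comparison of required versus available evidence. As a minimal sketch, assuming hypothetical claim texts and evidence descriptions (none of these are real device claims):

```python
# Minimal sketch of the claim-driven gap check: for each IFU claim, list the
# evidence a reviewer would need, then subtract what currently exists.
# All claims and evidence entries are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                      # claim as stated in the IFU
    required_evidence: list[str]   # what a reviewer would need to see (Step 2)
    available_evidence: list[str] = field(default_factory=list)  # Step 3

    def gaps(self) -> list[str]:
        """Evidence required but not currently available (Step 4)."""
        return [e for e in self.required_evidence
                if e not in self.available_evidence]

claims = [
    Claim(
        text="Reduces post-operative pain at 12 months",
        required_evidence=[
            "clinical outcome data in the target population",
            "validated pain score reported with variability",
        ],
        available_evidence=["clinical outcome data in the target population"],
    ),
]

for claim in claims:
    for gap in claim.gaps():
        print(f"GAP: '{claim.text}' lacks: {gap}")
```

Each printed gap is a specific evidence deficiency, not a general “more data needed” statement, which is exactly what Step 4 demands.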

Key Insight
A gap analysis that makes you uncomfortable is doing its job. If the output is reassuring, you’re probably not looking at the evidence with the same rigor a Notified Body reviewer will apply.

How to Document the Gap Analysis

The output should be specific and actionable. Not a compliance matrix. A clear statement of what is missing and what is needed.

For each identified gap, document:

The claim or requirement that is not sufficiently supported.

The evidence that currently exists.

Why that evidence is insufficient.

What additional evidence would be required.

Whether that evidence can be obtained, and how.

This becomes your clinical evaluation strategy. It tells you what literature to search for. It defines your PMCF objectives. It determines whether equivalence is viable or whether you need device-specific clinical data.

It also protects you during audit. If a Notified Body asks why certain data is missing, you can show that you identified the gap, assessed the options, and made a justified decision.
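The five documentation fields listed above map naturally onto a structured record. The following is one possible shape, with purely illustrative field values; the closure routes mirror the options discussed earlier (additional appraisal, PMCF, or a change in claims).

```python
# Hypothetical structure for documenting one identified gap, mirroring the
# five fields listed above. All field values are illustrative only.

from dataclasses import dataclass
from enum import Enum

class ClosureRoute(Enum):
    ADDITIONAL_APPRAISAL = "additional literature appraisal"
    PMCF = "post-market clinical follow-up"
    CHANGE_CLAIMS = "revise or remove the claim"

@dataclass
class GapRecord:
    claim: str              # claim or requirement not sufficiently supported
    current_evidence: str   # the evidence that currently exists
    why_insufficient: str   # why that evidence is insufficient
    evidence_needed: str    # what additional evidence would be required
    closure_route: ClosureRoute  # whether and how the gap can be closed

record = GapRecord(
    claim="Implant survival superior to previous generation",
    current_evidence="Registry data on overall survival, no comparator arm",
    why_insufficient="No comparative data against the previous generation",
    evidence_needed="Comparative PMCF study or matched registry analysis",
    closure_route=ClosureRoute.PMCF,
)
```

Kept as a table or register, records like this become the clinical evaluation strategy the section describes, and the audit trail showing each gap was identified and addressed deliberately.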

When Gap Analysis Reveals Fundamental Issues

Sometimes the gap analysis shows that the intended purpose cannot be supported by available evidence. The literature doesn’t exist. Equivalence isn’t valid. PMCF alone won’t be sufficient.

This is the result no one wants. But it’s the result that prevents a failed submission or a post-market safety issue.

I’ve been in meetings where the gap analysis led to a change in claims. The device was repositioned for a narrower indication where evidence existed. Or the IFU was revised to remove performance claims that couldn’t be supported.

These are hard decisions. They affect market access. They affect commercial strategy.

But they’re the right decisions. Because the alternative is submitting a clinical evaluation that will not pass review, or worse, submitting a device to market without adequate evidence of safety and performance.

MDR requires manufacturers to demonstrate, not claim. Gap analysis is the tool that tells you whether you can actually demonstrate what you’re claiming.

Most gap analyses don’t do this because they’re not designed to challenge the submission. They’re designed to confirm it.

The gap analyses that work are the ones built to find problems before someone else does.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or intended purpose. As a minimum, it should be updated at least annually for class III and implantable devices, and every two to five years for lower-risk devices, with the interval justified in the post-market surveillance plan.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61
– MDCG 2020-6: Regulation (EU) 2017/745: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC
– MDCG 2020-13: Clinical Evaluation Assessment Report Template