Why Your PMCF Plan Keeps Failing Notified Body Review


Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner


I see the same pattern in every second CER audit. The PMCF plan exists. It follows the template. It lists activities and timelines. Yet the Notified Body reviewer marks it insufficient within minutes. The manufacturer is confused. They followed the guidance. They filled every section. What went wrong?

The problem is rarely what manufacturers think it is.

Most teams approach the PMCF plan as a documentation exercise. They open the template. They describe methods. They set timelines. They submit. Then they wait for the deficiency letter that always comes.

The disconnect happens because reviewers do not assess whether you have a plan. They assess whether your plan addresses the actual clinical knowledge gaps for your specific device. And most plans do not.

The Core Misunderstanding About PMCF Requirements

MDR Article 61 and Annex XIV Part B establish that PMCF is not optional documentation. It is the continuous process that feeds the clinical evaluation throughout the device lifecycle.

MDCG 2020-7 on PMCF states clearly: the plan must be based on the clinical evaluation itself. It must address residual risks and uncertainties identified in the CER. It must target the specific clinical questions that remain open after market authorization.

But here is what I observe in real submissions.

The PMCF plan lists generic activities: literature review, registry participation, surveys to users. The timelines are reasonable. The methods are described. Everything looks complete.

Then the reviewer asks: which specific clinical uncertainty does each activity address? How do the planned methods match the evidence gaps identified in your CER? What will change in your risk-benefit conclusion based on PMCF results?

The manufacturer cannot answer. Because the plan was written independently of the clinical evaluation.

Common Deficiency
The PMCF plan describes methods but fails to connect each planned activity to a specific clinical question, residual risk, or knowledge gap identified in the clinical evaluation report.

What Reviewers Actually Look For

When a Notified Body reviews your PMCF plan, they work backwards from your clinical evaluation.

They open your CER. They read the conclusions. They note which claims are supported by direct clinical evidence versus equivalence. They identify where you relied on older studies or limited datasets. They mark every place where you stated uncertainty or acknowledged data limitations.

Then they turn to the PMCF plan. And they check one thing: does this plan systematically address what the CER identified as incomplete?

If your CER states that long-term performance data beyond two years is limited, the PMCF plan must specify how you will collect that data. Not generically. Specifically. What registry? What follow-up intervals? What endpoints?

If your CER relies on equivalence to another device, the PMCF plan must outline how you will generate direct clinical evidence for your device. Not someday. With a timeline. With interim analysis points.

If your CER acknowledges sparse data for a specific patient population, the PMCF plan must show targeted enrollment or subgroup analysis. Not as an idea. As a committed activity with resources allocated.

The gap I see most often is that teams write these documents separately. Different people. Different timelines. The CER author identifies gaps. The PMCF plan author fills a template. No one maps one to the other.

So the reviewer sees disconnected documents. And flags it immediately.

Key Insight
A passing PMCF plan is essentially a work plan that responds directly to the open questions in your clinical evaluation. If a gap exists in the CER, a specific PMCF activity must address it. If no gap exists, no activity is justified.

The Trap of Generic Activities

Many PMCF plans list ongoing literature review. Annual updates. Periodic searches. All required by MDR.

But this is baseline. This is what every manufacturer must do regardless of device type or risk class.

Reviewers expect more. They expect you to identify what specific clinical questions your device raises, and how you will answer them through active data collection.

Let me give you a real pattern I see in failed plans.

A manufacturer submits a Class IIb implantable device. The CER is based on equivalence to a predicate device plus limited clinical data from a pilot study with six-month follow-up. The clinical evaluation concludes the device is safe and performs as intended, but acknowledges limited long-term data.

The PMCF plan lists: annual literature review, complaint monitoring, customer feedback surveys.

The deficiency comes back: insufficient. Your CER identified limited long-term data as a key gap. Your PMCF plan does not specify how you will generate long-term data. Customer surveys will not provide implant performance at three years. Complaint monitoring is reactive, not proactive evidence generation.

The manufacturer is frustrated. They thought they had a plan. They described activities. They set timelines.

But they missed the fundamental requirement: active collection of the missing clinical evidence.

What the reviewer wanted to see: enrollment in a post-market registry with minimum three-year follow-up, or a defined post-market clinical follow-up study with specified endpoints matching the identified long-term data gaps, or at minimum a commitment to systematic surgeon feedback tied to implant performance at defined intervals.

Common Deficiency
PMCF plans that rely solely on passive data collection (literature monitoring, complaint analysis, general surveys) when the CER has identified active evidence gaps requiring proactive data generation.

The Problem With Timelines That Mean Nothing

Another recurring issue: timelines that do not align with clinical realities.

A PMCF plan states that a post-market study will enroll 50 patients over 12 months, with results reported annually.

The reviewer looks at your clinical evaluation. Your device is used in a specialized surgical procedure performed at maybe 20 centers across Europe. Average case volume per center is five procedures per year.

The math does not work. You cannot enroll 50 patients in 12 months unless you have contracts with multiple centers already in place. And even then, enrollment rates in post-market studies are notoriously slower than projected.
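To make that arithmetic concrete, here is the back-of-envelope feasibility check a reviewer effectively performs. Every number below is a hypothetical assumption for this sketch, not data from any real plan:

```python
# Illustrative feasibility check for a PMCF enrollment timeline.
# All figures are invented assumptions for this example.

contracted_sites = 8        # sites with signed agreements (of ~20 performing the procedure)
cases_per_site_year = 5     # average annual case volume per site
consent_rate = 0.5          # fraction of eligible patients who actually enroll
startup_delay_months = 6    # ethics approval and site initiation before first enrollment
window_months = 12          # enrollment window promised in the plan

active_months = window_months - startup_delay_months
expected = contracted_sites * cases_per_site_year * consent_rate * (active_months / 12)

print(f"Expected enrollment in {window_months} months: {expected:.0f} patients")
# With these assumptions: about 10 patients, far short of the 50 promised.
```

Even swapping in optimistic assumptions, the promised 50 patients in 12 months only works if every center is contracted and active from day one. That mismatch between promise and operational reality is exactly what gets flagged.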

The reviewer sees this. They flag it. Because unrealistic timelines signal that the PMCF plan is theoretical, not operational.

When you set a timeline, the reviewer expects to see it supported by operational feasibility. Do you have sites identified? Do you have ethics approval processes started? Do you have budget allocated?

If the timeline is aspirational, it reads as if the PMCF plan is documentation compliance rather than actual commitment to evidence generation.

And that triggers rejection.

The Missing Link: Interim Analysis and Triggers

Under MDR, PMCF is not just data collection. It is continuous analysis with defined triggers for action.

Most plans I review describe what data will be collected. Few describe how that data will be analyzed, at what intervals, and what findings would trigger updates to the clinical evaluation or risk management.

This is a critical gap.

The reviewer wants to see that you have thought through what happens with the data. Not just that you will collect it. But how you will use it.

If you plan to monitor adverse events through a registry, what event rate would prompt a safety signal review? What deviation from expected performance would trigger a clinical evaluation update? Who reviews interim data and how often?
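A predefined trigger rule does not need to be elaborate to satisfy a reviewer. As a sketch, with hypothetical rates and thresholds that are my assumptions rather than guidance values, the logic applied at each interim analysis can be as simple as:

```python
# Hypothetical predefined trigger rule for registry safety monitoring.
# The rates and thresholds are illustrative assumptions, not guidance values.

EXPECTED_EVENT_RATE = 0.03   # event rate claimed in the CER (3%)
SIGNAL_THRESHOLD = 0.05      # predefined rate that triggers a safety signal review

def check_interim_data(events: int, patients: int) -> str:
    """Applied at each predefined interim analysis (e.g. every 6 months)."""
    observed_rate = events / patients
    if observed_rate >= SIGNAL_THRESHOLD:
        return "TRIGGER: safety signal review, update CER and risk management file"
    if observed_rate > EXPECTED_EVENT_RATE:
        return "WATCH: above CER-claimed rate, increase monitoring frequency"
    return "OK: consistent with CER conclusions"

print(check_interim_data(events=4, patients=60))  # 6.7% observed: TRIGGER
```

The point is not the code. It is that the threshold, the review interval, and the resulting action are all written down before the data arrives, so the reviewer can verify the surveillance system will actually react.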

Without these elements, the PMCF plan looks like data collection for its own sake. The reviewer cannot assess whether your surveillance system will actually detect safety issues or performance problems in time.

So they ask. And when the answer is not in the plan, they flag it.

Key Insight
A robust PMCF plan includes interim analysis points, predefined thresholds for action, and clear responsibilities for data review. The goal is not data accumulation but continuous assessment that feeds back into clinical evaluation and risk management.

The Equivalence Problem That Haunts PMCF Plans

When your clinical evaluation relies on equivalence, your PMCF plan must address a specific expectation: transition from equivalence to direct evidence.

MDCG 2020-5 on demonstrating equivalence makes clear that equivalence is a pathway to market, not a permanent state. The expectation is that you will generate your own clinical data post-market to confirm what you claimed through equivalence.

Yet many PMCF plans for equivalence-based devices describe only passive monitoring. Literature. Complaints. Feedback.

The reviewer sees this as insufficient. If you entered the market on equivalence, your PMCF plan must show how you will build direct clinical evidence for your specific device over time.

This does not mean you need a full clinical investigation immediately. But you need a credible path. Registry enrollment. Systematic case series. Structured clinical follow-up.

Without this, the equivalence claim becomes permanent reliance on someone else’s data. And that contradicts the MDR expectation of manufacturer-specific post-market evidence.

So the deficiency comes: your PMCF plan does not demonstrate how you will generate direct clinical data for your device. Revise to include active data collection that supports independent confirmation of safety and performance claims.

What Passing PMCF Plans Actually Do

The PMCF plans that pass review share common characteristics. They are not longer. They are not more complex. They are simply more precise.

They start by summarizing the key clinical uncertainties from the CER. Explicitly. In a table or bullet list. This evidence gap corresponds to CER section X. This residual risk was identified in the risk management file.

Then they map each gap to a specific PMCF activity. Not categories of activities. Specific ones.

For each activity, they describe the method in enough detail that a reviewer can judge feasibility. If it is a registry, which registry and what data fields. If it is a follow-up study, what endpoints and what analysis plan. If it is surgeon feedback, what structured questions and what response rate target.
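The traceability the reviewer checks is, structurally, a one-to-one mapping from CER gaps to PMCF activities. The gap IDs, CER sections, and activities below are invented examples to show the shape of that mapping:

```python
# Structural sketch of gap-to-activity traceability in a passing PMCF plan.
# All IDs, section references, and activities are invented for illustration.

cer_gaps = {
    "GAP-01": "Long-term performance beyond 2 years (CER section 9.2)",
    "GAP-02": "Sparse data in patients over 75 (CER section 8.4)",
    "GAP-03": "Equivalence-based claims need device-specific data (CER section 7.1)",
}

pmcf_activities = {
    "ACT-01": {"addresses": "GAP-01", "method": "Registry, 3-year follow-up, revision rate endpoint"},
    "ACT-02": {"addresses": "GAP-02", "method": "Targeted subgroup analysis within the registry cohort"},
    "ACT-03": {"addresses": "GAP-03", "method": "Prospective case series with CER-matched endpoints"},
}

covered = {activity["addresses"] for activity in pmcf_activities.values()}
unaddressed = set(cer_gaps) - covered
print("Unaddressed CER gaps:", unaddressed or "none")
```

In the plan itself this is simply a table, but the rule it enforces is the one reviewers apply: every gap maps to an activity, and every activity justifies itself by the gap it closes.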

They include realistic timelines with interim milestones. First enrollment. Interim analysis. Final report. Each milestone tied to the PMS and clinical evaluation update cycle.

They specify what will trigger action. What finding would prompt a safety alert. What performance deviation would require clinical evaluation revision. What data threshold would change the risk-benefit conclusion.

And they show that the activities are already in motion or have clear start conditions. Ethics submissions planned. Registry participation agreements being negotiated. Budget approved.

The plan reads like an operational document, not a regulatory formality. And that is what passes review.

Key Insight
Reviewers can distinguish between PMCF plans written for compliance and those written as actual operational roadmaps. The difference is in specificity, feasibility, and the visible connection between identified evidence gaps and planned activities.

The Update Cycle That No One Plans For

One last issue that consistently triggers deficiencies: no clear connection between PMCF execution and CER updates.

MDR requires periodic updates to the clinical evaluation based on PMCF results. Annex XIV Part B states this explicitly. PMCF generates data. That data informs the clinical evaluation. The clinical evaluation is updated accordingly.

But many PMCF plans treat this as automatic. They say results will be integrated into the CER. How? When? Based on what criteria?

Reviewers want to see the mechanism. Not just the statement that it will happen.

Will PMCF results trigger a CER update annually? Or only when predefined thresholds are met? Who reviews PMCF data to determine if an update is needed? What findings would mandate an immediate revision versus waiting for the scheduled update?

Without this process described, the PMCF plan and CER remain disconnected. The PMCF becomes a parallel workstream that produces reports no one integrates into the actual clinical evaluation.

And reviewers see this pattern. They know that plans without defined feedback loops rarely result in meaningful updates. So they flag it.

What This Means For Your Next Submission

If your PMCF plan keeps getting rejected, the issue is likely not the format or the template. It is the thinking behind it.

Before you write the plan, review your CER with one question: what clinical questions remain unanswered? Where did we rely on limited data? Where did we make assumptions? Where did we use equivalence? Where did we acknowledge uncertainty?

Write those down. Explicitly. Those are your PMCF objectives.

Then for each objective, define what evidence would answer that question. What data? From what source? Over what timeframe? With what analysis?

That becomes your PMCF plan. Not a template. A work plan.

And when the reviewer reads it, they will see that you understand what clinical evidence you need and how you will get it. That is what passes review.

The PMCF plan is not a hurdle to clear. It is the roadmap for closing the evidence gaps that remain after your initial clinical evaluation. When you write it that way, reviewers recognize it. And the deficiencies stop.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or intended purpose. At minimum, it should be updated at least annually for class III and implantable devices, and at the frequency justified in the PMS plan for other device classes.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Annex XIV Part B
– MDR Article 61 (Clinical evaluation)
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDCG 2020-13: Clinical Evaluation Assessment Report Template