Why Your PMCF Plan Produces Data Nobody Can Use

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner

Most PMCF plans I review contain all the right sections. The structure looks correct. The tables are filled in. Yet six months after approval, the manufacturer realizes the collected data cannot answer the clinical questions that matter. The Notified Body flags gaps during surveillance. The plan becomes a compliance burden instead of a source of insight.

This is not a failure of intent. Most teams genuinely want to collect meaningful data. But somewhere between the MDR requirement and the executed plan, something breaks. The plan becomes a document written to pass review rather than a framework designed to generate evidence.

The result is predictable. Data arrives, but it does not reduce uncertainty. Reports are written, but they do not inform decision-making. The manufacturer collects information but learns nothing actionable.

Let me walk you through what actually separates a performative PMCF plan from one that produces useful data.

The Starting Point Most Teams Miss

According to MDR Article 61 and Annex XIV Part B, PMCF is a continuous process. It is not an optional add-on. It is the mechanism through which clinical evaluation remains current throughout the device lifecycle.

MDCG 2020-7 reinforces this. PMCF must address residual risks and gaps in clinical evidence identified during the initial clinical evaluation. It must also monitor safety, performance, and emerging risks over time.

Here is where the disconnect begins.

Many manufacturers start by asking: What does the Notified Body want to see? This is a procedural question. It leads to a procedural answer. You fill in templates, match section titles to guidance documents, and submit something acceptable.

But acceptability is not the same as functionality.

Key Insight
A functional PMCF plan starts with clinical questions, not template sections. If you cannot articulate what you need to know, the plan will not help you learn it.

Before writing a single objective, sit down and list every uncertainty that remains after your clinical evaluation. What aspects of performance are assumed but not fully demonstrated? What patient subgroups lack direct evidence? What long-term outcomes remain unknown?

These uncertainties are your actual PMCF questions. Everything else flows from there.

Objectives That Sound Good But Mean Nothing

I see the same objectives repeated across plans:

“Monitor the safety and performance of the device in routine clinical practice.”

“Confirm the benefit-risk profile remains acceptable.”

“Collect real-world data to support ongoing clinical evaluation.”

These are not objectives. They are restatements of the MDR requirement. They provide no operational direction. A clinical operations team reading such objectives cannot design a study. A data analyst cannot define what success looks like.

Now compare that to a functional objective:

“Estimate the incidence of device-related infection within 30 days of implantation in patients with diabetes, targeting a sample size sufficient to detect a rate above 2% with 80% power.”

This is specific. It is measurable. It ties directly to a known gap in the clinical evaluation. It tells everyone involved what data to collect and how to interpret it.
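To make the arithmetic behind that objective concrete, here is a minimal sketch of the standard normal-approximation sample size formula for a one-sample proportion test. The 2% threshold and 80% power come from the objective above; the 4% alternative rate, the one-sided alpha of 0.05, and the choice of Python are my illustrative assumptions, not part of the objective.

```python
from math import sqrt, ceil
from statistics import NormalDist

def one_sample_proportion_n(p0, p1, alpha=0.05, power=0.80):
    """Approximate n for a one-sided one-sample proportion test
    (normal approximation): H0: p = p0 vs H1: p = p1 > p0."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    z_b = NormalDist().inv_cdf(power)
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# Null rate 2% (the decision threshold from the objective);
# the assumed true rate of 4% is purely illustrative.
n = one_sample_proportion_n(0.02, 0.04)
print(n)  # ≈ 391 patients under these assumptions
```

The point is not the specific formula. It is that the objective contains enough information to run a calculation like this at all, which the vague objectives above do not.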

Common Deficiency
Objectives that do not specify the parameter being measured, the patient population, or the decision threshold cannot guide data collection. Notified Bodies increasingly flag this during plan review.

Go back to those clinical questions you identified. For each one, define:

  • What specific parameter will you measure?
  • In which patient population?
  • What threshold or trend would change your understanding of the device?
  • How will this data feed into your next clinical evaluation update?

If you cannot answer these questions, the objective is not ready.
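One way to see the discipline these four questions impose is to treat each objective as a structured record rather than a sentence. The sketch below, in Python, is purely illustrative: the field names and format are my assumptions, not a prescribed MDCG 2020-7 structure; the values reuse the infection example above.

```python
from dataclasses import dataclass

@dataclass
class PMCFObjective:
    """Illustrative structure for a well-formed PMCF objective.
    Field names are assumptions for illustration, not a required format."""
    parameter: str    # what specific parameter will be measured
    population: str   # in which patient population
    threshold: str    # what result would change your understanding
    cer_section: str  # where the finding feeds back into clinical evaluation

obj = PMCFObjective(
    parameter="device-related infection within 30 days of implantation",
    population="patients with diabetes",
    threshold="incidence above 2% triggers benefit-risk reassessment",
    cer_section="CER residual risk: post-implantation infection",
)
```

If any field would be left blank, the objective is not ready, which is exactly the test stated above.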

Choosing Methods That Match the Question

Once objectives are clear, the method becomes obvious. Yet many plans describe methods that were chosen before the objectives were defined.

The manufacturer wants to run a registry, so the PMCF plan includes a registry. The distributor in Germany offers access to a database, so that becomes the data source. The decision is logistical, not scientific.

Then the data arrives and it does not answer the question. The registry collects follow-up at 12 months, but your residual risk relates to performance at 5 years. The database captures diagnoses but not the functional outcomes you need to assess benefit.

Method selection must follow the clinical question.

If your question is about long-term durability, short-term surveys will not help. If your question is about rare adverse events, a 50-patient case series will not provide statistical power. If your question is about real-world effectiveness compared to alternatives, you need a comparator group.

MDCG 2020-7 Appendix 1 provides a useful decision tree. It does not prescribe a single method. It shows how different methods address different evidence needs.

Do not force every question into the same method. A well-designed PMCF plan may include multiple activities: a registry for long-term outcomes, targeted literature surveillance for emerging safety signals, and a focused study for a specific subgroup where evidence is weak.

Key Insight
The best PMCF plans are modular. Each clinical question gets its own method, timeline, and analysis plan. This avoids trying to extract answers from data that was never designed to provide them.

Sample Size and Statistical Thinking

Here is a question I ask during plan reviews:

How many patients do you need to answer this question?

The answer I often get: “We will collect data from all patients in the registry.”

That is not an answer. That is an evasion.

If you want to confirm a complication rate is below a certain threshold, you need a sample size calculation. If you want to detect a difference between subgroups, you need statistical power. If you want to identify rare events, you need to understand detection limits.

This does not mean every PMCF activity requires a formal power calculation. Qualitative objectives exist. But for any quantitative claim, you must demonstrate that your planned sample can actually support the conclusion you intend to draw.

I have reviewed plans where the manufacturer commits to collecting 100 patients over two years to “confirm safety.” When I ask what safety conclusion 100 patients can support, I rarely get a clear answer. If the goal is to rule out a 5% complication rate, 100 patients may suffice. If the goal is to detect a 0.5% rate, it does not come close.
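The arithmetic behind that claim is the familiar “rule of three”: with zero events observed in n patients, the one-sided 95% upper confidence bound on the event rate is roughly 3/n. Here is a short sketch of the exact binomial version; Python and the specific numbers are illustrative.

```python
from math import log, ceil

def zero_event_upper_bound(n, conf=0.95):
    """Exact one-sided upper confidence bound on an event rate when
    0 events are observed in n patients: solve (1 - p)**n = 1 - conf."""
    return 1 - (1 - conf) ** (1 / n)

def n_to_rule_out(rate, conf=0.95):
    """Smallest n such that observing 0 events rules out `rate`
    at the given confidence level."""
    return ceil(log(1 - conf) / log(1 - rate))

print(round(zero_event_upper_bound(100), 4))  # → 0.0295: 0/100 events bounds the rate near 3%
print(n_to_rule_out(0.05))                    # → 59 patients to rule out a 5% rate
print(n_to_rule_out(0.005))                   # → 598 patients to rule out a 0.5% rate
```

So 100 event-free patients can exclude a 5% complication rate with room to spare, but excluding a 0.5% rate takes roughly six times that enrollment.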

This is not theoretical. Notified Bodies reject plans on this basis. They ask: What will you conclude from this data? What decision will you make? If your sample size cannot support that decision, the activity is not fit for purpose.

Common Deficiency
Plans that commit to data collection without defining what conclusions that data can support. Reviewers see this as a box-ticking exercise, not a genuine evidence-generation strategy.

Work with someone who understands clinical study design. Define your endpoint. Define your decision rule. Then calculate the sample size needed. Only then commit to a timeline and resource plan.

Building in Feedback Loops

PMCF is described as continuous, but many plans treat it as a one-time event. The plan is written. The study is launched. Data is collected. A report is written. Done.

This is a missed opportunity.

Useful PMCF includes planned interim analyses. Not just for safety monitoring, but for learning. If early data shows an unexpected trend, you want to know before the study ends. If a parameter shows no variation, you may decide to stop collecting it and focus resources elsewhere.

Your plan should specify decision points. At 6 months, review enrollment and data completeness. At 12 months, conduct a preliminary analysis of primary endpoints. At 18 months, assess whether the collected data is on track to answer your clinical questions.

These reviews feed back into your clinical evaluation. They inform risk management updates. They may trigger protocol amendments or new PMCF objectives.

This is what MDR Annex XIV Part B means by “continuous process.” It is not just continuous data collection. It is continuous learning and adaptation.

Key Insight
The PMCF plan should describe not only what data you will collect, but when and how you will review it, and what decisions those reviews will trigger. This turns PMCF from a reporting obligation into a strategic tool.

Linking PMCF Back to Clinical Evaluation

This is where the loop closes.

Every PMCF objective should trace back to a specific section or gap in your clinical evaluation report. When PMCF data arrives, it should update that section. The clinical evaluation becomes a living document, refined continuously by incoming evidence.

But I see plans where this linkage is absent. The PMCF plan exists as a separate document. The clinical evaluation references the plan, but does not specify what will change based on PMCF findings.

This makes the clinical evaluation static again. You are back to periodic updates driven by regulatory timelines, not by evidence.

Build the connection explicitly. In your clinical evaluation, mark the sections that rely on assumptions, equivalence reasoning, or limited data. In your PMCF plan, link each objective to those sections. In your PMCF report, reference the clinical evaluation sections that the new data updates.

Notified Bodies look for this linkage during surveillance audits. They check whether PMCF data is genuinely integrated into clinical evaluation, or whether the two processes run in parallel without informing each other.

What Happens When Plans Fail

Let me describe what I see in audits.

The manufacturer submits a PMCF plan. It is approved. The registry launches. Two years later, the PMCF report is due. The team sits down to write it and realizes the data does not address the original clinical questions.

Maybe the questions were too vague. Maybe the method was mismatched. Maybe the sample size was too small. Maybe the follow-up period was too short.

Now the manufacturer is in a difficult position. The PMCF report must still be submitted. So the team writes around the data they have. They describe what was collected rather than what was learned. The report becomes descriptive rather than analytical.

The Notified Body reads this and flags it. The PMCF plan had objectives. The report does not demonstrate that those objectives were met. This becomes a finding.

The manufacturer scrambles to extend the study, collect more data, or launch a supplemental activity. The timeline extends. The cost increases. The uncertainty that PMCF was supposed to resolve remains unresolved.

This is avoidable. It starts with a plan that was designed to generate useful data, not just to meet a regulatory checkbox.

A Practical Checklist

Before you finalize your next PMCF plan, go through this:

  • Can you list the specific clinical questions this plan will answer?
  • Are your objectives measurable and tied to decision thresholds?
  • Does each method match the type of evidence gap it is meant to address?
  • Have you defined sample sizes or data collection targets based on statistical reasoning?
  • Do you have planned interim reviews to check progress and adapt?
  • Is there a clear linkage between PMCF objectives and sections of your clinical evaluation?
  • If you collected no data beyond what this plan describes, would your clinical evaluation remain current and complete?

If the answer to any of these is unclear, the plan is not ready. Revise it now, before data collection begins. Once the study is running, fixing a poorly designed plan is far harder and far more expensive.

Final Thought

PMCF is not a compliance exercise. It is the mechanism through which clinical evaluation stays relevant after the device enters the market. When done well, it reduces risk, informs design improvements, and strengthens your evidence base for future regulatory submissions.

But this only happens if the plan is designed to generate useful data, not just to generate a report.

The difference is in the thinking that happens before the first word is written. Start with uncertainty. Define what you need to know. Design activities that will actually teach you something. The rest follows naturally.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV Part B
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-5: Clinical Evaluation – Equivalence: A Guide for Manufacturers and Notified Bodies