Why your PMCF plan keeps collecting unusable data

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner
I’ve reviewed hundreds of PMCF plans over the years. Most of them share a common problem: they collect data that no one knows how to use. The plan gets approved because it ticks the regulatory boxes, but six months later, the clinical team is drowning in disconnected data points that don’t answer any meaningful question about safety or performance.

This is not about compliance theater. This is about designing a PMCF system that actually informs your clinical evaluation.

The difference between a compliant PMCF plan and an effective one is not complexity. It’s clarity of purpose. And that clarity starts with understanding what actionable data means in the context of MDR Article 61 and Annex XIV Part B.

The regulatory expectation

MDR Article 61 requires manufacturers to actively collect and analyze data from the post-market phase to confirm safety and performance. Annex XIV Part B describes the PMCF plan as the structured method to do this.

But the regulation does not define what “actionable” means. That gap is where most plans fail.

When you read MDCG 2020-7 and MDCG 2020-8, the guidance is clear about what PMCF should achieve: continuous confirmation of the benefit-risk profile, detection of emerging risks, identification of previously unknown side effects, and confirmation that the device performs as intended in routine use.

These are not checkbox activities. They are evaluation activities. And evaluation requires data that can be interrogated, compared, and synthesized.

Key Insight
Actionable data is data that allows you to make a conclusion or change a decision. If the data you collect does not influence your clinical evaluation or your risk management, it is not actionable.

Design principle one: Start with the clinical evaluation gaps

Most PMCF plans are designed backwards. They start with available data sources and then try to justify what can be collected. That approach almost always leads to irrelevant data collection.

The correct starting point is the clinical evaluation report.

Your CER identifies gaps in clinical evidence. It identifies areas where uncertainty remains. It identifies assumptions that need validation. These are your PMCF objectives.

If your CER states that long-term complication rates beyond two years are not well documented in the literature, your PMCF plan should target exactly that question. If your CER relies on equivalence but acknowledges differences in patient population, your PMCF should monitor outcomes in the actual population you serve.

This is not theoretical. I have seen audit findings where the Notified Body rejected the entire PMCF program because the collected data did not address the gaps identified in the CER. The plan was compliant. The data was real. But the two documents did not speak to each other.

Common Deficiency
PMCF plan objectives are generic and do not map back to the clinical evaluation gaps. Reviewers immediately notice when the PMCF plan reads like a template that could apply to any device.

So before you write a single PMCF objective, go back to your CER. Identify every statement that includes words like “limited data,” “unclear,” “assumed,” “extrapolated,” or “based on equivalent device.”

Those are your PMCF targets.

Design principle two: Define measurable endpoints before choosing methods

Once you know what question you need to answer, the next step is defining what data would actually answer it.

This is where specificity matters.

If your objective is to confirm safety in routine use, you need to define what “confirmed” means. Is it the absence of new adverse events? Is it a complication rate below a defined threshold? Is it alignment with published benchmarks?

Without a measurable endpoint, you end up collecting narratives that sound reassuring but cannot be compared or quantified.

I worked on a file where the PMCF plan stated the objective as “monitor patient satisfaction.” The method was a survey with open-ended questions. After one year, the clinical team had dozens of positive comments but no numerical data, no scale, and no comparison to baseline or benchmarks. The Notified Body asked: What does this tell us about performance?

The answer was: Nothing measurable.

Compare that to a plan that defines the endpoint as “mean patient satisfaction score on a validated scale, compared to published outcomes for the same indication.” Now the data becomes interpretable. Now it informs your clinical evaluation.

Key Insight
A well-defined endpoint includes three elements: the parameter you are measuring, the threshold or benchmark for interpretation, and the timeframe for assessment.

This discipline forces you to think like an evaluator, not like a data collector.
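The three elements of a well-defined endpoint can be captured as a simple structured record. Below is a minimal sketch in Python; the class name and the example values (implant survival, the 95% benchmark, the five-year window) are hypothetical illustrations, not taken from any specific file:

```python
from dataclasses import dataclass

@dataclass
class PMCFEndpoint:
    """One measurable PMCF endpoint: what is measured, against what, by when."""
    parameter: str   # the quantity being measured
    benchmark: str   # threshold or published reference for interpretation
    timeframe: str   # assessment window

# Hypothetical example for a long-term implant endpoint
endpoint = PMCFEndpoint(
    parameter="Kaplan-Meier implant survival rate",
    benchmark=">= 95% at 2 years, consistent with published registry data",
    timeframe="annual analysis over 5 years of follow-up",
)
```

If any of the three fields is hard to fill in, that is usually a sign the objective is still a data-collection wish, not an evaluable endpoint.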

Design principle three: Match data sources to the question, not to convenience

The third principle is about method selection.

MDR does not prescribe how you conduct PMCF. You can use registries, surveys, literature surveillance, targeted clinical investigations, or real-world data from electronic health records. The regulation gives you flexibility.

But that flexibility is not permission to choose the easiest option.

The method must be capable of generating the data you defined in principle two. If your endpoint is long-term implant survival, a survey will not work. If your endpoint is rare adverse events, a single-center observational study will not work.

I see this mistake most often with literature-based PMCF. Manufacturers assume that ongoing literature review satisfies the PMCF requirement. It does not.

Literature review is part of post-market surveillance and clinical evaluation maintenance, but it is not PMCF unless the published studies include data from your device in your target population. And even then, you need to actively monitor which studies are relevant and how they update your benefit-risk profile.

Passive literature search is not PMCF. Active use of registries, patient follow-up programs, or targeted investigations is.

Common Deficiency
The PMCF plan lists multiple methods but does not explain how each method addresses a specific objective. Reviewers see this as evidence that the methods were chosen for availability, not for relevance.

Another frequent issue is sample size. If your PMCF plan relies on data from 20 patients over three years, ask yourself: Can this sample size detect the outcomes I need to confirm?

If the answer is no, your plan is not fit for purpose.
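The sample-size question can be made concrete with basic probability. Assuming independent patients and a fixed per-patient event rate (a simplification), the chance of observing at least one event among n patients is 1 − (1 − p)^n. The figures below (20 patients, a 1% complication rate) are illustrative:

```python
def detection_probability(n_patients: int, event_rate: float) -> float:
    """Probability of observing at least one event among n patients,
    assuming independence and a fixed per-patient event rate."""
    return 1 - (1 - event_rate) ** n_patients

# Hypothetical figures: 20 enrolled patients, a 1% complication rate
p = detection_probability(20, 0.01)
print(f"{p:.0%}")  # roughly an 18% chance of seeing even one event
```

In other words, a 20-patient cohort will most likely show zero events for a 1% complication, and "no events observed" would confirm nothing.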

Design principle four: Plan for analysis and decision-making upfront

This is the principle that separates data collection from clinical evaluation.

Your PMCF plan must describe how the data will be analyzed and how the results will feed back into your CER and risk management file. This is required by Annex XIV Part B.

But most plans treat this section as an afterthought. They say something like: “Data will be reviewed annually and included in the PMCF report.”

That is not an analysis plan. That is a compliance statement.

A real analysis plan answers these questions:

Who will perform the analysis?
What statistical or qualitative methods will be used?
What thresholds trigger a review of the benefit-risk profile?
What thresholds trigger a change in labeling or risk mitigation measures?
How will the findings be integrated into the next CER update?

If you cannot answer these questions when you write the plan, your PMCF will generate reports, not decisions.

I have reviewed PMCF evaluation reports that summarize data beautifully but never draw a conclusion. The report states: “No unexpected adverse events were observed.” But it does not state whether this confirms the safety profile, whether the sample size was sufficient to detect rare events, or whether the data changes any assumption in the clinical evaluation.

That disconnect happens because the analysis framework was never defined.

Key Insight
An actionable PMCF plan includes decision rules. If X happens, we will do Y. If the complication rate exceeds Z, we will initiate corrective action. Without decision rules, the plan is observational, not evaluative.
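Decision rules of the "if X, then Y" form can be written down explicitly enough to be unambiguous. A minimal sketch; the metrics, the 2% threshold, and the action texts are hypothetical placeholders:

```python
# Explicit PMCF decision rules; all thresholds and actions are hypothetical.
DECISION_RULES = [
    # (metric name, threshold, action triggered when threshold is exceeded)
    ("complication rate", 0.02, "initiate corrective action and notify risk management"),
    ("new adverse event types", 0, "trigger a benefit-risk review"),
]

def evaluate(observed: dict) -> list:
    """Return the actions triggered by the observed PMCF results."""
    actions = []
    for metric, threshold, action in DECISION_RULES:
        if observed.get(metric, 0) > threshold:
            actions.append(action)
    return actions

# Example: a 3% observed complication rate exceeds the 2% threshold
print(evaluate({"complication rate": 0.03}))
```

The point is not the code itself but the discipline: every rule names a metric, a threshold, and a consequence, so the PMCF report cannot end without a decision.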

Design principle five: Build feedback loops between PMCF and clinical evaluation

The final principle is about system design.

Your PMCF plan is not a standalone document. It is part of a closed loop that includes your CER, your risk management file, your post-market surveillance system, and your periodic safety update reports.

The loop works like this:

Your CER identifies gaps. Your PMCF plan targets those gaps. Your PMCF activities generate data. Your PMCF evaluation report interprets that data. The interpretation updates your CER, which may identify new gaps or close existing ones.

If this loop is broken at any point, your PMCF becomes a documentation exercise instead of a clinical evaluation tool.

The most common break point is the connection between the PMCF evaluation report and the CER update. I have seen files where the PMCF report was submitted to the Notified Body, but the CER was never revised to reflect the new data.

Why does this happen?

Because the PMCF plan did not specify how and when the findings would be integrated. The plan treated PMCF as a parallel activity, not as an input to clinical evaluation.

To avoid this, your PMCF plan should state clearly: “Findings from the PMCF evaluation report will be reviewed by the clinical evaluation team within 30 days and incorporated into the next CER revision. Any finding that affects the benefit-risk conclusion will trigger an immediate CER update.”

That single sentence turns your PMCF into a living system.

What this means in practice

When you design a PMCF plan using these principles, the result is different from what most manufacturers submit.

Your plan is shorter, because you only collect data that matters. Your objectives are precise, because they map to CER gaps. Your methods are justified, because they match your endpoints. Your analysis plan is clear, because it includes decision rules. And your plan connects to the rest of your clinical evaluation system, because it was designed as part of that system from the start.

This takes more thought upfront. But it saves enormous effort downstream.

You don’t end up with spreadsheets full of data that no one knows how to interpret. You don’t end up writing PMCF reports that restate your plan without drawing conclusions. And you don’t end up in deficiency loops with your Notified Body because the PMCF does not address the clinical evaluation gaps.

Instead, you end up with a system that does what MDR intended: continuous confirmation of safety and performance through structured evidence generation.

Key Insight
The quality of your PMCF plan is not measured by how much data you collect. It is measured by whether that data allows you to update your clinical evaluation with confidence.

That is the standard. And it is the standard that Notified Bodies apply when they review your file.

If your current PMCF plan was designed to satisfy a checklist, it is time to redesign it as an evaluation system. The regulatory expectation is no longer about having a plan. It is about having a plan that works.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61, Annex XIV Part B
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-8: Post-Market Clinical Follow-up (PMCF) Evaluation Report Template