Why your PMCF looks like surveillance with a new name
I reviewed a PMCF plan last month that listed adverse event monitoring, complaint analysis, and literature searches. The manufacturer called it proactive. The Notified Body called it surveillance dressed up. The difference is not semantic. It is the difference between demonstrating ongoing clinical benefit and reacting to problems once they emerge.
In This Article
- What reactive surveillance actually is
- What proactive PMCF actually means
- The specific expectations in MDCG guidance
- Where the boundary gets blurred
- What this means for your technical documentation
- Why this matters more now than before
- What actually passes review
- How to adjust if your plan leans reactive
- The consequence of getting this wrong
- Final thought
Most manufacturers think they understand PMCF. They schedule literature reviews. They track complaints. They monitor vigilance reports. Then they label it as a PMCF plan and submit it with the technical file.
What they miss is that MDR expects something fundamentally different. Not surveillance activities renamed. Not passive monitoring labeled as follow-up. Something that actively generates clinical evidence to confirm the benefit-risk profile remains favorable throughout the lifetime of the device.
The distinction between proactive PMCF and reactive surveillance determines whether your submission passes or triggers a major non-conformity.
What reactive surveillance actually is
Reactive surveillance waits for signals. It monitors what comes in. Complaints filed by users. Adverse events reported through vigilance channels. Trends noticed in returned devices. Literature that surfaces through periodic searches.
All of this is necessary. MDR Article 83 requires post-market surveillance. You must have a PMS plan. You must analyze data. You must respond to safety signals.
But surveillance is about detection and response. It is about knowing when something goes wrong so you can take corrective action. It is retrospective by nature. It reacts to what has already happened.
Manufacturers submit PMCF plans that only list complaint analysis, vigilance monitoring, and annual literature reviews. Notified Bodies reject these because they describe surveillance activities, not clinical follow-up that generates new evidence.
The mistake happens when manufacturers assume that comprehensive surveillance fulfills PMCF requirements. It does not. Surveillance tells you when your assumptions might be wrong. PMCF tells you whether your assumptions were right in the first place.
What proactive PMCF actually means
Proactive PMCF generates evidence before problems appear. It seeks to confirm that clinical benefits remain consistent. That performance holds across different patient populations. That safety remains acceptable as usage expands beyond initial clinical investigation conditions.
MDR Article 61 and Annex XIV Part B describe PMCF as a continuous process. Not triggered by signals. Not activated when concerns arise. Continuous. The clinical evaluation must be updated throughout the lifetime of the device. PMCF provides the data that makes those updates possible.
MDCG 2020-7 clarifies this further. PMCF must address gaps in clinical evidence identified during the initial clinical evaluation. It must confirm that the benefit-risk profile remains favorable. It must detect emerging risks early, yes, but also confirm that expected benefits continue to materialize.
This is the part most manufacturers miss. They focus on the safety side. They monitor for problems. But they do not systematically collect evidence that benefits persist. That outcomes remain consistent. That performance holds when the device is used in routine practice by typical users.
Proactive PMCF fills evidence gaps before they become regulatory issues. Reactive surveillance responds to issues after they emerge. MDR expects both, but distinguishes clearly between them.
Here is what proactive looks like in practice. You identified during clinical evaluation that your data comes primarily from specialized centers. Your PMCF plan includes a registry or survey to collect outcomes from general practitioners. You do not wait for complaints. You actively seek evidence that performance translates to broader use.
You noted that your clinical data has limited long-term follow-up. Your PMCF includes structured follow-up with a defined cohort of patients to track performance and safety beyond the initial study period. You are not reacting to a safety signal. You are confirming that durability meets expectations.
You relied partly on equivalence for your clinical evaluation. Your PMCF plan specifies how you will collect device-specific data to strengthen the evidence base and reduce reliance on equivalent device data over time. Proactive. Structured. Targeted at known evidence gaps.
The specific expectations in MDCG guidance
MDCG 2020-7 provides the framework for PMCF plans and reports. It describes PMCF as methods and procedures to proactively collect and evaluate clinical data. Not reactive. Proactive.
The guidance lists specific objectives. Confirm safety and performance in routine use. Identify previously unknown side effects. Monitor identified side effects and contraindications. Ensure continued acceptability of the benefit-risk ratio. Support updates to instructions for use and training.
Each objective requires active data collection. You cannot confirm performance in routine use by waiting for complaints. You must collect data on outcomes. You cannot ensure continued acceptability of benefit-risk by monitoring adverse events alone. You must track both benefits and risks systematically.
MDCG 2020-8 on PMCF evaluation reports reinforces this. The report must demonstrate how the PMCF activities actually generated evidence. What data was collected. What it shows about clinical performance. How it addresses the gaps identified in the clinical evaluation.
Notified Bodies assess PMCF against these expectations. They look for methods that generate evidence, not just monitor signals. They expect data collection strategies that answer specific clinical questions.
PMCF plans that rely entirely on passive data collection fail to demonstrate how clinical evidence will be actively generated. Notified Bodies ask: “How will this plan address the evidence gaps identified in Section 6 of your CER?”
Where the boundary gets blurred
Some activities sit on the boundary. Literature surveillance, for example. Periodic literature review is part of PMS. It monitors published evidence. But it also updates the state of the art and can reveal new clinical data about your device or similar devices.
The difference is in how you use it. If literature review only serves to detect safety signals, it functions as surveillance. If it systematically evaluates new clinical evidence to update your benefit-risk assessment, it contributes to PMCF.
The same applies to complaint and vigilance data. Analyzing trends in complaints is surveillance. But structured analysis of use errors or performance issues to assess whether instructions for use remain adequate or whether certain patient groups experience different outcomes becomes part of clinical follow-up.
What matters is intent and method. Are you waiting for data to arrive and reacting to it? Or are you actively structuring data collection to answer clinical questions?
Notified Bodies make this distinction during assessment. They read the PMCF plan looking for active data generation methods. Surveys. Registries. Structured follow-up protocols. Targeted studies. They expect to see how each method addresses a specific evidence gap or confirms a specific aspect of clinical performance.
What this means for your technical documentation
Your PMS plan and PMCF plan must be distinct documents with distinct purposes. The PMS plan describes how you monitor the device once on the market. Sources of data. Responsibilities. Timelines. Methods for detecting signals and trends.
Your PMCF plan describes how you generate clinical evidence. What questions need answering. What evidence gaps exist. What methods you will use to collect data. What outcomes you will measure. How you will analyze and report results.
The two plans interact. PMS data feeds into PMCF evaluation. Trends identified through surveillance may trigger additional PMCF activities. But the starting point is different. PMS starts with monitoring. PMCF starts with clinical questions.
In the clinical evaluation report, Section 6 identifies evidence gaps and limitations. Your PMCF plan must explicitly address how each gap will be filled. Which methods. What timeline. What success looks like.
If your CER notes limited data on long-term performance, your PMCF plan must specify how long-term data will be collected. If your CER relies on equivalence, your PMCF plan must describe how device-specific data will be generated. If your CER identifies uncertainty about performance in a specific patient subgroup, your PMCF plan must include methods to collect data from that subgroup.
The link between CER Section 6 and your PMCF plan is where Notified Bodies assess whether your approach is truly proactive. Each evidence gap must map to a specific PMCF method designed to address it.
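The gap-to-method mapping described above can be sketched as a simple traceability check. This is a minimal illustration, not a regulatory tool: the gap IDs, method names, and data structure are all hypothetical examples of how a team might self-audit the link between CER Section 6 and the PMCF plan before submission.

```python
# Illustrative traceability check: every CER Section 6 evidence gap
# must map to at least one active PMCF data-generation method.
# All gap and method names below are hypothetical examples.

cer_gaps = {
    "GAP-1": "Limited long-term performance data beyond 12 months",
    "GAP-2": "No outcomes from non-specialized (general practice) settings",
    "GAP-3": "Clinical evaluation relies partly on equivalence",
}

pmcf_methods = {
    "M-1": {"method": "Structured 24-month patient follow-up cohort",
            "addresses": ["GAP-1"]},
    "M-2": {"method": "GP outcomes survey with a validated instrument",
            "addresses": ["GAP-2"]},
    "M-3": {"method": "Device-specific registry with defined endpoints",
            "addresses": ["GAP-1", "GAP-3"]},
}

def unaddressed_gaps(gaps, methods):
    """Return the gap IDs not covered by any PMCF method."""
    covered = {g for m in methods.values() for g in m["addresses"]}
    return sorted(set(gaps) - covered)

missing = unaddressed_gaps(cer_gaps, pmcf_methods)
print(missing)  # an empty list means every gap maps to at least one method
```

Any gap ID left in `missing` is exactly the kind of unmapped evidence gap a Notified Body reviewer will find.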
Why this matters more now than before
Under the previous directives, post-market surveillance was often a compliance exercise. Minimal monitoring. Reactive responses. Many manufacturers maintained market presence without actively confirming that clinical performance remained consistent.
MDR changed this. The regulation requires ongoing demonstration of compliance. Clinical evaluation is not a one-time event. It must be updated with PMCF data. The benefit-risk profile must be continuously confirmed, not assumed.
For devices relying on equivalence, the change is even more significant. Equivalence claims must be supported by robust clinical data from the equivalent device. But MDR also expects manufacturers to reduce reliance on equivalence over time by generating device-specific data through PMCF.
For legacy devices with limited historical clinical data, PMCF becomes the mechanism to build an evidence base that meets current standards. Notified Bodies expect to see PMCF plans that acknowledge evidence limitations and specify how data collection will strengthen the clinical evaluation over time.
This is not about perfection from day one. It is about demonstrating a credible plan to proactively address evidence gaps and confirm clinical performance throughout the device lifecycle.
What actually passes review
The PMCF plans I see pass review share common characteristics.
They start by listing the clinical questions that PMCF must answer. Specific questions tied to evidence gaps identified in the CER. Not vague statements about monitoring safety and performance. Clear questions about outcomes, patient populations, long-term durability, or usability in routine practice.
They specify methods that generate data. Registries with defined endpoints. Surveys with validated instruments. Structured follow-up protocols with clear data collection procedures. When literature review is included, it describes how new clinical data will be evaluated and integrated into the clinical evaluation.
They define timelines and milestones. When data collection begins. How often data is analyzed. When interim reports are generated. When the full evaluation will be completed.
They explain how PMCF data will feed back into the clinical evaluation and risk management. Not just collected and filed. Actively used to update the CER, inform design changes if needed, or revise instructions for use.
Plans that fail review list surveillance activities without specifying what clinical evidence they will generate. They describe complaint monitoring and literature searches without linking them to specific evidence gaps. They lack defined methods for active data collection.
They describe what the manufacturer will monitor, not what it will actively collect. The absence of proactive data-generation methods signals that the plan is surveillance relabeled.
How to adjust if your plan leans reactive
If your PMCF plan currently lists surveillance activities, you need to refocus on evidence generation.
Start by reviewing your CER. What evidence gaps did you identify? What limitations did you acknowledge? What uncertainties remain about clinical performance or safety?
For each gap, define a specific clinical question. Not “monitor safety” but “confirm that complication rates in routine use remain consistent with clinical investigation data.” Not “track performance” but “evaluate whether device performance is maintained after two years of use.”
Then specify a method that generates data to answer each question. If you need long-term data, design a follow-up protocol. If you need data from broader patient populations, establish a registry or survey. If you need to confirm that benefits persist, identify outcome measures and data sources.
Link each method to a timeline. When does data collection start? How long does it continue? When will you have sufficient data to update your clinical evaluation?
Finally, integrate surveillance into this framework. Complaint and vigilance data still matter. But position them as supporting data that can trigger additional PMCF activities or inform ongoing evaluation, not as the sole basis for clinical follow-up.
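The self-audit described in the steps above can be reduced to one question per planned activity: does it generate new clinical data, or only monitor incoming signals? A minimal sketch, with illustrative activity names of my own choosing:

```python
# Minimal self-audit sketch: tag each planned activity as actively
# generating clinical data or passively monitoring incoming signals,
# then flag plans that contain no proactive method at all.
# The activity list is a hypothetical example.

activities = [
    {"name": "Complaint trend analysis",   "generates_data": False},
    {"name": "Vigilance monitoring",       "generates_data": False},
    {"name": "Annual literature review",   "generates_data": False},
    {"name": "24-month follow-up cohort",  "generates_data": True},
    {"name": "Outcomes registry",          "generates_data": True},
]

def leans_reactive(plan):
    """True if the plan contains no active data-generation method."""
    return not any(a["generates_data"] for a in plan)

print(leans_reactive(activities))      # False: two proactive methods present
print(leans_reactive(activities[:3]))  # True: surveillance activities only
```

If the check returns `True` for your plan, you are looking at surveillance with a new name.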
The consequence of getting this wrong
Notified Bodies flag PMCF plans that look like surveillance relabeled. The deficiency is often coded as failure to comply with MDR Article 61 or Annex XIV Part B. The finding is major because it indicates that the manufacturer does not have a method to continuously confirm the clinical benefit-risk profile.
This delays certification. It requires rewriting the PMCF plan and often revising the CER to better articulate evidence gaps. It raises questions about whether the manufacturer understands ongoing clinical evaluation requirements.
But the bigger consequence is internal. If your PMCF is reactive, you learn about problems too late. You miss the opportunity to detect trends early. You do not build the evidence base that supports confident claims about long-term performance.
When you eventually need to update your clinical evaluation for a new certificate, a design change, or an indication extension, you discover you lack the data. You spent years monitoring but not collecting evidence. Now you face the choice between conducting a late-stage study or accepting limitations in your clinical evaluation that weaken your submission.
Proactive PMCF avoids this. It builds evidence continuously. It positions you to respond confidently to Notified Body questions. It supports lifecycle management and future development.
Final thought
The shift from reactive surveillance to proactive PMCF is not optional under MDR. It is a fundamental expectation embedded in the regulation and clarified in MDCG guidance.
Surveillance tells you when to respond. PMCF tells you whether your device continues to deliver the clinical benefits you claimed. Both are necessary. But only one actively generates the evidence that keeps your clinical evaluation valid throughout the device lifetime.
If your PMCF plan could be mistaken for a surveillance plan, it needs revision. Not because of semantics. Because the evidence base you need to maintain market access depends on proactive data collection, not reactive monitoring.
Next in this series, I will look at how to design PMCF methods that generate usable clinical data without requiring controlled trials. The methods that pass Notified Body review and actually deliver evidence you can integrate into your CER.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or intended purpose. For class III and implantable devices, MDR requires updates at least annually; for lower-risk classes, the interval must be defined and justified in the PMS plan.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– MDR 2017/745 Article 61, Article 83, Annex XIV Part B
– MDCG 2020-7 Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-8 Post-Market Clinical Follow-up (PMCF) Evaluation Report Template
Related Resources
Read our complete guide to PMCF under EU MDR: PMCF Plan & Report under EU MDR
Or explore Complete Guide to Clinical Evaluation under EU MDR