The PMCF Evaluation Report: Where Most Companies Stop Too Soon
You ran your PMCF activities. You collected data. You archived it in the technical file. Then you moved on to the next project. Six months later, a Notified Body reviewer asks: ‘Where is the PMCF Evaluation Report?’ And you realize the loop was never actually closed.
This happens more often than manufacturers expect. PMCF becomes a collection exercise. Data comes in from surveys, literature, registries, or complaint databases. It gets documented. It gets stored. But the actual evaluation of that data against the original objectives rarely happens with the rigor required under MDR.
The PMCF Evaluation Report is not an administrative formality. It is the structured assessment that confirms whether your device still meets its intended performance and safety profile in real-world use. Without it, your PMCF Plan remains an intention, not a closed regulatory loop.
What the Regulation Actually Requires
Article 61(11) of MDR 2017/745 states that manufacturers conducting PMCF must analyze the results in a PMCF Evaluation Report. This report must be part of the clinical evaluation documentation and updated as necessary.
Annex XIV Part B specifies that the PMCF Evaluation Report must include an analysis of the collected data and conclusions as to whether the clinical evaluation confirms the safety and performance of the device throughout its expected lifetime.
This is not about summarizing PMCF activities. It is about evaluating findings against pre-defined endpoints and updating risk-benefit conclusions accordingly.
The PMCF Evaluation Report must answer one question clearly: Do the post-market data still support the conclusions in your Clinical Evaluation Report, or do they require revision?
Why Companies Struggle with This Step
The difficulty is not in collecting data. The difficulty is in making that data answer specific clinical questions.
Many PMCF Plans list activities: a survey, a registry entry, literature monitoring. But they do not define measurable endpoints. They do not set thresholds. They do not specify what findings would trigger an update to the CER or a corrective action.
When you start writing the PMCF Evaluation Report, you realize the data was collected but not structured to answer the original questions. The survey results cannot be compared year-over-year because the questions changed. The registry data lacks the granularity needed to assess specific risks. The literature review found articles but did not synthesize them against your device’s clinical claims.
Without a clear analytical framework from the start, the evaluation becomes descriptive instead of conclusive.
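For illustration only, a measurable PMCF endpoint can be captured up front as a small data structure, so the later evaluation has a pre-defined threshold and trigger to test against. The class name, fields, and example values below are hypothetical sketches, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class PMCFEndpoint:
    """One measurable PMCF objective, defined in the Plan so the
    later Evaluation Report has something concrete to test."""
    objective: str              # what the PMCF activity must confirm
    data_source: str            # survey, registry, literature, complaints
    metric: str                 # what is actually measured
    acceptance_threshold: float # pre-defined limit for the metric
    trigger_on_breach: str      # pre-defined action if the limit is exceeded

# Hypothetical example endpoint
endpoint = PMCFEndpoint(
    objective="Confirm device-related revision rate in routine use",
    data_source="national registry",
    metric="revisions per 100 device-years",
    acceptance_threshold=1.5,
    trigger_on_breach="CER update and risk re-assessment",
)
```

The point is not the code but the discipline: every field is fixed before data collection starts, so the Evaluation Report can only confirm or breach the endpoint, never reinterpret it.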
A common pitfall: PMCF Evaluation Reports that only describe what was done, without analyzing whether the findings confirm or challenge the existing clinical evaluation conclusions.
The Structure That Actually Works
A functional PMCF Evaluation Report follows a logic that mirrors the Clinical Evaluation Report. It is not a standalone document about post-market activities. It is the bridge between what was predicted pre-market and what was observed post-market.
Start with the objectives defined in your PMCF Plan. Each objective should correspond to a section in the Evaluation Report. If your objective was to confirm the incidence of a specific adverse event in routine use, the evaluation must state whether the observed incidence matches expectations or not.
Then assess each data source. Did the literature review identify new evidence that contradicts your claims? Did the survey reveal usability issues not captured in pre-market studies? Did complaint data suggest an emerging risk pattern?
For each finding, state the clinical implication. Does it confirm existing conclusions? Does it require a clarification in the IFU? Does it trigger a risk re-assessment? Does it invalidate a performance claim?
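As a concrete sketch of the first step above, comparing an observed adverse event incidence against a pre-specified acceptance threshold might look like the following. The function name, the example numbers, and the choice of a Wilson score interval are illustrative assumptions, not a mandated statistical method:

```python
from math import sqrt

def evaluate_incidence(events: int, patients: int,
                       threshold: float, z: float = 1.96) -> dict:
    """Compare an observed adverse event incidence against a
    pre-specified acceptance threshold from the PMCF Plan.

    Uses a Wilson score interval so small samples do not
    understate uncertainty. Returns the observed rate, its
    upper confidence bound, and whether the endpoint is met."""
    p = events / patients
    denom = 1 + z**2 / patients
    centre = p + z**2 / (2 * patients)
    margin = z * sqrt(p * (1 - p) / patients + z**2 / (4 * patients**2))
    upper = (centre + margin) / denom
    return {
        "observed_rate": p,
        "upper_95ci": upper,
        # Endpoint met only if even the upper bound stays below threshold
        "endpoint_met": upper < threshold,
    }

# Hypothetical example: 2 events in 500 patients against a 2% threshold
result = evaluate_incidence(events=2, patients=500, threshold=0.02)
```

Judging the endpoint on the upper confidence bound rather than the point estimate is a deliberately conservative choice; whatever rule you use, it must be the one written into the PMCF Plan before the data arrived.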
This is where most reports fail. They list findings but avoid interpretation. They describe data but stop short of judgment.
The Clinical Re-Assessment
The core of the PMCF Evaluation Report is the re-assessment of your risk-benefit profile. You must compare the benefit-risk balance stated in your CER with the benefit-risk balance observed in post-market data.
If residual risks were considered acceptable based on projected benefits, and post-market data shows those benefits are lower than expected, the balance shifts. If side effects occur more frequently than anticipated, the balance shifts again.
This re-assessment cannot be generic. It must reference the specific conclusions in your CER and confirm or revise them based on evidence.
MDCG 2020-8, which provides the template for the PMCF Evaluation Report, emphasizes that post-market data must be integrated into the clinical evaluation in a way that updates the overall clinical evidence. This means the PMCF Evaluation Report feeds back into the CER, and the CER is revised accordingly.
If your CER never changes after PMCF data comes in, reviewers will question whether the evaluation was actually performed.
The PMCF Evaluation Report should directly trigger updates to your CER. If it does not, either the PMCF data was insufficient or the evaluation was incomplete.
Frequency and Timing
Annex XIV Part B requires that PMCF findings be analyzed and documented in an evaluation report that forms part of the clinical evaluation report and the technical documentation, and that this report be updated as necessary. Article 61(11) goes further: for implantable devices and Class III devices, the PMCF Evaluation Report must be updated at least annually.
But frequency is not the only challenge. Timing matters.
If you wait until the CER update deadline to start writing the PMCF Evaluation Report, you are already behind. The evaluation should happen continuously as data comes in. The formal report consolidates findings that were already assessed in real-time.
When a cluster of complaints suggests a trend, that assessment should happen immediately. When a new study challenges your equivalence rationale, the evaluation begins the moment you identify the study. The PMCF Evaluation Report documents these assessments; it does not replace them.
This is the mindset shift many companies need. PMCF evaluation is not a periodic event. It is an ongoing process with periodic documentation.
When PMCF Data Reveals Gaps
One of the most difficult situations is when PMCF data reveals that your original clinical evaluation was insufficient. Perhaps a safety signal appears that was not predicted. Perhaps your device performs differently in a specific patient subgroup. Perhaps your equivalence rationale breaks down under real-world conditions.
The PMCF Evaluation Report must acknowledge this. It cannot soften the findings to avoid triggering regulatory actions. If the data shows your device does not perform as claimed, the report must say so clearly and outline the corrective actions.
This is not about admitting failure. It is about demonstrating that your post-market surveillance system works and that you respond to evidence appropriately.
Reviewers look for this honesty. When they see PMCF Evaluation Reports that only confirm pre-existing conclusions without ever identifying new risks or performance limitations, they question whether the surveillance was rigorous enough.
A common pitfall: PMCF Evaluation Reports that never identify any new finding requiring action, suggesting either inadequate data collection or reluctance to acknowledge real-world performance issues.
Integration with PMS and Vigilance
The PMCF Evaluation Report does not exist in isolation. It must integrate findings from your entire post-market surveillance system, including complaint handling, vigilance reporting, and trend analysis.
If your vigilance reports documented a series of incidents, those must appear in the PMCF Evaluation Report with an assessment of whether they represent an emerging risk pattern. If your complaint data shows repeated user errors, that must feed into the clinical evaluation of usability.
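To illustrate what feeding complaint data into the evaluation can mean in practice, a deliberately minimal trend check might compare recent complaint counts against a historical baseline. The function name, window sizes, and factor-of-two rule are assumptions for illustration; real thresholds belong in your own trend-analysis procedure:

```python
def flag_complaint_trend(baseline: list[int], recent: list[int],
                         factor: float = 2.0) -> bool:
    """Flag a potential signal when the mean complaint count per
    period in the recent window exceeds the historical baseline
    mean by a chosen factor. The factor must come from the
    manufacturer's trend-analysis procedure, not this default."""
    baseline_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    return recent_mean > factor * max(baseline_mean, 1e-9)

# Hypothetical example: monthly complaint counts,
# last quarter compared against the prior nine months
signal = flag_complaint_trend(baseline=[2, 1, 3, 2, 2, 1, 2, 3, 1],
                              recent=[5, 6, 4])
```

A flagged trend is the start of the clinical assessment, not its conclusion: the Evaluation Report must then state whether the pattern represents an emerging risk and what action follows.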
This integration is where many technical files show disconnection. The PMCF Evaluation Report mentions data sources but does not actually pull findings from them. The vigilance database is referenced but not analyzed. The complaint log is cited but not synthesized.
Reviewers will cross-check. If your PMCF Evaluation Report concludes that no new risks emerged, but your complaint log shows recurring issues, that inconsistency will be flagged.
The PMCF Evaluation Report is the point where all post-market data streams converge into a single clinical assessment. It must reflect that convergence visibly.
The Final Connection to the CER
The loop closes when the PMCF Evaluation Report feeds back into the Clinical Evaluation Report. This is not a one-way relationship. The CER sets the clinical claims that PMCF monitors. The PMCF Evaluation Report confirms or challenges those claims. The CER then updates to reflect the post-market evidence.
This is why many companies now write the PMCF Evaluation Report and CER update as connected documents. The CER includes a dedicated section that summarizes PMCF findings and explains how they were integrated into the overall clinical evaluation.
When a Notified Body reviews your CER, they should see clear evidence that post-market data influenced the conclusions. If the CER reads the same way it did at initial certification, despite years of PMCF data, that raises a red flag.
Closing the loop means changing your clinical evaluation when the evidence requires it. It means acknowledging that pre-market predictions were partially wrong and correcting them. It means showing that your device’s clinical story evolves with real-world evidence.
That evolution is not a weakness. It is the whole point of PMCF.
A clinical evaluation that never changes after years of PMCF data suggests either perfect prediction (unlikely) or insufficient evaluation (far more likely).
What Reviewers Actually Look For
When a Notified Body reviews your PMCF Evaluation Report, they look for three things: evidence that you collected relevant data, evidence that you analyzed that data against your clinical claims, and evidence that you acted on the findings.
They will check whether your report addresses the objectives in your PMCF Plan. They will verify that the conclusions match the data presented. They will cross-reference with your CER to confirm integration.
But most importantly, they will assess whether you demonstrated clinical judgment. Did you interpret findings in the context of clinical risk? Did you weigh evidence appropriately? Did you consider alternative explanations for unexpected data?
This is why the PMCF Evaluation Report must be written by someone with clinical expertise. It is not a regulatory compliance checklist. It is a clinical document that requires medical judgment and evidence synthesis.
When I review PMCF Evaluation Reports in preparation for audits, the weakest ones are those written purely from a regulatory perspective. They tick boxes but lack clinical reasoning. The strongest ones read like a clinical paper: they present evidence, interpret it critically, and draw conclusions that acknowledge uncertainty where it exists.
Moving Forward
The PMCF Evaluation Report is where intention becomes evidence. It is where your post-market surveillance system proves it actually works. It is where your clinical evaluation demonstrates that it is a living process, not a static document from pre-market approval.
If your current PMCF Evaluation Reports only describe activities without evaluating findings, that needs to change. If they never trigger updates to your CER, your process is incomplete. If they avoid acknowledging negative findings, you are creating a bigger problem for future audits.
The loop only closes when the evaluation happens. When the data is questioned. When the conclusions are tested. When the clinical story adjusts to what actually happened in the real world.
That is the discipline MDR demands. And that is what separates companies that maintain compliance from those that just document it.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum at the frequency defined in the PMS plan, which is at least annually for implantable and Class III devices.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– MDR 2017/745, Article 61(11)
– MDR 2017/745, Annex XIV Part B
– MDCG 2020-8: Post-Market Clinical Follow-Up (PMCF) Evaluation Report Template





