Why your PMCF Evaluation Report fails the first review
Most PMCF Evaluation Reports land on the reviewer’s desk looking like progress updates. They list what was done, how many patients enrolled, which events occurred. Then they get sent back with a simple note: “This is not an evaluation.” The confusion is understandable. MDCG 2020-8 provides a template structure, but many manufacturers treat it as a checklist rather than an analytical framework.
The disconnect happens because teams confuse documentation with evaluation. They document PMCF activities. They summarize results. But they do not evaluate whether those results confirm or challenge the assumptions made in the Clinical Evaluation Report.
This matters because Article 61(11) MDR requires manufacturers to analyze PMCF data to confirm safety and performance throughout the device lifecycle. The PMCF Evaluation Report is not optional documentation. It is the formal record of that analysis.
When a Notified Body or authority reviews the report, they look for one thing: clinical judgment. Did you assess whether the new evidence supports your benefit-risk determination, or does it signal the need for action?
What MDCG 2020-8 Actually Requires
MDCG 2020-8 outlines the structure for the PMCF Evaluation Report. It mirrors the CER template in many ways, but with a critical difference. The CER establishes clinical safety and performance based on all available evidence at a point in time. The PMCF Evaluation Report updates that determination based on post-market data.
The template includes sections on device description, methods used, results from PMCF activities, and conclusions. What gets missed is the expectation embedded in every section: interpretation.
Describing your PMCF study design is necessary. But the reviewer wants to know if the design actually captured the risks you identified in your CER. Summarizing adverse events is required. But the analysis must address whether those events align with your predicted residual risk profile or reveal new hazards.
Too many reports present PMCF data as standalone findings, without linking them back to the specific clinical claims, risk assessments, or benefit-risk conclusions in the CER. The evaluation becomes a data summary instead of an update to clinical judgment.
The template is not the problem. The problem is treating it as a form to fill out rather than a reasoning framework to follow.
Starting with the Right Question
Before writing anything, you need to answer one question: What am I evaluating?
The answer is not “my PMCF study results.” The answer is “whether my existing clinical conclusions remain valid.”
This shifts the focus. You are not writing a clinical study report. You are writing an evaluation that uses PMCF data to test assumptions. Those assumptions live in your CER. They include your benefit-risk determination, your residual risk profile, your claimed performance outcomes, and your identified knowledge gaps.
If your CER states that the device reduces procedure time by 30% in expert hands, and your PMCF registry shows a 15% reduction across diverse users, that is not just a finding. That is evidence your clinical claim may need refinement or your instructions for use may need strengthening.
If your CER identifies infection as a known risk with an estimated incidence of 2%, and your PMCF surveillance detects a 4% rate, that is not just an adverse event tally. That is a signal your risk controls may be insufficient or your patient population assumptions may be incorrect.
The PMCF Evaluation Report is only meaningful if it directly engages with the clinical claims and risk conclusions in the CER. Every data section should answer: Does this confirm what we claimed, or does it challenge it?
This approach changes how you structure every section. You do not just present data. You interpret it against a baseline of existing conclusions.
Methods Section: Why It Gets Rejected
Most methods sections describe the PMCF plan. They explain study design, data sources, patient populations, and endpoints. This is necessary but not sufficient.
Reviewers reject methods sections when they cannot connect the described activities to the specific gaps or assumptions identified in the CER. If your CER states that long-term durability beyond two years is unknown, your methods section must explain how your PMCF design addresses that gap. If it does not, the reviewer will ask why you are collecting data that does not answer your stated questions.
The other common failure is lack of traceability. The methods section should reference the exact sections of the CER being evaluated. Which clinical claims are being tested? Which residual risks are being monitored? Which benefit-risk factors are being reassessed?
Without this linkage, the PMCF activities appear disconnected from clinical evaluation. They become generic post-market surveillance rather than targeted confirmation of safety and performance.
What Works in Practice
Effective methods sections start by listing the specific CER conclusions being evaluated. Then they explain which PMCF activities target each conclusion. Finally, they describe how the data will be analyzed to confirm or challenge those conclusions.
For example: “Section 7.3 of the CER concluded that Device X reduces surgical bleeding by 40% compared to standard technique, based on RCT data in controlled settings. The PMCF registry will capture real-world bleeding outcomes across varied user experience levels to confirm this benefit persists outside study conditions. Analysis will compare observed bleeding rates to the RCT benchmark and assess whether user training level affects outcomes.”
That paragraph does three things. It identifies the CER claim. It explains the PMCF approach. It defines the evaluation criteria. A reviewer reading that knows exactly what you are testing and why.
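Teams that keep this traceability in a spreadsheet can also maintain it as structured data, which makes completeness easy to check. A minimal sketch, assuming a simple record layout; the field names and example entries are illustrative, not prescribed by MDCG 2020-8:

```python
# Illustrative traceability records linking CER conclusions to PMCF activities.
# Field names and values are hypothetical examples, not an MDCG-defined schema.
traceability = [
    {
        "cer_section": "7.3",
        "cer_claim": "40% reduction in surgical bleeding vs. standard technique",
        "pmcf_activity": "Real-world registry across user experience levels",
        "evaluation_criterion": "Observed bleeding rates vs. RCT benchmark",
    },
    {
        "cer_section": "8.1",
        "cer_claim": "Infection incidence below 2%",
        "pmcf_activity": "Complaint and surveillance data review",
        "evaluation_criterion": "Observed incidence vs. 2% CER estimate",
    },
]

# Completeness check: every record must name a claim, an activity, and a
# criterion, so no CER conclusion is left without a planned evaluation.
for record in traceability:
    assert all(record.values()), f"Incomplete traceability row: {record}"
print(f"{len(traceability)} CER conclusions mapped to PMCF activities")
```

The point is not the tooling; it is that every row forces you to answer the reviewer's question before the reviewer asks it.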
Results Section: Data Without Judgment
Results sections fail when they present numbers without context. You report enrollment figures, event rates, and outcome measures. But you do not interpret what those numbers mean for your clinical conclusions.
This happens because teams separate data presentation from clinical evaluation. They think results should be objective and interpretation should be saved for the conclusions. But in regulatory evaluation, every result needs immediate interpretation against the baseline claim.
If you report a complication rate, the next sentence should state whether that rate aligns with the CER prediction. If you present performance data, the next paragraph should assess whether it confirms the claimed benefit. The reviewer should never have to guess whether a finding is good news, bad news, or neutral.
A related failure: results are presented in tables and figures without narrative interpretation. The report reads like a clinical study results section rather than an evaluation of whether those results support or challenge existing clinical claims.
Another frequent issue is failure to address unexpected findings. If a PMCF study detects an adverse event not anticipated in the CER risk assessment, the results section must acknowledge it explicitly. Burying it in a table or omitting discussion signals to reviewers that the evaluation is incomplete.
How to Structure Results with Evaluation
Organize results by CER claim, not by data source. For each claim, present the PMCF data, then immediately evaluate whether it confirms or challenges the claim.
Example structure: “Claim: Device reduces recovery time by 50%. PMCF Data: Registry of 300 patients showed mean recovery reduction of 48% (95% CI: 43-53%). Evaluation: The observed reduction aligns with the claimed benefit and confirms performance across diverse clinical settings. The confidence interval overlaps the claimed value, supporting continued use of the 50% claim in labeling.”
That structure makes the evaluation explicit. The reviewer sees the claim, the evidence, and the clinical judgment in one flow.
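For quantitative claims like this one, the coverage check can be reproduced from summary statistics alone. A sketch using only the Python standard library; the standard deviation here is back-calculated to match the reported 43-53% interval, so it is an assumption for illustration, not source data:

```python
from math import sqrt

# Summary statistics from the example: n = 300, mean recovery reduction 48%.
# The SD (~44.2) is assumed, chosen to reproduce the reported 43-53% CI.
n, mean_reduction, sd = 300, 48.0, 44.2
claimed_reduction = 50.0  # the CER claim being tested

half_width = 1.96 * sd / sqrt(n)  # 95% normal-approximation half-width (~5.0)
ci_low, ci_high = mean_reduction - half_width, mean_reduction + half_width

covered = ci_low <= claimed_reduction <= ci_high
print(f"95% CI: ({ci_low:.1f}%, {ci_high:.1f}%), claim covered: {covered}")
```

Because 50% lies inside the observed interval, the data is statistically compatible with the labeled claim, which is exactly the judgment the evaluation paragraph states in words.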
If the data does not align, the structure forces you to address it: “Claim: Infection rate below 2%. PMCF Data: Surveillance detected 4% infection rate in 500 procedures. Evaluation: The observed rate exceeds the CER estimate. Further analysis shows 90% of infections occurred in sites with non-standard sterilization protocols. This suggests the device performs as expected when instructions are followed, but labeling must emphasize sterilization requirements more explicitly. A Field Safety Corrective Action is under preparation.”
That paragraph does not hide the discrepancy. It evaluates what it means and signals action. That is what reviewers expect.
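Whether 4% in 500 procedures is a genuine signal against a 2% estimate, rather than chance variation, can be checked with a one-sided exact binomial test. A minimal sketch using only the standard library; the 5% significance threshold is an illustrative convention, not a regulatory requirement:

```python
from math import comb

def binom_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p), stdlib only."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))

# Figures from the example: 4% infections in 500 procedures vs. a 2% CER estimate.
n_procedures = 500
observed = 20          # 4% of 500 procedures
cer_estimate = 0.02    # predicted incidence in the CER

p_value = binom_tail(observed, n_procedures, cer_estimate)
print(f"One-sided exact p-value: {p_value:.5f}")
# p is far below 0.05 here, so the excess is unlikely to be chance alone and
# warrants the root-cause analysis and labeling action described in the text.
```

A formal test does not replace the clinical reasoning about sterilization protocols, but it shows the reviewer that "exceeds the CER estimate" is a statistical conclusion, not an impression.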
Conclusions Section: The Make or Break Point
The conclusions section is where most reports collapse. They summarize findings but do not render a clinical judgment on whether the benefit-risk profile remains acceptable.
MDCG 2020-8 and Article 61(11) MDR require manufacturers to conclude whether PMCF data confirms the conclusions in the CER. This means you must state explicitly: Does the device remain safe and perform as intended? Does the benefit-risk profile remain positive? Are any changes to labeling, design, or risk management needed?
Vague statements like “PMCF data supports continued use” fail review. The conclusion must be specific to each claim and each risk. Which claims are confirmed? Which risks remain acceptable? Which findings require follow-up?
The conclusions section must update the benefit-risk determination from the CER. If the data confirms safety and performance, state it clearly. If the data reveals new risks or reduced benefits, describe the actions being taken. Ambiguity suggests incomplete evaluation.
Another critical element: the conclusions must tie to your next steps. If PMCF data identifies a new risk, what corrective action are you taking? If a claim is no longer supported, how are you updating labeling? If a knowledge gap persists, what additional PMCF activities are planned?
Reviewers look for this connection. The PMCF Evaluation Report is not a final document. It feeds into risk management, CER updates, and future PMCF planning. If those links are missing, the evaluation appears disconnected from the quality system.
Timing and Frequency: When to Write It
MDCG 2020-8 does not mandate a fixed reporting interval. The frequency depends on the device risk class, the maturity of the evidence, and the PMCF plan design.
In practice, most Class III and implantable Class IIb devices require annual PMCF Evaluation Reports. Lower-risk devices may justify longer intervals if the PMCF plan and risk profile support it.
But timing is not just about calendar intervals. The report should be updated when significant new data emerges. If a PMCF study completes enrollment, if surveillance detects a cluster of adverse events, if literature reveals a new risk, the PMCF Evaluation Report must be updated to evaluate that data.
Waiting for the annual cycle when new evidence challenges your clinical conclusions is not acceptable. The report is a living evaluation, not an anniversary document.
Integrating with CER Updates
The PMCF Evaluation Report should inform CER updates, not duplicate them. When the report concludes that PMCF data confirms existing claims, the CER may only need a brief reference. When the report identifies discrepancies or new risks, the CER must be updated to reflect the new evidence and revised conclusions.
Some manufacturers write separate PMCF Evaluation Reports and then copy sections into the CER. This creates version control issues and dilutes the evaluation. Better practice is to write the PMCF Evaluation Report as a standalone document that clearly states which CER sections require updating and why.
The CER then references the PMCF Evaluation Report and incorporates its conclusions into the overall benefit-risk determination. This keeps the documents distinct but connected.
What Reviewers Look For
When a Notified Body or competent authority reviews a PMCF Evaluation Report, they look for three things.
First, clinical judgment. Did you evaluate the data or just present it? Did you compare findings to CER claims? Did you assess whether the benefit-risk profile remains positive?
Second, traceability. Can they trace each PMCF finding back to a specific CER claim or risk? Can they see how the evaluation informed corrective actions or CER updates?
Third, honesty. Did you address unexpected findings, or did you only highlight favorable data? Did you acknowledge limitations in your PMCF activities? Did you identify remaining gaps?
Reports that check all three boxes move through review quickly. Reports that present data without judgment, lack traceability, or avoid uncomfortable findings get rejected.
Too often, reports focus only on data that confirms existing claims and omit discussion of findings that challenge assumptions or reveal limitations. Reviewers interpret this as incomplete evaluation or a lack of critical analysis.
The report does not need to be perfect. It needs to be honest and analytical. If your PMCF activities did not capture all the data you planned, explain why and what you will do differently. If the data raises questions, acknowledge them and describe next steps. Reviewers respect transparency far more than false confidence.
Final Thought
The PMCF Evaluation Report is not a compliance document. It is the mechanism through which you demonstrate that your device remains safe and effective after it enters the market.
Writing it requires clinical judgment, not just data summarization. It requires linking every finding back to the claims and risks in your CER. It requires honesty about what the data shows, even when it challenges your assumptions.
MDCG 2020-8 gives you the structure. Your job is to fill that structure with rigorous evaluation. When you do, the report becomes a credible demonstration of post-market vigilance. When you do not, it becomes another rejected submission.
Most manufacturers learn this after the first rejection. You can learn it now.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Article 61(11)
– MDCG 2020-8 Rev.1: Template for PMCF Evaluation Report
– MDCG 2020-13: Clinical Evaluation Assessment Report Template
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template