Your PMCF data exists. Your regulatory conclusions don’t.
I reviewed a PMCF evaluation report last month that contained 47 pages of clinical data, statistical breakdowns, and formatted tables. The manufacturer had collected everything. Patient outcomes. Device performance. Safety signals. Three years of follow-up. But when the Notified Body asked a simple question, “What does this mean for your benefit-risk profile?”, there was no answer. The data was there. The regulatory conclusion was not.
In This Article
- What the PMCF evaluation report is supposed to do
- The trap of presenting data without interpretation
- What regulatory conclusions actually look like
- The link between PMCF evaluation and CER updates
- The role of the clinical evaluator in PMCF evaluation
- How to structure your conclusions section
- What happens next
This is not a problem of missing data. It is a problem of interpretation failure.
Most manufacturers understand that PMCF data must be collected. They build plans. They execute studies. They gather evidence. But when the time comes to write the PMCF evaluation report, the transition from data to regulatory conclusion collapses.
The report becomes a data dump. Numbers are presented. Tables are shown. Safety events are listed. And then the report ends with a vague statement: “The device performs as expected.” That is not a conclusion. That is avoidance.
What the PMCF evaluation report is supposed to do
According to MDCG 2020-8, the PMCF evaluation report is not a repetition of the PMCF plan or a summary of data. It is an analytical document that interprets the evidence collected and translates it into regulatory conclusions about the device’s ongoing benefit-risk profile.
The report must evaluate whether the clinical evidence supporting the device remains valid. It must identify any new risks. It must confirm that the assumptions made during pre-market evaluation still hold. And it must feed directly into the clinical evaluation report update cycle.
But here is where it breaks. Many manufacturers treat the PMCF evaluation report as a compliance exercise. A document that must be produced because the regulation says so. The focus shifts to completing it rather than making it meaningful.
The PMCF evaluation report is the mechanism that closes the loop between post-market data and pre-market claims. Without clear regulatory conclusions, this loop remains open and your clinical evaluation remains outdated.
The trap of presenting data without interpretation
I see this repeatedly. A manufacturer collects adverse event reports. They record incident rates. They track complaints. They include all of this in the PMCF evaluation report. And then they stop.
There is no analysis of what the data shows relative to the original clinical evaluation. No comparison to the assumptions in the residual risk analysis. No reflection on whether the device is still meeting its intended clinical benefit.
The Notified Body asks: “Do these adverse events change your benefit-risk conclusion?” The manufacturer responds: “We have documented the events.” But documentation is not interpretation. Listing events does not answer whether the risk remains acceptable.
Here is the question that separates compliant PMCF evaluation reports from deficient ones: What does this data tell you about the validity of your original clinical evidence?
If the data confirms your claims, state that explicitly and show why. If it raises concerns, state that and show how you addressed them. If it reveals gaps, acknowledge them and describe what you will do next.
Why this matters for Notified Body reviews
Notified Bodies do not review PMCF evaluation reports to check that data was collected. They review them to see if the manufacturer can think clinically about their own device.
When a reviewer sees data without conclusions, they start asking questions. And these questions reveal whether the manufacturer understands what the data means. Most of the time, it becomes clear that the data was collected but never analyzed in a regulatory context.
The result is a finding. Not because the data is missing. Because the thinking is missing.
A common Notified Body finding: PMCF evaluation reports present data tables and safety summaries without explicitly stating whether the findings confirm, challenge, or modify the original benefit-risk assessment. The conclusion section says “monitoring will continue” but does not answer what the current evidence demonstrates.
What regulatory conclusions actually look like
A regulatory conclusion in a PMCF evaluation report is not a summary. It is a statement that connects evidence to a specific regulatory requirement.
For example: “Based on 2,847 device-years of follow-up data, the observed complication rate of 1.8% remains consistent with the pre-market clinical investigation findings of 2.1%. No new safety signals have been identified. The benefit-risk profile established in the clinical evaluation report remains valid.”
That is a conclusion. It references the data. It compares it to the original evidence. It states a regulatory position.
Compare that to: “The device has been used in 2,847 device-years. Complications were reported in 1.8% of cases. No serious adverse events occurred. PMCF will continue as planned.”
That is a data statement. It tells you what happened. It does not tell you what it means.
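To make a consistency claim like the one above defensible, the comparison should rest on an explicit statistical check, not just eyeballing two percentages. Here is a minimal Python sketch of one possible approach, using the article's hypothetical figures (51 events inferred from a 1.8% rate over 2,847 device-years) and a simple Wald confidence interval; the event count and the choice of interval method are illustrative assumptions, not a prescribed methodology:

```python
import math

def rate_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wald confidence interval for an observed event rate."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - z * se), p + z * se)

# Hypothetical figures mirroring the article's example:
# a 1.8% observed complication rate over 2,847 device-years,
# compared against a 2.1% pre-market rate.
events = round(0.018 * 2847)   # ~51 observed complications (assumed count)
low, high = rate_ci(events, 2847)

premarket_rate = 0.021
# "Remains consistent" here means the pre-market rate falls
# inside the confidence interval of the post-market observation.
consistent = low <= premarket_rate <= high
```

A statement such as “remains consistent with the pre-market findings” can then cite the interval itself, which gives the Notified Body reviewer something concrete to agree or disagree with.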
The structure that drives conclusions
The best PMCF evaluation reports I review follow a logical structure that forces interpretation at every stage. They do not just present findings. They interpret findings relative to regulatory expectations.
First, they restate the clinical claims and safety profile from the clinical evaluation. This creates the baseline for comparison. Without this, there is nothing to evaluate the PMCF data against.
Second, they present the data collected during the PMCF period. Adverse events. Performance outcomes. User feedback. Complaint trends. But each data section ends with an interpretation: What does this finding mean relative to the baseline?
Third, they provide a synthesis section that consolidates all findings into an updated benefit-risk statement. This is where the regulatory conclusion lives. Not in the introduction. Not scattered across data tables. In one place where the manufacturer takes a clear position.
Fourth, they identify any actions required. If the data revealed a gap, what will be done? If assumptions were challenged, how will they be re-evaluated? If new risks emerged, how will they be managed?
This structure ensures that data is never presented in isolation. Every piece of evidence is connected to a regulatory question.
A PMCF evaluation report should be readable by someone who has never seen your device and still understand whether the post-market data supports or challenges the original clinical claims. If that clarity is missing, the report is not functional.
The link between PMCF evaluation and CER updates
MDCG 2020-8 makes it clear that the PMCF evaluation report is not a standalone document. It feeds directly into the clinical evaluation report update process.
This means the conclusions you draw in the PMCF evaluation report must be usable in the CER. If your PMCF evaluation says “the device performs as expected,” what does that tell the CER author? Nothing. It cannot be used.
But if your PMCF evaluation says “the observed performance data confirms the clinical benefit claims in section 4.2 of the CER, with no new risks identified,” that becomes actionable. The CER can reference it. The benefit-risk section can integrate it. The Notified Body can trace the logic.
When I see a well-written PMCF evaluation report, I can immediately see how it will support the CER update. The conclusions are structured in a way that allows direct integration. The data is analyzed in terms that match the CER framework.
When the PMCF evaluation report is weak, the CER author has to reinterpret the PMCF data themselves. That introduces risk. Because now two people are making conclusions instead of one. And those conclusions may not align.
What happens when conclusions are missing
I reviewed a file last year where the PMCF evaluation report contained extensive real-world data but no regulatory conclusions. The manufacturer updated their CER and referenced the PMCF report as supporting evidence. But when the Notified Body reviewed it, they asked: What in the PMCF report supports your updated benefit-risk statement?
The manufacturer could not answer. Because the PMCF report did not contain a benefit-risk statement. It contained data. The CER author had assumed the data was positive, but that assumption was never validated in the PMCF evaluation.
The Notified Body issued a finding. Not on the CER. On the PMCF evaluation report. Because the foundation was missing.
A common Notified Body finding: PMCF evaluation reports do not explicitly state how their findings support or modify the clinical evaluation report. The connection between post-market evidence and pre-market claims is implied but never formalized, leaving the CER update process without a clear foundation.
The role of the clinical evaluator in PMCF evaluation
The PMCF evaluation report should be written by someone who understands clinical evaluation logic. Not just someone who understands data collection.
Many manufacturers assign the PMCF evaluation report to the person who ran the PMCF activities. That person knows the data. They know what was collected and how. But they may not know how to translate that data into the regulatory framework that the clinical evaluation operates within.
This creates a disconnect. The data is sound. The analysis is competent. But the regulatory conclusions do not follow because the writer does not see the PMCF report as part of the clinical evaluation system.
When I write or review a PMCF evaluation report, I start by re-reading the relevant sections of the clinical evaluation report. I need to know what claims were made. What risks were accepted. What assumptions were documented. Only then can I interpret the PMCF data in context.
If the CER claimed a complication rate of 2%, and the PMCF data shows 1.8%, that is meaningful. If the CER assumed the device would be used primarily in one setting and the PMCF data shows widespread use in another setting, that is meaningful. If the CER identified a theoretical risk and the PMCF data shows it never materialized, that is meaningful.
But none of that meaning emerges unless the person writing the PMCF evaluation report is thinking in terms of the clinical evaluation framework.
How to structure your conclusions section
The conclusions section of the PMCF evaluation report is where regulatory thinking becomes visible. It should not be a summary. It should be a set of explicit statements that answer specific questions.
First: Does the PMCF data confirm the clinical performance claims made in the clinical evaluation? State yes or no. Then show why. Reference specific data points and compare them to the original claims.
Second: Does the PMCF data reveal any new risks or increase the severity or frequency of known risks? State yes or no. If yes, describe the risk and how it will be managed. If no, state that explicitly so the Notified Body can see you evaluated the question.
Third: Does the PMCF data support the ongoing acceptability of the benefit-risk profile? This is the core regulatory conclusion. Do not assume it. State it. And show the reasoning.
Fourth: Are there any gaps or limitations in the PMCF data that require further action? If the follow-up period was too short, say that. If certain patient populations were underrepresented, say that. If additional studies are needed, say that.
Each of these statements creates a clear position. The Notified Body can agree or challenge the position. But at least there is a position to evaluate. Without these statements, the review becomes a guessing game.
The conclusions section is not the end of the report. It is the reason the report exists. Everything before it should build toward those conclusions. If the conclusions are weak, the entire report loses its regulatory function.
What happens next
The PMCF evaluation report is not a final document. It is part of a cycle. The conclusions you draw feed into the CER update. The CER update may reveal new questions. Those questions drive the next PMCF period.
When this cycle works, clinical evidence is continuously refined. Risks are identified early. Claims are validated or adjusted. The device file remains current and defensible.
When this cycle breaks, the device file becomes static. PMCF data is collected but not used. The CER references outdated assumptions. And when a serious issue emerges, there is no foundation for responding because the feedback loop was never functional.
The PMCF evaluation report is the hinge point in that cycle. It is where data becomes knowledge. Where evidence becomes conclusions. Where post-market reality confirms or challenges pre-market assumptions.
If that transformation does not happen, the PMCF system is performative. It looks like vigilance. But it is not producing the regulatory intelligence that the MDR requires.
Your data exists. Now make it mean something.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Annex XIV Part B
– MDCG 2020-8: Post-Market Clinical Follow-up (PMCF) Evaluation Report Template
Related Resources
Read our complete guide to PMCF under EU MDR: PMCF Plan & Report under EU MDR
Or explore Complete Guide to Clinical Evaluation under EU MDR