When Post-Market Data Contradicts Pre-Market Predictions

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

You submitted a clinical evaluation report predicting low complication rates based on literature. Six months post-market, your PMCF data shows something different. The Notified Body now questions your entire benefit-risk conclusion. This is not a theoretical problem. I see it in audits regularly, and it exposes a fundamental misunderstanding about how clinical evaluation works under MDR.

Most manufacturers treat the clinical evaluation report as a milestone. You finish it, submit it, get your certificate, and move forward. The assumption is that the work is done. The predictions have been made. The benefit-risk is established.

But MDR Article 61 does not describe clinical evaluation as a point in time. It describes it as a continuous process. And that continuity depends entirely on what happens when your post-market data arrives.

What almost no one prepares for is the moment when the data contradicts what you claimed before market entry.

The Problem Starts at Pre-Market Stage

Pre-market clinical evaluation is predictive. You use literature, equivalence data, clinical investigations, or some combination. You estimate performance. You estimate safety. You build a benefit-risk profile based on projections.

Those projections are necessarily uncertain. But manufacturers often write clinical evaluation reports as if the predictions are facts. The language is confident. The conclusions are definitive. The benefit-risk is presented as established.

Then the device enters the market.

PMCF starts collecting real-world data. And sometimes, that data does not align with what you predicted. The complication rate is higher. A particular patient group shows adverse outcomes you did not anticipate. A risk you labeled as low probability turns out to be more frequent.

The question then becomes: what do you do?

Key Insight
Most deficiencies I see in PMCF evaluations are not about the data itself. They are about how manufacturers respond when the data does not match their original claims. The instinct is to justify. The requirement is to reassess.

Why Contradictions Are Not Anomalies

Manufacturers often treat contradictory post-market data as noise. They look for explanations that protect the original conclusion. The patient population was different. The follow-up period was too short. The sample size was not representative. The complaint classification was incorrect.

Sometimes those explanations are valid. But more often, they are defensive.

The reality is that contradictions are expected. Pre-market predictions are based on incomplete information. You cannot predict every variable in real-world use. You cannot anticipate every clinical context. You cannot model every patient subgroup.

Post-market data is not there to confirm what you already said. It is there to test it.

Under MDR Annex XIV Part A, the clinical evaluation must be updated throughout the lifecycle. That update is not administrative. It is substantive. When post-market data contradicts your predictions, the clinical evaluation must reflect that contradiction, analyze it, and adjust the benefit-risk conclusion accordingly.

But this rarely happens smoothly. Because most manufacturers are not organizationally prepared to challenge their own pre-market conclusions.

What Happens in Practice

I review PMCF evaluation reports where contradictory data is buried in appendices. The main report still references the original benefit-risk conclusion. The language minimizes the discrepancy. The update is presented as minor.

Notified Bodies catch this immediately. Because they compare the PMCF data tables to the claims in the CER. When the numbers diverge, the first question is: did you update your benefit-risk?

If the answer is no, the deficiency is raised. And the deficiency is not just about the PMCF report. It triggers a reassessment of the entire clinical evaluation.

Because if your post-market data shows higher risks than you predicted, then your original benefit-risk balance may no longer hold. And if the benefit-risk no longer holds, the device's conformity with the general safety and performance requirements of MDR Annex I is in question.

Common Deficiency
Manufacturers update PMCF data without updating the benefit-risk conclusion. The new data is acknowledged but not integrated. This creates a disconnect between what the data shows and what the CER still claims.

The Organizational Challenge

The difficulty is not technical. The difficulty is organizational.

Clinical evaluation reports are written by one team. Post-market surveillance is managed by another. PMCF is often handled by yet another group. These teams do not always communicate systematically. And when contradictory data appears, the response is siloed.

The PMCF team sees the data first. They try to interpret it within their scope. They write a PMCF report. That report goes to regulatory affairs. Regulatory affairs looks at it and decides whether it triggers a CER update.

But regulatory affairs may not have the clinical background to assess whether the data truly contradicts the original predictions. And even if they do, they may not have the authority to challenge a CER that was already approved by the Notified Body.

So the data gets filed. The CER is not updated. And the contradiction persists until the next audit.

This is where deficiencies originate. Not from lack of data. From lack of integration.

What the Regulation Actually Requires

MDR Article 61(11) states that the clinical evaluation and its documentation must be actively updated with data from PMCF. This is not optional. It is not triggered by a major incident. It is continuous.

MDCG 2020-13, the clinical evaluation assessment report template that Notified Bodies work from, makes clear that updates are expected to be substantive. When new data changes the understanding of safety or performance, the benefit-risk must be reassessed. That reassessment must be documented. And if the reassessment changes the conclusion, the manufacturer must act.

Acting can mean several things. It can mean updating labeling to reflect new risk information. It can mean adjusting indications for use. It can mean implementing risk mitigation measures. In extreme cases, it can mean withdrawing the device or triggering a field safety corrective action.

But the first step is recognizing that the contradiction exists and that it matters.

Most manufacturers recognize it. But they do not treat it as if it matters. They treat it as a data point to manage, not as a signal to respond to.

How to Respond When Data Contradicts Predictions

The first response should not be justification. It should be analysis.

Ask: what does this data actually show? Not what can we say about it to minimize impact. What does it show?

If the complication rate is higher than predicted, quantify the difference. Compare it to the original literature. Compare it to the equivalent device if you claimed equivalence. Identify whether the difference is statistically significant or clinically meaningful.
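To make "quantify the difference" concrete, here is a minimal sketch of a one-sample proportion z-test comparing an observed post-market complication rate against the rate the CER predicted. The figures (a predicted 2% rate, 9 events in 250 PMCF patients) are hypothetical; a real analysis should be pre-specified in the PMCF plan, and an exact binomial test is the better choice when event counts are small.

```python
from math import sqrt, erf

def proportion_vs_predicted(events: int, n: int, p_predicted: float):
    """Two-sided z-test of an observed event rate against a predicted rate.

    Returns (observed_rate, z, p_value). Normal approximation: only
    reasonable when n * p_predicted and n * (1 - p_predicted) are both
    comfortably above 5; use an exact binomial test otherwise.
    """
    p_obs = events / n
    se = sqrt(p_predicted * (1 - p_predicted) / n)  # SE under the CER's prediction
    z = (p_obs - p_predicted) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_obs, z, p_value

# Hypothetical figures: CER predicted 2%; PMCF observed 9 events in 250 patients.
rate, z, p = proportion_vs_predicted(events=9, n=250, p_predicted=0.02)
print(f"observed {rate:.1%}, z = {z:.2f}, p = {p:.3f}")
```

Note what the sketch illustrates: the observed rate (3.6%) nearly doubles the prediction even though the p-value sits above 0.05. That is exactly the "statistically significant versus clinically meaningful" distinction. A benefit-risk reassessment cannot hinge on the p-value alone.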

Then ask: does this change the benefit-risk?

This is the question most manufacturers avoid. Because if the answer is yes, the implications are significant. But avoiding the question does not make it disappear. It just delays the moment when the Notified Body or a competent authority asks it for you.

If the benefit-risk changes, update the CER. Document the change transparently. Explain what the new data shows, how it differs from the original prediction, and what that means for the device’s safety and performance.

If the change is substantial, consider whether labeling needs to be updated. Consider whether risk management needs new controls. Consider whether the clinical investigation protocol needs adjustment if the study is still ongoing.

Key Insight
Notified Bodies do not penalize manufacturers for finding contradictions. They penalize manufacturers for ignoring them. Transparent reassessment builds credibility. Defensive justification destroys it.

The Role of the Clinical Evaluation Consultant

Many manufacturers outsource clinical evaluation. That is fine. But the consultant must be integrated into post-market surveillance processes. They must receive PMCF data systematically. They must be tasked with comparing that data to the original CER predictions.

I have reviewed projects where the consultant wrote the CER three years ago and has not been involved since. The PMCF data is now handled internally. No one is comparing the new data to the original claims because no one has both documents in front of them.

This is a structural failure. The clinical evaluation must be living. That means the evaluator must stay involved. Or at minimum, the internal team must be trained to perform the comparison themselves.

Most are not.

What Happens at the Notified Body Review

When a Notified Body reviews your PMCF evaluation report, they are not just checking whether you collected data. They are checking whether you used it.

They compare the adverse event rates in the PMCF report to the predicted rates in the CER. If the numbers differ, they expect an explanation. Not a vague acknowledgment. A documented analysis.

If that analysis is missing, the deficiency is raised. And the deficiency usually includes a request to update the CER and resubmit for review. This can delay renewals. It can delay certifications for line extensions. It can trigger additional audits.

All of this is avoidable if the manufacturer treats contradictions as part of the process, not as threats to the original conclusion.

The Deeper Issue: Confirmation Bias in Regulatory Work

The real challenge is psychological. Once a clinical evaluation report is approved, it becomes part of the company’s identity. It represents months of work. It survived scrutiny. It is the foundation of market access.

Challenging it feels risky. Updating it feels like admitting error. So manufacturers look for ways to preserve the original conclusion even when the data no longer supports it.

This is confirmation bias. And it is everywhere in regulatory work.

But under MDR, confirmation bias is not just a cognitive error. It is a compliance risk. Because the regulation requires you to follow the data, not your preference.

If the data says something different than what you predicted, you must say so. And you must act accordingly.

How to Build a System That Works

The solution is not more templates. It is process integration.

Link your PMCF data review directly to your CER update process. Make it automatic. When the PMCF report is finalized, it should trigger a CER comparison. That comparison should be documented. Any contradiction should be flagged and escalated to the clinical evaluator.

The clinical evaluator should then perform a formal reassessment. That reassessment should be documented in the CER update. If the benefit-risk changes, the update should state it clearly.

This should happen at least annually. More frequently if you have active PMCF or if you receive complaints or adverse events.

The system does not need to be complex. It needs to be deliberate.
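As an illustration of the trigger described above, the comparison step can be as simple as a script that runs when each PMCF report is finalized. Everything here is hypothetical (the field names, the 1.5x threshold, the escalation rule); the point is that the check is automatic and documented, not that these values are right for any particular device.

```python
from dataclasses import dataclass

@dataclass
class RiskComparison:
    risk: str
    cer_predicted_rate: float   # rate claimed in the CER
    pmcf_observed_rate: float   # rate measured by PMCF

    def flag(self, ratio_threshold: float = 1.5) -> bool:
        """Flag for escalation when observed exceeds predicted by the threshold."""
        if self.cer_predicted_rate == 0:
            return self.pmcf_observed_rate > 0  # any event contradicts a zero claim
        return self.pmcf_observed_rate / self.cer_predicted_rate >= ratio_threshold

def escalation_list(comparisons):
    """Return the risks that must go to the clinical evaluator for reassessment."""
    return [c.risk for c in comparisons if c.flag()]

# Hypothetical PMCF-vs-CER comparison table
table = [
    RiskComparison("infection", cer_predicted_rate=0.020, pmcf_observed_rate=0.036),
    RiskComparison("device migration", cer_predicted_rate=0.010, pmcf_observed_rate=0.009),
]
print(escalation_list(table))  # infection nearly doubled, so it escalates
```

The design choice matters more than the code: the output of the comparison is a documented escalation list handed to the clinical evaluator, which closes the gap between the PMCF team that sees the data and the team that owns the CER.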

Final Thought

Post-market data is not there to validate your pre-market predictions. It is there to test them. And when it contradicts those predictions, that contradiction is information. It is not failure. It is evidence.

The question is whether you treat it as evidence or whether you treat it as noise.

Under MDR, only one of those approaches is acceptable.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

References:
– MDR 2017/745 Article 61
– MDR 2017/745 Annex XIV Part A
– MDCG 2020-13 Clinical Evaluation Assessment Report Template

Related Resources

Read our complete guide to PMCF under EU MDR: PMCF Plan & Report under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR