When clinical evaluation and risk management drift apart
You update the clinical evaluation report after a new literature review. Six weeks later, risk management gets revised based on field data. The two documents now contradict each other on the same hazard. The Notified Body notices. Your submission stalls.
This is not rare. It happens in almost every technical file I review.
Clinical evaluation and risk management are treated as parallel tracks that occasionally cross paths. In reality, they must move together at every step. When they drift apart, even slightly, the entire regulatory structure weakens.
The MDR does not allow disconnected documentation. Article 61(1) requires that the technical file demonstrate conformity through a coherent set of documents. Clinical evaluation and risk management are not independent exercises. They form a single argument about safety and performance.
But synchronization is hard. Different teams. Different timelines. Different triggers for updates. And often, different interpretations of the same data.
Why they drift apart in the first place
Clinical evaluation gets updated when new literature emerges, when equivalence needs revision, or when a PMCF report becomes available. These triggers come from clinical affairs teams or external consultants.
Risk management gets updated when complaints accumulate, when a field action occurs, or when post-market surveillance flags a trend. These triggers come from quality teams or vigilance officers.
The two streams operate on different schedules. They respond to different signals. They are managed by different people.
The problem is not that they update independently. The problem is that the updates do not automatically trigger corresponding reviews in the other document.
A clinical evaluation report concludes that infection risk is low based on published literature. Meanwhile, the risk management file shows three medium-severity infections in the last twelve months. Neither document references the other. The Notified Body asks which conclusion is correct.
Once the documents diverge, the technical file loses credibility. The reviewer cannot tell whether the manufacturer understands the device’s actual risk profile.
What synchronization actually means
Synchronization does not mean updating both documents at the same time. It means ensuring that every material change in one document prompts a formal review of the other.
When new clinical data changes your understanding of a residual risk, the risk management file must reflect that change. When a new hazard is identified through post-market data, the clinical evaluation must assess whether literature or clinical investigation supports that finding.
The linkage must be visible. Not implied. Not assumed. Documented.
This requires process, not good intentions. You need triggers. You need responsibilities. You need a clear rule: if Document A changes in a way that affects safety or performance, Document B gets reviewed within a defined timeframe.
Most companies lack this rule. They rely on people noticing. That only works until someone does not notice.
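The trigger rule described above is simple enough to express as logic rather than intention. Here is a minimal Python sketch; the document codes ("CER", "RMF") and impact categories are hypothetical placeholders, not a prescribed implementation:

```python
from dataclasses import dataclass, field

# Hypothetical impact categories that trigger a cross-document review.
REVIEW_TRIGGERS = {"safety", "performance", "benefit-risk"}

# Illustrative mapping: a material change in one document forces a
# formal review of the linked documents.
LINKED_DOCS = {
    "CER": ["RMF"],
    "RMF": ["CER"],
}

@dataclass
class Change:
    document: str                     # e.g. "CER" or "RMF"
    impact: set = field(default_factory=set)  # e.g. {"safety"}

def documents_to_review(change: Change) -> list:
    """Return the linked documents that require a formal, documented review."""
    if change.impact & REVIEW_TRIGGERS:
        return LINKED_DOCS.get(change.document, [])
    return []  # purely editorial change: no cross-review required

# A complaint-driven update to the risk management file that affects safety
# triggers a review of the clinical evaluation report.
print(documents_to_review(Change("RMF", {"safety"})))  # ['CER']
```

The point of the sketch is that no one decides ad hoc whether a review is needed: the rule decides.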
Where the conflicts emerge
The most common conflict appears in residual risk evaluation. The clinical evaluation report states that a hazard is rare based on literature. The risk management file classifies the same hazard as frequent based on complaint data.
Both statements may be technically correct. Literature reflects controlled studies. Complaints reflect real-world use. But if the two documents present contradictory frequencies without explanation, the Notified Body will question whether you understand your device.
Another conflict emerges in benefit-risk conclusions. The clinical evaluation report concludes that benefits outweigh risks. The risk management file shows several unresolved medium risks. If the clinical evaluation does not explicitly address those medium risks in its benefit-risk analysis, the conclusion appears unsupported.
These are not small gaps. They undermine the entire submission.
Synchronization is not about making the documents say the same thing. It is about making sure they explain each other. When they differ, the difference must be justified and cross-referenced.
The MDR expects this. Annex XIV requires the clinical evaluation plan to address residual risks and side-effects, and Annex I, Sections 1 and 8, require that risk management inform clinical evaluation and vice versa.
But guidance is not enough. You need operational synchronization.
What working synchronization looks like
In technical files that pass review without major issues, I see consistent patterns.
First, both documents reference each other explicitly. The clinical evaluation report includes a section on risk management outputs. The risk management file includes a section on clinical data that informed risk estimation.
Second, updates are logged in both documents even when only one changes materially. If the risk management file is updated, the clinical evaluation report gets a version update with a note: “Reviewed in response to risk management update dated [date]. No changes required.” Or: “Updated Section 7.2 to reflect new hazard identified in risk management update.”
This creates a documented linkage. It proves that the synchronization happened.
Third, the PMCF plan serves as the coordination point. It should list all residual risks that require ongoing monitoring. Those risks should match exactly between the clinical evaluation report and the risk management file. Any mismatch is a red flag.
Fourth, the person responsible for clinical evaluation has formal visibility into risk management changes. Not just access. Active notification. When the risk management file is updated, clinical affairs gets an automatic alert to review.
This is not complex. It is a simple workflow rule. But most companies do not have it.
The synchronization checkpoint
The most effective mechanism I have seen is a quarterly synchronization checkpoint. Every three months, the clinical evaluation owner and risk management owner sit together and compare key sections.
They compare the list of identified hazards. They compare residual risk ratings. They compare benefit-risk conclusions. They compare PMCF monitoring priorities.
If anything has changed in one document but not the other, they decide whether a formal update is needed. If no changes are needed, they document the review.
This checkpoint does not replace triggered updates. It catches what the triggers missed.
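As an illustration, the comparison step of that checkpoint can be sketched in a few lines of Python. The hazard names and ratings below are invented; in practice the snapshots would be extracted from the CER and the risk management file:

```python
# Hypothetical snapshots of the two documents' key sections:
# hazard -> residual risk rating.
cer_risks = {"infection": "low", "device fracture": "medium"}
rmf_risks = {"infection": "medium", "device fracture": "medium", "migration": "low"}

def checkpoint_report(cer: dict, rmf: dict) -> list:
    """Compare hazard lists and residual risk ratings; return discrepancies."""
    findings = []
    for hazard in sorted(set(cer) | set(rmf)):
        in_cer, in_rmf = cer.get(hazard), rmf.get(hazard)
        if in_cer is None:
            findings.append(f"{hazard}: in RMF but missing from CER")
        elif in_rmf is None:
            findings.append(f"{hazard}: in CER but missing from RMF")
        elif in_cer != in_rmf:
            findings.append(f"{hazard}: rating mismatch (CER={in_cer}, RMF={in_rmf})")
    return findings

for finding in checkpoint_report(cer_risks, rmf_risks):
    print(finding)
```

Each finding becomes an agenda item for the two owners: justify the difference, cross-reference it, or update one of the documents.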
It also builds institutional awareness. The two teams stop thinking of their documents as separate. They start thinking of them as two perspectives on the same reality.
Synchronization is not a document task. It is a communication task. If the clinical evaluation owner and risk management owner do not talk regularly, the documents will drift.
What happens when synchronization fails
The most visible consequence is Notified Body questions. But that is not the worst consequence.
The worst consequence is that your organization loses the ability to assess its own device accurately. If clinical evaluation and risk management present conflicting pictures, internal decision-making becomes unreliable.
You cannot decide whether a design change is necessary if you do not know whether the current risk profile is accurately captured. You cannot decide whether a PMCF study is needed if clinical evaluation and risk management disagree on which risks are uncertain.
Regulatory review exposes the problem. But the problem existed long before the Notified Body noticed.
The synchronization rule
Here is the rule I recommend to every regulatory team: If a document changes in a way that affects safety, performance, or benefit-risk conclusions, all related documents must be reviewed within 30 days.
This applies to the clinical evaluation report, the risk management file, the IFU, and the Summary of Safety and Clinical Performance (SSCP).
The review does not always result in a change. Sometimes the review confirms that no update is needed. But the review must happen. And it must be documented.
This rule works because it removes ambiguity. No one has to decide whether a change is “material enough” to trigger a review. If it affects safety or performance, it triggers a review.
The 30-day window allows time for coordination without letting documents drift for months.
And the documentation requirement ensures that the Notified Body can see that synchronization is part of your process, not an accident.
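For illustration, the 30-day window is nothing more than a date calculation that any document management system or spreadsheet can enforce. The dates below are examples only:

```python
from datetime import date, timedelta

REVIEW_WINDOW_DAYS = 30  # the 30-day rule described above

def review_deadline(change_date: date) -> date:
    """Deadline for reviewing all related documents after a material change."""
    return change_date + timedelta(days=REVIEW_WINDOW_DAYS)

def is_overdue(change_date: date, today: date) -> bool:
    """True if the cross-document review has not been documented in time."""
    return today > review_deadline(change_date)

# A risk management update on 1 March must have its CER review
# documented by 31 March.
print(review_deadline(date(2024, 3, 1)))  # 2024-03-31
```

Tracking the deadline mechanically, rather than by memory, is what keeps the rule auditable.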
Final observation
Synchronization is not a burden. It is a discipline that makes your technical file stronger and your internal decision-making more reliable.
When clinical evaluation and risk management move together, the story they tell is coherent. The Notified Body can follow the reasoning. The evidence supports the conclusions.
When they drift, even excellent individual documents fail to create a convincing case.
The technical file is not a collection of documents. It is a single argument, presented through multiple perspectives. Those perspectives must align.
That alignment does not happen by chance. It happens through process.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– MDR 2017/745, Article 61(1); Annex I, Sections 1 and 8
– MDCG 2020-5: Clinical Evaluation – Equivalence: A Guide for Manufacturers and Notified Bodies