When Risk Management and Clinical Evaluation Don’t Talk

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I see it in nearly every clinical evaluation report that gets a major deficiency: the risk management file and the clinical evaluation live in separate universes. One team owns ISO 14971, another owns the CER. They exchange documents once, check a box, and assume integration happened. It didn’t.

The MDR doesn’t treat risk management and clinical evaluation as separate workstreams that occasionally intersect. It expects them to inform each other continuously. Yet most manufacturers still organize them as independent deliverables, creating gaps that reviewers notice immediately.

Let me show you where the connection breaks down and how to fix it.

Where the Regulatory Expectation Sits

Article 61 of the MDR requires a clinical evaluation that addresses safety and performance. Annex I, Sections 2 to 4, require manufacturers to eliminate or reduce risks as far as possible. These aren’t parallel requirements. They’re interdependent.

The MDCG guidance on clinical evaluation makes this explicit: the clinical evaluation must evaluate residual risks and verify that they are acceptable when weighed against the benefits. You can’t evaluate what you haven’t characterized. You can’t verify acceptability without clinical data.

This means your risk management file feeds your clinical evaluation, and your clinical evaluation validates your risk management conclusions. When these two systems don’t communicate, you create a gap that no amount of documentation will hide.

Key Insight
Risk management identifies what needs clinical evidence. Clinical evaluation provides that evidence. The connection isn’t optional. It’s structural.

How the Connection Should Work in Practice

ISO 14971 requires you to identify hazards, estimate risks, evaluate them, and implement controls. For each residual risk, you need evidence that it’s acceptable in light of the intended benefits. Where does that evidence come from? Clinical data.

Your clinical evaluation report should directly reference the residual risks from your risk management file. For each significant residual risk, the CER must show that clinical data supports its acceptability. This isn’t about summarizing safety. It’s about tracing specific risks to specific evidence.

Most CERs I review fail here. They include a general safety section that discusses adverse events in the literature. But they don’t map those findings back to the specific residual risks identified in the risk file. The connection remains implicit, assumed, never demonstrated.

Reviewers don’t assume. They trace. If they can’t follow the line from identified risk to clinical evidence to benefit-risk conclusion, the integration is incomplete.

What This Looks Like in a Real CER

Take a surgical instrument with a residual risk of tissue damage due to sharp edges. Your risk file estimates this risk as medium severity, low probability after mitigation. Residual risk: acceptable if clinical benefit justifies it.

Your CER must now provide evidence. What does the clinical data say about tissue damage rates with this device or equivalent devices? What are the reported incidents? What are the clinical benefits that outweigh this risk?

If your CER doesn’t address this specific risk with specific evidence, the integration isn’t there. You might have a thousand pages of literature, but if none of it speaks to the residual risks in your file, the gap remains.

Common Deficiency
The CER includes general safety data but never explicitly addresses the residual risks documented in the risk management file. Reviewers see two disconnected documents.

The Reverse Flow: Clinical Evaluation Informs Risk Management

The connection doesn’t flow in one direction. Clinical data often reveals risks that weren’t fully anticipated during design. Post-market data, literature reviews, and clinical investigations surface adverse events, use errors, and long-term effects that need to feed back into your risk file.

Your PMCF plan should be designed to monitor the residual risks you identified. But it should also be open enough to detect new risks. When new clinical evidence emerges, your risk management file must be updated. This is required under ISO 14971 and reinforced by the MDR’s post-market surveillance requirements.

I see manufacturers treat the risk file as a static document, locked at the time of initial certification. The CER gets updated annually. Post-market data accumulates. But the risk file stays frozen. This creates a second disconnect.

Your clinical evaluation should trigger risk management updates. If post-market data shows an adverse event rate higher than estimated, that’s a risk management revision. If literature reveals a new hazard, that’s a risk management revision. The CER and the risk file need to stay synchronized.

How to Build This Into Your Process

Establish a formal mechanism where clinical evaluation findings trigger risk management review. This should be part of your PMCF procedures and your periodic safety update report (PSUR) process.

When you update your CER, include a section that explicitly identifies any new risks or changes in risk estimates based on clinical data. Route this to your risk management team. Don’t assume they’ll read the full CER and extract the relevant findings themselves.

When you update your risk file based on post-market or clinical data, reference the source in the CER. This creates traceability in both directions.

Key Insight
Integration isn’t about cross-referencing documents. It’s about building a feedback loop where clinical data continuously informs risk assessment and risk assessment drives clinical data collection.

What Reviewers Check

Notified Body reviewers and competent authorities have a clear checklist when they assess this integration. They look for three things.

First, they verify that the CER explicitly addresses the residual risks in the risk file. They’ll open the risk file, identify the top residual risks, and then search the CER for corresponding evidence. If they can’t find it, that’s a deficiency.

Second, they check whether the benefit-risk analysis in the CER is based on the actual risk estimates in the risk file. If your risk file says a complication occurs at 2% and your CER discusses it as rare without data, the two documents contradict each other.

Third, they assess whether post-market data has triggered appropriate risk management updates. If your PSUR shows adverse event trends but your risk file hasn’t been revised, they’ll question whether your post-market surveillance is effective.

These checks are straightforward. But most manufacturers don’t prepare for them because they treat the CER and risk file as separate deliverables owned by different teams.

How to Prepare for This Review

Before submission, conduct your own integration audit. Take your top five residual risks from the risk file. Search the CER for each one. Can you find explicit evidence that addresses it? Can you trace the benefit-risk conclusion back to those risks?

If not, revise. Add a dedicated section in your CER that lists the significant residual risks and addresses each one with clinical evidence. This makes the integration explicit and traceable.
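The audit above is simple enough to sketch as a script. The following is a minimal, illustrative example, not a real tool: the risk IDs, descriptions, and evidence entries are all hypothetical, and in practice the risk file and CER would be exported from your QMS rather than typed in as dictionaries. The point is the shape of the check: every residual risk must map to explicit CER evidence, and anything unmapped is a gap.

```python
# Minimal sketch of a pre-submission integration audit.
# All risk IDs, descriptions, and evidence entries are hypothetical.

# Residual risks exported from the risk management file (ISO 14971).
residual_risks = {
    "RR-01": "tissue damage due to sharp edges",
    "RR-02": "infection from reprocessing failure",
    "RR-03": "device fracture during use",
}

# CER sections that cite clinical evidence, keyed by the residual
# risk each one explicitly addresses.
cer_evidence = {
    "RR-01": "Literature review: tissue damage rate 0.4% across 3 studies",
    "RR-03": "PMCF registry: no fractures in 1,200 procedures",
}

def audit_integration(risks, evidence):
    """Return residual risks that have no explicit CER evidence."""
    return sorted(risk_id for risk_id in risks if risk_id not in evidence)

gaps = audit_integration(residual_risks, cer_evidence)
for risk_id in gaps:
    print(f"Gap: {risk_id} ({residual_risks[risk_id]}) has no CER evidence")
```

Running this flags RR-02: the risk file documents an infection risk, but no CER section traces clinical evidence back to it. That is exactly the trace a reviewer would attempt manually.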

Do the same in reverse. Review your clinical evaluation conclusions, especially safety findings. Have they triggered updates to the risk file? If not, document why or update the risk file.

Common Deficiency
The benefit-risk conclusion in the CER is generic and doesn’t reference the specific risks documented in the risk management file. Reviewers can’t verify that the analysis is grounded in actual risk estimates.

The Role of PMCF in Maintaining the Connection

Your PMCF plan is where the integration becomes operational. PMCF must be designed to confirm that residual risks remain acceptable and that no new risks have emerged. This connects risk management and clinical evaluation in real time.

When you design your PMCF plan, start with your risk file. What are the residual risks that need ongoing monitoring? What clinical data would confirm their acceptability? Those questions define your PMCF objectives.

Most PMCF plans I review are too general. They aim to collect safety and performance data without specifying what risks or performance claims they’re validating. This makes it impossible to use PMCF data to update the risk file or strengthen the CER.

Your PMCF report should explicitly state whether the data confirms the risk estimates in your risk file. If it doesn’t, that’s a signal for risk management review. If it does, that becomes evidence in your next CER update.

Building This Into Your PMCF Plan

In your PMCF plan, include a table that maps PMCF objectives to specific residual risks. For each risk, define what data you’ll collect, what threshold would trigger a risk reassessment, and how the data will feed into your CER.

In your PMCF report, include a section that evaluates each monitored risk against the collected data. State whether the risk estimate was confirmed, whether acceptability is maintained, and whether any revision is needed.
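The mapping table and threshold check described above can be sketched in a few lines. Again, this is an illustrative example under assumed data: the risk IDs, rates, and thresholds are invented for demonstration, and the reassessment thresholds would in practice come from your risk acceptability criteria.

```python
# Hypothetical sketch of the PMCF-to-risk mapping table with a
# threshold check. All rates and thresholds are illustrative.

pmcf_monitoring = [
    # (risk_id, metric, risk-file estimate, reassessment threshold)
    ("RR-01", "tissue damage rate", 0.004, 0.008),
    ("RR-03", "fracture rate", 0.001, 0.002),
]

# Observed rates from the latest PMCF data collection (illustrative).
observed = {"RR-01": 0.003, "RR-03": 0.005}

def evaluate_pmcf(monitoring, observed_rates):
    """Flag risks whose observed rate exceeds the reassessment threshold."""
    flagged = []
    for risk_id, metric, estimate, threshold in monitoring:
        rate = observed_rates.get(risk_id)
        if rate is not None and rate > threshold:
            flagged.append((risk_id, metric, rate, threshold))
    return flagged

for risk_id, metric, rate, threshold in evaluate_pmcf(pmcf_monitoring, observed):
    print(f"{risk_id}: observed {metric} {rate:.3%} exceeds "
          f"threshold {threshold:.3%} -> trigger risk management review")
```

Here RR-01 is confirmed (observed rate below threshold) and becomes evidence for the next CER update, while RR-03 exceeds its threshold and triggers a risk management review. That is the feedback loop in miniature.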

This turns PMCF from a general data collection exercise into a targeted risk validation process. It also makes the connection between risk management and clinical evaluation visible and traceable.

Why This Integration Matters More Under the MDR

The MDR has increased scrutiny on both risk management and clinical evaluation. Notified Bodies are under pressure to ensure that manufacturers truly understand and manage their device risks. Competent authorities are conducting unannounced audits and reviewing post-market data.

When risk management and clinical evaluation are disconnected, it signals to reviewers that the manufacturer’s quality system isn’t integrated. It suggests that compliance is document-driven, not process-driven.

Conversely, when the connection is clear, traceable, and maintained post-market, it demonstrates a mature quality system. It shows that risk management isn’t a one-time exercise and that clinical evaluation isn’t just a regulatory hurdle.

This distinction matters. It affects how reviewers approach your file, how much additional evidence they request, and how confident they are in your post-market surveillance.

Key Insight
The integration between risk management and clinical evaluation is a signal of organizational maturity. Reviewers use it to assess whether your quality system is functioning as intended.

Practical Steps to Close the Gap

If your risk management and clinical evaluation are currently disconnected, here’s how to fix it.

First, update your CER to include a dedicated section on residual risk evaluation. List the significant residual risks from your risk file. For each one, provide clinical evidence that supports its acceptability. Reference specific studies, post-market data, or clinical investigation results.

Second, establish a formal procedure for clinical evaluation findings to trigger risk management review. This should be part of your post-market surveillance and PMCF processes. Define who reviews clinical data for risk implications and how updates to the risk file are documented.

Third, revise your PMCF plan to explicitly map objectives to residual risks. Make it clear that PMCF is designed to validate risk acceptability, not just collect data.

Fourth, train your teams. Risk management engineers need to understand what clinical evidence looks like and how to interpret CER conclusions. Clinical affairs specialists need to understand how to read a risk file and identify what risks need clinical validation.

These steps take time. But they’re not optional under the MDR. The connection between risk management and clinical evaluation is a regulatory expectation, not a best practice.

When you build this integration into your processes, it becomes routine. The CER naturally addresses risks. The risk file naturally incorporates clinical findings. PMCF naturally monitors both. The system works as intended.

That’s what reviewers expect to see. And that’s what separates manufacturers who struggle with deficiencies from those who maintain smooth regulatory pathways.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. As a minimum, it is typically updated at least annually for class III and implantable devices, and every two to five years for lower-risk devices, aligned with the post-market surveillance cycle.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex I
– MDCG 2020-5: Clinical Evaluation – Equivalence (a guide for manufacturers and notified bodies)
– ISO 14971:2019 Medical devices — Application of risk management to medical devices