Benefit-Risk: Why Your Analysis Looks Good But Fails Review
I once reviewed a clinical evaluation where the benefit-risk analysis spanned fifteen pages, cited twenty studies, and included detailed tables. The Notified Body rejected it in the first round. The reason? It looked comprehensive but answered none of the questions that actually matter for market access.
In This Article
- The Regulatory Foundation Nobody Questions
- What Reviewers Actually Check First
- The Logic Gap That Stops Approval
- Why Comparisons Matter More Than You Think
- The Weight You Assign Must Be Justified
- When Subgroups Change Everything
- The Post-Market Connection You Cannot Skip
- How to Structure Your Analysis So Reviewers Follow Your Logic
- What Happens When Your Analysis Is Weak
- Final Reflection
This happens more often than it should. Manufacturers invest significant effort documenting benefits and risks, but the analysis fails because it does not address what reviewers need to see. The disconnect is rarely about missing data. It is about missing reasoning.
Understanding what Notified Bodies expect from a benefit-risk analysis requires looking beyond templates and checklists. It requires understanding how they read, what they question, and where they stop believing your conclusions.
The Regulatory Foundation Nobody Questions
MDR Annex XIV Section 1 states that clinical evaluation must establish a favourable benefit-risk profile. MDR Article 61(1) reinforces this, requiring that devices present an acceptable benefit-risk ratio when used under normal conditions of use.
This sounds clear. But in practice, manufacturers often confuse documentation with demonstration. They list benefits. They list risks. They place them side by side and conclude the balance is favourable.
Notified Bodies do not accept conclusions by assertion. They expect you to show how you reached that conclusion, using what evidence, applying which criteria, and considering which alternatives.
A benefit-risk analysis is not a comparison table. It is a reasoning process that must be transparent, methodologically sound, and clinically defensible at every step.
What Reviewers Actually Check First
When a Notified Body reviews your benefit-risk analysis, they do not start by reading your conclusions. They start by checking whether your analysis is grounded in the right clinical context.
First question: Does the analysis reflect the intended patient population and conditions of use as stated in your IFU?
If your device is indicated for moderate to severe cases but your benefit-risk analysis includes studies on mild cases, the analysis is not aligned. If your device is intended for routine use by trained professionals but you cite only data collected under the close supervision of a clinical trial, the analysis does not match real-world use.
Second question: Are the identified benefits clinically meaningful?
Statistically significant improvements do not automatically translate to clinical benefit. A device that reduces procedure time by thirty seconds may show statistical significance, but if that reduction does not impact patient outcomes, recovery, or clinical workflow, the benefit is weak.
Third question: Are the identified risks comprehensive and current?
This is where many analyses break down. Manufacturers list risks from their risk management file but fail to integrate risks emerging from clinical data, post-market surveillance, or the updated state of the art (SOTA). If your benefit-risk analysis does not reflect risks identified in recent publications or competitor incidents, reviewers will question whether your analysis is current.
Benefit-risk analyses often reference the risk management file but do not integrate clinical evidence of actual occurrence rates, severity patterns, or contributing factors from real-world data.
The Logic Gap That Stops Approval
The most frequent deficiency I see is not missing data. It is missing logic between data and conclusion.
A manufacturer presents a table showing ten benefits and five risks. The conclusion states that benefits outweigh risks. But there is no explanation of how this weighing was done.
Which benefits matter most to the target population? Which risks are acceptable given the clinical need? What happens if a severe risk occurs in a vulnerable subgroup? What alternatives exist, and how does your device compare?
These are not rhetorical questions. Notified Bodies expect answers, supported by evidence and reasoning.
If your device reduces infection risk by fifteen percent but increases procedure complexity, you must explain why that trade-off is acceptable. You must reference clinical guidelines, patient preferences, or health economic data that support this conclusion.
If your device has a rare but serious complication, you must explain why the overall benefit justifies this risk. You must compare it to existing treatments, show mitigation measures, and provide data on how often this complication actually occurs in your post-market experience.
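A small worked example makes this kind of trade-off concrete. The numbers below are entirely hypothetical, chosen only to show how a relative risk reduction translates into the absolute terms (absolute risk reduction, number needed to treat) that reviewers can actually weigh:

```python
# Illustrative only: hypothetical numbers, not data from any real device.
# Converting a relative reduction into absolute terms is one way to show
# whether a claimed benefit is clinically meaningful.

def absolute_risk_reduction(baseline_rate: float, relative_reduction: float) -> float:
    """Absolute risk reduction (ARR) given a baseline event rate and a relative reduction."""
    return baseline_rate * relative_reduction

def number_needed_to_treat(arr: float) -> float:
    """Patients who must be treated with the device to avoid one event."""
    return 1.0 / arr

# Hypothetical: 8% baseline infection rate, 15% relative reduction.
arr = absolute_risk_reduction(0.08, 0.15)   # 0.012, i.e. 1.2 percentage points
nnt = number_needed_to_treat(arr)           # roughly 83 patients per infection avoided
print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")
```

An absolute framing like this is far easier to defend in front of a reviewer than a bare relative percentage, because it forces the baseline rate, and therefore the clinical context, into the open.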
This is not about writing more. It is about reasoning more clearly.
Why Comparisons Matter More Than You Think
One question always comes up during Notified Body reviews: compared to what?
A benefit-risk analysis in isolation tells only half the story. Patients and clinicians do not choose devices in a vacuum. They compare options.
If your device offers a certain benefit but carries a higher complication rate than existing alternatives, that matters. If your device reduces one risk but introduces another that existing options do not have, that changes the balance.
MDCG 2020-6 on sufficient clinical evidence emphasizes the need to consider the current state of the art. This includes understanding what alternatives are available, how your device performs relative to them, and whether your benefit-risk profile is at least equivalent to theirs, if not better.
Many manufacturers avoid this comparison, fearing it will weaken their case. The opposite is true. A transparent comparison strengthens credibility. It shows you understand the clinical landscape and can defend your device’s position within it.
Notified Bodies expect you to position your device within the treatment landscape. A benefit-risk analysis that does not address alternatives is incomplete, regardless of how much data you present.
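One pragmatic way to make the "compared to what?" question answerable is to tabulate event rates side by side and flag every dimension where your device underperforms an alternative. The sketch below uses invented rates for a hypothetical device and two hypothetical comparators; each flagged dimension is a point your analysis must justify explicitly:

```python
# Illustrative sketch with invented numbers: positioning a device against
# alternatives, risk dimension by risk dimension.

alternatives = {
    # option: {risk_dimension: event rate}
    "our_device":       {"infection": 0.012, "reintervention": 0.030},
    "standard_of_care": {"infection": 0.020, "reintervention": 0.025},
    "older_device":     {"infection": 0.025, "reintervention": 0.040},
}

def risks_worse_than_any_alternative(options: dict, device: str) -> list:
    """Risk dimensions where the device underperforms at least one alternative."""
    worse = []
    for risk, rate in options[device].items():
        if any(rate > other[risk] for name, other in options.items() if name != device):
            worse.append(risk)
    return worse

# Every flagged dimension needs an explicit justification in the analysis.
print(risks_worse_than_any_alternative(alternatives, "our_device"))  # ['reintervention']
```

The comparison itself is trivial; the discipline of performing it, and then writing a justification for each flagged dimension, is what reviewers are looking for.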
The Weight You Assign Must Be Justified
Not all benefits are equal. Not all risks carry the same consequence.
A device that improves cosmetic outcomes and a device that prevents mortality both deliver benefits, but the clinical significance is vastly different. A device that causes temporary discomfort and a device that risks permanent injury both carry risks, but the severity is not comparable.
Notified Bodies expect you to weigh benefits and risks according to their clinical significance, not just their frequency. This means you must apply clinical judgment, supported by evidence.
If your device has a low-frequency but high-severity risk, you cannot dismiss it simply because it is rare. You must explain what makes this risk acceptable given the benefit, what mitigations are in place, and how patients and clinicians are informed.
If your primary benefit is marginal improvement over existing treatments, you must acknowledge this and explain why it still justifies market access. Perhaps it offers that improvement in a specific subgroup. Perhaps it reduces cost or simplifies workflow in a meaningful way. But the reasoning must be explicit.
This is where clinical expertise matters most. A well-reasoned benefit-risk analysis reflects deep understanding of the clinical need, patient priorities, and treatment context.
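Even a simple frequency-times-severity calculation makes the weighing explicit instead of implicit. The rates and severity weights below are invented purely for illustration and are not drawn from any standard; the point is that whatever weighting you use must be stated and defended:

```python
# Illustrative only: invented rates and judgment-based severity weights.
# Making frequency x severity explicit shows why a rare, severe risk
# cannot be dismissed on frequency alone.

risks = [
    # (name, frequency per patient, severity weight: 1 = transient, 10 = permanent harm)
    ("temporary discomfort",   0.150, 1),
    ("permanent nerve injury", 0.002, 10),
]

def expected_harm(frequency: float, severity: float) -> float:
    """Severity-weighted expected harm per patient."""
    return frequency * severity

for name, freq, sev in risks:
    print(f"{name}: expected harm = {expected_harm(freq, sev):.3f}")
# With these invented weights the frequent minor risk still dominates (0.150 vs 0.020),
# but a higher severity weight would invert the ranking. Either way, the weighting
# is now visible and open to challenge, which is exactly what reviewers expect.
```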
When Subgroups Change Everything
Here is a scenario that appears in many Notified Body reviews: Your overall benefit-risk analysis is favourable, but one subgroup shows a different profile.
Perhaps elderly patients experience higher complication rates. Perhaps pediatric use shows limited benefit. Perhaps patients with comorbidities face disproportionate risks.
If your clinical data reveals subgroup differences, your benefit-risk analysis must address them. You cannot present an overall conclusion and ignore divergent patterns in specific populations.
This does not mean you must exclude subgroups from your indication. It means you must analyze whether the benefit-risk profile remains acceptable in those populations, provide evidence for that conclusion, and ensure labeling appropriately informs users.
Notified Bodies will ask: Did you stratify your analysis by subgroup? If yes, what did you find? If no, why not? These questions are predictable. Your analysis should answer them before they are asked.
Benefit-risk analyses often present pooled data without addressing whether specific subgroups show different safety or performance profiles. This triggers requests for stratified analysis and delays approval.
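The pattern is easy to demonstrate with invented counts. In the sketch below, the pooled complication rate looks acceptable while one subgroup diverges sharply, which is exactly what stratification is meant to surface:

```python
# Illustrative sketch with invented counts: a pooled rate can hide a
# subgroup whose profile diverges from the overall picture.

subgroups = {
    # subgroup: (complications, patients)
    "adults_under_65": (6, 400),
    "elderly_65_plus": (9, 100),
}

def rate(events: int, n: int) -> float:
    return events / n

pooled_events = sum(e for e, n in subgroups.values())
pooled_n = sum(n for e, n in subgroups.values())

print(f"pooled: {rate(pooled_events, pooled_n):.1%}")  # 3.0%
for name, (e, n) in subgroups.items():
    print(f"{name}: {rate(e, n):.1%}")                 # 1.5% vs 9.0%
# A 3% pooled rate hides a 9% rate in the elderly subgroup. Presenting only
# the pooled figure invites a request for stratified analysis.
```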
The Post-Market Connection You Cannot Skip
A benefit-risk analysis is not a one-time document. It evolves as new data emerges.
Your initial analysis is based on pre-market clinical data. But once your device reaches the market, real-world evidence accumulates. Complaint trends emerge. Incident reports are filed. PMCF data reveals patterns you did not see in controlled studies.
Notified Bodies expect your benefit-risk analysis to reflect this evolution. During surveillance audits, they will check whether your clinical evaluation updates integrate new safety signals, whether updated risk analyses feed back into your benefit-risk assessment, and whether you have re-evaluated your conclusions.
If a new risk emerges post-market and your benefit-risk analysis has not been updated, you face two problems. First, regulatory non-compliance. Second, a weakened defence if an incident escalates to a serious investigation.
This is not theoretical. I have reviewed files where manufacturers continued to reference benefit-risk analyses from three years prior, while their complaint database showed evolving risk patterns that were never integrated. The disconnect was obvious to the auditor. It cost the manufacturer months of remediation.
How to Structure Your Analysis So Reviewers Follow Your Logic
Structure matters because it guides the reviewer through your reasoning.
Start with the clinical context. Define the patient population, the clinical need, and the intended use. Explain what problem your device solves and for whom.
Present the benefits. Use clinical evidence to support each claimed benefit. Quantify the benefit where possible. Explain its clinical significance.
Present the risks. Include all identified risks from your risk management file, clinical data, and post-market sources. Do not downplay or omit risks hoping they will not be noticed.
Explain the weighing process. Show how you evaluated the relative importance of each benefit and risk. Reference clinical guidelines, patient preferences, health economic data, or expert opinion where relevant.
Address alternatives. Compare your device to existing treatments. Explain where your device fits in the treatment pathway and why its benefit-risk profile is acceptable relative to other options.
Analyze subgroups if relevant. Stratify your analysis for populations that may experience different profiles.
State your conclusion clearly. Explain why the benefit-risk profile is favourable and acceptable given the evidence, the clinical need, and the available alternatives.
This structure is not a template to copy. It is a logical flow that makes your reasoning transparent and defensible.
Reviewers follow your logic only if you make it visible. Every step from data to conclusion must be traceable. Every weighing decision must be justified. Every assumption must be stated.
What Happens When Your Analysis Is Weak
A weak benefit-risk analysis does not just delay approval. It exposes your entire clinical evaluation to deeper scrutiny.
When a Notified Body questions your benefit-risk conclusion, they start questioning the data behind it. They re-examine your appraisal of studies. They challenge your equivalence claims. They request additional clinical investigations.
What begins as a deficiency on benefit-risk reasoning escalates into a broader credibility issue. The reviewer starts doubting whether your clinical evaluation team has the expertise to make sound judgments.
I have seen this cascade multiple times. A manufacturer submits a clinical evaluation with a superficial benefit-risk analysis. The Notified Body raises questions. The manufacturer responds with clarifications but no new reasoning. The Notified Body escalates to a formal deficiency. The manufacturer is forced to repeat large sections of the clinical evaluation, often requiring external expertise they should have engaged from the start.
This is avoidable. The effort required to build a solid benefit-risk analysis upfront is far less than the effort required to remediate a rejected submission.
Final Reflection
Notified Bodies do not expect perfection. They expect honesty, rigor, and clear reasoning.
Your device may have limitations. Your data may have gaps. Your benefit-risk balance may be narrow in certain populations. None of this automatically disqualifies your device. What disqualifies it is a benefit-risk analysis that hides these realities, avoids difficult questions, or presents conclusions without evidence.
The manufacturers who navigate benefit-risk reviews successfully are those who engage with the complexity. They do not simplify the analysis to make it look favourable. They analyze it thoroughly, acknowledge uncertainties, and provide reasoned judgments supported by evidence.
That is what Notified Bodies really expect. Not perfection. Not certainty. But transparency, logic, and clinical credibility.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
– MDR 2017/745 Annex XIV Section 1
– MDR 2017/745 Article 61(1)
– MDCG 2020-6, Sufficient clinical evidence for legacy devices under Regulation (EU) 2017/745