Clinical Benefit: Why Notified Bodies Reject Your Quantification

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner
I have seen clinical evaluation reports with 80 pages of technical data, full literature searches, and detailed safety analyses. They get rejected in the first round. The reason is always the same: clinical benefit is described, not quantified. Notified Bodies ask the same question every time: “How much benefit, for which patients, compared to what?”

Most manufacturers believe that demonstrating clinical benefit means showing the device works as intended. This belief leads to deficiencies that delay conformity assessment by months.

The device performs its function. It reduces pressure. It monitors glucose. It delivers drug X. Manufacturers describe these mechanisms in detail. They cite literature supporting the principle. They present verification and validation data.

Then the Notified Body asks: “Where is the clinical benefit quantification?”

The confusion is understandable. MDR Article 61 requires demonstration that clinical benefits outweigh clinical risks. MDCG 2020-6 clarifies that this demonstration must be based on clinical data and include quantitative analysis wherever possible.

But what does quantification actually mean in regulatory practice?

What Reviewers Actually Look For

When a Notified Body reviews clinical benefit, they follow a specific reasoning path. They want to see three elements connected with numerical evidence.

First, they look for the clinical outcome that matters to the patient. Not the device output. Not the technical performance. The health outcome.

A pressure relief device does not provide benefit by reducing interface pressure. It provides benefit by preventing pressure ulcers or reducing ulcer progression. The pressure reduction is a mechanism. The ulcer prevention is the benefit.

This distinction appears obvious until you read actual clinical evaluation reports. Most focus on demonstrating the mechanism works. They measure pressure reduction with precision. They cite studies showing pressure management principles. They validate sensor accuracy.

None of this quantifies clinical benefit.

Common Deficiency
Describing device performance as a proxy for clinical benefit. Reviewers will ask: “What is the actual health outcome improvement for the target population?”

Second, reviewers expect comparison to a reference. Clinical benefit is always relative. Better than what? Compared to which alternative?

MDCG 2020-6 is explicit about this. The benefit-risk determination requires comparison with the current state of the art. Not comparison with doing nothing. Not comparison with an outdated alternative. Comparison with what patients would receive today if your device did not exist.

This requirement creates immediate practical problems. Many manufacturers select a reference that makes their device look favorable rather than one that reflects actual clinical practice.

I have reviewed reports where a new monitoring device was compared to manual monitoring performed every 8 hours. The clinical reality was continuous electronic monitoring as standard care. The chosen reference was not the state of the art. The benefit quantification was not valid.

Third, reviewers want numerical evidence. How many patients? What magnitude of improvement? What confidence level?

This is where most clinical evaluation reports fail.

The Quantification Problem

Regulatory professionals understand that quantification means numbers. But which numbers satisfy a Notified Body?

The answer depends on the type of benefit and the available evidence. There is no single template. But there are patterns in what reviewers accept and what they reject.

For diagnostic devices, benefit quantification typically requires sensitivity, specificity, and predictive values compared to the clinical reference standard. Not just analytical performance. Clinical performance in the intended use population.

A glucose monitor might have excellent analytical accuracy. But the clinical benefit is determined by how often it detects hypoglycemia early enough for intervention, compared to current monitoring approaches, in the actual patient population.

The quantification must show: detection rate of clinically significant events, false alarm rate, time advantage for intervention, and impact on patient management decisions.
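To make that expectation concrete, here is a minimal sketch of the calculation, using entirely hypothetical 2x2 counts for a hypoglycemia detection scenario. None of these figures come from a real device; they only illustrate which numbers a reviewer expects to see derived from clinical performance data.

```python
# Illustrative diagnostic-performance quantification.
# The 2x2 counts below are hypothetical, not from any real submission.
tp, fn = 85, 15    # hypoglycemic events detected / missed by the device
fp, tn = 40, 860   # false alarms / correctly quiet monitoring periods

sensitivity = tp / (tp + fn)   # detection rate of clinically significant events
specificity = tn / (tn + fp)   # how often quiet periods stay quiet
ppv = tp / (tp + fp)           # chance that an alarm is a real event
npv = tn / (tn + fn)           # chance that silence means no event

print(f"Sensitivity {sensitivity:.1%}, Specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")
```

These four figures, computed in the intended use population and set against the same figures for the current monitoring approach, are what turns "the device detects hypoglycemia" into a benefit quantification.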

For therapeutic devices, benefit quantification usually requires patient-relevant outcomes. Not surrogate markers unless the link to clinical outcomes is established.

A wound dressing might show faster reduction in wound area. But wound area is a surrogate. The patient-relevant outcomes are complete healing rate, time to healing, pain reduction, infection prevention, and quality of life impact.

Reviewers will ask: what percentage of patients achieved complete healing? How much faster compared to standard dressings? What was the clinical significance threshold and was it reached?

Key Insight
Clinical benefit quantification requires three components: patient-relevant outcome, comparison to state of the art, and numerical evidence with statistical context. Missing any element leads to rejection.

The statistical context matters more than manufacturers realize. Presenting a mean difference without confidence intervals is insufficient. Showing statistical significance without clinical significance thresholds is insufficient.

Reviewers want to understand both the average effect and the variability. They want to know if the benefit is consistent across patient subgroups. They want to see how many patients actually benefited versus how many showed no difference or worsening.

This level of detail is rarely available from literature alone. It usually requires analysis of clinical investigation data or real-world evidence from post-market sources.

When Literature Does Not Provide Quantification

This is the situation most manufacturers face. Published studies demonstrate clinical benefit qualitatively but do not provide the specific quantification needed for the intended use and target population.

The literature shows the device type works. Studies report positive outcomes. But the exact magnitude of benefit for your specific device, in your specific indication, compared to the specific current alternative, is not directly reported.

What do Notified Bodies expect in this situation?

They expect transparency about the gap and a justified approach to bridge it. This is where equivalence claims, clinical investigations, and post-market clinical follow-up intersect with benefit quantification.

If you claim equivalence to a device with established benefit, you must demonstrate technical and clinical equivalence according to MDCG 2020-5. The benefit quantification of the equivalent device becomes your quantification, but only if equivalence is thoroughly justified.

Most equivalence claims fail because manufacturers focus on similarities and ignore differences. Reviewers focus on differences and whether those differences could affect clinical outcomes.

If equivalence cannot be established, you need your own clinical data. Either through clinical investigation or through structured post-market evidence collection.

But here is where planning often goes wrong. Manufacturers design studies to demonstrate safety and performance. They design endpoints that are easy to measure. They power studies for statistical significance on surrogate outcomes.

Then at submission, they realize they have data that shows the device works, but not data that quantifies benefit in regulatory terms.

Common Deficiency
Clinical investigation endpoints chosen for feasibility rather than regulatory relevance. The study is completed, but benefit quantification is still not possible. This cannot be fixed after data collection.

The endpoint selection must be driven by what constitutes clinical benefit for regulatory purposes. Patient-relevant outcomes. Comparison to state of the art. Sufficient sample size for meaningful quantification.

This planning must happen before the investigation starts. Once data is collected, the quantification you can provide is determined by what you measured.

The State of the Art Reference Challenge

Selecting the appropriate reference for benefit comparison is more difficult than most regulatory teams anticipate.

The state of the art is not necessarily the most expensive option. It is not the newest technology. It is what represents generally acknowledged current good practice in the relevant clinical field.

For some indications, this is clear. For others, clinical practice varies significantly across regions, settings, and patient subgroups. There may be multiple acceptable approaches.

Notified Bodies expect you to justify your reference choice. This justification must be based on clinical guidelines, standard practice patterns, and expert consensus. Not on competitive convenience.

I have seen reports where manufacturers compared their device to an older generation specifically because that comparison showed larger benefit. The current standard of care was actually a different device or approach. The comparison was not clinically relevant.

The reviewer immediately identified this issue. The question was direct: “Why did you not compare to current standard practice?”

When the reference is not clearly defined or when practice varies, you need to address this explicitly. Present the range of current approaches. Justify which one represents the most appropriate comparison for your device’s intended use.

If multiple comparisons are relevant, provide quantification for each. Show how benefit varies depending on what your device replaces.

This is not additional work. This is the only way to demonstrate that clinical benefits outweigh risks in the context of available alternatives, which is the regulatory requirement.

Integration With Risk Management

Clinical benefit quantification does not exist in isolation. It directly feeds into the benefit-risk determination required by Article 61.

The benefit magnitude must be sufficient to outweigh the residual risks identified in risk management. This is a comparative judgment, not an absolute one.

A device with significant residual risks needs strong, well-quantified clinical benefits to achieve favorable benefit-risk balance. Weak or uncertain benefit quantification makes the balance impossible to determine.

Reviewers look at this relationship carefully. They compare the probability and severity of harms from the risk analysis with the probability and magnitude of benefits from the clinical evaluation.

If your risk analysis identifies a 2% probability of a serious complication, your clinical benefit quantification must show benefits that clearly outweigh this. Not just qualitatively. Numerically.

How many patients benefit? How much do they benefit? How does this compare to the 2% who experience harm?

These questions cannot be answered without proper quantification. Describing benefit as “improved outcomes” or “enhanced recovery” provides no basis for comparison with quantified risks.
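That comparison can be reduced to simple arithmetic. The sketch below uses the 2% harm figure from the text; the 15% absolute risk reduction on the benefit side is an assumed, hypothetical value for illustration only.

```python
# Illustrative benefit-risk arithmetic. Harm rate (2%) is the figure from
# the text; the benefit figure is a hypothetical assumption.
p_harm = 0.02       # probability of the serious complication (from risk analysis)
p_benefit = 0.15    # assumed absolute risk reduction for the key outcome

nnt = 1 / p_benefit   # patients treated per patient who benefits (NNT)
nnh = 1 / p_harm      # patients treated per patient harmed (NNH)

# A crude commensurable summary: patients who benefit per patient harmed.
ratio = nnh / nnt
print(f"NNT {nnt:.1f}, NNH {nnh:.0f} -> {ratio:.1f} patients benefit per harm")
```

A ratio like this does not settle the balance on its own, because the severity of the harm still has to be weighed against the magnitude of the benefit. But it gives the reviewer commensurable numbers to weigh, which a phrase like "improved outcomes" never can.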

Key Insight
Benefit-risk determination requires commensurable data. If risks are quantified with probabilities and severities, benefits must be quantified with response rates and effect magnitudes. Qualitative benefit descriptions cannot balance quantitative risks.

This integration also affects PMCF planning. If benefit quantification from pre-market data is limited or uncertain, post-market data collection must specifically address these gaps.

The PMCF plan should identify what additional benefit quantification is needed and how it will be obtained. This is not generic outcomes monitoring. This is targeted data collection to complete the benefit quantification required for ongoing benefit-risk evaluation.

What Adequate Quantification Looks Like

After explaining what reviewers expect and why most submissions fall short, the practical question remains: what does adequate benefit quantification actually look like in a clinical evaluation report?

It starts with a clear statement of the patient-relevant outcome that constitutes benefit. Not device performance. Not mechanism of action. The actual health outcome that matters.

For a pressure relief device: “Clinical benefit is defined as prevention of hospital-acquired pressure ulcers in immobile patients, and reduction of existing pressure ulcer progression.”

Then comes the quantification from clinical data: “In the pivotal study of 200 high-risk patients followed for 30 days, 8% developed new pressure ulcers compared to 23% in the control group receiving standard hospital mattresses (p<0.001, NNT=6.7, 95% CI: 4.5-12.1).”

This provides: patient population, sample size, outcome measure, comparison reference, effect size, statistical significance, and clinical significance metric (number needed to treat).
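The NNT in that example follows directly from the two incidence rates. Below is a sketch of the arithmetic, with arm sizes assumed at 100 patients each (the example does not state the split); the confidence interval in the report would come from the actual trial data, so the simple Wald interval computed here will not reproduce it exactly.

```python
from math import sqrt

# Effect-size arithmetic behind an NNT claim. Arm sizes are assumed.
n_dev, n_ctl = 100, 100
p_dev, p_ctl = 0.08, 0.23   # ulcer incidence: device vs standard mattress

ard = p_ctl - p_dev   # absolute risk difference
nnt = 1 / ard         # number needed to treat

# Wald 95% CI for the risk difference; inverting it yields a CI for the NNT
se = sqrt(p_dev * (1 - p_dev) / n_dev + p_ctl * (1 - p_ctl) / n_ctl)
lo, hi = ard - 1.96 * se, ard + 1.96 * se
print(f"ARD {ard:.0%}, NNT {nnt:.1f}, NNT 95% CI {1/hi:.1f}-{1/lo:.1f}")
```

The point of showing the derivation is that every number in the benefit statement is traceable to raw counts. A reviewer can, and will, redo exactly this arithmetic.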

Then the clinical interpretation: “For every 7 high-risk patients using this device instead of standard hospital mattresses, one pressure ulcer is prevented during a 30-day period. This benefit is considered clinically significant given the patient impact of pressure ulcers and the resource implications of treatment.”

Finally, the comparison to risks: “The device-related adverse events included skin irritation in 3% of users, all mild and resolving without intervention. No serious adverse events were attributed to the device. The benefit-risk balance is clearly favorable.”

This structure provides what Notified Bodies need to assess the benefit claim. Specific outcome. Numerical evidence. Appropriate comparison. Clinical context. Benefit-risk integration.

Most clinical evaluation reports do not reach this level of specificity. They describe the device, review literature, list studies, and conclude that benefit is demonstrated.

That approach no longer satisfies regulatory scrutiny under MDR.

Moving Forward

Clinical benefit quantification is not an additional documentation requirement. It is the core of demonstrating conformity with MDR Article 61.

Without proper quantification, you cannot demonstrate that benefits outweigh risks. You cannot justify device approval. You cannot support continued market access.

The path forward requires changes in how clinical evaluation is planned and executed. Benefit quantification must drive endpoint selection in clinical investigations. It must guide literature search strategy. It must shape equivalence analysis and PMCF planning.

This work cannot be done effectively at the writing stage. When the clinical evaluation report is being drafted, the data available determines what quantification is possible. The opportunity to collect the right data is already past.

The integration must happen earlier. During product development. During clinical investigation design. During risk management activities.

Regulatory teams that wait until submission preparation to think about benefit quantification will find themselves with insufficient evidence and no way to complete it quickly.

Notified Bodies are consistent in their expectations. They want patient-relevant outcomes. They want comparison to state of the art. They want numerical evidence with statistical context.

These expectations are not hidden. They are explicit in MDCG 2020-6 and in Article 61 requirements. The issue is not understanding what is required. The issue is building the capability to deliver it.

This is the foundation for clinical evaluation under MDR. Without proper benefit quantification, the entire regulatory strategy becomes uncertain. Next, we will examine how post-market clinical follow-up must be designed to maintain and update this quantification throughout the device lifecycle.

Peace,
Hatem

Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61
– MDCG 2020-6: Regulation (EU) 2017/745: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC
– MDCG 2020-5: Clinical Evaluation – Equivalence: A guide for manufacturers and notified bodies

Deepen Your Knowledge

Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.