Why Your Literature Scores Keep Getting Challenged by Reviewers

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

A Notified Body reviewer returns your Clinical Evaluation Report with one consistent observation: the literature appraisal lacks systematic justification of evidence quality. You scored studies as high-quality, but the reasoning is missing. The reviewer wants to know why you weighted certain studies more than others, and how evidence levels influenced your conclusions.

This happens more often than it should. Manufacturers appraise literature. They extract data. They summarize findings. But when the Notified Body reviews the CER, the question comes back: on what basis did you assign evidence weight?

The problem is not the effort. It is the structure. Clinical evaluation under the MDR requires transparent, methodical appraisal of evidence quality. This is not optional, and it is not about opinion. It is about demonstrating that your conclusions rest on solid ground.

When evidence levels are unclear or inconsistently applied, your entire clinical evaluation weakens. Reviewers see it immediately.

What the MDR Actually Requires

Article 61 of MDR 2017/745 mandates that clinical evaluation follow a defined and methodologically sound procedure. Annex XIV Part A requires the appraisal of clinical data to be systematic, objective, and justified.

MDCG 2020-6 on sufficient clinical evidence further reinforces that the evaluation must consider the quality and relevance of the data. You cannot treat all studies equally. A case report does not carry the same weight as a randomized controlled trial. This is not about preference. It is about evidence hierarchy.

But here is what I see in real submissions: manufacturers acknowledge the concept of evidence levels, then fail to apply it consistently. The appraisal becomes a checkbox exercise instead of a structured analysis.

Common Deficiency
Literature tables list study design, but no explicit evidence level is assigned. The appraisal narrative discusses findings without explaining why certain studies were given more weight. When reviewers ask how evidence levels influenced conclusions, the answer is not in the document.

Evidence Hierarchies Are Not Arbitrary

Evidence-based medicine uses hierarchies for a reason. Stronger study designs reduce bias, control confounding, and produce more reliable conclusions. Clinical evaluation for medical devices follows the same logic.

At the top of the hierarchy: systematic reviews of randomized controlled trials, then individual RCTs with adequate power and rigorous methodology. Below that: cohort studies, case-control studies, case series, case reports, expert opinion.

This is not about dismissing lower-level evidence. In many device categories, RCTs are rare or impractical. But that does not eliminate the need to classify evidence systematically. It means you must be transparent about what level of evidence you are relying on, and why it is appropriate for your device and intended purpose.

The issue is when manufacturers blend evidence levels without distinction. A single case report is cited alongside a multi-center cohort study, and both are treated as if they carry equal weight. Reviewers notice this. They question whether the conclusions are actually supported.

How Evidence Levels Should Shape Your Appraisal

Every piece of literature in your CER should be appraised for both relevance and quality. Relevance asks: does this study apply to my device, my population, my intended use? Quality asks: how reliable is this evidence?

Evidence level is a core component of quality. It reflects study design, risk of bias, sample size, follow-up duration, and methodological rigor. When you appraise a study, you are not just summarizing its findings. You are assessing whether those findings are trustworthy enough to support your clinical evaluation.

This means your appraisal should explicitly state the evidence level for each study. Use a recognized framework. Many manufacturers apply the Oxford Centre for Evidence-Based Medicine levels, or adapt GRADE methodology. The specific framework matters less than consistency and transparency.
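
To show what consistency can look like in practice, here is a minimal sketch of a pre-defined mapping from study design to evidence level. The five-level scale is a simplified illustration loosely modeled on the OCEBM hierarchy, not the official table; whichever framework you actually adopt should be named and cited in your clinical evaluation plan.

```python
# Illustrative only: a simplified design-to-level mapping loosely modeled
# on the OCEBM hierarchy. Define the real framework in the clinical
# evaluation plan so every appraiser assigns levels the same way.
EVIDENCE_LEVELS = {
    "systematic review of rcts": 1,
    "randomized controlled trial": 2,
    "prospective cohort study": 3,
    "case-control study": 4,
    "case series": 4,
    "case report": 4,
    "expert opinion": 5,
}

def assign_level(study_design: str) -> int:
    """Look up the pre-defined level; fail loudly for unplanned designs
    so the gap is resolved in the plan, not by ad hoc judgment."""
    key = study_design.strip().lower()
    if key not in EVIDENCE_LEVELS:
        raise ValueError(f"'{study_design}' is not covered by the framework")
    return EVIDENCE_LEVELS[key]

print(assign_level("Case series"))  # -> 4
```

Failing loudly on a design the plan does not cover is deliberate: it forces the gap to be closed in the plan rather than papered over during appraisal.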

Key Insight
Assigning evidence levels is not about labeling studies. It is about making your reasoning visible. When a reviewer sees that you systematically classified evidence and weighted it accordingly, they understand that your conclusions are methodologically grounded.

What Happens When Evidence Levels Are Missing

Reviewers read clinical evaluations critically. They look for logical consistency. If your document claims strong clinical evidence but relies heavily on case reports and expert opinion, the gap is obvious.

When evidence levels are not clearly assigned, reviewers cannot verify your reasoning. They cannot see whether your conclusions are proportional to the strength of the data. This creates doubt, and doubt leads to additional questions, clarifications, and delays.

I have seen CERs returned because the literature appraisal lacked a structured quality assessment. The manufacturer had extracted data, summarized outcomes, and written a clinical evaluation narrative. But when asked how they weighted evidence, there was no clear answer.

The Notified Body did not reject the literature itself. They rejected the appraisal process. The studies may have been appropriate, but the evaluation did not demonstrate systematic quality assessment.

How to Appraise Evidence Levels in Practice

Start by selecting a recognized evidence hierarchy framework. Document this in your clinical evaluation plan. The framework should align with the type of clinical data you expect to find for your device category.

When you appraise each study, assign an evidence level based on study design and methodological quality. Document this in your literature table. Do not leave it implicit. A column for evidence level should be visible, consistent, and justified.
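
As an illustration of "visible, consistent, and justified", each row of the literature table can be treated as a structured record in which the evidence level and its rationale are mandatory fields. This is a sketch only; the field names are my own, not terms prescribed by the MDR or MDCG guidance.

```python
from dataclasses import dataclass

@dataclass
class AppraisalRecord:
    """One row of the literature appraisal table (illustrative fields)."""
    citation: str         # full bibliographic reference
    study_design: str     # e.g. "case series"
    sample_size: int
    evidence_level: int   # assigned per the framework in the evaluation plan
    relevance: str        # device / population / intended-use match
    level_rationale: str  # why this level: design, bias, follow-up

# Placeholder entry for illustration only
row = AppraisalRecord(
    citation="Author et al., Journal, Year",
    study_design="case series",
    sample_size=42,
    evidence_level=4,
    relevance="same device, same intended use",
    level_rationale="single-center, retrospective, no control group",
)
```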

Then, in your appraisal narrative, explain how evidence levels influenced your conclusions. If you are relying on lower-level evidence, justify why. Explain what higher-level evidence exists or does not exist, and why the available data is still sufficient.

If you have a mix of evidence levels, describe how you weighted them. Did you prioritize RCTs over case series? Did you use case reports only to identify rare adverse events? Make the logic explicit.
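
One way to make that weighting auditable is to summarize the evidence base by level before writing the narrative, then give each tier an explicit role. A minimal sketch with placeholder data:

```python
from collections import Counter

# Placeholder (citation, evidence_level) pairs for illustration
studies = [
    ("Cohort study A", 3),
    ("Cohort study B", 3),
    ("Case series C", 4),
    ("Case report D", 4),
]

by_level = Counter(level for _, level in studies)
for level in sorted(by_level):
    print(f"Level {level}: {by_level[level]} studies")

# The narrative then assigns each tier an explicit role, for example:
# level 3 cohorts carry the performance conclusions, while level 4
# case reports are used only to screen for rare adverse events.
```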

Key Insight
The goal is traceability. A reviewer should be able to follow your reasoning from literature appraisal to clinical conclusions, and see that evidence quality was systematically considered at every step.

When Lower-Level Evidence Is All You Have

Not every device category has abundant high-level evidence. For some devices, case series and retrospective cohorts are the norm. This does not disqualify your clinical evaluation.

What matters is transparency. Acknowledge the evidence level. Explain why higher-level studies are absent. Justify why the available evidence is still appropriate for demonstrating safety and performance.

For example, if your device is used in a rare condition, large RCTs may not exist, and you may rely instead on case series from specialized centers. Document this. Explain that the evidence base is limited by the clinical context, not by an insufficient literature search.

Reviewers accept this reasoning when it is clearly stated. What they do not accept is treating low-level evidence as if it were high-level, or ignoring evidence quality altogether.

How This Connects to Clinical Conclusions

Evidence levels do not just influence literature appraisal. They shape the strength of your clinical conclusions. When you conclude that your device is safe and performs as intended, that conclusion must be proportional to the evidence supporting it.

If your conclusion is based on high-level evidence, such as multiple well-designed studies with consistent findings, you can state it with confidence. If your conclusion relies on lower-level evidence, your language should reflect that. You may conclude that the available evidence suggests safety and performance, while acknowledging gaps to be addressed through post-market clinical follow-up (PMCF).

This is not about weakening your conclusions. It is about aligning them with the evidence base. Reviewers respect measured, evidence-based statements. They question overconfident conclusions that are not supported by the data.

Common Deficiency
Manufacturers conclude “strong clinical evidence supports safety and performance” based entirely on case reports and small retrospective studies. The evidence may be appropriate, but the language overstates its strength. Reviewers flag this inconsistency immediately.

What Reviewers Look For

When a Notified Body reviews your literature appraisal, they look for structure, transparency, and consistency. They want to see that you applied a defined method, that you assessed quality systematically, and that your conclusions follow logically from the evidence.

They check whether evidence levels are assigned. They check whether the appraisal narrative explains how evidence quality influenced your evaluation. They check whether your clinical conclusions are proportional to the strength of the data.

If these elements are missing or inconsistent, they raise questions. Not because they doubt your competence, but because the evaluation does not meet the methodological standard required by the MDR.

How to Fix This Before Submission

Before submitting your CER, review your literature appraisal critically. Ask: can a reviewer see how I assessed evidence quality? Can they trace my reasoning from appraisal to conclusions?

Check your literature tables. Are evidence levels clearly assigned? Check your appraisal narrative. Does it explain how evidence quality influenced your evaluation? Check your clinical conclusions. Are they proportional to the strength of the data?
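
Alongside these manual checks, a simple script can flag the most common gap before a reviewer does: rows with no assigned level or no justification. A hypothetical sketch, assuming the literature table can be exported as a list of dictionaries with fields like those above:

```python
def check_appraisal(rows: list[dict]) -> list[str]:
    """Flag table rows missing an evidence level or its justification.
    Field names are illustrative; adapt them to your own table."""
    findings = []
    for i, row in enumerate(rows, start=1):
        if row.get("evidence_level") is None:
            findings.append(f"Row {i}: no evidence level assigned")
        if not row.get("level_rationale"):
            findings.append(f"Row {i}: evidence level not justified")
    return findings

# Placeholder rows for illustration
rows = [
    {"citation": "Cohort study A", "evidence_level": 3,
     "level_rationale": "prospective, controlled, 24-month follow-up"},
    {"citation": "Case report D", "evidence_level": None,
     "level_rationale": ""},
]
for finding in check_appraisal(rows):
    print(finding)
```

The point is not automation for its own sake; it is that a check this mechanical is only possible when every row carries an explicit level and rationale.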

If the answer to any of these questions is unclear, revise. Add the missing elements. Make your reasoning explicit.

This is not about adding length. It is about adding structure. A few clear statements about evidence levels can prevent multiple rounds of clarifications later.

Key Insight
The effort to appraise evidence levels systematically is not extra work. It is the foundation of a defensible clinical evaluation. When evidence quality is transparent, your conclusions become stronger, not weaker.

Final Thought

Appraising literature quality is not about following a formula. It is about making your reasoning visible. When you assign evidence levels systematically, weight studies accordingly, and align your conclusions with the strength of the data, your clinical evaluation becomes more credible and more defensible.

Reviewers do not expect perfection. They expect rigor. They expect transparency. They expect that you can justify why your conclusions are supported by the evidence you cited.

If your literature appraisal lacks explicit evidence levels, the entire evaluation weakens. Not because the studies are wrong, but because the reasoning is hidden.

Make it visible. Make it systematic. Make it defensible.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. Beyond that, update intervals should be defined and justified in the clinical evaluation plan; for higher-risk or less-established devices this is typically at least annual, aligned with post-market surveillance reporting.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data to support claims, a poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDCG 2020-6: Sufficient Clinical Evidence for Legacy Devices
– MDCG 2020-13: Clinical Evaluation Assessment Report Template