Negative Literature Doesn’t Go Away When You Ignore It

Hatem Rabeh

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I’ve reviewed CERs where manufacturers found unfavorable studies during their literature search, left them in the screening log, then silently excluded them without real justification. When the Notified Body asks why these studies weren’t discussed, the answer is usually vague. That’s not appraisal. That’s avoidance.

The instinct is understandable. You’ve spent months building a clinical case for your device. Then a study surfaces showing complications, poor outcomes, or design limitations for a similar device. The temptation is to minimize it, call it not relevant, or bury it in a footnote.

But reviewers see that pattern immediately. And it signals something worse than a weak device. It signals weak methodology.

What the MDR Actually Requires

MDR Annex XIV Part A Section 1 is explicit. The clinical evaluation must be based on a critical appraisal of all relevant clinical data. Not favorable data. Not supportive data. All relevant clinical data.

This includes data that contradicts your claims, questions your design choices, or raises safety concerns. The regulation doesn’t give you permission to curate a comfortable narrative.

MDCG 2020-5 reinforces this. The literature review must be systematic, transparent, and objective. If a study meets your inclusion criteria during the search protocol, you cannot exclude it later just because the results are inconvenient.

Key Insight
Relevance is determined by your search protocol, not by whether the findings support your device. If the study matches your PICO and meets quality thresholds, it’s relevant.

The problem is that many manufacturers misunderstand what “appraisal” means. They think it means deciding whether a study is good or bad for their case. It doesn’t. It means evaluating the quality, applicability, and weight of the evidence.

Negative literature is still evidence.

Why Negative Studies Appear in the First Place

When unfavorable data shows up, there’s usually a reason. And understanding that reason is more important than dismissing the study.

Sometimes the study involves an earlier generation of the technology. The failure mode identified may have been addressed in newer designs. That’s a legitimate discussion, but only if you demonstrate how your device differs and why the same issue doesn’t apply.

Sometimes the study reflects poor technique, inadequate training, or use outside indications. Again, that’s worth analyzing. But you need to show that these factors were the cause, not just assert it.

Sometimes the study is methodologically weak. Small sample size, no control group, retrospective design. But weak studies can still be relevant if they’re the only data available for a specific endpoint or population.

And sometimes the study is high quality, well-designed, and directly applicable. In that case, the findings matter. A lot.

Common Deficiency
Manufacturers often claim a study is “not applicable” without explaining why. A Notified Body will not accept geographic location, publication date, or author affiliation as valid reasons unless you explain the clinical significance of these factors.

How Reviewers Spot Avoidance

Notified Bodies don’t just read your CER. They often run their own literature searches. Not full systematic reviews, but targeted checks to see if the literature selection was reasonable.

When they find studies you didn’t include or didn’t discuss properly, they compare your exclusion reasoning to what the study actually says. If your justification is generic or contradicts the study content, it raises a flag.

They also look at how you handled the SOTA section. If competitors faced specific complications or design failures, and you don’t acknowledge them, the SOTA looks incomplete. That affects the benefit-risk analysis and the clinical claims.

Another pattern they notice is inconsistency. You cite a study to support one claim, but ignore its negative findings on another endpoint. That’s selective use of data. It undermines the credibility of the entire appraisal.

Reviewers are also trained to spot the difference between critical appraisal and dismissal. If every negative study is excluded for “low quality” or “not applicable,” but positive studies with similar limitations are included, the bias is obvious.
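One way to catch that asymmetry in your own screening log before a reviewer does is a simple consistency check. The sketch below is illustrative only: the field names (`id`, `direction`, `included`, `limitations`) are hypothetical and not taken from any MDCG template — adapt them to however your log is structured.

```python
# Hypothetical sketch: flag unfavorable studies excluded for a limitation
# that an included favorable study shares. Field names are illustrative.

def find_asymmetric_exclusions(log):
    """Return (study_id, shared_limitations) pairs where an excluded
    unfavorable study has a limitation also present in an included
    favorable study -- the bias pattern reviewers look for."""
    included_favorable_limits = set()
    for study in log:
        if study["included"] and study["direction"] == "favorable":
            included_favorable_limits.update(study["limitations"])

    flags = []
    for study in log:
        if not study["included"] and study["direction"] == "unfavorable":
            shared = set(study["limitations"]) & included_favorable_limits
            if shared:
                flags.append((study["id"], sorted(shared)))
    return flags

log = [
    {"id": "Smith 2021", "direction": "favorable", "included": True,
     "limitations": ["small sample", "retrospective"]},
    {"id": "Lee 2022", "direction": "unfavorable", "included": False,
     "limitations": ["small sample"]},
]
print(find_asymmetric_exclusions(log))  # [('Lee 2022', ['small sample'])]
```

If this check returns anything, either the exclusion needs a stronger, study-specific justification or the inclusion standard needs to be applied symmetrically.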

The Right Way to Address Negative Literature

The correct approach is not defensive. It’s analytical.

First, include the study in your appraisal table. Document its design, population, intervention, outcomes, and findings. Be accurate. Don’t soften the results.

Then assess its quality using a recognized tool: the CASP checklists, the Cochrane risk-of-bias tool, or the Newcastle-Ottawa Scale. Apply the same standard you use for favorable studies.

Next, evaluate applicability. Does the study population match your intended use? Does the device design match yours? Are the outcomes relevant to your clinical claims?

If the study is applicable and high quality, the negative findings must be integrated into your benefit-risk assessment. You cannot ignore them. You need to explain what they mean for your device, whether the risk is acceptable, and how it will be managed through labeling, training, or risk controls.

Key Insight
Addressing negative literature strengthens your CER rather than weakening it. It shows the Notified Body that your appraisal was thorough, objective, and complete. It also demonstrates that you understand the real risks and have a plan to manage them.

If the study is not applicable, explain why in specific clinical terms. “The device used a different material” is not enough. You need to explain why that material difference matters for the observed complication.

If the study is low quality, describe the methodological flaws clearly. “Small sample size” is not a complete justification. You need to explain how the small sample size affects the reliability of the findings, and whether the trend observed is still clinically relevant.
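The branching logic above — integrate, justify non-applicability, or document flaws — can be summarized as a small decision helper. This is my own shorthand for the steps described in this section, not MDR terminology, and the category labels are hypothetical.

```python
# Illustrative decision helper mirroring the appraisal steps above.
# The return strings are shorthand labels, not regulatory language.

def appraisal_action(applicable: bool, high_quality: bool) -> str:
    """Map the two appraisal judgments to the required CER action."""
    if applicable and high_quality:
        # Negative findings feed directly into the benefit-risk assessment.
        return "integrate findings into benefit-risk assessment"
    if not applicable:
        # "Different material" alone is not enough -- explain the clinical why.
        return "document specific clinical rationale for non-applicability"
    # Low quality: name the flaws and assess residual clinical relevance.
    return "describe methodological flaws and residual clinical relevance"

print(appraisal_action(True, True))
```

Note that every branch produces a documented action — none of them is "silently exclude."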

When Negative Data Changes Your Clinical Strategy

Sometimes, confronting negative literature forces a strategic decision.

You might need to adjust your indications for use. If a study shows poor outcomes in a specific population, you may need to exclude that population or add a contraindication.

You might need to revise your instructions for use. If complications are linked to technique or learning curve, you may need to add training requirements or procedural guidance.

You might need to strengthen your PMCF plan. If a risk signal appears in the literature but is not conclusive, your post-market surveillance should specifically monitor that risk in your device.

You might even need to redesign. If the literature reveals a fundamental design flaw in equivalent devices, and your device shares that design, the clinical evaluation may not be sufficient to support CE marking without design changes.

These are uncomfortable conclusions. But they’re also the right ones.

Common Deficiency
Manufacturers often conclude “no action needed” after identifying negative literature, without explaining why. If a serious adverse event appears in the literature, your CER must document how you evaluated whether that risk applies to your device and what risk mitigation is in place.

The Real Test of a Clinical Evaluation

A strong clinical evaluation is not one that only presents positive evidence. It’s one that presents all the evidence and demonstrates sound clinical judgment.

Notified Bodies know that no device is perfect. They know that every technology has limitations, risks, and trade-offs. What they assess is whether the manufacturer understands those trade-offs and has made a reasoned case that the benefits outweigh the risks.

If you hide negative literature, you lose the opportunity to make that case. You also lose credibility.

But if you address it transparently, analyze it rigorously, and integrate it into your benefit-risk conclusion, you demonstrate exactly what MDR Article 61 requires: a clinical evaluation based on sufficient clinical evidence, critically appraised, and documented with objectivity.

That’s what gets through the review.

Next time you see an unfavorable study in your search results, don’t ask whether you can exclude it. Ask what it tells you about your device, your claims, and your evidence gaps.

That’s how you build a defensible CER.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and otherwise at a frequency justified by the device's risk class — at least annually for class III and implantable devices.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

References:
– Regulation (EU) 2017/745 (MDR), Annex XIV Part A
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDR Article 61: Clinical Evaluation