What to do when your literature search finds conflicting data
You ran a systematic literature search. Searched MEDLINE and Embase according to your documented protocol. Extracted 40 relevant papers. Then you saw it: two high-quality studies showing opposite safety outcomes for the same indication. Your clinical evaluation report deadline is in two weeks. Your instinct is to ignore one and cite the favorable paper. That instinct will fail your review.
In This Article
- Why contradictory evidence appears in clinical literature
- What MDR requires when evidence conflicts
- The three-step method for handling conflicting data
- How Notified Bodies evaluate conflicting evidence in your CER
- When contradictory evidence signals a deeper problem
- Practical documentation in the clinical evaluation report
- What happens if you ignore conflicting data
- Final considerations for regulatory strategy
Contradictory evidence is not a flaw in your search. It is a reality of clinical science. Different study designs, patient populations, follow-up durations, and outcome definitions produce different results, and sometimes those differences are genuine.
The problem is not the contradiction. The problem is what you do when you find it.
Most manufacturers treat conflicting literature like a formatting error. They either cite the favorable study and quietly drop the unfavorable one, or they mention both in a table but fail to analyze the contradiction. Then the Notified Body asks: “Why did you ignore Study B?” or “How do you justify selecting Study A over Study C when both are RCTs?”
There is no good answer to that question if you did not address it upfront.
Why contradictory evidence appears in clinical literature
Clinical research is not a single truth waiting to be discovered. It is a layered process where methods, populations, and endpoints vary across studies.
A randomized controlled trial in a tertiary hospital with strict inclusion criteria will produce different results than a registry study with real-world patients. A six-month follow-up will show different complication rates than a five-year follow-up. A study defining “device failure” as explantation will report lower failure rates than one defining it as any revision.
These are not errors. These are methodological choices.
When you encounter contradictory findings, the first step is not to pick the best result. The first step is to understand why the contradiction exists.
Contradictory evidence does not mean the literature is unreliable. It means the clinical context is complex. Your role is to explain that complexity, not hide it.
What MDR requires when evidence conflicts
Regulation (EU) 2017/745 Article 61(1) requires that the clinical evaluation be based on clinical data providing sufficient clinical evidence of conformity with the relevant safety and performance requirements.
“Sufficient evidence” does not mean only favorable evidence. It means evidence that allows a reasoned judgment on the benefit-risk profile.
MDCG 2020-6 on sufficient clinical evidence clarifies that the appraisal of clinical data must consider the quality, relevance, and weight of each study. When studies conflict, the manufacturer must explain which data carries more weight for the intended use and clinical context of their device.
This is not optional. It is the regulatory expectation.
If your clinical evaluation report lists conflicting studies in a table but provides no analysis of why they conflict or which findings are more applicable, the report is incomplete. The Notified Body will ask for clarification. If the clarification is weak, you will receive a major non-conformity.
The three-step method for handling conflicting data
When your literature review identifies contradictory findings, you need a structured approach that demonstrates scientific reasoning and regulatory compliance.
Step 1: Document the contradiction transparently
Do not bury contradictory studies in supplementary appendices. Do not mention them only in passing. Present them clearly in your appraisal section.
State the nature of the conflict. For example: “Study X reported a 12-month infection rate of 2.3%, while Study Y reported 8.1% in a similar population.”
Transparency builds credibility. Reviewers expect conflicts in real-world data. What they do not accept is selective reporting.
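There is no mandated format for recording conflicts, but teams that manage their appraisal in a spreadsheet or script can keep each contradiction as a structured record that later carries its resolution. A minimal sketch in Python, where the study identifiers, sample sizes, and results are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ConflictRecord:
    """One documented contradiction between two appraised studies."""
    outcome: str                 # endpoint on which the studies disagree
    study_a: str                 # identifier of the first study
    result_a: str                # reported result for study A
    study_b: str                 # identifier of the second study
    result_b: str                # reported result for study B
    status: str = "unresolved"   # updated after the Step 2 and Step 3 analysis

# Hypothetical record mirroring the infection-rate example above
conflicts = [
    ConflictRecord(
        outcome="12-month infection rate",
        study_a="Study X (RCT, n=200)",
        result_a="2.3%",
        study_b="Study Y (registry, n=412)",
        result_b="8.1%",
    ),
]

for c in conflicts:
    print(f"{c.outcome}: {c.study_a} reports {c.result_a}; "
          f"{c.study_b} reports {c.result_b} [{c.status}]")
```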
Step 2: Analyze why the contradiction exists
This is where most clinical evaluation reports fail. They state the conflict but provide no explanation.
Ask these questions:
- Were the study populations comparable? A study enrolling high-risk diabetic patients will show different infection rates than one enrolling healthy adults.
- Were the devices truly equivalent? Two devices in the same regulatory class may have different coatings, materials, or design features that affect clinical outcomes.
- Were the follow-up durations the same? Early outcomes often differ from long-term outcomes. A study stopping at six months may miss late complications.
- Were the outcome definitions consistent? “Clinical success” defined as pain reduction below VAS 3 is not the same as “complete pain resolution.”
- Were the methodologies comparable? An RCT with blinded outcome assessment is not directly comparable to a retrospective chart review.
Your clinical evaluation must address these factors. You are not defending one study over another. You are explaining the source of the variation.
Manufacturers often write: “Results varied across studies.” This is observation, not analysis. Reviewers will reject it. You must explain why results varied and what that means for your device.
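One way to make this analysis systematic is to compare the conflicting studies attribute by attribute and treat every difference as a candidate explanation. A minimal sketch of that comparison, with hypothetical attribute values:

```python
# Key appraisal attributes for each conflicting study (hypothetical values)
study_x = {
    "population": "low-risk adults, strict inclusion criteria",
    "device": "identical design and material",
    "follow_up_months": 12,
    "endpoint_definition": "deep infection requiring revision",
    "design": "RCT with blinded outcome assessment",
}
study_y = {
    "population": "real-world cohort including high-risk diabetic patients",
    "device": "same class, different coating technology",
    "follow_up_months": 12,
    "endpoint_definition": "any infection, superficial or deep",
    "design": "retrospective registry review",
}

# Every attribute that differs is a candidate explanation for the conflict;
# attributes that match (here, follow-up duration) are ruled out.
candidate_explanations = {
    attr: (study_x[attr], study_y[attr])
    for attr in study_x
    if study_x[attr] != study_y[attr]
}

for attr, (x_val, y_val) in candidate_explanations.items():
    print(f"{attr}: Study X = {x_val!r} vs Study Y = {y_val!r}")
```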
Step 3: Weigh the evidence and justify your conclusion
After analyzing the contradiction, you must state which evidence carries more weight for your specific device and intended use.
This does not mean cherry-picking the favorable study. It means applying scientific reasoning to prioritize data that is most relevant and reliable.
For example: If Study X enrolled 200 patients with a device identical to yours in design and material, and Study Y enrolled 50 patients with a device that differs in coating technology, Study X is more relevant regardless of which shows better outcomes.
If Study A followed patients for five years and Study B stopped at one year, Study A provides more information on long-term safety even if Study B showed better short-term results.
If Study C is a well-designed RCT and Study D is a case series with unclear patient selection, Study C carries more methodological weight.
You must document this reasoning explicitly. The conclusion must be traceable to the appraisal. A reviewer reading your clinical evaluation report should be able to follow your logic step by step.
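Some teams formalize this weighing with a simple scoring rubric across relevance and quality criteria. The sketch below is purely illustrative: the criteria, the 0-to-2 scale, and the example scores are assumptions, not a prescribed MDR method, and no numeric score replaces the written rationale in the CER:

```python
# Illustrative appraisal rubric; the criteria and scale are assumptions
# for this sketch, not a prescribed method.
CRITERIA = (
    "device_relevance",        # same design, materials, coating?
    "population_relevance",    # matches the intended patient population?
    "follow_up_adequacy",      # long enough to capture late complications?
    "methodological_quality",  # design, blinding, patient selection
)

def appraisal_weight(scores: dict) -> int:
    """Sum per-criterion scores (0 = poor, 1 = partial, 2 = good)."""
    return sum(scores[c] for c in CRITERIA)

study_x = {"device_relevance": 2, "population_relevance": 2,
           "follow_up_adequacy": 1, "methodological_quality": 2}
study_y = {"device_relevance": 1, "population_relevance": 1,
           "follow_up_adequacy": 1, "methodological_quality": 1}

for name, scores in (("Study X", study_x), ("Study Y", study_y)):
    print(f"{name}: weight {appraisal_weight(scores)} of {2 * len(CRITERIA)}")
```

The value of such a rubric is traceability: a reviewer can see which criterion drove the weighting, and the narrative conclusion in the CER must still spell out the same reasoning in words.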
How Notified Bodies evaluate conflicting evidence in your CER
Notified Body reviewers do not expect perfect consensus in the literature. They expect manufacturers to demonstrate competence in interpreting imperfect data.
When they encounter conflicting evidence in your clinical evaluation, they assess three things:
Did you identify all relevant conflicting data, or did you selectively cite only favorable studies?
Did you analyze the reasons for the conflict using valid scientific criteria?
Did you justify your conclusions with transparent reasoning that links back to your device characteristics and intended use?
If any of these three elements is missing, the clinical evaluation report will be rejected or flagged for major revision.
I have reviewed reports where manufacturers cited five studies showing low complication rates and ignored two studies showing high complication rates. When asked why those two studies were excluded, the manufacturer stated they were “outliers.” That is not scientific reasoning. That is wishful thinking.
If a study is an outlier, you must explain why based on methodology, population, or device differences. You cannot declare it irrelevant because it does not fit your preferred narrative.
Notified Bodies do not penalize manufacturers for finding conflicting evidence. They penalize manufacturers for ignoring it, misrepresenting it, or failing to analyze it with scientific rigor.
When contradictory evidence signals a deeper problem
Sometimes conflicting data is not just methodological variation. It is a signal that the clinical performance of the device category is inconsistent or poorly understood.
If multiple high-quality studies show conflicting results despite similar populations and methods, that inconsistency itself is clinically significant.
It may indicate that outcomes are highly operator-dependent. It may suggest that patient selection criteria are poorly defined in clinical practice. It may reveal that device performance varies with subtle differences in technique or anatomical factors.
When this is the case, your clinical evaluation must acknowledge the uncertainty. You cannot claim that safety and performance are well-established if the evidence base shows significant variability.
This does not mean your device is unsafe. It means the benefit-risk profile requires careful post-market surveillance to monitor real-world outcomes and refine clinical understanding.
Your PMCF plan should reflect this. If literature shows conflicting complication rates, your PMCF should include specific data collection on those complications to determine whether your device follows the favorable or unfavorable trend.
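As an illustration, if the literature conflict concerns infection rates, the PMCF dataset can be specified down to the exact fields needed to compare your real-world rate against both literature estimates. A hypothetical sketch (field names, rates, and the comparison logic are all assumptions):

```python
# Hypothetical PMCF data point targeting the conflicting endpoint
pmcf_case = {
    "patient_id": "anonymized-001",
    "infection_observed": False,   # the endpoint under conflict
    "infection_depth": None,       # definition aligned with the appraised studies
    "follow_up_months": 12,        # matched to the literature follow-up window
}

def observed_rate(cases):
    """Crude proportion of PMCF cases showing the conflicting complication."""
    return sum(c["infection_observed"] for c in cases) / len(cases)

# Literature estimates carried over from the appraisal (hypothetical)
LITERATURE_RATES = {"Study X": 0.023, "Study Y": 0.081}

cases = [pmcf_case]  # in practice: the accumulated PMCF dataset
rate = observed_rate(cases)
closest = min(LITERATURE_RATES, key=lambda s: abs(LITERATURE_RATES[s] - rate))
print(f"Observed rate {rate:.1%}; closest literature estimate: {closest}")
```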
Practical documentation in the clinical evaluation report
When your literature appraisal section addresses conflicting evidence, structure it clearly:
- Present all relevant conflicting studies in a summary table with key characteristics: study design, population, device details, follow-up duration, and outcomes (see the sketch at the end of this section).
- In the narrative analysis, explain the factors contributing to the conflict. Reference specific differences in methodology, population, or endpoints.
- State which studies are most relevant to your device and why. Justify this with explicit reasoning tied to device characteristics and intended clinical use.
- If the conflict cannot be fully resolved, acknowledge the uncertainty and explain how your risk management and PMCF plan address it.
Do not hide contradictions in footnotes or appendices. Address them directly in the main appraisal text where your scientific reasoning is documented.
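For teams that assemble the CER from a literature database, the summary table mentioned above can be generated directly from the extraction records, which keeps the table and the narrative consistent. A minimal sketch with hypothetical entries:

```python
# Hypothetical extraction records for the conflicting studies
headers = ("Study", "Design", "Population", "Device", "Follow-up", "Outcome")
rows = [
    ("Study X", "RCT", "n=200, low-risk adults",
     "identical device", "12 mo", "infection 2.3%"),
    ("Study Y", "Registry", "n=412, incl. high-risk diabetics",
     "different coating", "12 mo", "infection 8.1%"),
]

# Render a plain-text table with columns padded to the widest cell
widths = [max(len(str(r[i])) for r in (headers, *rows)) for i in range(len(headers))]
for row in (headers, *rows):
    print(" | ".join(str(cell).ljust(w) for cell, w in zip(row, widths)))
```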
What happens if you ignore conflicting data
Selective citation is one of the most common deficiencies in clinical evaluation reports. It is also one of the easiest for Notified Bodies to identify.
Reviewers run the same literature searches you do. If they find relevant studies that contradict your conclusions and you did not address them, your report loses credibility immediately.
The consequence is not just a request for clarification. It is a question about the integrity of your entire clinical evaluation process. If you selectively cited literature, what else did you overlook or misrepresent?
That question is difficult to recover from. It shifts the review from technical assessment to trust verification. Every subsequent section of your report will be scrutinized with suspicion.
Avoiding this is simple: address all relevant evidence upfront, explain conflicts transparently, and justify conclusions with clear reasoning.
Manufacturers sometimes write: “Conflicting data were found but are not relevant to our device.” This is a red flag. If the data were truly irrelevant, explain why based on device design, population, or methodology. If you cannot explain it, the data are relevant and must be analyzed.
Final considerations for regulatory strategy
Contradictory evidence is not a barrier to regulatory approval. It is a normal feature of clinical research that requires competent interpretation.
Your clinical evaluation demonstrates competence not by finding perfect consensus in the literature, but by analyzing imperfect data with scientific rigor and transparent reasoning.
Notified Bodies expect manufacturers to work with real-world evidence, which includes variability, uncertainty, and contradiction. What they do not accept is manufacturers pretending that variability does not exist.
If your literature search identifies conflicting findings, treat it as an opportunity to demonstrate depth of analysis. Address the conflict transparently. Explain the methodological and clinical factors contributing to it. Weigh the evidence carefully and justify your conclusions with explicit reasoning.
That approach will pass regulatory review. Selective citation and superficial analysis will not.
Contradictory evidence does not weaken your clinical evaluation. Poor handling of contradictory evidence does.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data become available, after PMCF activities, when the device or its intended purpose changes, and at defined intervals as part of post-market surveillance (at least annually for class III and implantable devices).
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– Regulation (EU) 2017/745 (MDR), Article 61
– MDCG 2020-5 Clinical Evaluation – Equivalence: A Guide for Manufacturers and Notified Bodies
– MDCG 2020-6 Sufficient Clinical Evidence – A Guide for Manufacturers and Notified Bodies
Related Resources
Read our complete guide to SOTA analysis under EU MDR: State of the Art (SOTA) Analysis under EU MDR
Or explore Complete Guide to Clinical Evaluation under EU MDR