When equivalence breaks down: navigating mixed evidence strategies
You started with a full equivalence claim. Then you realized that two technical differences could not be bridged. Now you are stuck between abandoning equivalence entirely and pretending those differences don’t matter. Most manufacturers get trapped here and waste months trying to justify what cannot be justified.
There is a third option that most clinical evaluation teams overlook: a mixed evidence approach.
This is not about finding a workaround. It is about recognizing where equivalence holds and where it does not, then building a structured justification that addresses the gaps without pretending they don’t exist.
I see this situation regularly. A team claims equivalence to an established device. The technical comparison looks strong at first. Then, during the detailed analysis, two or three characteristics emerge that cannot be bridged through engineering arguments alone.
The reaction is often binary: either force the equivalence claim through wishful reasoning, or abandon it completely and start from scratch with clinical investigations.
Both paths lead to problems.
What partial equivalence actually means
Partial equivalence is not a defined term in the MDR. But the regulatory logic behind it is clear.
MDR Article 61(5) and Annex XIV Part A allow you to rely on clinical data from an equivalent device if you can demonstrate equivalence in technical, biological, and clinical characteristics. The regulation does not require absolute identity. It requires demonstration that differences do not adversely affect clinical performance or safety.
MDCG 2020-5 explains this further. Equivalence can be claimed when the differences between the devices are minor and do not adversely affect clinical performance or safety. When the differences are not minor, you cannot claim full equivalence.
But here is what the guidance does not explicitly say: when differences exist that prevent full equivalence, you can still use clinical data from a similar device as part of your overall evidence base.
You just cannot rely on it exclusively.
Partial equivalence is not a formal claim under MDR. It is a practical recognition that some characteristics align while others do not. This recognition then determines your evidence strategy.
The shift here is conceptual. You are no longer claiming equivalence as your primary justification. You are using comparative data as one element within a broader clinical evaluation that includes other evidence types.
When this situation arises in practice
I encounter this most often in three scenarios.
First, when a device shares the same intended purpose and most design features with the comparator device, but uses a different material or coating that affects tissue interaction. The manufacturer wants to leverage the comparator’s extensive clinical data, but the material difference introduces biocompatibility questions that cannot be fully addressed through bench testing alone.
Second, when a device adds a new feature or indication to an established platform. The core technology is proven, but the new use case or functionality creates clinical questions that existing data does not answer.
Third, when a device modifies a parameter that influences clinical outcomes indirectly. For example, changing the stiffness of a catheter or the diameter of a mesh. These changes may not seem major, but they can alter handling, deployment, or integration in ways that affect safety or performance.
In all three scenarios, the clinical data from the similar device remains valuable. It demonstrates safety and performance in a closely related context. But it does not fully cover the clinical questions raised by the differences.
Manufacturers often present these situations as if the differences are negligible, then provide a brief engineering justification and move on. Reviewers see this immediately. If the difference truly does not matter, the justification would be straightforward and supported by data. If the justification is vague or speculative, the difference matters.
Building a mixed evidence strategy
A mixed evidence approach starts with transparency.
You acknowledge the differences clearly in the technical comparison section of your clinical evaluation report. You explain why these differences prevent full equivalence under MDCG 2020-5 criteria.
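As a hypothetical illustration of what that acknowledgment can look like, the comparison section might state: “The subject device differs from the comparator in coating chemistry. Against the MDCG 2020-5 criteria, this difference cannot be considered minor, and full equivalence is therefore not claimed for the biological characteristics.” One or two sentences of that kind tell the reviewer the analysis is honest before any evidence is presented.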
Then you describe how you will address the clinical questions raised by those differences.
This is where most manufacturers lose clarity. They list multiple evidence sources without explaining how each source contributes to answering specific clinical questions.
The structure should be explicit. For each difference that breaks equivalence, identify the clinical question it raises. Then map the evidence that addresses that question.
For example, if the difference is a new material, the clinical question might be: does this material cause adverse tissue reactions or affect device performance in vivo? The evidence to address this could include biocompatibility testing per ISO 10993, bench testing that simulates physiological conditions, and animal studies if the risk profile justifies it.
If the difference is a modified design feature that affects deployment, the clinical question might be: does this feature change the learning curve, deployment success rate, or acute complication profile? The evidence could include human factors testing, simulated use studies, and early clinical data from a limited market release or feasibility study.
The clinical data from the similar device still plays a role. It establishes baseline safety and performance in the same anatomical site or clinical context. It demonstrates that the core technology is sound. But it does not, on its own, address the specific questions raised by the differences.
A mixed evidence strategy is not weaker than full equivalence. It is more realistic. It shows that you understand the limits of comparative data and have addressed the gaps systematically.
What reviewers look for in mixed strategies
When I review a clinical evaluation report that uses mixed evidence, I look for three things.
First, is the rationale clear? Does the manufacturer explain why they moved away from full equivalence, or does the report still try to claim equivalence while simultaneously presenting additional evidence? Inconsistency here raises immediate questions.
Second, is the evidence mapping logical? Does each piece of evidence actually address the clinical question it is supposed to answer, or is the connection assumed rather than demonstrated?
Third, is the evidence sufficient? This is where risk management comes in. The level of evidence required depends on the severity and probability of the risks introduced by the differences. A minor design modification with low risk consequences can be supported by bench testing and literature. A material change that affects long-term biocompatibility in a high-risk device requires clinical data.
Notified Bodies apply the same reasoning. They will accept a mixed strategy if it is well justified and proportionate to risk. But they will reject a strategy that looks like an attempt to avoid clinical investigation when clinical investigation is clearly needed.
Manufacturers often present mixed evidence without a clear hierarchy. All evidence sources are listed as if they carry equal weight. Reviewers need to see which evidence is primary for which clinical question, and which evidence is supportive or contextual.
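A hypothetical illustration of such a hierarchy: for a coating change on a vascular catheter, the biocompatibility question might list ISO 10993 testing and targeted PMCF data as primary evidence, with the comparator’s clinical literature as supportive context. For the handling question raised by the same change, simulated use data would be primary and the comparator data merely contextual. The same source can be primary for one question and background for another, and the report should say so explicitly.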
The role of post-market data in mixed strategies
One advantage of a mixed evidence approach is that it allows you to use post-market data more strategically.
If your device is already on the market under a previous regulatory pathway, you have real-world evidence of safety and performance. This evidence can directly address some of the clinical questions raised by differences from the comparator device.
For example, if the difference is a modified feature, and post-market surveillance shows no increase in related adverse events, that is strong evidence that the modification does not adversely affect safety.
But this only works if your post-market data collection is structured to answer specific clinical questions. Generic complaint logs and summary reports are not sufficient. You need targeted data on the outcomes affected by the differences.
This is where PMCF planning becomes critical. When you identify that your device cannot claim full equivalence, your PMCF plan should include specific objectives related to the clinical questions that remain open.
For devices entering the market, this means designing a PMCF study that collects data on the modified features or new indications from the start. For devices already on the market, it means conducting a gap analysis of existing post-market data and planning additional data collection if gaps remain.
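To make this concrete, a hypothetical PMCF objective for a modified delivery system might read: “Collect deployment success rates and acute procedural complications for the first consecutive series of cases, and compare them against the published rates for the comparator device.” An objective at that level of specificity can close an open clinical question. A generic commitment to monitor safety and performance cannot.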
Documentation and traceability
A mixed evidence strategy requires more documentation than a straightforward equivalence claim.
The clinical evaluation report must clearly outline the logic. This includes a section that explains why full equivalence could not be demonstrated, which clinical questions remain, and how the overall evidence package addresses those questions.
Each evidence source should be summarized with enough detail to show its relevance. Generic references to standards or literature are not enough. The report should explain what each test, study, or dataset contributes to the clinical safety and performance demonstration.
Traceability is critical. Reviewers will follow the chain of reasoning from the technical differences, to the clinical questions, to the evidence, to the conclusions. If any link is missing or unclear, the entire strategy weakens.
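A hypothetical example of what that chain looks like on paper: difference, a stiffer catheter shaft; clinical question, does the increased stiffness change vessel trauma or deployment failure rates; evidence, comparative bench trackability testing, a simulated use study, and targeted PMCF data; conclusion, residual risk acceptable, supported by the quantitative results. Every link in that chain should be stated in the report, not left for the reviewer to infer.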
This level of documentation takes time. But it is time well spent. A clear, well-structured clinical evaluation report reduces review cycles and deficiency letters.
The quality of your clinical evaluation report is not measured by how much evidence you present. It is measured by how clearly you connect the evidence to the clinical questions and demonstrate that all relevant questions are addressed.
When mixed strategies are not enough
There are situations where a mixed evidence approach is not sufficient.
If the differences from the comparator device introduce significant new risks or raise fundamental questions about clinical performance, you cannot rely on indirect evidence. You need direct clinical data from your own device.
This is the case when the modification affects a critical function, changes the mechanism of action, or involves a patient population with different risk profiles.
MDCG 2020-5 and MDCG 2020-6 make this clear. When differences are substantial, equivalence cannot be claimed, and clinical investigations are required.
The decision point is risk-based. If your risk analysis shows that the residual risks from the differences cannot be adequately controlled or evaluated without clinical data, then clinical investigation is not optional.
Trying to force a mixed strategy in these situations leads to prolonged review cycles, requests for additional data, and ultimately, the requirement to conduct clinical investigations anyway.
It is better to recognize early when clinical investigation is needed and plan accordingly.
Final thoughts
Partial equivalence is not a failure. It is a realistic assessment.
When your device shares significant similarities with an established device but has differences that matter clinically, a mixed evidence strategy allows you to leverage existing data while addressing the gaps systematically.
The key is clarity. Be transparent about the differences, explicit about the clinical questions they raise, and structured in how you address them.
Reviewers do not expect perfection. They expect logic, traceability, and evidence that is proportionate to risk.
If you build your clinical evaluation with that mindset, a mixed evidence approach becomes a strength, not a compromise.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or its intended purpose, and at the minimum frequency defined in your post-market surveillance plan, typically at least annually for class III and implantable devices and every two to five years for lower-risk classes.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data to support the claims made, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
References
– MDR 2017/745, Article 61(5) and Annex XIV Part A
– MDCG 2020-5, Clinical Evaluation – Equivalence
– MDCG 2020-6, Sufficient Clinical Evidence for Legacy Devices
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under MDR 2017/745.





