The Equivalence Assessment That Withstands Scrutiny
I’ve seen equivalence claims collapse during Notified Body review more often than any other part of the clinical evaluation. The manufacturer had data. They had documentation. They had a candidate equivalent device. But when the auditor asked one simple question, the entire clinical strategy fell apart.
The question was this: “How did you confirm that the clinical performance of your device can be predicted from this equivalent device?”
Silence.
Because the manufacturer had focused on demonstrating technical similarity. They had compared materials, design features, and intended purpose. They had checked all the boxes in their equivalence table. But they had never actually established that the clinical outcomes of one device could reliably predict the clinical outcomes of the other.
This is where most equivalence assessments fail. Not because the work wasn’t done, but because the logic wasn’t complete.
What MDR Actually Requires
Under MDR Article 61(5) and Annex XIV Part A Section 3, equivalence is permitted as a route to demonstrate conformity when certain conditions are met. MDCG 2020-5 provides the detailed framework.
The regulation is clear: equivalence means you can rely on clinical data from another device to support your own device’s clinical evaluation. But it’s conditional. The equivalent device must be similar enough that its clinical data accurately predicts your device’s clinical performance and safety profile.
Most manufacturers understand this conceptually. Where they struggle is in the execution.
They produce equivalence reports that list similarities. They show that Device A and Device B share the same intended purpose, similar design, and comparable materials. They conclude equivalence. They move on.
But reviewers don’t conclude equivalence from a list of similarities. They conclude it from a structured demonstration that clinical outcomes can be reliably transferred.
The equivalence report demonstrates technical similarity but never explains why these technical similarities justify the transfer of clinical conclusions. The logical bridge is missing.
The Three-Part Equivalence Structure
An equivalence assessment that withstands scrutiny follows a three-part structure. Each part builds on the previous one. Skip one, and the argument collapses.
Part 1: Technical Equivalence
This is the foundation. You compare the devices across technical characteristics that influence clinical performance.
The comparison must be systematic. I use a detailed table that covers design features, materials in contact with tissue or blood, operating principles, energy delivery mechanisms, software functions, mechanical properties, and any other characteristic that could affect how the device interacts with the body.
For each characteristic, you state whether the devices are identical, similar, or different. If similar or different, you explain why the difference is not clinically significant.
This is standard practice. Most manufacturers do this part adequately.
The problem is they stop here.
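For teams that keep the Part 1 comparison in a structured form rather than a static table, the logic described above can be sketched as a small script: each characteristic is classified as identical, similar, or different, and any similar or different entry without a documented justification is flagged as an open item. This is a minimal illustrative sketch; the data-model names are my own assumptions, not a prescribed or official format.

```python
from dataclasses import dataclass

@dataclass
class Characteristic:
    """One row of the technical equivalence table (illustrative model)."""
    name: str
    classification: str   # "identical", "similar", or "different"
    justification: str = ""  # why any difference is not clinically significant

def open_items(table):
    """Return characteristics still lacking a clinical-significance justification."""
    return [
        c for c in table
        if c.classification in ("similar", "different") and not c.justification
    ]

table = [
    Characteristic("Material in tissue contact", "identical"),
    Characteristic("Geometry", "similar"),  # no justification yet -> flagged
    Characteristic("Operating principle", "identical"),
]

for c in open_items(table):
    print(f"Justification missing: {c.name} ({c.classification})")
```

A check like this does not make the argument for you, but it makes it impossible to conclude equivalence while a difference sits in the table unaddressed.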
Part 2: Biological and Clinical Equivalence
This is where the logic must become explicit. Technical similarity does not automatically mean clinical equivalence.
You must demonstrate that the technical characteristics you compared are the ones that determine clinical outcomes. And you must show that the degree of similarity you established is sufficient to predict equivalent clinical performance.
I address this by linking each relevant technical characteristic to its biological or clinical consequence.
For example, if the devices use the same biomaterial in contact with tissue, I don’t just note the similarity. I explain that this material determines biocompatibility, tissue response, and long-term implant stability. I cite literature showing how material composition influences clinical outcomes for this device type. Then I conclude that identical material composition supports the expectation of equivalent biological performance.
If the devices have similar but not identical geometries, I explain how geometry affects mechanical loading, tissue contact area, or fluid dynamics. I justify why the geometric differences observed are within a range that does not alter clinical performance, referencing device-specific studies or biomechanical principles.
This part requires clinical reasoning. It’s not a checklist. It’s an argument.
Equivalence is not demonstrated by listing similarities. It’s demonstrated by explaining why those similarities matter clinically and why the differences do not.
Part 3: Clinical Data Applicability
The final part addresses the clinical data itself. You must show that the data from the equivalent device is relevant, sufficient, and applicable to your device.
Relevance means the data addresses the clinical outcomes and risks that matter for your device. If your device is used in a different patient population or clinical setting than the equivalent device, you must justify why the data still applies.
Sufficiency means the data is adequate in scope and quality. MDCG 2020-6 sets the standard for sufficient clinical evidence. The equivalent device’s data must meet the same standard your device would need to meet if you were generating your own data.
Applicability means you’ve accounted for any differences in use conditions, patient population, or clinical context. If the equivalent device was studied in younger patients and you’re targeting older patients, you explain why the outcomes are transferable or you generate supplementary data.
This part is often overlooked entirely. Manufacturers demonstrate equivalence but never assess whether the equivalent device’s clinical data is actually robust enough to support conformity.
I’ve reviewed equivalence claims where the claimed equivalent device had minimal published data, no long-term follow-up, and limited patient numbers. The manufacturer argued equivalence, but even if equivalence was valid, the data was insufficient. Equivalence doesn’t create data where none exists.
Where Equivalence Claims Break Down
Let me walk through the common failure points I see in real audits.
Failure Point 1: Comparing to a Legacy Device
The manufacturer selects an equivalent device that has been on the market for years, often under the MDD. They assume this demonstrates safety and performance.
But when you examine the available data, there’s almost nothing. A few case series. Maybe some registry data. No controlled studies. No systematic post-market surveillance.
The equivalence may be technically valid, but the data is insufficient. The manufacturer now needs to generate their own clinical evidence, which defeats the purpose of the equivalence route.
The lesson: evaluate the data before you commit to an equivalence strategy.
Failure Point 2: Ignoring Subtle but Critical Differences
The devices are similar in most respects, but there’s one difference the manufacturer dismisses as minor.
A slightly different coating. A modified release mechanism. A small change in geometry.
The manufacturer argues it’s not clinically significant. But they never actually demonstrate this. They just assert it.
Reviewers don’t accept assertions. They want evidence or reasoning. If the difference could plausibly affect clinical outcomes, you need to address it with data, literature, or rigorous justification.
In one case I reviewed, the manufacturer changed the fixation method of an implant from cemented to press-fit. They argued this was a minor design change and maintained equivalence to the cemented version.
But fixation method fundamentally alters load transfer, osseointegration, and failure modes. This was not a minor difference. It required separate clinical evaluation.
The equivalence claim was rejected.
The manufacturer minimizes a technical difference without analyzing its clinical impact. The reviewer sees it differently and questions the entire equivalence claim.
Failure Point 3: Circular Equivalence Chains
Device A claims equivalence to Device B. Device B’s clinical evaluation is based on equivalence to Device C. Device C references Device D.
The chain is long, and the original clinical data is thin. Each link in the chain introduces uncertainty. By the time you trace it back, the connection to actual clinical evidence is tenuous.
MDCG 2020-5 does not prohibit claiming equivalence to a device whose own clinical evaluation relied on equivalence, but the entire chain must be transparent and the foundational clinical data must be sufficient.
In practice, long equivalence chains raise red flags. Reviewers question whether the cumulative technical drift across the chain invalidates the clinical predictions.
My advice: keep equivalence direct. If you must use a chain, document every link clearly and ensure the foundational clinical data is robust.
How to Structure the Equivalence Argument
The strongest equivalence assessments I’ve seen follow a clear argumentative structure. They don’t just present information. They build a case.
Start with a clear statement of what you’re claiming. You are asserting that Device X is equivalent to Device Y for the purpose of relying on Device Y’s clinical data.
Then lay out your criteria for equivalence. What technical, biological, and clinical characteristics must be similar? Why are these the right characteristics?
Compare the devices systematically against these criteria. Present the data in tables, but don’t let the tables do the talking. Interpret the findings. Explain what they mean.
Address differences explicitly. Don’t hide them. Show that you’ve considered them and explain why they don’t break equivalence.
Assess the clinical data from the equivalent device. Is it sufficient? Is it applicable? Does it cover the right outcomes, populations, and durations?
Conclude with a clear statement: based on the analysis, the clinical data from Device Y can be relied upon to demonstrate the safety and performance of Device X.
This structure works because it mirrors how reviewers evaluate equivalence. They don’t read the report looking for similarities. They read it looking for gaps in the logic.
An equivalence assessment is a logical argument, not a documentation exercise. Structure it like you’re presenting a case, not filling out a form.
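The six-step structure above can also be expressed as a simple completeness check over a draft report. The step names here are shorthand labels I’ve assigned for illustration, not official MDCG terminology, and the sketch assumes a report is represented as a plain list of section labels.

```python
# The six steps of the argument structure described above, in order.
ARGUMENT_STEPS = [
    "claim statement",          # what you are asserting, and about which devices
    "equivalence criteria",     # which characteristics must be similar, and why
    "systematic comparison",    # tables plus interpretation, not tables alone
    "differences addressed",    # every difference considered explicitly
    "clinical data assessment", # sufficiency and applicability of the data
    "conclusion",               # clear statement that the data can be relied upon
]

def missing_steps(report_sections):
    """Return the required steps not covered by the report, preserving order."""
    covered = {s.lower() for s in report_sections}
    return [step for step in ARGUMENT_STEPS if step not in covered]

draft = ["claim statement", "systematic comparison", "conclusion"]
print(missing_steps(draft))
# -> ['equivalence criteria', 'differences addressed', 'clinical data assessment']
```

The point of the ordering is the same as in the prose: a reviewer reads for gaps in the logic, so the report should make each step, and its completion, explicit.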
The Role of Testing in Equivalence
Testing often provides the missing link in equivalence arguments.
When technical differences exist but you believe they’re not clinically significant, testing can demonstrate this. Bench testing, biocompatibility testing, and performance testing under simulated use conditions provide objective evidence that the devices behave equivalently.
I use testing strategically. If there’s a geometric difference, I run mechanical testing to show that the devices have equivalent strength, fatigue resistance, or flexibility. If there’s a material difference, I run biocompatibility or degradation testing to show equivalent biological response.
Testing doesn’t replace clinical data, but it strengthens the equivalence argument by reducing uncertainty about how technical differences translate clinically.
The key is to design tests that address the specific concern raised by the difference. Generic testing protocols add little value. Targeted tests that answer the clinical question are what reviewers find convincing.
When Equivalence Is Not the Right Path
Not every device should pursue equivalence. Sometimes generating your own clinical data is the clearer, faster path to conformity.
Equivalence works when the devices are genuinely similar and the equivalent device has strong clinical data. It fails when you’re stretching the argument to cover meaningful differences or when the equivalent device’s data is limited.
I’ve seen manufacturers spend months building an equivalence case, only to have it rejected because a key difference couldn’t be justified. They would have been better off running a clinical study from the start.
Before committing to equivalence, ask two questions:
First, is there a truly equivalent device with robust clinical data? If the answer is no, don’t force it.
Second, can we convincingly explain why clinical outcomes are transferable? If the argument requires too many assumptions or justifications, it’s probably weak.
Equivalence is a tool, not a shortcut. Use it when it’s the right tool.
What Happens After Equivalence Is Established
Establishing equivalence doesn’t end your clinical evaluation responsibilities. You still need to appraise the equivalent device’s data, analyze it for applicability to your device and its intended use, and generate your own post-market data.
The equivalence assessment becomes part of your clinical evaluation report, but the CER must still address all the clinical safety and performance questions for your device. The equivalent device’s data is your starting point, not your finish line.
And you must monitor your own device post-market. Even if equivalence is valid at launch, real-world use may reveal differences that weren’t apparent during development. Your PMCF must be designed to detect these differences early.
Equivalence is a claim you make at one point in time based on available evidence. It’s subject to ongoing verification.
If post-market data from your device shows outcomes that diverge from the equivalent device, your equivalence claim may no longer hold. You must be prepared to revisit it.
Final Considerations
Equivalence is one of the most scrutinized parts of any clinical evaluation. Notified Bodies know it’s where manufacturers try to minimize the clinical data burden. They look for weak links.
The equivalence assessments that survive scrutiny are the ones built on clear logic, supported by objective evidence, and honest about limitations.
They don’t oversell the similarities. They don’t dismiss the differences. They explain why the clinical data transfer is valid and they acknowledge where uncertainty remains.
This approach takes more effort upfront. But it’s the only approach that holds up under review.
If you’re building an equivalence case, don’t ask whether you can claim equivalence. Ask whether a skeptical reviewer, who has no interest in approving your device, would find your argument convincing.
That’s the standard you’re being held to.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– MDR 2017/745, Article 61(5) and Annex XIV Part A, Section 3
– MDCG 2020-5 Rev. 1, Clinical Evaluation – Equivalence
– MDCG 2020-13, Clinical Evaluation Assessment Report Template
The equivalence route can significantly reduce costs compared to a clinical investigation. See our full guide to CE marking costs for medical devices.
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under MDR 2017/745.