Why Your Equivalence Table Gets Flagged Before Page Two
I see equivalence comparison tables rejected not because the device is non-equivalent, but because the documentation structure makes equivalence impossible to verify. The assessor stops reading after the first few rows, writes a deficiency, and moves on. The issue is not what you concluded. It is what you documented and how you documented it.
The equivalence demonstration is the foundation of most clinical evaluation routes under MDR. When you claim equivalence to another device, you shift the clinical evidence burden from generating new data to proving that existing data applies to your device.
But here is what happens in practice.
Most equivalence tables are built to satisfy the manufacturer, not the reviewer. They list technical characteristics, check boxes for similarity, and declare equivalence at the end. The structure looks complete. But when a Notified Body assessor or competent authority reviewer opens the file, they cannot reconstruct the reasoning. They cannot verify the claim independently.
So they reject it.
Not because your device is not equivalent. But because your table does not let them verify equivalence without making assumptions.
The Core Problem: Structure That Hides Logic
An equivalence table is not a data dump. It is a verification tool.
Every row should allow the reviewer to check one aspect of equivalence without needing to interpret, guess, or cross-reference. But most tables force the reviewer to work harder than necessary. Characteristics are listed without context. Differences are minimized without justification. The impact of variations is assumed rather than explained.
This creates friction. And friction in regulatory review does not slow things down—it stops them.
A typical failure: tables that list device characteristics in parallel columns without explaining why each parameter matters for clinical performance. The reviewer cannot assess whether a difference is critical or negligible.
The reviewer should not have to figure out if a difference in polymer composition affects biocompatibility risk. You should state it. The reviewer should not have to wonder if a dimensional variation influences the mechanism of action. You should clarify it in the table itself.
What MDR and MDCG Actually Require
MDR Annex XIV Part A defines equivalence in terms of technical, biological, and clinical characteristics, and equivalence must be demonstrated in all three. MDCG 2020-5 Rev. 1 further details the comparison process and emphasizes that differences must be assessed for their impact on clinical safety and performance.
This is not a checklist exercise.
You are not proving that two devices are identical. You are proving that differences do not affect the applicability of clinical data. This means every documented difference must be accompanied by a risk-based justification of why it does not change clinical outcomes.
But most tables skip this step.
They list differences. They acknowledge variations. But they do not explain why those variations are acceptable. The logical gap is left for the reviewer to fill. And reviewers do not fill gaps—they flag them.
The Documentation Framework That Works
An equivalence comparison table that passes review has a specific structure. It is not about adding more columns. It is about making the reasoning explicit at every level.
1. Define the Comparison Parameters First
Before comparing anything, define what you are comparing and why it matters.
Each technical characteristic should be linked to a clinical or safety-relevant function. Material composition matters because it affects biocompatibility. Dimensional tolerances matter because they influence mechanical performance. Surface finish matters because it impacts tissue integration or microbial adhesion.
State the connection upfront. Do not assume the reviewer knows why a parameter is relevant.
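For illustration, the opening of a parameter definition for a hypothetical vascular catheter might look like this (all details are invented for the example):

Parameter: Shaft polymer, grade and processing method
Why it matters: Direct blood contact; determines biocompatibility risk and the required scope of ISO 10993 testing.

Parameter: Outer diameter and wall thickness
Why it matters: Drive insertion force, kink resistance, and flow rate; deviations affect whether insertion-related clinical data transfers.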
2. Present Both Devices With Equal Detail
One common mistake: the subject device is described in full technical detail while the equivalent device is referenced by trade name and a few vague specifications.
This does not work.
The reviewer must be able to verify equivalence from the table itself. That means both devices need the same level of specification. If you list the polymer grade and processing method for your device, do the same for the equivalent device. If you describe coating thickness and adhesion strength, provide the same data for both sides.
Asymmetric detail suggests incomplete research. It signals that you do not fully understand the equivalent device, and if you do not fully understand it, you cannot claim equivalence. To a reviewer, imbalanced documentation is a red flag.
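The contrast is easy to see with invented specifications:

Too vague: "Polyurethane catheter, 7 Fr, hydrophilically coated."
Matched detail: "Aromatic polyurethane shaft (grade stated), 7 Fr (2.33 mm) outer diameter, 0.15 mm wall thickness, PVP hydrophilic coating, EO sterilized to SAL 10⁻⁶."

If you can only write the first line for the equivalent device, you are not ready to claim equivalence.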
3. Document Differences With Impact Assessment
Every difference must be followed by an analysis of clinical impact.
Not a general statement. Not a qualitative dismissal. A specific explanation tied to the intended use and clinical context.
For example:
- If the subject device has a slightly different catheter wall thickness, explain whether this affects insertion force, kink resistance, or flow rate, and whether those variations are within the range covered by the clinical data of the equivalent device.
- If the coating chemistry differs, explain whether this changes surface interaction with blood or tissue, and how this is reflected in biocompatibility testing that demonstrates equivalent biological response.
- If the sterility assurance level is achieved through a different method, justify why the end result, sterility at the required assurance level, is equivalent despite the process difference.
Each difference gets its own justification. No bundling. No generic claims. The reviewer should be able to verify your reasoning row by row.
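Put together, one row of a difference assessment might look like this sketch (all values and the report reference are hypothetical):

Characteristic: Catheter wall thickness
Subject device: 0.18 mm
Equivalent device: 0.15 mm
Difference: +0.03 mm
Impact assessment: Bench testing (hypothetical report TR-123) confirms insertion force and kink resistance remain within the performance range reported in the equivalent device's clinical data; flow rate is unaffected because lumen diameter is unchanged. Conclusion: no impact on clinical safety or performance.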
What Reviewers Actually Check
I have been on both sides of equivalence reviews. I have written equivalence tables and I have assessed them during technical file reviews and audits.
Here is what I check first:
Can I verify the claim without external documents?
If I need to hunt through other sections of the technical file to understand a single row, the table fails. A good table is self-contained.
Are differences acknowledged or hidden?
If I spot a difference that is not listed, I assume there are others. Transparency matters more than perfection.
Is the impact analysis device-specific?
Generic statements like “difference is not clinically significant” are insufficient. I want to know why it is not significant for this device, in this application, with this clinical data set.
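For contrast, two invented versions of the same justification:

Generic (fails): "The coating difference is not clinically significant."
Specific (passes): "The coating difference does not alter the extractables profile at clinically relevant exposure (hypothetical chemical characterization report CC-042); for the intended 30-day indwelling period, the toxicological risk assessment concludes no new or increased risk."

The second version can be verified. The first can only be believed.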
Does the clinical data from the equivalent device actually cover the subject device?
If the subject device is used in a different patient population, anatomical location, or duration of use, the clinical data may not transfer. The table must address this explicitly.
If any of these checks fail, I write a deficiency. Not because I think the device is unsafe. But because the documentation does not allow me to verify safety and performance.
The Role of Biological and Clinical Equivalence
Technical equivalence is only the first step.
Biological equivalence requires that materials, surface characteristics, and any substance released from the device do not create new or increased risks. This is demonstrated through biocompatibility testing aligned with ISO 10993.
But testing alone is not enough. The table must show that the testing scope covers all material and surface differences. If the subject device uses a different polymer stabilizer, for example, the biocompatibility testing must include that specific formulation.
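One way to make that scope visible in the table, with invented entries:

Material difference: Polymer stabilizer (Stabilizer A in the subject device, Stabilizer B in the equivalent device)
Testing coverage: Chemical characterization per ISO 10993-18 and toxicological risk assessment per ISO 10993-17, both performed on the final subject device formulation including Stabilizer A; cytotoxicity per ISO 10993-5 on the finished device.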
Clinical equivalence is the final layer. It requires that the clinical data from the equivalent device applies to the subject device without additional uncertainties. This is where most tables fall apart.
They list the clinical data. They cite studies. But they do not explain how differences in design, materials, or indications for use are accounted for within the scope of that data.
Clinical data summaries are referenced but not mapped to specific device characteristics. The reviewer cannot verify that all technical and biological differences are within the bounds of the clinical evidence.
For each clinical study cited, you must show:
- What device version was used in the study
- How that version compares to both the subject device and the equivalent device
- Whether any differences fall outside the scope of the study population, endpoints, or follow-up duration
If you cannot show this mapping, the clinical equivalence claim is incomplete.
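A sketch of that mapping, built around a hypothetical study:

Study: Smith 2019 (hypothetical), prospective cohort, n = 150
Device version used: Equivalent device, generation 2, the version currently marketed
Relation to subject device: Same indication and anatomical site; the wall thickness difference is assessed in the technical comparison and falls within the design range used in the study
Scope check: Study population matches the subject device's intended population; 12-month follow-up covers the intended duration of use; primary endpoints address the subject device's performance claims

One such entry per cited study closes the gap between "we have clinical data" and "the clinical data applies."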
When Equivalence Is Not Enough
Sometimes, even a well-documented equivalence table is not sufficient.
If the differences are too significant, if the clinical data is too limited, or if the equivalent device has a concerning post-market history, the equivalence route may not be viable.
This is not a documentation problem. It is a strategic problem.
But many manufacturers realize this too late—after the table has been written, after the Notified Body has issued a deficiency, after months of back-and-forth that could have been avoided.
The earlier you assess whether equivalence is viable, the better. And that assessment starts with an honest comparison table built for verification, not for wishful thinking.
Final Thought
A passing equivalence table is not about quantity of data. It is about transparency of reasoning.
Every row should allow independent verification. Every difference should have a justified impact assessment. Every claim should be supported by specific references, not general assertions.
When the reviewer finishes reading your table, they should not have questions. They should have confidence.
That is what makes the difference between a rejected submission and one that moves forward.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
- Regulation (EU) 2017/745 (MDR), Annex XIV Part A
- MDCG 2020-5 Rev. 1: Clinical Evaluation – Equivalence
- ISO 10993 series: Biological evaluation of medical devices