Class IIa Devices: Where Equivalence Claims Usually Break Down

Written by Hatem Rabeh, MD, MSc Ing
Your Clinical Evaluation Expert and Partner

Most Class IIa clinical evaluation reports fail not because of missing data, but because the equivalence demonstration was never actually built. Manufacturers assume similarity is enough. Reviewers look for something much more structured.

Class IIa devices sit in regulatory territory that confuses many manufacturers. They are not Class I, so the clinical evidence bar is higher. But they are not Class IIb or III, so the pathway through equivalence feels accessible. This is where most problems start.

The MDR does not offer a separate chapter for Class IIa requirements. Instead, it integrates them into the broader framework of Annex XIV and the general obligations under Articles 61 and 62. The challenge is not that the regulation is unclear. The challenge is that manufacturers misinterpret what clinical evaluation actually requires at this risk level.

I see this repeatedly during file reviews. The clinical evaluation report exists. It references literature. It claims equivalence to another device. But when you follow the reasoning, the demonstration is incomplete or circular. The manufacturer assumed that functional similarity plus published studies would be enough. It is not.

What the Regulation Actually Says

For Class IIa devices, Article 61(1) and (3), together with Annex XIV Part A, define the clinical evidence requirements. Clinical data must demonstrate safety and performance. This can be achieved through a clinical investigation, or through equivalence to another device that is already supported by sufficient clinical data.

The key word here is “sufficient.” Not similar. Not comparable. Sufficient.

MDCG 2020-5 and MDCG 2020-13 clarify how equivalence must be demonstrated. You cannot reference another device and assume the job is done. The process requires a structured comparison across technical, biological, and clinical characteristics. Each dimension must be evaluated. Each difference must be justified. And the clinical data supporting the equivalent device must be identified, appraised, and analyzed.

Key Insight
Equivalence is not a shortcut. It is a structured clinical argument that uses another device’s data as your evidentiary foundation. If that foundation is weak, your entire clinical evaluation collapses.

Most manufacturers think of equivalence as a comparison table. Reviewers think of it as a justification chain. Every link in that chain must hold under scrutiny.

Where the Equivalence Argument Breaks

The failure pattern is consistent. The clinical evaluation report claims equivalence. It provides a comparison table showing that the two devices share similar materials, similar indications, and similar mechanisms of action. Then it references published literature on the general device type. The conclusion states that clinical data supports safety and performance.

But when you look closely, critical steps are missing.

First, the equivalent device is not clearly defined. Sometimes it is described as a product family. Sometimes it is a device type rather than a specific model. The manufacturer is trying to claim equivalence to a category, not to a single traceable device with identified clinical data.

This does not work. Equivalence must be to a specific device with a defined technical file, a defined intended purpose, and an identified body of clinical evidence. You cannot demonstrate equivalence to an abstraction.

Common Deficiency
The equivalent device is described generically. No manufacturer name. No model number. No CE certificate number. No traceability to clinical data. Reviewers reject this immediately because there is nothing concrete to assess.
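To make the traceability requirement concrete, here is a minimal sketch in Python (purely illustrative; the field names are mine, not drawn from any MDCG template) of the identification record an equivalence claim needs before it can even be assessed:

from dataclasses import dataclass, fields

@dataclass
class EquivalentDeviceID:
    """Minimum identification needed to make an equivalence claim assessable."""
    manufacturer: str              # the legal manufacturer, not a brand family
    model: str                     # a specific model, not a product line
    ce_certificate_number: str     # traceable regulatory status
    intended_purpose: str          # as stated for the equivalent device
    clinical_data_sources: tuple   # identified studies, registries, PMCF reports

def is_assessable(device: EquivalentDeviceID) -> bool:
    # A reviewer stops if any identification field is empty.
    return all(getattr(device, f.name) for f in fields(device))

If is_assessable returns False, there is nothing concrete to review, which is exactly where the deficiency above leaves you.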

Second, the technical comparison is incomplete. Manufacturers focus on similarities and ignore differences. But equivalence is not about listing what matches. It is about demonstrating that any differences do not affect clinical safety or performance.

If your device uses a different coating material, that difference must be addressed. If the size range is broader, that must be justified. If the sterilization method changed, the clinical impact must be evaluated. Silence on differences is not equivalence. It is avoidance.

Third, the clinical data supporting the equivalent device is not actually analyzed. The CER references studies, but it does not explain how those studies support the specific claims being made. It does not appraise study quality. It does not synthesize results. It does not explain how the data translates to your device given the technical and biological differences you have identified.

This is where most Class IIa reports fail. The equivalence claim rests on literature that was never properly integrated into the clinical argument.

What Reviewers Expect to See

When I review a Class IIa clinical evaluation based on equivalence, I follow a specific logic. First, is the equivalent device clearly identified? Can I trace it? Can I verify its regulatory status and access its clinical data?

If the answer is no, the equivalence claim is not assessable. The review stops there.

If the device is traceable, I look at the comparison. Are technical, biological, and clinical characteristics compared in detail? Are differences acknowledged and justified? Is there a clear conclusion that equivalence is valid or not?

Then I look at the clinical data itself. What studies support the equivalent device? Were they appraised for relevance and quality? Do they cover the intended purpose, the patient population, the clinical context? Are the endpoints meaningful? Are the results consistent?

Finally, I check if the clinical data was synthesized into evidence. This means the CER should not just list studies. It should explain what the body of evidence shows, what gaps exist, and how those gaps are addressed through other means such as PMCF.
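That review logic is sequential, and it can be expressed schematically. The gate names below are mine, not regulatory terms; the sketch only illustrates that a failure at any step ends the assessment:

# Each gate must hold before the next one is even examined.
REVIEW_GATES = [
    ("identification", "Is the equivalent device specific and traceable?"),
    ("comparison", "Are technical, biological, and clinical characteristics compared, with every difference justified?"),
    ("appraisal", "Were the supporting studies appraised for relevance and quality?"),
    ("synthesis", "Is the body of evidence synthesized, with gaps routed to PMCF?"),
]

def review(answers: dict) -> str:
    for gate, question in REVIEW_GATES:
        if not answers.get(gate, False):
            return f"Review stops at '{gate}': {question}"
    return "Equivalence argument is assessable end to end."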

Key Insight
A clinical evaluation based on equivalence is not easier than one based on clinical investigation. It is a different structure, but the rigor is identical. You are borrowing someone else’s data, which means you must demonstrate that the borrowing is scientifically justified.

If any of these steps is weak, the entire clinical evaluation is insufficient. And “insufficient” is the exact word that appears in deficiency letters. Not incorrect. Not incomplete. Insufficient. The evidence does not meet the threshold required by the regulation.

The Role of PMCF in Class IIa Devices

Class IIa devices require a PMCF plan. This is not optional. Article 61(11) and Annex XIV Part B are explicit. Post-market clinical follow-up must be conducted throughout the lifecycle of the device.

But here again, there is confusion. Manufacturers treat PMCF as a formality. They write a plan that describes passive surveillance. They commit to reviewing complaint data and literature. They do not design active data collection.

This approach fails to meet the intent of PMCF. The purpose is not just to monitor. It is to confirm that the clinical evidence supporting your device remains valid as real-world use accumulates. For devices based on equivalence, PMCF becomes even more critical because you are relying on data from another device. Your own device's performance must be verified in practice.

The PMCF plan should identify specific evidence gaps from the clinical evaluation. It should define methods to address those gaps. It should set timelines for data collection and analysis. It should specify what triggers an update to the clinical evaluation.
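One way to picture that linkage is as a structure in which every PMCF activity must point back at a named evidence gap. This is a sketch with invented field names and an invented example, not a template; real plans follow MDCG 2020-7:

from dataclasses import dataclass

@dataclass
class PMCFActivity:
    evidence_gap: str       # the open clinical question from the CER
    method: str             # e.g. registry, survey, PMCF study
    timeline_months: int    # when data collection and analysis conclude
    update_trigger: str     # the result that forces a CER update

plan = [
    PMCFActivity(
        evidence_gap="No direct performance data for the widened size range",
        method="Prospective user registry",
        timeline_months=24,
        update_trigger="Any complication rate above the literature benchmark",
    ),
]

# A generic activity with no named evidence gap fails this check,
# which is the deficiency pattern described below.
assert all(activity.evidence_gap for activity in plan)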

If your PMCF plan is generic, it will be rejected. If it does not connect to the clinical evaluation, it will be flagged as insufficient. The two documents must work as a pair.

Common Deficiency
PMCF plan states general surveillance activities but does not identify what clinical questions remain unanswered from the pre-market evaluation. No clear connection between evidence gaps and post-market data collection strategy.

The Consequence of Weak Clinical Evaluation

A weak clinical evaluation for a Class IIa device does not just delay certification. It undermines the entire technical file. The clinical evaluation feeds into risk management, into labeling, into the instructions for use, into post-market surveillance.

If your clinical evaluation is based on an invalid equivalence claim, your risk analysis is incomplete. You have not identified risks specific to your design because you assumed equivalence covered them. You have not validated performance claims because you relied on someone else’s data without proper justification.

When the Notified Body reviews your file, they see this. The clinical evaluation is not an isolated document. It is the evidentiary core of your conformity assessment. If it is weak, everything connected to it becomes questionable.

And once a major objection is raised on clinical evaluation, the resolution is not simple. You cannot fix it with a paragraph. You must rebuild the argument, reappraise the data, re-justify the equivalence claim or conduct new studies. This takes months.

The pressure to get certified quickly pushes manufacturers to submit incomplete clinical evaluations. The result is the opposite of speed. It is delay, rework, and additional cost.

What Should Be Done Differently

For Class IIa devices, the clinical evaluation should start with a clear decision: equivalence or clinical investigation. If equivalence is chosen, the equivalent device must be identified early. Not at the writing stage. At the planning stage.

You need to confirm that the device exists, that it is legally marketed, that clinical data is available, and that you can access it. If you cannot meet these conditions, equivalence is not viable. You will need clinical data specific to your device.

Once the equivalent device is confirmed, the comparison must be thorough. Create a structured table covering technical characteristics, biological characteristics, and clinical characteristics. For each parameter, assess similarity or difference. For every difference, evaluate clinical impact. Document the reasoning.
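As an illustration only (the parameters and justifications below are invented), the comparison can be structured so that every difference is forced to carry a documented clinical-impact justification:

# Each row: (dimension, parameter, equivalent?, justification if different).
comparison = [
    ("technical", "mechanism of action", True, ""),
    ("technical", "coating material", False,
     "Coating difference evaluated against biocompatibility data; "
     "no change to the tissue contact profile."),
    ("biological", "tissue contact type", True, ""),
    ("clinical", "intended population", True, ""),
]

unjustified = [param for _, param, same, why in comparison
               if not same and not why.strip()]
if unjustified:
    raise ValueError(f"Differences lacking clinical-impact justification: {unjustified}")

Silence on a difference is caught mechanically here; in a real file, it is caught by the reviewer.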

Then perform a proper literature search. Define search terms. Define databases. Define inclusion and exclusion criteria. Identify studies relevant to the equivalent device and the intended purpose. Appraise those studies. Synthesize the results. Explain what the evidence shows and what it does not show.
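The search protocol deserves the same discipline. This sketch only shows the shape of a documented, reproducible search; the databases, terms, and criteria are placeholders, not recommendations:

# Shape of a documented, reproducible literature search protocol.
search_protocol = {
    "databases": ["PubMed", "Embase"],
    "terms": ['"device type X"', "safety", "performance"],
    "inclusion": ["covers equivalent device or intended purpose",
                  "reports clinical outcomes in the target population"],
    "exclusion": ["bench or animal data only", "off-label use"],
}

def screen(study_flags: dict, protocol: dict) -> bool:
    # Keep a study only if every inclusion criterion is met
    # and no exclusion criterion applies (criteria stored as flags).
    return (all(study_flags.get(c, False) for c in protocol["inclusion"])
            and not any(study_flags.get(c, False) for c in protocol["exclusion"]))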

Write the clinical evaluation as a logical argument. Do not write it as a regulatory obligation. Write it as a scientific case that safety and performance are demonstrated. A reviewer should be able to follow your reasoning from the first page to the conclusion without encountering gaps or assumptions.

Key Insight
The quality of your clinical evaluation reflects the quality of your clinical thinking. If you understand the evidence, the report will be clear. If you do not, no amount of formatting will hide the gaps.

Finally, integrate the PMCF plan. Identify what remains uncertain after the pre-market evaluation. Design data collection to address those uncertainties. Commit to updating the clinical evaluation as new data becomes available. Make PMCF an active process, not a passive obligation.

Final Observations

Class IIa devices are not low risk in regulatory terms. They require a complete clinical evaluation. They require structured equivalence demonstration if that pathway is chosen. They require active PMCF.

The assumption that Class IIa is easier than Class IIb is wrong. The pathway may allow equivalence, but equivalence is not simpler. It is differently structured. And if that structure is weak, the file will not pass review.

I see manufacturers invest significant effort into design verification and validation. They test extensively. They document thoroughly. But when it comes to clinical evaluation, the same rigor is often absent. The result is a file that is technically strong but clinically weak.

Clinical evaluation is not a paperwork exercise. It is the justification that your device works as intended in the clinical context where it will be used. For Class IIa devices, that justification must be clear, complete, and scientifically sound.

The next time you review a Class IIa clinical evaluation, ask yourself: if I were the reviewer, would this argument convince me? If the answer is uncertain, the work is not done.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or its intended purpose, and otherwise at the planned intervals defined in the post-market surveillance plan.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for the claims made, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Articles 61 and 62, Annex XIV
– MDCG 2020-5: Clinical evaluation — Equivalence
– MDCG 2020-13: Clinical evaluation assessment report template