Your competitor claim needs clinical data you probably don’t have
Marketing wants to say your device is better than the competitor’s. Legal approves the wording. You submit the technical file. Then the Notified Body flags it during assessment: “Where is the clinical evidence supporting this comparative claim?” And suddenly you realize the claim rests on assumptions, not data.
This happens more often than it should. A comparative claim enters labeling or promotional material because it seems reasonable, defensible, or obvious. But under MDR, every claim about your device must be backed by clinical evidence. And when that claim involves a competitor’s device, the evidence standard becomes stricter and more complex than most teams anticipate.
The problem is not that the claim is false. The problem is that it was never clinically validated in the first place.
What MDR Article 7 actually requires
Article 7 of the MDR addresses claims made by manufacturers. It states clearly that any claim concerning safety, performance, or characteristics of a device must be substantiated by sufficient clinical evidence. This is not limited to performance specifications or intended use statements. It extends to comparative claims, superiority statements, and any assertion that positions your device relative to another.
When you claim your device is safer, faster, more accurate, or less invasive than a competitor’s product, you are making a claim about clinical performance. And that claim must be supported by data that directly compares the two devices under clinically relevant conditions.
Most deficiencies I see in this area come from three misconceptions. First, that bench testing alone is enough. Second, that literature on similar devices can substitute for direct comparison. Third, that claims approved by marketing or legal are automatically compliant with MDR.
None of these assumptions hold under regulatory scrutiny.
A comparative claim is a clinical claim. It must be treated with the same rigor as any other statement about device performance. If you would not write it in your intended use without evidence, you should not write it in your marketing material either.
Why bench data is not enough
Bench testing can show that your device has different technical characteristics. It can demonstrate that a parameter is numerically superior under controlled laboratory conditions. But it cannot prove that this difference translates into a clinically meaningful benefit.
Reviewers know this. They see bench comparisons presented as justification for claims like “better outcomes” or “improved safety.” And they reject them because the leap from technical performance to clinical performance is not demonstrated.
A device may have higher precision in a laboratory setting. But does that precision lead to better diagnostic accuracy in real patients? Does it reduce complications? Does it shorten procedure time in a way that matters clinically?
These questions cannot be answered by bench data. They require clinical evidence.
If your claim implies a clinical benefit, your evidence must come from clinical use. If your claim compares your device to a competitor’s, that comparison must be made in a clinical context, not just on a test bench.
The literature gap most teams miss
Some teams try to build comparative claims using published literature. They find studies on their own device and studies on the competitor’s device, then compare the results across publications.
This approach fails for a simple reason: indirect comparison across different studies is not valid evidence for a direct comparative claim.
Different studies use different patient populations, different protocols, different endpoints, and different statistical methods. Even if both studies report the same outcome measure, comparing their results does not constitute a head-to-head comparison.
Reviewers see this immediately. They recognize that you are inferring a comparison rather than demonstrating one. And they will ask for direct comparative data.
The only way literature can support a comparative claim is if the literature itself contains a direct comparison between your device and the competitor’s device, conducted in the same study, under the same conditions, with the same population.
Such studies are rare. And if they do not exist, your comparative claim lacks the evidence base it needs.
A common failure mode: manufacturers present separate literature reviews for their device and the competitor’s device, then draw comparative conclusions across them. This is not acceptable evidence. The comparison must be direct, not inferred across independent datasets.
What direct evidence actually means
Direct evidence means your device and the competitor’s device were evaluated in the same study, with the same patients or conditions, measured against the same endpoints, analyzed with the same methods.
This could be a head-to-head clinical trial. It could be a prospective comparative study. It could be a well-designed retrospective analysis where both devices were used in the same clinical setting and outcomes were compared directly.
If such a study does not exist, you have two options. Either conduct the study yourself, or remove the comparative claim.
There is no third option that satisfies MDR requirements. You cannot bridge the gap with assumptions, expert opinion, or marketing logic. The evidence must be there, documented, and scientifically sound.
This is where many manufacturers face a difficult choice. Conducting a head-to-head study is expensive and time-consuming. Removing the comparative claim may weaken the commercial message.
But the regulatory standard is clear. If the claim is made, the evidence must support it. If the evidence does not exist, the claim cannot stand.
The role of PMCF in managing comparative claims
Post-market clinical follow-up can play a role here, but it must be planned correctly. If you are making a comparative claim at the time of market entry, you need evidence at that time. PMCF cannot retroactively justify a claim that was unsupported from the start.
However, if your initial evidence is limited but suggestive, PMCF can be designed to generate the direct comparative data needed to strengthen or confirm the claim over time. This requires a PMCF plan that explicitly includes comparative endpoints, a defined competitor product, and a clear methodology for direct comparison.
What does not work is vague language in the PMCF plan about “monitoring performance relative to alternatives.” Reviewers will ask how you define alternatives, which specific devices you will compare against, what endpoints you will measure, and how you will collect data that allows direct comparison.
If those details are missing, the PMCF plan does not address the evidence gap. It just postpones the problem.
PMCF can support ongoing validation of comparative claims, but it cannot replace the need for initial evidence. If the claim is made at launch, the evidence must exist at launch. PMCF builds on that foundation; it does not create it.
When competitor data is not available
One of the most common arguments I hear is that the competitor’s clinical data is not publicly available, so direct comparison is impossible. This is often true. Competitor data may be proprietary, unpublished, or protected.
But this does not change the regulatory requirement. If you cannot access the data needed to validate a comparative claim, the claim cannot be made.
The burden of proof is on the manufacturer making the claim, not on the regulator to accept the claim without proof. If the evidence does not exist or cannot be obtained, the claim must be removed or reworded to avoid the comparison.
This is a commercial problem, not a regulatory one. The regulation does not care whether the competitor’s data is available. It cares whether your claim is substantiated.
Some manufacturers try to work around this by making softer claims, using language like “designed to improve” or “intended to reduce.” But if the claim still implies a comparative advantage, it still requires comparative evidence.
The wording matters less than the meaning. If the claim positions your device as superior to an alternative, it is a comparative claim. And it must be backed by comparative data.
How to approach this during development
The best way to avoid this issue is to align your clinical evidence plan with your commercial strategy early in development. If marketing intends to make comparative claims, those claims must be part of the clinical development plan from the start.
This means identifying which competitor devices will be referenced, which endpoints will be compared, and how the comparison will be designed into your clinical studies or PMCF activities.
It also means being realistic about what evidence you can generate. If a head-to-head trial is not feasible, the comparative claim may not be feasible either. And it is better to recognize that early than to discover it during Notified Body review.
I have seen manufacturers redesign entire labeling strategies after a review because the comparative claims could not be defended. That delay costs time, money, and sometimes market position.
The alternative is to plan for the evidence from the beginning. If the claim is strategic, the evidence must be strategic too.
A frequent warning sign: comparative claims appear in labeling or marketing material with no corresponding entries in the clinical evaluation plan or PMCF plan. That mismatch signals the claim was never clinically validated, and reviewers catch it immediately.
What reviewers look for
When a Notified Body or competent authority sees a comparative claim, they go directly to the clinical evaluation report. They look for the section that addresses the claim. They check whether direct comparative data is presented, whether the comparison is methodologically sound, and whether the conclusion is supported by the data.
If the claim is in the labeling but not in the clinical evaluation report, that is a deficiency. If the clinical evaluation report references the claim but does not present comparative data, that is a deficiency. If the data is indirect or inferential, that is a deficiency.
The standard is straightforward: the claim must be traceable to specific evidence, that evidence must be direct, and the conclusion must be justified.
Anything less will be flagged. And once flagged, the options are limited. You either provide the missing data, or you remove the claim.
Final thoughts
Comparative claims feel like a commercial decision. But under MDR, they are a clinical decision. They require the same level of evidence as any other performance statement.
The regulatory system does not distinguish between claims that are commercially important and claims that are clinically important. It treats all claims the same way: they must be substantiated.
If your labeling or promotional material includes comparative language, trace it back to your clinical evidence. Ask whether the comparison is direct. Ask whether the data supports the conclusion. Ask whether a reviewer would agree.
If the answer is no, the claim needs to change.
Because in the end, the claim is only as strong as the evidence behind it. And under MDR, the evidence must be there before the claim is made.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– MDR 2017/745 Article 7
– MDR 2017/745 Annex XIV Part A (Clinical Evaluation)
Deepen Your Knowledge
Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.