Why your clinical evaluation consultant may be steering you wrong

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I once reviewed a clinical evaluation report that had been approved by three consultants before reaching the Notified Body. It was rejected in the first round. The issue was not missing data or poor writing. The issue was that the entire methodological approach violated MDR Article 61 requirements. Three consultants had missed it.

This is not an isolated case. I see it regularly. Companies hire consultants who sound confident, reference the right documents, and deliver reports on time. But when the Notified Body or Competent Authority reviews the file, fundamental deficiencies emerge.

The problem is not always incompetence. Often, it is something more subtle. The consultant may be working from outdated frameworks, applying pre-MDR logic, or following internal templates that were never truly aligned with the regulation.

Understanding where consultants go wrong helps you ask better questions and catch issues before they reach the assessor.

The consultant is working from the wrong baseline

Before MDR, clinical evaluation was often treated as a documentation exercise. You gathered clinical data, wrote a report, and checked the box. Many consultants built their careers in that environment.

Under MDR, clinical evaluation is a continuous, methodologically rigorous process. It is governed by Article 61, supported by Annex XIV, and interpreted through MDCG 2020-5 and MDCG 2020-6. The expectations are fundamentally different.

But here is what I observe: many consultants still approach clinical evaluation as if it is an MDD deliverable. They structure the report the same way. They use the same equivalence shortcuts. They treat literature searches as optional background material rather than systematic evidence appraisal.

Common Deficiency
The consultant delivers a report that looks complete but uses MDD-era reasoning for equivalence, does not address all intended purposes under MDR Article 61(1), and lacks a clear clinical development plan tied to the device lifecycle.

The result is a document that reads well but collapses under regulatory scrutiny.

The consultant does not understand how reviewers reason

Writing a clinical evaluation report is not the same as preparing one for review. The Notified Body assessor reads with a specific lens. They check for methodological consistency, traceability, and regulatory alignment.

Many consultants do not think like reviewers. They write for the manufacturer. They explain what the device does. They summarize studies. But they do not anticipate the questions that will come back.

For example, when a consultant claims equivalence, the reviewer will immediately look for three things: technical comparison in Annex XIV terms, biological and clinical equivalence data, and justification for any differences. If the consultant does not structure the argument in that sequence, the claim will be challenged.

I see this constantly. The consultant writes, “Device A is equivalent to Device B because both are used in the same clinical setting.” That is not equivalence under MDCG 2020-5. That is similarity. And similarity does not allow you to rely on another device’s clinical data.

Key Insight
A good consultant writes every section knowing exactly what the reviewer will question. They structure the argument to answer those questions before they are asked. If your consultant does not reference the review logic, they are not preparing you for the real process.
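To make that review logic concrete, here is a minimal sketch of one way to structure an equivalence claim along the technical, biological, and clinical dimensions of Annex XIV and MDCG 2020-5, so that any difference without a documented justification surfaces before the assessor finds it. The class layout, field names, and the example difference are illustrative assumptions, not an official template.

```python
# Illustrative sketch only: a simple internal checklist for structuring an
# equivalence claim along the three dimensions of MDR Annex XIV / MDCG 2020-5.
# Field names and the example difference are assumptions for demonstration.

from dataclasses import dataclass


@dataclass
class EquivalenceDimension:
    name: str                       # "technical", "biological" or "clinical"
    characteristics_compared: list  # what was actually compared, item by item
    differences: list               # every identified difference
    justifications: dict            # difference -> why it does not affect safety/performance


def unsupported_differences(dimensions: list[EquivalenceDimension]) -> list[str]:
    """Return every difference that has no documented justification.

    If this list is not empty, the equivalence claim is not ready for review:
    'used in the same clinical setting' is similarity, not equivalence.
    """
    gaps = []
    for dim in dimensions:
        for diff in dim.differences:
            if diff not in dim.justifications:
                gaps.append(f"{dim.name}: {diff}")
    return gaps


# Example: a clinical difference nobody has justified gets flagged.
clinical = EquivalenceDimension(
    name="clinical",
    characteristics_compared=["clinical condition", "site in the body", "population"],
    differences=["broader target population than the comparator device"],
    justifications={},  # no justification documented yet
)

print(unsupported_differences([clinical]))
# ['clinical: broader target population than the comparator device']
```

The point is not the code. The point is that a claim built on "same clinical setting" never survives this kind of structured comparison.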

The consultant relies on templates without adapting them

Templates are useful starting points. But they are not regulatory arguments. Every device has unique characteristics, intended purposes, and clinical contexts. A template cannot address that specificity.

Yet many consultants work almost entirely from templates. They copy headings from MDCG documents, fill in device-specific data, and call it a clinical evaluation. The structure looks correct. The language sounds regulatory. But the content does not actually evaluate the device.

The giveaway is generic statements. Phrases like “the clinical data demonstrates safety and performance” without explaining what data, how it was appraised, or what uncertainties remain. Or sections that describe the device but never link those descriptions to clinical outcomes.

When I review these reports, I can often identify which template was used. And that is a problem. The report should be dictated by the device and the evidence, not by a pre-existing document structure.

The consultant does not challenge your assumptions

A consultant should not just document what you tell them. They should question whether your clinical strategy is defensible under MDR.

If you claim your device is non-invasive and low-risk, the consultant should verify that classification against Annex VIII. If you say your PMCF plan will rely on passive surveillance, the consultant should ask whether that is sufficient given your device’s novelty and risk profile.

Many consultants do not do this. They accept your framing and build the report around it. This creates a false sense of confidence. You think your strategy is solid because the consultant did not raise concerns. But the Notified Body will.

I have seen this with PMCF plans. The manufacturer proposes a simple registry. The consultant writes it into the plan without questioning whether it will generate the data required by Annex XIV Part B. The Notified Body rejects the plan because it does not address specific safety and performance questions.

Common Deficiency
The consultant agrees with your approach without testing it against MDR requirements. The clinical evaluation reads smoothly but fails to address gaps that will be identified during review.

A consultant who challenges you is doing their job. A consultant who only agrees with you is not.
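Returning to the registry example above, a quick way to test a proposed PMCF plan is to map every open safety and performance question to a concrete data collection activity and see what is left unaddressed. The questions and the activity below are invented for illustration; the logic mirrors what Annex XIV Part B expects, not an official checklist.

```python
# Illustrative sketch only: checking that a proposed PMCF plan actually maps
# each open safety or performance question to a data collection activity.
# Question texts and activity types are invented examples, not regulatory content.

open_questions = [
    "long-term migration rate of the implant",
    "performance in patients over 75",
    "device-related revision rate at 5 years",
]

# The plan as proposed: a passive registry that only covers one question.
pmcf_activities = {
    "long-term migration rate of the implant": "national registry (passive)",
}

unaddressed = [q for q in open_questions if q not in pmcf_activities]

if unaddressed:
    print("PMCF plan does not address:")
    for q in unaddressed:
        print(f"  - {q}")
# A plan that leaves questions in this list is the plan the Notified Body rejects.
```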

The consultant does not stay current with guidance updates

MDR implementation is still evolving. MDCG publishes new guidance documents. Notified Bodies issue position papers. The regulatory landscape changes.

Some consultants do not track these updates. They work from the version of MDCG 2020-5 they read three years ago. They are not aware of clarifications in MDCG 2022-14 or changes in how Notified Bodies interpret clinical development for certain device categories.

This is especially problematic with equivalence and state of the art. The interpretation of what constitutes adequate equivalence has tightened. The expectations for SOTA have become more explicit. If your consultant is not aware of these shifts, they may guide you toward strategies that no longer pass review.

The risk is not just rejection. It is wasted time, delayed certification, and unnecessary clinical studies that could have been avoided with a better initial strategy.

The consultant does not integrate clinical evaluation with the broader submission

Clinical evaluation does not exist in isolation. It must align with the risk management file, the technical documentation, the IFU, and the PMCF plan.

Many consultants treat clinical evaluation as a standalone deliverable. They write the report without cross-checking whether the claims match the risk analysis or whether the PMCF plan addresses the gaps identified in the clinical evaluation.

This creates inconsistencies. The clinical evaluation says the device is for general use. The IFU lists specific contraindications. The risk file identifies a residual risk. The clinical evaluation does not address how that risk is managed clinically.

When the Notified Body reviews the full submission, these inconsistencies become major deficiencies. They suggest the manufacturer does not have a unified understanding of their device.

Key Insight
A good consultant does not just write the clinical evaluation. They ensure it aligns with every other part of your technical file. If your consultant works in isolation from your regulatory and quality teams, you will have integration gaps.
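As a simple illustration of that alignment work, here is a sketch of the kind of cross-document check worth running before the file goes out. The document excerpts and keywords are invented; in practice this is a careful manual cross-reading, but the principle is the same: every contraindication and residual risk must be traceable into the clinical evaluation.

```python
# Illustrative sketch only: a crude consistency check across the documents that
# must tell the same story. The document excerpts below are invented examples.

documents = {
    "clinical_evaluation": "Intended for temporary wound coverage in adults.",
    "ifu": "Intended for temporary wound coverage in adults. Contraindicated in infected wounds.",
    "risk_file": "Residual risk: maceration with prolonged wear.",
}

# Claims and risks that must be traceable across the file.
checks = [
    ("contraindication stated in IFU is discussed in the clinical evaluation",
     "infected" in documents["clinical_evaluation"].lower()),
    ("residual risk from the risk file is addressed clinically",
     "maceration" in documents["clinical_evaluation"].lower()),
]

for description, passed in checks:
    print(("OK  " if passed else "GAP ") + description)
# GAP lines are exactly the inconsistencies the Notified Body will list as deficiencies.
```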

What you should expect from a clinical evaluation consultant

You should expect someone who asks difficult questions. Someone who pushes back when your clinical strategy does not align with MDR. Someone who anticipates what the Notified Body will challenge and addresses it proactively.

You should expect someone who writes for the reviewer, not just for you. Someone who structures arguments methodologically and traces every claim to evidence.

You should expect someone who stays current, integrates clinical evaluation with your broader submission, and understands that clinical evaluation is not a one-time document but an ongoing process.

If your consultant is not doing these things, you are not getting the guidance you need.

How to recognize the problem early

Ask your consultant to explain their methodology. Not the structure of the report, but the reasoning behind their clinical strategy. How do they determine equivalence? How do they appraise literature? How do they decide what goes into the PMCF plan?

If the answers are vague or rely heavily on “standard practice,” you have a problem. MDR does not accept standard practice. It requires explicit justification.

Review the draft report critically. Look for generic statements. Look for gaps between what the report claims and what the evidence actually shows. If you can spot those gaps, the Notified Body will too.
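If it helps, part of that discipline can be applied mechanically: keep a list of the generic claims you never want to see unsupported and scan the draft for them. The phrase list and draft text below are invented examples; this is a crude filter, not a substitute for critical reading.

```python
# Illustrative sketch only: flagging boilerplate claims in a draft CER section.
# The phrase list and the draft text are invented examples.

generic_phrases = [
    "the clinical data demonstrates safety and performance",
    "the device reflects the state of the art",
    "no new risks were identified",
]

draft_section = (
    "The clinical data demonstrates safety and performance of the device. "
    "The device reflects the state of the art."
)

for phrase in generic_phrases:
    if phrase in draft_section.lower():
        print(f"Unsupported generic claim to challenge: '{phrase}'")
# Every flagged phrase needs the underlying data, its appraisal, and the
# remaining uncertainties spelled out, or the reviewer will ask for them.
```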

And most importantly, do not assume that a delivered report means the work is done. Clinical evaluation is iterative. If your consultant is not proposing updates based on new data, regulatory changes, or feedback from your team, they are not managing the process correctly.

The consultant you hire shapes your regulatory path. If they are steering you wrong, you will pay for it later. The earlier you recognize the issue, the less costly the correction.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDCG 2020-6: Sufficient Clinical Evidence for Legacy Devices
– MDCG 2022-14: Clinical Investigation and Evaluation for Certain Class IIb and Class III Devices