Expert Opinion in Your CER: When It Saves You, When It Sinks You

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I’ve reviewed clinical evaluation reports where the expert opinion was the strongest part of the submission. I’ve also seen expert opinions that turned into liabilities during Notified Body review. The difference is rarely the expert’s credentials. It’s how the opinion was structured, what it was asked to prove, and how it connected to the data.

Expert clinical opinion has become a critical component of clinical evaluation under MDR. When data gaps exist, when equivalence requires clinical judgment, when benefit-risk needs medical interpretation, expert opinion can provide the bridge between evidence and regulatory acceptance.

But here’s what I’ve learned from real submissions: expert opinion is a tool with clear boundaries. When used correctly, it strengthens your case. When misused, it exposes the weakness of your clinical evaluation and invites deeper scrutiny from reviewers.

The regulatory framework acknowledges expert opinion. MDR Article 61(5) and Annex XIV Part A Section 1 recognize that clinical evaluation may require expert judgment, particularly when interpreting clinical data or assessing benefit-risk. MDCG 2020-13, the clinical evaluation assessment report template, further shows how Notified Body assessors weigh expert input during the appraisal process.

Yet the guidance doesn’t tell you how expert opinion fails in practice.

When Expert Opinion Actually Works

Expert opinion works when it does what data cannot do alone: provide clinical interpretation, context, and judgment based on medical knowledge and experience.

I’ve seen strong expert opinions used successfully in three situations.

First, when interpreting clinical data that requires medical judgment. A set of adverse events may appear in raw form in a clinical investigation report. The expert clinician reviews the events, considers the patient population, the procedural context, the severity grading, and provides a reasoned assessment of whether these events reflect acceptable risk given the device’s intended benefit.

This isn’t filling a data gap. This is applying clinical expertise to data that exists but requires interpretation.

Second, when establishing clinical context for equivalence. You may have a device that is technically and biologically equivalent to a comparator device, but the clinical use differs slightly in procedural approach or patient positioning. An expert clinician can provide reasoned opinion on whether these differences affect clinical performance or safety profile.

Again, the expert isn’t replacing missing data. The expert is applying clinical reasoning to data that exists but needs contextualization.

Third, when assessing benefit-risk balance. The quantitative data may show specific complication rates and effectiveness outcomes. The expert integrates this data with clinical experience, considers the severity of the condition being treated, the alternatives available, and provides a clinical opinion on whether the benefit-risk profile is acceptable for the intended patient population.

The data is present. The expert provides the clinical lens through which that data should be interpreted.

Key Insight
Expert opinion works best when it interprets existing data, not when it replaces missing data. The expert should be reasoning from evidence, not compensating for the absence of evidence.

When Expert Opinion Becomes a Liability

Now let’s talk about when expert opinion fails.

I’ve seen manufacturers commission expert opinions to fill data gaps they couldn’t otherwise close. The device has no clinical investigation. The equivalence claim is weak. The literature search found limited relevant studies. So the manufacturer asks an expert clinician to provide an opinion that the device is safe and effective.

This doesn’t work.

Notified Body reviewers and competent authorities see through this immediately. An expert opinion that isn’t anchored in objective data is just an assertion. It may be an informed assertion, but it carries no evidentiary weight in a clinical evaluation.

When the expert is being asked to substitute for clinical evidence, the opinion becomes a red flag rather than a support.

Here’s what happens during review. The assessor reads the expert opinion. Then the assessor goes back to the clinical data section of the CER. If the opinion references specific studies, specific outcomes, specific patient populations, and those references are traced back to the data appraisal, the opinion gains credibility.

But if the opinion makes broad clinical statements without clear linkage to the data presented in the CER, the assessor questions the basis. What data is the expert actually reviewing? What methodology did the expert follow? How was the expert’s reasoning documented?

Without this traceability, the opinion looks like an attempt to create confidence where evidence is insufficient.

Common Deficiency
Expert opinions are commissioned late in the process to “cover” identified data gaps, rather than integrated into the clinical evaluation methodology from the beginning. Reviewers recognize this and it weakens the entire submission.

The Problem of Unstructured Expert Opinion

Even when expert opinion is appropriate, the way it’s documented determines whether it gets accepted.

I’ve reviewed expert opinion letters that read like endorsements. The expert’s CV is attached. The letter states the expert has reviewed the device and relevant data. The letter concludes the device is safe and effective based on the expert’s clinical experience.

This format fails because it provides no transparency into the expert’s reasoning process.

The Notified Body reviewer needs to see how the expert reached the conclusion. What data did the expert review? What clinical considerations were weighed? What alternative interpretations were considered? What uncertainties remain?

Without this structure, the expert opinion becomes a black box. The reviewer cannot verify the reasoning. The opinion cannot be challenged, refined, or validated.

Structured expert opinion, by contrast, documents the process. It identifies the clinical question being addressed. It lists the data sources reviewed by the expert. It presents the clinical reasoning step by step. It acknowledges limitations and uncertainties. It reaches a conclusion that is justified by the documented reasoning.
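The structural elements above can be sketched as a simple data model with a completeness check. This is a hypothetical illustration only; the field names are my assumptions, not a mandated format from MDR or any MDCG guidance:

```python
from dataclasses import dataclass

@dataclass
class StructuredExpertOpinion:
    """Hypothetical record of a structured expert opinion (illustrative only)."""
    clinical_question: str        # the specific question the expert is asked to assess
    data_sources_reviewed: list   # studies/reports drawn from the CER's data appraisal
    reasoning_steps: list         # the clinical reasoning, documented step by step
    limitations: list             # acknowledged uncertainties and alternative readings
    conclusion: str               # a judgment justified by the documented reasoning

    def missing_elements(self) -> list:
        """Return the names of any empty elements; a defensible opinion has none."""
        required = {
            "clinical_question": self.clinical_question,
            "data_sources_reviewed": self.data_sources_reviewed,
            "reasoning_steps": self.reasoning_steps,
            "limitations": self.limitations,
            "conclusion": self.conclusion,
        }
        return [name for name, value in required.items() if not value]
```

Under this sketch, an endorsement-style letter that supplies only a conclusion fails the check on four of five elements, which mirrors why reviewers reject that format.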

This approach gives the reviewer confidence that the expert opinion is methodologically sound, not just credible based on the expert’s credentials.

I’ve seen this difference change outcomes. A manufacturer receives a major non-conformity for insufficient clinical evidence. They commission an expert opinion to address the gap. If the opinion is structured properly and references the available data, the non-conformity can be resolved or downgraded. If the opinion is a general endorsement letter, the non-conformity remains.

The Question of Expertise Match

Another issue that surfaces during review is whether the expert’s clinical expertise matches the device’s intended use and clinical context.

A device intended for use in interventional cardiology should not be evaluated by a general cardiologist if the clinical questions involve procedural technique, catheter navigation, or intra-procedural risk assessment. The expert needs to practice in the specific clinical domain where the device will be used.

Reviewers look at the expert’s CV not just to confirm credentials, but to confirm relevance. If the expert’s clinical practice doesn’t align with the device’s intended patient population or clinical setting, the opinion loses weight.

I’ve also seen manufacturers use multiple experts when the device crosses clinical domains. A combination device used in surgical oncology might require input from both a surgical oncologist and a medical device specialist familiar with the technological aspects. This multi-expert approach can strengthen the evaluation, but only if each expert addresses the specific questions within their domain.

Using multiple experts to create redundancy, where each expert provides the same general opinion, doesn’t add value. It just multiplies the documentation without increasing evidentiary strength.

Key Insight
The expert’s clinical domain must match the device’s specific application context. General expertise in the broader field is not sufficient if the device’s use requires specialized procedural or diagnostic knowledge.

Independence and Conflict of Interest

Independence is becoming a more sensitive issue in Notified Body assessments.

If the expert providing the opinion is the principal investigator of the clinical investigation supporting the device, the opinion is not independent. If the expert is on the manufacturer’s medical advisory board, independence is compromised.

This doesn’t mean the expert’s input is worthless. It means the opinion cannot be presented as independent clinical judgment. It should be disclosed as part of the manufacturer’s internal clinical assessment process, not as external expert validation.

Reviewers are trained to identify conflicts of interest. If the expert’s CV shows employment history with the manufacturer, consulting relationships, or co-authorship on studies funded by the manufacturer, these relationships must be disclosed.

Failure to disclose creates a credibility problem that affects the entire clinical evaluation, not just the expert opinion.

In my experience, the cleanest approach is to use experts who have no financial or professional relationship with the manufacturer. They are compensated for the specific opinion work, but they have no ongoing relationship. This ensures the opinion can be presented as independent clinical judgment.

Timing and Integration in the CER

When the expert opinion is commissioned also matters.

If the expert opinion is obtained after the CER is substantially complete, and it’s inserted as a standalone appendix, it reads as an afterthought. The CER should show that expert input was part of the clinical evaluation methodology, not a corrective action added later.

In structured clinical evaluations, expert consultation happens early. The manufacturer identifies clinical questions that require expert interpretation. The expert is engaged before the data appraisal is finalized. The expert’s input informs how the data is interpreted and how the conclusions are drawn.

This integration makes the expert opinion a natural component of the evaluation, not a patch applied to cover deficiencies identified during internal review or Notified Body assessment.

The timing also affects how the expert opinion is updated. If the expert opinion was provided three years ago and the CER is being updated for periodic review, the opinion should be revisited. Has new data emerged that changes the expert’s clinical assessment? Has clinical practice evolved in ways that affect the benefit-risk interpretation?

An outdated expert opinion that hasn’t been reconsidered in light of new information becomes a static document that no longer reflects current clinical judgment.

What Reviewers Actually Look For

When a Notified Body reviewer assesses an expert opinion, they follow a specific logic.

First, they check if the clinical question is clearly defined. What is the expert being asked to assess? Is it benefit-risk? Is it equivalence? Is it interpretation of adverse events?

If the question isn’t clear, the opinion’s relevance cannot be determined.

Second, they verify that the expert reviewed the same data that is presented in the CER. If the expert opinion references studies that aren’t included in the CER’s data appraisal, this creates a traceability gap. If the expert opinion doesn’t reference the studies that are in the CER, it suggests the expert didn’t actually review the manufacturer’s clinical evaluation.

Third, they assess the reasoning process. Does the expert explain how they reached their conclusion? Are alternative interpretations considered? Are limitations and uncertainties acknowledged?

Fourth, they consider the expert’s qualifications and independence. Does the CV demonstrate relevant clinical expertise? Are conflicts of interest disclosed?

Fifth, they check how the expert opinion integrates into the CER’s overall conclusion. Is the opinion used to support a specific claim? Is it used to justify a clinical decision, such as waiving certain post-market studies? Is the reliance on the opinion proportionate to the strength of the opinion?

If any of these elements are weak, the expert opinion doesn’t perform its intended function. It may even create doubt about the adequacy of the clinical evaluation as a whole.

Common Deficiency
Expert opinions are presented without clear linkage to the data appraisal in the CER, making it impossible for reviewers to verify the basis of the expert’s conclusions. Traceability between opinion and evidence is essential.
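The traceability check described above amounts to a set comparison between the studies the expert cites and the studies appraised in the CER. A minimal sketch, with invented study identifiers purely for illustration:

```python
def traceability_gaps(opinion_refs, cer_appraisal_refs):
    """Compare the studies cited in an expert opinion against the CER's
    data appraisal. Returns the two gaps a reviewer would flag:
    - references cited in the opinion but absent from the CER appraisal
    - appraised CER data the expert never addressed
    """
    opinion = set(opinion_refs)
    cer = set(cer_appraisal_refs)
    return {
        "not_in_cer": sorted(opinion - cer),    # opinion cites data the CER never appraised
        "not_reviewed": sorted(cer - opinion),  # CER data the expert did not address
    }

# Hypothetical example: the expert cites a study missing from the appraisal,
# and skips one that the CER appraised.
gaps = traceability_gaps(
    opinion_refs=["Smith 2021", "Lee 2019"],
    cer_appraisal_refs=["Smith 2021", "Chen 2022"],
)
```

Either non-empty list signals the traceability problem reviewers look for: the first suggests the opinion rests on data outside the evaluation, the second suggests the expert did not actually review the manufacturer's appraisal.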

The Practical Approach

From what I’ve seen work consistently, here’s the practical approach to expert opinion in clinical evaluation.

Identify the specific clinical questions where expert judgment adds value. Don’t commission an expert opinion just to have one. Commission it because there is a defined gap in interpretation, context, or clinical reasoning that requires specialized input.

Select experts whose clinical practice directly matches the device’s intended use. Review their CV not just for credentials, but for relevance to the specific clinical domain.

Provide the expert with the full data package: clinical investigation reports, literature appraisal, PMCF data, device description, intended use, patient population. The expert should review the same data that forms the basis of your CER.

Request a structured opinion that documents the clinical question, the data reviewed, the reasoning process, the consideration of alternatives and uncertainties, and the final clinical judgment.

Integrate the expert opinion into the CER’s clinical evaluation methodology. Reference the opinion in the relevant sections of the appraisal. Show how the expert’s input informed your conclusions.

Disclose any relationships or conflicts of interest. If the expert has any connection to the manufacturer, state it clearly.

Update the expert opinion when the CER is updated. If new data becomes available or clinical practice evolves, ask the expert to reconsider their assessment.

This approach treats expert opinion as what it should be: a transparent, reasoned, and traceable component of the clinical evaluation process.

When Expert Opinion Isn’t the Solution

There are situations where expert opinion cannot solve the underlying problem.

If the device has no clinical data and the equivalence claim is scientifically weak, an expert opinion cannot create a sufficient demonstration of safety and performance. The fundamental evidentiary gap remains.

If the PMCF data shows emerging safety signals that contradict the original benefit-risk assessment, an expert opinion that dismisses these signals without addressing the data directly will not satisfy reviewers.

If the device’s clinical performance is uncertain because the intended clinical benefit has not been measured, an expert opinion that assumes the benefit exists does not replace the need for objective evidence.

In these cases, the solution isn’t better expert opinion. The solution is better clinical evidence. Expert opinion can complement evidence, interpret evidence, and contextualize evidence. It cannot replace evidence.

Manufacturers sometimes treat expert opinion as a shortcut to clinical evaluation compliance. It isn’t. It’s a tool that works only when the underlying clinical evaluation is structured correctly and supported by adequate data.

Final Thoughts

Expert clinical opinion has a legitimate and valuable role in clinical evaluation under MDR. When used appropriately, it strengthens the demonstration of safety and performance by providing clinical interpretation that data alone cannot deliver.

But expert opinion is not a substitute for clinical evidence, and it’s not a remedy for an inadequate clinical evaluation.

The difference between expert opinion that works and expert opinion that fails comes down to structure, transparency, and integration. The opinion must be tied to data, documented with clear reasoning, provided by appropriately qualified and independent experts, and integrated into the clinical evaluation methodology.

When manufacturers treat expert opinion as a last-minute add-on or a way to cover data gaps, reviewers see through it immediately. When manufacturers use expert opinion as it’s intended, as part of a robust and transparent clinical evaluation process, it becomes a genuine asset.

The question isn’t whether to use expert opinion. The question is how to use it in a way that stands up to regulatory scrutiny.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or intended purpose. As a minimum, it should be updated at least annually for class III and implantable devices, and at the interval justified in the PMS plan for lower-risk devices.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for the claims made, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61(5) and Annex XIV Part A Section 1
– MDCG 2020-13: Clinical evaluation assessment report template (CEAR)
– MDCG 2020-5: Clinical evaluation – equivalence: a guide for manufacturers and notified bodies

Related Resources

Read our complete guide: Clinical Evaluation Report (CER) under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR