The Clinical Evaluation Team That Passes Review

Hatem Rabeh

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner


I have seen clinical evaluation reports rejected not because the data were weak, but because the team that wrote them had no qualified physician behind the signature. The manufacturer had engineers, consultants, and regulatory specialists. But when the Notified Body asked for the medical qualifications of the evaluator, there was silence.

This is not an isolated case. It happens more often than manufacturers expect. The assumption is that clinical evaluation is a regulatory document, so regulatory people should write it. But the regulation is clear: clinical evaluation requires medical and scientific competence. And when that competence is not demonstrated, the report does not pass.

The problem is not just about having a name on a signature page. The problem is that the team composition determines what questions get asked, what evidence gets identified, and what conclusions can be justified. A weak team produces weak reports, even with strong data.

What the Regulation Actually Requires

MDR Article 61 and Annex XIV set the expectation clearly: clinical evaluation must be performed by qualified personnel with appropriate training and experience. MDCG 2020-13 reinforces this by specifying that the clinical evaluation should involve medical expertise relevant to the device type and clinical field.

But here is what many manufacturers miss: this is not just a checkbox requirement. The Notified Body reviewer is looking for evidence that the team understands the clinical discipline, the pathophysiology, the treatment alternatives, and the clinical context where the device will be used.

When the team lacks this depth, it shows. The state of the art is incomplete. The clinical data interpretation is shallow. The benefit-risk analysis does not reflect real clinical judgment.

Key Insight
The team composition is not administrative. It determines the quality of the clinical reasoning throughout the entire report. If the team cannot clinically justify decisions, the report will not withstand review.

The Core Roles That Must Be Present

A functional clinical evaluation team has three essential competencies. Not three people necessarily, but three areas of expertise that must be represented.

Medical and Clinical Competence

This is the foundation. Someone with medical training and clinical experience must lead the evaluation or be deeply involved in it. This person understands the clinical condition, the patient population, the care pathway, and the relevant treatment standards.

This role is responsible for defining what clinical outcomes matter, interpreting the clinical significance of data, and assessing whether the evidence supports safe and effective use in the intended clinical context.

Without this competence, the evaluation becomes a regulatory exercise disconnected from clinical reality.

Scientific and Methodological Competence

Clinical evaluation is an evidence assessment process. Someone must be able to critically appraise literature, assess study design, recognize bias, and determine whether the evidence is sufficient and applicable.

This role ensures that the clinical data review is rigorous, that equivalence reasoning is valid, and that gaps in evidence are identified and addressed. It is not enough to list studies. The team must demonstrate that they understand what the studies actually show.

In many cases, this competence overlaps with the medical role. A physician trained in evidence-based medicine can fulfill both. But if the medical expert is not trained in critical appraisal, someone else on the team must be.

Regulatory and Technical Knowledge

The clinical evaluation must align with the regulatory framework and the technical characteristics of the device. Someone must understand the classification, the intended purpose, the essential performance requirements, and how the device fits within the quality management system.

This role ensures that the clinical evaluation answers the questions the regulation asks, that it integrates with risk management and post-market surveillance, and that it follows the format and structure expected by the Notified Body.

But this role cannot replace the clinical and scientific competence. Regulatory knowledge structures the report. It does not generate the clinical conclusions.

Common Deficiency
Manufacturers often assign clinical evaluation to regulatory affairs professionals without medical or scientific background. The result is a document that looks correct structurally but lacks clinical depth. Notified Bodies reject it not for format, but for insufficient clinical reasoning.

What Happens When the Team Is Incomplete

I have reviewed reports where the manufacturer hired a consultant to write the clinical evaluation, but that consultant had no access to the device design team, the clinical data, or the quality records. The consultant worked from a template and publicly available literature.

The report was generic. The device description was superficial. The state of the art did not address the specific clinical application. The benefit-risk analysis did not reflect actual use conditions.

When the Notified Body asked follow-up questions, the consultant could not answer them. The manufacturer could not answer them either, because they had outsourced the thinking, not just the writing.

This is what happens when the team composition is treated as a formality.

Another pattern I see: the clinical evaluation is written by someone with medical credentials, but that person has no experience with the specific clinical field. A dermatologist writes the evaluation for a cardiovascular device. A general practitioner evaluates an oncology diagnostic.

The qualifications look correct on paper. But the clinical reasoning is weak because the evaluator does not have the specialized knowledge needed to assess the device in context.

The Notified Body notices this. They ask questions that reveal the gap. The manufacturer then realizes they need to rebuild the team.

How to Structure the Team Effectively

The best approach is to establish clear ownership supported by structured collaboration. One person should be named as the clinical evaluation lead. This person is accountable for the conclusions and must have the medical and scientific qualifications to defend them.

That lead works with a support team that brings technical, regulatory, and specialized clinical input. The team meets regularly during the evaluation process, not just at the signature stage.

In practice, this often means:

For low-risk devices with well-established equivalence: A qualified medical professional with regulatory training can lead the evaluation, supported by the design and regulatory teams.

For higher-risk devices or novel technologies: A clinical specialist in the relevant field should lead, supported by a methodologist for evidence appraisal and a regulatory professional for alignment with MDR requirements.

For devices with limited clinical data: The team must include someone who can design and interpret clinical investigations, because the evaluation will likely identify gaps that require new data generation.

Key Insight
The team composition should match the complexity of the clinical evaluation task. A simple equivalence-based evaluation for a well-known device type requires less specialized input than a novel high-risk device with limited literature. Tailor the team to the risk and the evidence gaps.
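The matching logic described above can be sketched as a simple decision aid. The following Python snippet is purely illustrative: the role names, risk categories, and branching rules are assumptions drawn from this section, not terms defined in the MDR or MDCG guidance, and any real staffing decision needs documented justification.

```python
# Illustrative sketch only: suggest a minimum competence profile for a
# clinical evaluation team based on device risk and the evidence situation.
# Role names and risk categories are assumptions for illustration.

def suggest_team(risk_class: str,
                 novel_technology: bool,
                 clinical_data_limited: bool) -> list[str]:
    """Return a suggested list of competencies for the evaluation team."""
    # Regulatory/technical knowledge is always needed for MDR alignment.
    team = ["regulatory/technical professional"]
    if risk_class in ("III", "IIb") or novel_technology:
        # Higher-risk or novel devices: specialist-led, with a methodologist.
        team.insert(0, "clinical specialist in the relevant field (lead)")
        team.insert(1, "methodologist for evidence appraisal")
    else:
        # Low-risk, well-established devices: a qualified medical
        # professional with regulatory training can lead.
        team.insert(0, "qualified medical professional with regulatory training (lead)")
    if clinical_data_limited:
        # Evidence gaps will likely require new data generation.
        team.append("clinical investigation expertise (study design and interpretation)")
    return team
```

For example, a class III device would return a specialist-led team, while a well-understood class I device would return a smaller profile led by a medically qualified professional with regulatory training.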

External Support and When to Use It

Many manufacturers do not have in-house medical expertise. This is common, especially for smaller companies or engineering-focused organizations. In these cases, external clinical evaluation support is necessary.

But external support must be integrated, not isolated. The external evaluator must have access to the technical file, risk management documentation, post-market data, and the design team. They must be able to ask questions and receive answers.

I have seen manufacturers hire external physicians to review and sign reports written by internal regulatory staff. This does not work. The physician cannot defend conclusions they did not develop. The Notified Body will ask detailed questions, and the lack of involvement becomes obvious.

If you use external support, involve them early. Give them the information they need. Treat them as part of the team, not as a signature service.

Documentation of Team Qualifications

The clinical evaluation report should include a section that identifies the team members and their qualifications. This is not optional. MDCG 2020-13 explicitly states that the report should document who performed the evaluation and what their relevant qualifications are.

This means:

• Names and roles of team members
• Educational background and professional credentials
• Relevant clinical or scientific experience
• Specific involvement in the evaluation process

Do not just list names. Explain why each person is qualified to contribute to this specific evaluation. A CV in the appendix is not enough if the relevance is not clear.

Notified Bodies check this. If the qualifications do not match the device type or clinical field, they will raise a nonconformity.
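For teams that keep this information in a structured form, the bullet points above can be captured as a small record type and rendered into the report section. This Python sketch is illustrative only; the field names are assumptions, and a real qualifications section should follow the manufacturer's documented template.

```python
# Illustrative sketch: a record structure for the team-qualifications
# section of a CER. Field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    role: str
    credentials: str          # degrees, board certifications
    relevant_experience: str  # why this person is qualified for THIS device
    contribution: str         # specific involvement in the evaluation

def render_qualifications(members: list[TeamMember]) -> str:
    """Render a plain-text qualifications section for the report."""
    lines = ["Evaluation Team and Qualifications", ""]
    for m in members:
        # Connect each person's qualifications to the specific evaluation task,
        # rather than listing names and generic CVs.
        lines.append(f"- {m.name} ({m.role}): {m.credentials}. "
                     f"Relevance: {m.relevant_experience}. "
                     f"Contribution: {m.contribution}.")
    return "\n".join(lines)
```

The point of the structure is the `relevant_experience` field: it forces the author to state why each person is qualified for this device and clinical field, which is exactly what reviewers look for.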

Common Deficiency
Manufacturers include generic CVs without explaining the relevance to the device or clinical field. The Notified Body then questions whether the team had appropriate expertise. Always connect qualifications to the specific evaluation task.

What Notified Bodies Actually Look For

During document review and audit, the Notified Body assesses whether the team had the competence to perform the evaluation. They do this by:

• Reviewing the qualifications documented in the report
• Asking follow-up questions that require clinical and scientific judgment
• Checking whether the conclusions align with the evidence and clinical knowledge
• Interviewing team members if uncertainties arise

If the answers reveal that the team did not understand the clinical context, the state of the art, or the evidence limitations, the Notified Body will conclude that the evaluation was not performed by appropriately qualified personnel.

This is a nonconformity. It cannot be resolved by adding a name to the report. It requires rework with proper team involvement.

Why This Matters Beyond Approval

The clinical evaluation team is not just for the initial submission. This team must remain engaged throughout the device lifecycle. They are responsible for updating the evaluation with new data, responding to post-market signals, and maintaining the clinical evidence.

If the team was assembled only to pass the initial review, the manufacturer will struggle to maintain compliance. When a PSUR is required, when a safety issue arises, when the state of the art evolves, the team must be able to respond with clinical judgment.

This is why the team composition must be sustainable. You cannot rebuild the team every time an update is needed.

The right team is not just about passing review. It is about maintaining clinical oversight over the device for as long as it is on the market.

I have worked with manufacturers who treated clinical evaluation as a one-time hurdle. They passed certification, then disbanded the team. Two years later, when post-market data raised questions, they had no one qualified to interpret it. They had to start over, recruiting expertise and relearning their own device.

This is inefficient and risky. Build the team to last.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. Under the MDR, class III and implantable devices require updates at least annually, while lower-risk classes follow the update intervals justified in the post-market surveillance plan.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDCG 2020-13: Clinical Evaluation Assessment Report Template
– MDCG 2020-5: Clinical Evaluation – Equivalence

Related Resources

Read our complete guide to CER under EU MDR: Clinical Evaluation Report (CER) under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR