Why Your CER Keeps Getting Rejected: The QMS Integration Gap

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner

I reviewed a clinical evaluation report last month that was technically brilliant. Strong literature review. Solid statistical analysis. Comprehensive appraisal. The Notified Body still issued a major non-conformity. The reason? The CER existed in isolation from the quality management system. No traceability to post-market surveillance data. No connection to complaint handling. No linkage to risk management updates. The clinical team did excellent work, but it was disconnected from the reality documented in their own QMS.

This is not an isolated case. I see this pattern repeatedly in MDR submissions and audit responses. Manufacturers invest heavily in their clinical evaluation reports. They hire experienced consultants. They conduct thorough literature searches. They produce impressive documents.

Then they submit to their Notified Body and receive findings about process integration, not clinical content.

The gap is not about clinical competence. It is about how clinical evaluation connects to the quality system that governs the entire device lifecycle.

The Regulatory Foundation Everyone Cites But Few Actually Implement

MDR Article 10(9) requires manufacturers to have documented procedures for clinical evaluation. ISO 13485 requires documented processes for all activities affecting product conformity. MDCG 2020-6 emphasizes the ongoing nature of clinical evaluation.

Everyone quotes these requirements. Few manufacturers actually build the procedural bridges between their clinical evaluation processes and their broader QMS.

The result? Clinical evaluation becomes a document production exercise rather than an integrated process that informs and is informed by other quality system elements.

Common Deficiency
Clinical evaluation procedures exist in the QMS document tree, but they do not define clear inputs from post-market surveillance, complaint handling, corrective actions, or risk management. The procedure describes what goes into a CER, not how clinical evaluation integrates with operational quality processes.

I have seen manufacturers with comprehensive QMS manuals where the clinical evaluation procedure sits alongside procedures for design control, supplier management, and CAPA. But when you trace the process flows, clinical evaluation is an island. No formal inputs. No defined outputs to other processes. No clear triggers for updates based on QMS data.

This is what auditors find. This is what leads to non-conformities.

The Integration Points That Actually Matter

Let me walk through the connections that reviewers look for and manufacturers consistently miss.

Post-Market Surveillance Data to Clinical Evaluation

Your PMS plan defines what data you collect from the market. Your PMCF plan defines what clinical data you generate post-market. Your clinical evaluation should systematically integrate this information.

Here is what I observe in practice: PMS reports exist. PMCF reports exist. CER updates happen periodically. But there is no documented procedure that defines how PMS findings trigger a review of clinical evaluation conclusions.

A manufacturer receives increased complaint rates about a specific failure mode. The complaint handling process works. CAPA is initiated. Risk management is updated. But the clinical evaluation is not systematically reviewed to determine if the increased failure rate affects the benefit-risk conclusion or the clinical performance claims.

The QMS managed the quality event. But it did not integrate that event into the ongoing clinical evaluation required by MDR.

Key Insight
The integration point is not just about data flow. It is about decision points. Your QMS procedures must define who reviews PMS data for clinical relevance, what triggers a clinical evaluation update, and how that decision is documented. Without this, you have data collection but not clinical evaluation integration.
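To make that decision point concrete, here is a minimal sketch of what "who reviews, what triggers, how it is documented" can look like in code. Everything here is illustrative: the names (`PmsSignal`, `screen_signal`), the 25% trigger threshold, and the reviewer role are invented for the example; your own procedure defines its own thresholds, responsibilities, and records.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of a documented decision point: a PMS signal is
# screened for clinical relevance against a predefined trigger rule, and
# the outcome is recorded whether or not a CER update is triggered.

@dataclass
class PmsSignal:
    source: str                    # e.g. "complaint trend", "PMCF interim report"
    description: str
    complaint_rate_change: float   # relative change vs. baseline, e.g. 0.4 = +40%

@dataclass
class ClinicalReviewDecision:
    signal: PmsSignal
    reviewer_role: str
    cer_update_required: bool
    rationale: str
    decided_on: date = field(default_factory=date.today)

def screen_signal(signal: PmsSignal, threshold: float = 0.25) -> ClinicalReviewDecision:
    """Apply the predefined trigger rule and document the outcome either way."""
    triggered = signal.complaint_rate_change >= threshold
    rationale = (
        f"Rate change {signal.complaint_rate_change:+.0%} "
        f"{'meets' if triggered else 'is below'} the {threshold:.0%} trigger threshold."
    )
    return ClinicalReviewDecision(
        signal=signal,
        reviewer_role="Clinical Evaluation Lead",  # responsibility named in the procedure
        cer_update_required=triggered,
        rationale=rationale,
    )

decision = screen_signal(PmsSignal("complaint trend", "Failure mode X", 0.40))
print(decision.cer_update_required, decision.rationale)
```

The key design choice is that a `ClinicalReviewDecision` record is produced even when no update is triggered. That record is what lets you show an auditor the review happened systematically, not just when something went wrong.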

Risk Management Updates to Clinical Evaluation

Risk management is supposed to be iterative throughout the device lifecycle. Clinical evaluation is supposed to be ongoing throughout the device lifecycle. These are not separate iterative processes. They inform each other.

When your risk management file is updated based on post-market data, does your clinical evaluation procedure require a systematic review of those updates? When your clinical evaluation identifies new literature about adverse effects, does your procedure define how that feeds into risk management?

Most manufacturers answer yes conceptually. But when I ask to see the documented procedure that defines this integration, the documented decision points, and the records that prove it happens systematically, the answer changes.

I see risk management files updated in response to incidents. I see clinical evaluation reports updated on annual schedules. But I rarely see the procedural linkage that makes these updates responsive to each other.

Clinical Evaluation to Design Changes

Here is a scenario I encounter regularly: A manufacturer updates their clinical evaluation and identifies new data suggesting a design modification could reduce a known residual risk. The CER is updated. The conclusion notes this potential improvement.

Six months later, the Notified Body asks: What happened with that design modification suggestion? Was it formally evaluated? Was a design change initiated? Was it rejected with documented rationale?

The manufacturer has no record. The CER identified an improvement opportunity, but the QMS did not process that identification as a formal input to design control.

The integration point is missing. Clinical evaluation generates insights, but those insights do not feed systematically into change management or design control processes.

Common Deficiency
Clinical evaluation conclusions that identify improvement opportunities or highlight gaps in clinical data do not trigger formal actions in other QMS processes. The CER documents the finding, but no procedure requires that finding to be processed as a design input, a PMCF objective, or a risk management update.
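One way to picture the missing linkage is a cross-process register: every improvement opportunity or data gap in the CER must land in exactly one downstream process and stay open until closed with a documented disposition. The sketch below uses invented names (`CerFinding`, `route_finding`) and is a simplified illustration of the traceability idea, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative sketch: each CER finding is routed to one downstream QMS
# process, so a record exists even when the suggestion is later rejected.

Destination = Literal["design_input", "pmcf_objective", "risk_management_update"]

@dataclass
class CerFinding:
    cer_section: str
    description: str
    destination: Destination
    disposition: str = "open"   # later: "accepted" or "rejected with rationale"

def route_finding(finding: CerFinding, register: list[CerFinding]) -> None:
    """Log the finding in the cross-process register; the receiving
    process owner must close it with a documented disposition."""
    register.append(finding)

register: list[CerFinding] = []
route_finding(
    CerFinding("8.2", "Design change could reduce residual risk R-12", "design_input"),
    register,
)
print(len(register), register[0].disposition)  # 1 open
```

With a register like this, the Notified Body's question six months later ("what happened with that design modification suggestion?") has a documented answer: either a design change record or a rejection rationale, traceable from the CER finding.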

State of the Art to Literature Monitoring

MDCG 2020-6 requires clinical evaluation to consider the current state of the art. This is not a one-time assessment. State of the art evolves. New techniques emerge. New evidence appears. Comparative devices change.

Your QMS should include a procedure for ongoing literature monitoring and state of the art assessment. This procedure should define who is responsible, what sources are monitored, how frequently, and what triggers an update to the clinical evaluation.

I rarely see this documented systematically. I see literature searches conducted when the CER is updated. But I do not see ongoing literature monitoring procedures integrated into the QMS with clear responsibilities and frequencies.

When a Notified Body asks how you stay current with state of the art between CER updates, the answer should be a documented procedure with records of execution. Not an explanation that the clinical team stays informed.
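At its core, such a procedure is a schedule with owners and execution records. Here is a hedged sketch under assumed names (`MonitoredSource`, `overdue`) with illustrative sources and review frequencies; the point is that overdue reviews become visible and auditable, not that monitoring lives in a script.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch of a literature-monitoring schedule: each source has
# a responsible role and a review frequency, and overdue reviews are
# flagged so execution between CER updates can be demonstrated.

@dataclass
class MonitoredSource:
    name: str              # e.g. a documented PubMed search strategy
    owner_role: str        # e.g. "Clinical Affairs"
    frequency_days: int
    last_reviewed: date

def overdue(sources: list[MonitoredSource], today: date) -> list[str]:
    """Return the names of sources whose next review date has passed."""
    return [
        s.name for s in sources
        if today > s.last_reviewed + timedelta(days=s.frequency_days)
    ]

sources = [
    MonitoredSource("PubMed search strategy v3", "Clinical Affairs", 90, date(2024, 1, 10)),
    MonitoredSource("Manufacturer incident databases", "PMS Officer", 30, date(2024, 4, 1)),
]
print(overdue(sources, today=date(2024, 4, 20)))  # ['PubMed search strategy v3']
```

Whether this lives in a quality tool, a spreadsheet, or a script is secondary. What matters is that the procedure names the sources, the owner, the frequency, and the update trigger, and that records of each review exist.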

The Procedural Gap That Creates Audit Findings

Let me be specific about what creates non-conformities in audits.

Auditors do not only read your CER. They trace processes. They start with a PMS report and ask to see how that data fed into clinical evaluation. They start with a complaint trend and ask to see the clinical review. They start with a literature finding and ask to see how it was processed.

When your procedures do not define these integration points, you cannot demonstrate systematic compliance. You can show that the data eventually reached the CER. But you cannot show that your QMS ensures it always will.

This is the difference between reactive document updates and systematic process integration.

Key Insight
ISO 13485 requires process-based quality management. Clinical evaluation is a process, not just a document. If your QMS procedures do not define inputs, activities, outputs, responsibilities, and records for clinical evaluation as an integrated process, you have a gap in your quality system regardless of how good your CER content is.

I have seen manufacturers receive findings citing exactly this gap: the clinical evaluation process is not defined and integrated as a process within the quality system.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

Peace, Hatem

Your Clinical Evaluation Partner

Follow me for more insights and practical advice.

Related Resources

Read our complete guide to CER under EU MDR: Clinical Evaluation Report (CER) under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR