Your complaint data might be clinical evidence in disguise

Written by Hatem Rabeh, MD, MSc Ing
Your Clinical Evaluation Expert and Partner

A manufacturer receives thirty complaint reports about minor skin irritation after three months of market surveillance. The complaints are logged, investigated, and closed with corrective action. The vigilance file is clean. Then the Notified Body asks during the audit: where is this data in your clinical evaluation? The team looks confused. "We handled them as complaints, not clinical data."

This happens more often than it should. Teams separate post-market surveillance from clinical evaluation as if they belong in different regulatory universes. One team manages complaints. Another updates the CER. The information never crosses.

But MDR does not make that separation. And neither do Notified Bodies during audits.

Under MDR Annex XIV Part A, clinical evidence includes data from PMCF activities, vigilance, and post-market surveillance. Complaint data often contains clinical information about device performance, safety events, and real-world use conditions. When that data reflects clinical outcomes or reveals patterns affecting benefit-risk, it stops being just a complaint. It becomes part of your clinical evidence base.

The question is not whether you log complaints properly. The question is whether you recognize which complaints carry clinical meaning and whether that meaning makes it into your clinical evaluation.

The regulatory expectation is clear but often misapplied

MDR Article 61 requires clinical evaluation to be planned, conducted, and updated throughout the device lifecycle, drawing on data from post-market surveillance. Article 83 requires a post-market surveillance system that actively and systematically collects and analyzes data on device safety and performance, while Articles 87 to 89 address vigilance reporting of serious incidents and field safety corrective actions. Annex III Section 1.1 requires your post-market surveillance plan to address the collection and assessment of complaints and other post-market experience.

MDCG 2020-8, the guidance on the PMCF evaluation report, explicitly connects post-market data to clinical evaluation. It states that PMCF findings, including complaint trends, must feed into the ongoing assessment of clinical safety and performance. This is not optional integration. This is a regulatory loop you are required to maintain.

Yet in many technical files, complaints live in one section, clinical evaluation lives in another. The CER references PMCF data in general terms but does not address specific complaint patterns. When a Notified Body reviewer cross-references the complaint log with the CER, they find no trace of clinically relevant issues that were clearly documented.

Common Deficiency
Complaint data is managed entirely within the quality management system without any assessment of clinical relevance. The CER does not reference complaint trends even when those trends involve safety signals or performance deviations affecting patient outcomes.

The deficiency notice comes back. The manufacturer is asked to demonstrate how complaint data was analyzed for clinical impact and integrated into the benefit-risk assessment. The team scrambles because there is no process for that transfer.

Not every complaint is clinical evidence

This does not mean every complaint belongs in your clinical evaluation. A report about damaged packaging does not carry clinical meaning. A complaint about labeling legibility is a quality issue, not a clinical data point.

But consider this. A surgeon reports difficulty inserting a device during a procedure. The complaint is logged as a usability issue. The investigation concludes that the device met specifications and user training was adequate. Case closed.

Then you receive three more similar reports over the next six months. Same difficulty. Same procedural context. Now you have a pattern. That pattern suggests something about device performance in real-world use that may not have appeared in your clinical investigation. It may indicate a limitation in the instructions for use, a need for additional user training, or a design characteristic that affects clinical outcomes in a specific subset of users.

This is when the complaint stops being just a complaint. It becomes clinical evidence that must inform your benefit-risk analysis.

The distinction is not always obvious. It requires clinical judgment. It requires someone with medical and regulatory expertise to look at the complaint log and ask: does this reflect a safety concern, a performance issue, or a real-world use condition that affects clinical outcomes?

If the answer is yes, that data must be evaluated clinically. And that evaluation must be documented in your CER.

The integration gap in most quality systems

Most quality management systems are built to manage complaints efficiently. They track the complaint, assign responsibility, investigate root cause, implement corrective action, close the case. The focus is on resolution and risk mitigation within the QMS framework.

But there is rarely a systematic process to screen complaints for clinical relevance. There is no trigger that flags a complaint for clinical evaluation. There is no designated handoff between the complaint handler and the clinical affairs team.

The complaint data sits in a database. The clinical evaluator writes the CER using literature, clinical investigation data, and perhaps some aggregated PMCF statistics. The two data streams never connect.

Then during a Notified Body audit, the reviewer asks about a specific complaint that involved an adverse event. The complaint was closed months ago. The investigation concluded no device defect. But the reviewer wants to know: was this event analyzed in the context of your clinical evaluation? Did it affect your benefit-risk conclusion? Is there any mention of this event type in your CER?

If the answer is no, you have a gap. And that gap is difficult to close retroactively because the clinical analysis was never done when the complaint was active.

Key Insight
The integration of complaint data into clinical evaluation is not automatic. It requires a defined process within your QMS that screens complaints for clinical relevance and triggers clinical assessment when needed. Without that process, clinically significant data disappears into closed complaint files.

What clinical relevance actually means

A complaint is clinically relevant if it involves or suggests:

A safety event affecting a patient or user. This includes adverse events that may not meet the threshold for vigilance reporting but still contribute to understanding the safety profile of your device.

A performance deviation that affects clinical outcomes. If the device does not perform as intended in a way that impacts diagnosis, treatment, or patient monitoring, that is clinical data.

A pattern of use errors or usability issues. Repeated complaints about how the device is used in real-world settings may reveal limitations in design, labeling, or training that were not evident in controlled clinical investigations.

Unexpected device behavior in specific patient populations or use conditions. Your clinical investigation may not have included all the patient types or conditions you now encounter post-market. Complaints from those populations provide real-world clinical evidence.

Any indication that your instructions for use, warnings, or contraindications may be insufficient. If users are making errors because instructions are unclear or if adverse events occur in populations you thought were excluded, that reflects on your clinical safety assessment.

If a complaint touches any of these areas, it carries clinical information. That information must be assessed. It must be documented. And it must feed into your ongoing clinical evaluation.
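The five screening criteria above lend themselves to a simple, explicit checklist. The following is a minimal sketch of how that checklist could be encoded in a complaint record; every field name here is hypothetical, not a prescribed MDR schema.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    """Minimal complaint record; all fields are illustrative."""
    description: str
    involves_patient_harm: bool = False      # safety event affecting a patient or user
    affects_clinical_outcome: bool = False   # performance deviation with clinical impact
    is_use_error: bool = False               # usability issue or use error
    unexpected_population: bool = False      # unexpected behavior in a patient subgroup
    labeling_gap_suspected: bool = False     # IFU, warnings, or contraindications in doubt

def is_clinically_relevant(c: Complaint) -> bool:
    """Flag the complaint for clinical assessment if it touches any of
    the five criteria; packaging or cosmetic issues stay unflagged."""
    return any([
        c.involves_patient_harm,
        c.affects_clinical_outcome,
        c.is_use_error,
        c.unexpected_population,
        c.labeling_gap_suspected,
    ])

# A packaging complaint is not flagged; a use-error report is.
packaging = Complaint("Outer box arrived dented")
insertion = Complaint("Surgeon reports difficulty inserting device", is_use_error=True)
print(is_clinically_relevant(packaging))   # False
print(is_clinically_relevant(insertion))   # True
```

The point of the sketch is not automation but explicitness: the complaint handler answers five yes/no questions, and any "yes" triggers the handoff to clinical review.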

How complaint data changes your clinical evaluation

When you integrate complaint data properly, it does several things.

First, it provides real-world evidence of device performance outside the controlled conditions of a clinical investigation. Clinical trials have inclusion and exclusion criteria. They have protocol-defined procedures. They have close monitoring. The real world does not. Complaint data shows you how your device actually behaves when those controls are removed.

Second, it helps identify emerging safety signals early. A single adverse event may not trigger a vigilance report. But three similar events over six months may indicate a trend. Complaint data analyzed systematically can surface those trends before they become serious incidents.
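The rolling-window idea behind that trend detection can be sketched in a few lines. The 180-day window and the threshold of three are illustrative values for this example, not regulatory thresholds; your PMS plan should define its own.

```python
from datetime import date, timedelta

def emerging_signal(event_dates, window_days=180, threshold=3):
    """Flag a potential trend when `threshold` or more similar events
    fall inside any rolling window of `window_days`."""
    dates = sorted(event_dates)
    for i, start in enumerate(dates):
        in_window = [d for d in dates[i:] if d <= start + timedelta(days=window_days)]
        if len(in_window) >= threshold:
            return True
    return False

# Three similar reports within six months: flagged for clinical review.
reports = [date(2024, 1, 10), date(2024, 3, 2), date(2024, 5, 20)]
print(emerging_signal(reports))  # True
```

Run against each complaint category rather than the whole log, this kind of check surfaces patterns that individual case closures hide.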

Third, it validates or challenges assumptions you made during pre-market clinical evaluation. You may have assumed that certain risks were mitigated by design features or user training. Complaint data tells you whether that assumption holds in practice.

Fourth, it strengthens your benefit-risk analysis by grounding it in post-market experience. A CER based only on literature and pre-market data is weaker than one that integrates real-world evidence from your own surveillance system.

Finally, it demonstrates to Notified Bodies and competent authorities that you have a functioning PMS-to-CER feedback loop. This is not just a regulatory checkbox. It is evidence that you are actively monitoring and reassessing your device in light of new information.

But none of this happens unless complaint data actually reaches your clinical evaluator.

Building the bridge between complaints and clinical evaluation

The solution is a defined process within your quality system that includes these steps.

First, establish criteria for clinical relevance. Define what types of complaints must be flagged for clinical assessment. This can be built into your complaint handling procedure so that when a complaint is logged, the handler checks whether it meets any of the clinical relevance criteria.

Second, assign responsibility. Designate who performs the clinical assessment. This is usually your clinical affairs specialist or medical officer. They review flagged complaints and determine whether the data affects your clinical evaluation.

Third, document the assessment. Even if the conclusion is that the complaint does not change your benefit-risk profile, document that assessment. This creates a trail showing that you considered the data clinically.

Fourth, update your CER when needed. If complaint data reveals new safety information, performance limitations, or emerging trends, incorporate that into your clinical evaluation report. Reference specific complaints or complaint categories in your analysis.

Fifth, close the loop with PMCF. If complaints suggest knowledge gaps or areas where your clinical data is weak, adjust your PMCF activities to address those gaps. Complaint data should inform what you investigate next.

This process does not need to be complex. It needs to be systematic. Every clinically relevant complaint must be seen by someone with clinical judgment. And the outcome of that review must be documented.
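The five steps above can be captured in a small assessment record. This is a sketch under assumed names only: every field, status, and helper below is hypothetical, not an MDR-mandated structure.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ClinicalAssessment:
    complaint_id: str
    assessed_by: str                        # designated clinical reviewer (step 2)
    affects_benefit_risk: bool              # conclusion of the clinical review
    rationale: str                          # documented even when the answer is no (step 3)
    cer_update_required: bool               # step 4
    pmcf_action: Optional[str] = None       # step 5: feed gaps back into PMCF
    assessed_on: date = field(default_factory=date.today)

def route_complaint(complaint_id: str, clinically_relevant: bool,
                    reviewer: str) -> Optional[ClinicalAssessment]:
    """Step 1 happens upstream: only flagged complaints reach this handoff.
    Returns None for complaints with no clinical relevance."""
    if not clinically_relevant:
        return None
    # Placeholder conclusion; in practice the reviewer fills these fields.
    return ClinicalAssessment(
        complaint_id=complaint_id,
        assessed_by=reviewer,
        affects_benefit_risk=False,
        rationale="No change to benefit-risk profile; single event, no pattern.",
        cer_update_required=False,
    )

record = route_complaint("C-2024-0031", clinically_relevant=True, reviewer="Clinical Affairs")
print(record.assessed_by)  # Clinical Affairs
```

Even a record this small gives the audit trail the process requires: who assessed, what they concluded, and whether the CER or PMCF plan must change.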

Key Insight
The bridge between complaints and clinical evaluation is not built with sophisticated software. It is built with a clear procedure, defined responsibility, and disciplined documentation. The key is ensuring that clinically relevant complaints are seen by the right person and assessed in the right context.

What Notified Bodies look for during audits

When a Notified Body audits your technical file, they do not review complaints and clinical evaluation separately. They look for connection.

They will open your complaint log and identify complaints involving adverse events, performance issues, or use errors. Then they will open your CER and look for any mention of those issues. If the complaint log shows a pattern and the CER does not address it, that is a finding.
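You can imitate that cross-check yourself before the audit does it for you. The substring matching below is a deliberately naive sketch of what a reviewer does by hand, with hypothetical category names; it is not a validated audit tool.

```python
def audit_cross_reference(complaint_categories, cer_text):
    """Return complaint categories that leave no trace in the CER text."""
    cer = cer_text.lower()
    return sorted(c for c in complaint_categories if c.lower() not in cer)

log = {"skin irritation", "insertion difficulty", "use error"}
cer = "The CER discusses skin irritation events observed post-market."
print(audit_cross_reference(log, cer))  # ['insertion difficulty', 'use error']
```

Anything the check returns is a candidate finding: a documented complaint category with no corresponding clinical analysis.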

They will ask to see your process for integrating PMS data into clinical evaluation. They want evidence that complaint data is systematically reviewed for clinical relevance. They want to see documentation showing that someone with clinical expertise assessed the data.

They will check whether your PMCF plan addresses knowledge gaps revealed by complaints. If your complaints suggest that certain patient populations experience more adverse events, they expect to see that reflected in your PMCF strategy.

They will verify that your benefit-risk analysis includes post-market data. If your CER concludes that risks are acceptable but your complaint log shows unaddressed safety concerns, that creates a credibility problem.

The expectation is not perfection. The expectation is that you have a functioning system and that you use the data you collect. If complaint data is never analyzed clinically, your PMS system is incomplete under MDR.

The cost of ignoring this connection

Failing to integrate complaint data into clinical evaluation has consequences.

First, you may miss safety signals. Patterns that should trigger investigation go unnoticed because no one with clinical judgment is looking at the data. By the time the pattern becomes obvious, you may face serious incidents that could have been prevented.

Second, your CER becomes outdated faster. If you only update the CER during periodic reviews and ignore real-time complaint data, your clinical evaluation lags behind reality. When something changes post-market, your documented benefit-risk assessment no longer reflects current knowledge.

Third, you create audit findings. Notified Bodies and competent authorities will identify the gap. You will receive non-conformities. You will need to implement corrective actions retroactively, which is more costly and disruptive than building the right process from the start.

Fourth, you weaken your regulatory defense. If a serious incident occurs and an authority investigates, they will review your complaint history. If prior complaints suggested a safety concern and you never assessed that concern clinically, it becomes difficult to argue that you maintained a robust PMS system.

The risk is not just regulatory. It is clinical. Real patients and users are reporting real experiences with your device. If that information never reaches the people responsible for evaluating clinical safety and performance, you are operating with incomplete knowledge.

Moving from reactive to proactive

Most manufacturers handle complaints reactively. A complaint comes in. The system responds. The case is closed. The focus is on resolving the immediate issue.

Clinical evaluation requires a proactive mindset. You are not just resolving complaints. You are learning from them. You are using them to refine your understanding of how your device performs in the real world. You are feeding that understanding back into your clinical evaluation so that your benefit-risk assessment evolves with your experience.

This shift does not require massive resources. It requires awareness. It requires someone in your organization to recognize that complaint data is not just quality data. It is clinical data when it touches safety, performance, or real-world use conditions.

It requires your clinical evaluator to have access to complaint information and to actively seek it out when updating the CER. It requires your quality team to flag complaints that carry clinical meaning. And it requires your leadership to understand that clinical evaluation is not a one-time report. It is an ongoing process fed by multiple data streams including complaints.

When you make that shift, your clinical evaluation becomes more credible. Your PMS system becomes more integrated. And your regulatory position becomes stronger because you can demonstrate that you actually use the data you collect.

The next time a Notified Body asks where complaint data appears in your clinical evaluation, you will have an answer. Not because you scrambled to create one during the audit, but because it was there all along.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when the device or its intended purpose changes, and otherwise at planned intervals, at least annually for class III and implantable devices.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR): Articles 61, 83, 87 to 89; Annex III; Annex XIV Part A
– MDCG 2020-8: Post-Market Clinical Follow-up (PMCF) Evaluation Report Template
– MDCG 2020-5: Clinical Evaluation – Equivalence