Why Complaint Data Keeps Failing Clinical Evaluation Review
A manufacturer submits a clinical evaluation report with a complaint summary table showing zero serious incidents and low complaint rates. The Notified Body issues a major non-conformity. The manufacturer is confused. The data was accurate. But accuracy alone does not meet MDR requirements.
In This Article
- The Regulatory Framework for Complaint Data in Clinical Evaluation
- What Most Manufacturers Include and Why It Is Not Enough
- The Data Points That Actually Matter
- How to Structure Complaint Analysis in the CER
- The Link Between Complaint Data and PMCF
- What Happens When Complaint Analysis Is Missing
- The Practical Reality
I see this deficiency pattern repeatedly in clinical evaluation reviews. Manufacturers include complaint data in Section 6 of the CER, present clean numbers, and assume the requirement is met. Then the reviewer flags it. Not because the data is wrong, but because the analysis is missing.
The MDR requires post-market data to inform clinical evaluation continuously. Complaints are post-market data. But the regulatory expectation is not just reporting complaints. It is analyzing them in a way that supports or challenges the benefit-risk profile.
Most manufacturers treat complaint analysis as a passive data dump. Reviewers expect active investigation of what the complaints reveal about clinical safety and performance.
The Regulatory Framework for Complaint Data in Clinical Evaluation
Article 61(11) of the MDR requires manufacturers to update clinical evaluation throughout the lifecycle of the device. Annex XIV Part A states that clinical evaluation must include analysis of currently available product information, including information from post-market surveillance.
MDCG 2020-13, the Clinical Evaluation Assessment Report template used by Notified Body reviewers, makes clear that post-market clinical follow-up data, vigilance data, and complaint data must feed into the clinical evaluation. This is not optional. It is a continuous obligation.
The question is not whether to include complaint data. The question is what data points actually matter for clinical evaluation purposes.
Complaint data in a CER is not the same as complaint data in a vigilance report. The CER analysis must link complaints to clinical claims, intended use, and benefit-risk conclusions.
What Most Manufacturers Include and Why It Is Not Enough
The typical complaint section in a CER includes total complaint numbers, complaint rates per unit sold, and a breakdown by category. Some manufacturers add a trend analysis showing the rate over time.
This is descriptive. It is not evaluative.
Reviewers want to see how complaints inform your understanding of the clinical profile. They want to see whether the complaint pattern aligns with the risks you identified in your clinical evaluation. They want to see whether complaints reveal gaps in your clinical data or challenge your equivalence assumptions.
A clean complaint rate does not automatically mean clinical safety is confirmed. It might mean your surveillance system is not sensitive enough. It might mean users are not reporting issues. It might mean the complaint categories are too broad to detect clinical signals.
The Missing Link Between Complaints and Clinical Conclusions
Here is what goes wrong in most submissions. The complaint section appears in the CER as a standalone data block. It does not connect to the risk analysis. It does not challenge or confirm the clinical claims. It does not reference the PMCF findings or literature evidence.
The reviewer reads the complaint data and then reads the benefit-risk conclusion. There is no analytical thread between them. The complaint data becomes a compliance checkbox instead of an evidence input.
That is the deficiency.
Complaint data is presented without linking complaint patterns to specific residual risks, clinical claims, or performance endpoints. The data exists in isolation from the benefit-risk evaluation.
The Data Points That Actually Matter
When I review complaint data for clinical evaluation purposes, I look for specific data points that inform clinical safety and performance. These are not administrative metrics. They are clinical signals.
1. Complaint Type Mapped to Residual Risks
Every complaint should be categorized in a way that maps back to your risk analysis. If your risk management file identifies “tissue damage during insertion” as a residual risk, your complaint categories should allow you to detect complaints related to that risk.
Generic categories like “device malfunction” or “user error” do not help clinical evaluation. You need categories that reflect clinical outcomes and performance failures relevant to your intended use.
The question is not how many complaints you received. The question is whether the complaint pattern matches the risk profile you predicted.
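To make this concrete, here is a minimal sketch of what risk-mapped complaint categorization looks like in practice. The category names, risk IDs, and complaint records are hypothetical, and a real system would draw both from the manufacturer's complaint handling database and risk management file:

```python
# Illustrative sketch: map complaint categories to residual risks from the
# risk management file. All category names, risk IDs, and records are
# hypothetical examples.

# Hypothetical mapping: complaint category -> residual risk ID in the RMF
CATEGORY_TO_RISK = {
    "tissue damage during insertion": "RISK-014",
    "device fracture in situ": "RISK-021",
    "delayed deployment": "RISK-007",
}

complaints = [
    {"id": "C-101", "category": "tissue damage during insertion"},
    {"id": "C-102", "category": "device malfunction"},  # too generic to map
    {"id": "C-103", "category": "delayed deployment"},
]

mapped, unmapped = {}, []
for c in complaints:
    risk_id = CATEGORY_TO_RISK.get(c["category"])
    if risk_id:
        # Complaint is traceable to a predicted residual risk
        mapped.setdefault(risk_id, []).append(c["id"])
    else:
        # Generic categories hide clinical signals; flag for re-categorisation
        unmapped.append(c["id"])

print(mapped)
print(unmapped)
```

The unmapped bucket is the point: every complaint that cannot be traced to a residual risk is either a categorization failure or a potential new risk, and both outcomes need a response in the CER.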
2. Serious Complaints vs. Expected Adverse Events
If your clinical evaluation identified specific adverse events as known and acceptable risks, your complaint data should show whether those events are occurring at the expected frequency.
A high number of complaints about a known risk is not necessarily a problem. But if the frequency is higher than what the literature predicted or what the equivalent device experienced, that is a clinical signal.
Reviewers expect you to compare complaint rates to the adverse event rates you cited in your literature review or equivalence data. If there is a discrepancy, you must explain it.
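The comparison itself is simple arithmetic, as the sketch below shows with hypothetical figures. The signal threshold and any formal statistical test (for example a Poisson exact test) should come from the manufacturer's own PMS procedure, not from this example:

```python
# Illustrative sketch: compare an observed complaint rate for a known adverse
# event against the rate predicted by the literature or equivalence data.
# All figures and the threshold are hypothetical.

units_sold = 12_000
observed_events = 30          # complaints mapped to this known adverse event
expected_rate = 0.0015        # 0.15%, the rate cited in the literature review

observed_rate = observed_events / units_sold
rate_ratio = observed_rate / expected_rate

# Screening rule: a rate ratio above a pre-defined threshold is treated as a
# clinical signal that must be explained in the CER.
SIGNAL_THRESHOLD = 1.5
is_signal = rate_ratio > SIGNAL_THRESHOLD

print(f"observed {observed_rate:.4%} vs expected {expected_rate:.4%}, "
      f"ratio {rate_ratio:.2f}, signal: {is_signal}")
```

In this hypothetical case the observed rate exceeds the literature rate by more than the threshold, so the discrepancy would need an explicit explanation in the CER.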
3. Complaints That Challenge Clinical Claims
This is the data point manufacturers most often miss. If your clinical evaluation claims that the device performs equivalently to an equivalent device, your complaint data should support that claim.
If you claim clinical benefit based on ease of use, but complaints show frequent user errors, that is a contradiction. If you claim safety based on equivalence, but complaints reveal failure modes not seen in the equivalent device, your equivalence argument weakens.
Complaint data can validate or invalidate clinical claims. The analysis must address both possibilities.
If a complaint reveals a clinical outcome that was not identified in your risk analysis or predicted in your literature review, that is a gap. It must be investigated and explained in the CER.
4. Root Cause Analysis for Clinically Relevant Complaints
Not every complaint requires deep investigation. But complaints that involve harm, performance failure, or unexpected clinical outcomes must have documented root cause analysis.
The CER should reference this analysis. If the root cause is design-related, the clinical evaluation must assess whether the design issue affects the benefit-risk profile. If the root cause is user-related, the clinical evaluation must assess whether the instructions for use are adequate.
Reviewers look for this link. They want to see that complaint investigations feed back into clinical understanding.
5. Trend Analysis Over Device Lifecycle
A single snapshot of complaint data is not enough. The CER must show complaint trends over time, especially if the device has been on the market for multiple years.
Are complaint rates stable? Are they increasing? Are new complaint types emerging? Are previously frequent complaints decreasing after corrective actions?
Trend analysis shows whether your post-market surveillance is functioning and whether your understanding of the clinical profile is evolving.
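A minimal trend screen over yearly data can surface exactly the questions listed above: rising rates and newly emerging complaint types. The yearly figures and category names below are hypothetical:

```python
# Illustrative sketch: screen yearly complaint data for rising rates and
# newly emerging complaint types. All figures and names are hypothetical.

yearly = {
    2022: {"units": 8_000,  "by_category": {"insertion difficulty": 12}},
    2023: {"units": 10_000, "by_category": {"insertion difficulty": 14,
                                            "connector leak": 6}},
    2024: {"units": 12_000, "by_category": {"insertion difficulty": 15,
                                            "connector leak": 15}},
}

years = sorted(yearly)
findings = []
seen = set()
prev_rates = {}
for y in years:
    units = yearly[y]["units"]
    for cat, n in yearly[y]["by_category"].items():
        rate = n / units  # complaint rate per unit sold, per year
        if cat not in seen:
            if y != years[0]:
                findings.append(f"{y}: new complaint type '{cat}'")
            seen.add(cat)
        elif cat in prev_rates and rate > prev_rates[cat]:
            findings.append(f"{y}: rising rate for '{cat}'")
        prev_rates[cat] = rate

for f in findings:
    print(f)
```

Note that raw counts alone would mislead here: "insertion difficulty" complaints rise in absolute numbers each year, but the rate per unit sold actually falls, while the "connector leak" rate doubles. That is why trend analysis must be rate-based.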
How to Structure Complaint Analysis in the CER
The complaint section in a CER should not be a passive report. It should be a clinical analysis that interprets complaint data in the context of the benefit-risk evaluation.
Start with the total complaint data. Then break it down by clinically relevant categories. Then map those categories to your risk analysis, clinical claims, and PMCF findings.
For each significant complaint type, ask and answer these questions:
- Does this complaint pattern match what we predicted in the risk analysis?
- Does this complaint challenge any clinical claim made in the CER?
- Does this complaint reveal a risk that was not previously identified?
- Does this complaint suggest a need for additional clinical data or PMCF investigation?
If the answer to any of these questions is yes, the CER must address it explicitly.
The common deficiency is a complaint section that includes data but does not interpret it: no statement of whether the complaint pattern confirms, contradicts, or challenges the clinical evaluation conclusions.
The Link Between Complaint Data and PMCF
Complaint data should inform your PMCF plan. If complaints reveal gaps in clinical understanding, your PMCF should be designed to investigate those gaps.
If complaints show higher than expected adverse events, your PMCF should include specific endpoints to monitor those events. If complaints suggest user issues, your PMCF should include usability assessments.
Reviewers check this alignment. They want to see that your PMCF is responsive to post-market signals. If your complaint data shows clinical concerns but your PMCF plan does not address them, that is a deficiency.
What Happens When Complaint Analysis Is Missing
When a Notified Body reviews a CER and finds that complaint data is presented without analysis, the typical outcome is a major non-conformity. The reviewer will state that post-market data has not been adequately integrated into the clinical evaluation.
The corrective action is not just adding more data. It is rewriting the complaint section to show clinical interpretation. It is linking complaints to risks, claims, and PMCF. It is demonstrating that the manufacturer understands what the complaint data reveals about the device’s clinical profile.
This is not a documentation exercise. It is a clinical reasoning exercise.
The Practical Reality
Most manufacturers have complaint data. Most manufacturers include it in the CER. But the quality of the analysis varies widely.
The manufacturers who pass review are the ones who treat complaint data as clinical evidence. They interpret it. They challenge their own assumptions with it. They use it to refine their understanding of benefit-risk.
The manufacturers who struggle are the ones who treat complaint data as a regulatory checkbox. They report numbers without interpretation. They assume clean data is sufficient.
It is not.
Complaint analysis in clinical evaluation is not about proving everything is fine. It is about demonstrating that you are actively monitoring, investigating, and learning from post-market experience.
That is what the MDR requires. That is what reviewers expect.
If your complaint section does not do that, it will be flagged.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under Regulation (EU) 2017/745 (MDR) that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on post-market surveillance and PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and, for class III and implantable devices, at least annually in line with post-market surveillance reporting.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Article 61(11) and Annex XIV Part A
– MDCG 2020-13, Clinical Evaluation Assessment Report Template