Your Vigilance Data Is Not a Safety Report. It’s Evidence.
I see it in almost every CER that reaches me for review. Vigilance data sits in a separate section, reported but not interpreted. Incident numbers are listed. Root causes are summarized. Then the document moves on. The reviewers do not.
The disconnect happens because most teams treat vigilance as a compliance requirement, not as a clinical input. You collect incidents because Article 87 of MDR 2017/745 requires it. You report serious incidents and field safety corrective actions because that is how the system works. But when it comes to clinical evaluation, that data becomes an afterthought.
It should not.
Vigilance data is clinical evidence. It comes from real-world use. It reflects device performance under conditions your clinical investigations may never capture. And under MDR, it must feed directly into your benefit-risk assessment, your safety profile, and your conclusion on clinical performance.
This is not about copying incident reports into your CER. It is about analyzing the clinical meaning behind them.
Why Vigilance Data Belongs in Clinical Evaluation
MDR Article 61 on clinical evaluation requires manufacturers to continuously update their evaluation with all relevant clinical data. MDCG 2020-13, the template Notified Bodies use to assess clinical evaluations, makes the expectation explicit: post-market data, including vigilance reports, must be systematically reviewed and integrated.
The reasoning is simple. Clinical investigations are controlled environments. They have inclusion criteria, follow-up protocols, and monitored endpoints. Real-world use does not. Your device may perform differently in broader populations, under varied clinical settings, and over longer timeframes.
Vigilance data captures what happens when those variables change.
Vigilance data is not a reporting obligation separate from clinical evaluation. It is real-world clinical evidence that must inform your benefit-risk assessment and update your clinical performance conclusions.
When you report a serious incident to your competent authority, you provide details on what happened, the root cause, and corrective actions. That is the vigilance side. But the clinical evaluation side asks a different question: what does this incident tell us about device safety and performance across the entire marketed population?
That question does not get answered by filling out a form. It requires analysis.
What Reviewers Look For
When I review a CER, I do not look for a vigilance section that lists incidents. I look for integration of that data into the clinical reasoning.
Here is what that means in practice.
1. Incident Trends, Not Just Counts
A table showing five incidents over two years tells me nothing unless I see the denominator. How many devices were in use? What was the incident rate compared to your pre-market safety profile? Is the rate stable, increasing, or decreasing?
If your clinical investigation showed a 2% complication rate and your post-market data now shows 5%, that is not just a vigilance issue. It is a clinical signal that your benefit-risk conclusion may need updating.
A common mistake: listing incident counts without exposure data or trend analysis. Reviewers cannot assess whether the safety profile has changed if you do not provide context.
I see manufacturers present incident data in isolation. Seven device malfunctions. Three injuries. One near-miss. But no analysis of what those numbers mean against the volume of devices sold, the duration of use, or the comparator data from similar devices.
That is reporting, not evaluation.
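To make the contrast concrete, here is a minimal sketch of the rate calculation reviewers expect to see. Every figure, including the pre-market baseline and exposure estimates, is invented for illustration:

```python
# Minimal sketch: incident counts only become interpretable as rates
# against exposure. All figures below are hypothetical.

def incident_rate(incidents: int, device_years: float) -> float:
    """Incidents per 1,000 device-years of exposure."""
    return 1000 * incidents / device_years

# Hypothetical post-market periods: counts and estimated exposure.
periods = {
    "Year 1": {"incidents": 3, "device_years": 4200.0},
    "Year 2": {"incidents": 7, "device_years": 4900.0},
}

premarket_baseline = 0.7  # per 1,000 device-years, from the clinical investigation

for period, d in periods.items():
    rate = incident_rate(d["incidents"], d["device_years"])
    status = "above pre-market baseline" if rate > premarket_baseline else "within baseline"
    print(f"{period}: {rate:.2f}/1,000 device-years ({status})")
```

The same seven incidents read very differently at 4,900 device-years of exposure than at 500. The denominator, not the count, carries the clinical signal.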
2. Root Cause Patterns, Not Just Individual Fixes
Every incident has a root cause. You investigate it, implement corrective action, and close the case. But clinical evaluation requires you to step back and ask whether multiple incidents share a common pattern.
If three separate incidents trace back to user error in a specific clinical setting, that is a clinical finding. It tells you something about how the device performs in real-world conditions versus controlled studies. It may also tell you that your instructions for use need clinical re-evaluation, not just a wording update.
Root cause analysis is often treated as a quality function. It is. But it also has clinical implications that belong in your CER.
3. Residual Risk Updates
Your risk management file under ISO 14971 includes a list of identified hazards and residual risks. Some of those risks were estimated based on pre-market data. Vigilance data tells you whether those estimates were accurate.
If you estimated a 0.5% risk of a particular complication and post-market data shows 2%, your residual risk profile has changed. That change must feed into your clinical evaluation and your benefit-risk assessment.
Vigilance data validates or challenges the residual risk assumptions you made at the pre-market stage. If the real-world risk is higher, your clinical evaluation must address it.
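As a sketch of what "validates or challenges" can look like quantitatively, the example below asks how plausible an observed post-market complication count is if the pre-market risk estimate were still true, using an exact one-sided binomial tail. The 0.5% figure mirrors the example above, but the sample size and count are invented, and a real file would follow a pre-specified statistical plan rather than this ad-hoc check:

```python
from math import comb

def binomial_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of seeing k or more events."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

premarket_risk = 0.005        # 0.5% complication risk estimated pre-market
observed, n_uses = 12, 1000   # hypothetical: 12 complications in 1,000 uses

p_tail = binomial_tail(observed, n_uses, premarket_risk)
print(f"P(>= {observed} events if true risk were {premarket_risk:.1%}) = {p_tail:.4g}")
if p_tail < 0.05:
    print("The pre-market residual risk estimate is no longer tenable as stated.")
```

If that tail probability is vanishingly small, the residual risk profile has changed, and both the risk file and the CER must say so.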
I have reviewed files where the risk management file and the clinical evaluation report do not speak to each other. The risk file is updated after incidents. The CER references pre-market risk estimates. Neither document acknowledges the disconnect.
That does not pass a Notified Body review.
How to Integrate Vigilance Data Into Your CER
Integration is not about adding a section. It is about weaving vigilance findings into your clinical reasoning at every relevant point.
Start With the Data You Have
Pull all vigilance reports for the evaluation period. Include serious incidents, field safety corrective actions, and trend reports. If you manufacture multiple devices under the same generic device group, include incidents from equivalent devices where relevant.
Calculate exposure. How many devices were sold or implanted? What is the patient-year exposure? This denominator is essential for any meaningful rate calculation.
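Here is a hypothetical sketch of the patient-year calculation, assuming you can reconstruct an in-use interval per device from sales, implant, or registry records. All dates below are invented:

```python
from datetime import date

# Each entry is (start_of_use, explant_or_data_cutoff); all dates hypothetical.
intervals = [
    (date(2021, 3, 1), date(2023, 3, 1)),    # two years of follow-up
    (date(2022, 6, 15), date(2023, 6, 15)),  # one year
    (date(2022, 1, 10), date(2022, 7, 10)),  # explanted after six months
]

total_days = sum((end - start).days for start, end in intervals)
patient_years = total_days / 365.25
print(f"Exposure: {patient_years:.2f} patient-years across {len(intervals)} devices")
```

Note that three devices do not contribute three equal patient-years; the device explanted at six months contributes half as much denominator as the one followed for a year.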
Categorize by Clinical Relevance
Not all incidents carry the same weight in clinical evaluation. A packaging defect that led to a recall has regulatory consequences but limited clinical implications if no patient harm occurred. A device malfunction that caused a surgical complication carries direct clinical weight.
Group incidents by clinical endpoint: adverse events, device malfunctions affecting safety, use errors leading to patient harm. This categorization allows you to assess each group against your pre-market safety and performance claims.
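A sketch of that grouping step, with invented record IDs and a simplified category scheme:

```python
from collections import Counter

# Hypothetical vigilance records tagged by clinical relevance.
records = [
    {"id": "V-001", "category": "adverse_event"},
    {"id": "V-002", "category": "malfunction_affecting_safety"},
    {"id": "V-003", "category": "use_error_with_harm"},
    {"id": "V-004", "category": "malfunction_affecting_safety"},
    {"id": "V-005", "category": "packaging_no_harm"},
]

by_category = Counter(r["category"] for r in records)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

Each bucket can then be assessed against the corresponding pre-market safety or performance claim, rather than treating all five records as one undifferentiated count.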
Compare Against Pre-Market Data
Your clinical investigation or equivalence data established a baseline safety and performance profile. Vigilance data either confirms that profile or signals a deviation.
If the types of incidents align with the known risks you disclosed, that is confirmation. If new incident types emerge or rates increase, that is a signal for further analysis.
Another common mistake: failing to compare post-market vigilance data against pre-market safety endpoints. Without this comparison, you cannot demonstrate that your benefit-risk profile remains valid.
Update the Benefit-Risk Assessment
This is where vigilance data must land. Your benefit-risk conclusion in the CER is not static. Every update cycle must revisit that conclusion in light of new clinical data, including vigilance findings.
If incident rates remain consistent with expectations and no new hazards emerged, state that clearly and provide the supporting data. If rates have increased or new risks appeared, explain what that means for your overall benefit-risk balance.
Do not hide behind corrective actions. Corrective actions address specific incidents. Clinical evaluation addresses whether the device as a whole still meets its intended performance and safety claims.
What Happens When You Skip This Step
I have seen CERs where vigilance data was reported but not analyzed. The manufacturer listed incidents, confirmed corrective actions were taken, and concluded that post-market surveillance confirmed device safety.
That conclusion is not supported unless the analysis is shown.
Notified Bodies and competent authorities do not accept vigilance data by declaration. They expect to see the clinical reasoning. They look for incident rates, trend analysis, comparison to pre-market data, and explicit updates to benefit-risk conclusions.
When that reasoning is missing, the file gets flagged. The manufacturer is asked to resubmit with proper integration. That delays certification, extends review cycles, and in some cases triggers additional audits.
A vigilance system that functions perfectly from a compliance perspective can still fail the clinical evaluation if the data is not translated into clinical evidence and integrated into your benefit-risk reasoning.
Practical Challenges and How to Address Them
I understand the operational difficulty. Vigilance teams and clinical evaluation teams often work separately. Incidents are managed by quality and regulatory. Clinical evaluation is handled by clinical affairs or external consultants.
That separation creates gaps.
The solution is not organizational restructuring. It is process discipline. Your CER update procedure must include a formal step where vigilance data is reviewed by the clinical evaluator, not just referenced.
That review should answer specific questions: Did any incidents involve clinical endpoints we claimed in our pre-market evaluation? Did incident rates exceed our expected ranges? Did root cause analysis reveal clinical use patterns we did not anticipate?
If the clinical evaluator cannot answer those questions, the vigilance data has not been integrated.
Final Consideration
Vigilance data is not an administrative burden you manage separately from clinical evaluation. It is real-world evidence that either confirms or challenges the clinical claims you made when you placed your device on the market.
Every incident, every trend, every root cause pattern is a data point about how your device performs outside the controlled environment of a clinical study. That data must inform your ongoing clinical evaluation, your benefit-risk conclusions, and your state-of-the-art analysis.
If your CER treats vigilance as a compliance checklist, you are not meeting MDR requirements. You are also missing the clinical signals that matter most for patient safety.
Integration is not optional. It is foundational.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on post-market surveillance and post-market clinical follow-up (PMCF) findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– MDR 2017/745 Article 61 (Clinical Evaluation)
– MDR 2017/745 Article 87 (Vigilance Reporting)
– MDCG 2020-13 Clinical Evaluation Assessment Report Template