Why vigilance data sits unused in your clinical evaluation
I reviewed a clinical evaluation update last month. The manufacturer had reported three serious incidents through their vigilance system. None of them appeared in the clinical evaluation. When I asked why, the response was always the same: ‘We thought those were separate processes.’ They are not. And Notified Bodies know this.
Under MDR, vigilance and clinical evaluation are not parallel tracks. They are interconnected. Every safety signal that enters your vigilance system must feed back into your clinical evaluation. Every incident, every trend, every emerging risk pattern must be reflected in how you assess the benefit-risk profile of your device.
But in practice, I see these two systems treated as independent. The vigilance team reports to authorities. The clinical affairs team updates the CER on schedule. The link between them is weak or nonexistent. And this creates a gap that reviewers notice immediately.
The Regulatory Expectation
Article 83 of the MDR requires manufacturers to operate a post-market surveillance system that actively collects and analyzes data. This includes vigilance data under Article 87 and following. Article 61(11) then requires the clinical evaluation and its documentation to be updated throughout the device lifecycle with the data that system generates. Vigilance data does not exist in isolation. It must inform the clinical evaluation.
MDCG 2020-7, the PMCF plan template, clarifies this. It states that PMCF activities include the continuous evaluation of clinical data, including data from incident reports and vigilance activities. MDCG 2020-3 on significant changes indicates that certain vigilance-driven changes may qualify as significant changes affecting the clinical evaluation.
The expectation is clear. Vigilance data must flow into the clinical evaluation. It must be analyzed. It must be reflected in the benefit-risk assessment. It must drive decisions about labeling, instructions for use, risk mitigation, and PMCF priorities.
Yet many manufacturers still treat vigilance as a standalone reporting obligation. They file the reports. They respond to authorities. But they do not integrate the data back into their clinical evaluation cycle.
Vigilance reports are filed on time, but the incidents are not mentioned in the clinical evaluation update. The benefit-risk analysis remains unchanged despite repeated adverse events. This signals a disconnect that raises concerns about the effectiveness of the PMS system.
Where the Disconnect Happens
The disconnect happens at the handoff. Vigilance teams work under time pressure. They receive an incident report. They classify it. They report it to the competent authority. They close the file. The focus is on compliance with reporting timelines.
The clinical evaluation team works on a different cycle. They update the CER annually or when triggered by a significant change. They review literature. They analyze PMCF data. But they do not always receive structured input from the vigilance system.
There is no formal process for vigilance data to trigger clinical evaluation review. There is no regular meeting where incident trends are discussed with clinical evaluators. There is no shared database where both teams work from the same information.
The result is that vigilance data gets reported but not analyzed in a clinical context. The clinical evaluation progresses without knowledge of real-world safety signals. And when a reviewer looks at both documents side by side, the gap becomes obvious.
The Timing Problem
Another issue is timing. Vigilance reporting happens in near real time: a serious incident must be reported within days. But the clinical evaluation is typically updated annually. By the time the next CER update happens, a dozen incidents may have been reported, none of them yet reflected in the clinical evaluation.
If those incidents reveal a pattern or a previously unrecognized risk, the delay in integrating them into the clinical evaluation can be significant. This is especially true if the incidents suggest a change in the benefit-risk balance or require updates to warnings, contraindications, or instructions for use.
Reviewers ask: when did you become aware of this trend? When did you assess it clinically? What did you do about it? If the answer is that the trend was not assessed until the annual CER update, it suggests a reactive rather than proactive PMS system.
Vigilance data must trigger clinical evaluation review outside of the regular update cycle when it signals a change in the risk profile. Waiting for the annual update is not always sufficient.
What Integration Looks Like in Practice
Integration means that every vigilance report is reviewed not only for regulatory reporting but also for clinical implications. This requires a process where vigilance and clinical affairs collaborate regularly.
First, every incident report should be reviewed by someone with clinical competence. Not just for classification, but to assess whether the incident reveals a new risk, confirms an existing risk, or suggests a shift in the frequency or severity of known risks.
Second, incident data should be aggregated and trended. A single incident may not be significant. But five incidents of the same type over six months may signal a pattern. That pattern must be assessed in the context of the clinical evaluation.
Third, any significant trend or signal should trigger an interim review of the clinical evaluation. This does not always mean a full CER update. But it does mean a documented assessment of whether the benefit-risk profile has changed, whether the device is still safe and performs as intended, and whether any action is needed.
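Purely as an illustration of the aggregation-and-trending step, the logic can be sketched in a few lines of Python. The incident types, the six-month window, and the five-incident threshold below are invented for the example; your own PMS procedures would define the real criteria:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical incident records from the vigilance system: (report_date, incident_type).
incidents = [
    (date(2024, 1, 10), "lead fracture"),
    (date(2024, 2, 3),  "lead fracture"),
    (date(2024, 3, 18), "premature battery depletion"),
    (date(2024, 4, 22), "lead fracture"),
    (date(2024, 5, 30), "lead fracture"),
    (date(2024, 6, 14), "lead fracture"),
]

def trend_signals(incidents, window=timedelta(days=182), threshold=5):
    """Return incident types whose count within any rolling window reaches
    the threshold - a cue for an interim clinical evaluation review."""
    by_type = defaultdict(list)
    for day, kind in incidents:
        by_type[kind].append(day)
    flagged = set()
    for kind, days in by_type.items():
        days.sort()
        for i, start in enumerate(days):
            # Count incidents of this type falling inside the window from `start`.
            in_window = [d for d in days[i:] if d - start <= window]
            if len(in_window) >= threshold:
                flagged.add(kind)
                break
    return flagged

# Five lead fractures within six months: this pattern would be escalated
# for clinical assessment, while the single battery incident would not.
print(trend_signals(incidents))
```

The point is not the script but the discipline it encodes: single incidents are classified and reported, while aggregate patterns are escalated to the clinical evaluation team against pre-defined criteria.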
Documenting the Link
The link between vigilance and clinical evaluation must be documented. This means that the clinical evaluation report should reference vigilance data. It should state how many incidents were reported during the period under review. It should describe the nature of those incidents. And it should provide a clinical assessment of what those incidents mean for the benefit-risk profile.
If the incidents confirm known risks and the frequency is within expected ranges, that should be stated. If the incidents reveal a new risk or a higher-than-expected rate of adverse events, that must be analyzed. And if the incidents require action such as labeling changes, risk mitigation, or additional PMCF, that must be documented in the clinical evaluation.
This is not about copying vigilance reports into the CER. It is about clinical interpretation. What does this vigilance data tell us about the safety and performance of the device in real-world use? Does it change our understanding of the benefit-risk balance? Does it require us to update our clinical evaluation?
The CER mentions that vigilance data was reviewed, but provides no summary of what was found and no clinical assessment of the findings. This is a procedural statement, not a clinical evaluation.
When Vigilance Data Triggers a CER Update
Not every vigilance report requires a CER update. But certain signals do. MDCG 2020-3 provides guidance on what constitutes a significant change. A change to the benefit-risk profile based on post-market data can trigger one.
This includes situations where vigilance data reveals a previously unidentified risk, an increase in the frequency or severity of a known risk, or a change in the understanding of how the device performs in certain patient populations.
When this happens, the manufacturer must update the clinical evaluation. This is not optional. The CER must reflect the current understanding of safety and performance based on all available data, including vigilance data.
But manufacturers often delay this. They wait for the next scheduled CER update. They argue that the individual incidents are not significant. They treat each report in isolation rather than looking at the aggregate picture.
Reviewers do not accept this. If vigilance data shows a pattern, and that pattern was evident months before the CER was updated, the question becomes: why did you not act sooner? What does this delay say about the effectiveness of your PMS system?
The Role of PMCF
PMCF is another critical link. Vigilance data may reveal questions that PMCF needs to address. If incidents suggest that the device performs differently in a certain population, PMCF should investigate. If incidents suggest a risk that was underestimated, PMCF should generate additional clinical evidence to refine the understanding of that risk.
This means that vigilance data should feed into the PMCF plan. The plan should be updated to address questions raised by vigilance signals. The PMCF report should analyze vigilance data alongside other post-market data to provide a comprehensive picture of real-world performance.
When this link is missing, PMCF becomes generic. It answers pre-defined questions but does not respond to emerging signals. And the clinical evaluation lacks the depth needed to demonstrate that the manufacturer understands the real-world behavior of the device.
Vigilance data should inform the PMCF plan. PMCF should generate evidence to address questions raised by vigilance signals. This creates a feedback loop that strengthens the clinical evaluation.
Building the Process
To make this work, you need a process. Not just a statement in the PMS plan that vigilance data will be considered. A documented process with defined roles, responsibilities, and triggers.
Start with a regular review cycle. Monthly or quarterly, depending on the volume of vigilance reports, the vigilance team and the clinical evaluation team should meet. They should review all incidents reported during that period. They should discuss trends. They should assess whether any signal requires clinical evaluation review.
Document these meetings. Record what was discussed, what trends were identified, and what actions were taken. If no action is needed, state why. If action is needed, define what will be done and by when.
Next, define clear triggers. What type of vigilance signal requires an interim clinical evaluation review? A serious adverse event that was not previously known? A series of incidents of the same type? A pattern that suggests a systematic issue?
Make these triggers explicit. Write them into your PMS procedures. Train your teams on them. And document your response when a trigger is met.
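One way to keep triggers explicit and auditable is to record them as data rather than prose buried in a procedure. The sketch below is illustrative only; the trigger names, criteria, and response timelines are invented and would come from your own PMS plan:

```python
# Each trigger pairs a signal pattern with the required response.
# Names, criteria, and timelines are illustrative, not a regulatory template.
TRIGGERS = [
    {
        "name": "new serious risk",
        "condition": lambda s: s["serious"] and not s["previously_known"],
        "action": "interim clinical evaluation review",
    },
    {
        "name": "repeat incident pattern",
        "condition": lambda s: s["count_6m"] >= 5,
        "action": "trend analysis and documented benefit-risk assessment",
    },
]

def required_actions(signal):
    """Return the actions demanded by every trigger the signal meets."""
    return [t["action"] for t in TRIGGERS if t["condition"](signal)]

# A previously unknown serious incident meets the first trigger.
print(required_actions({"serious": True, "previously_known": False, "count_6m": 2}))
```

Writing triggers this way forces each one to have a testable condition and a named response, which is exactly what a reviewer will ask to see when a trigger was met.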
Cross-Functional Collaboration
This is not just a clinical affairs task. It requires collaboration across vigilance, regulatory, quality, and clinical teams. Each team has a piece of the picture. Vigilance sees the incidents. Quality investigates root causes. Regulatory assesses reporting obligations. Clinical evaluates the impact on benefit-risk.
Without collaboration, each team works in isolation. Vigilance reports without clinical context. Clinical evaluation proceeds without knowledge of vigilance trends. Quality fixes issues without asking whether the fix changes the clinical evaluation.
Integration requires that these teams work together. That they share data. That they discuss implications. And that their conclusions are reflected in the clinical evaluation.
Vigilance and clinical evaluation teams operate independently. There is no regular forum for discussing incidents in a clinical context. This results in fragmented PMS and gaps that reviewers identify immediately.
What Reviewers Look For
When I review a clinical evaluation, I look at the vigilance data. I check whether incidents are mentioned. I look for trends. I assess whether the clinical evaluator has interpreted the vigilance data in the context of the benefit-risk profile.
If the vigilance data shows repeated incidents of a certain type, and the CER does not discuss them, I ask why. If the CER states that vigilance data was reviewed but provides no detail, I ask for the analysis. If the incidents suggest a previously unrecognized risk and the CER does not address it, I flag it as a deficiency.
This is not about finding fault. It is about ensuring that the clinical evaluation reflects all available data. Vigilance data is clinical data. It shows how the device performs in real-world conditions. It reveals risks that may not have been apparent in pre-market studies. It must be part of the clinical evaluation.
Notified Bodies look for the same thing. They check whether vigilance data is integrated into the CER. They ask for the process by which vigilance signals trigger clinical evaluation review. They want to see documentation that the manufacturer is actively monitoring and responding to safety signals.
If that integration is missing, it raises a fundamental question: is the manufacturer truly conducting post-market surveillance, or just meeting minimum reporting requirements?
Closing the Loop
Linking vigilance data to clinical evaluation is not a one-time exercise. It is a continuous process. Every incident is an opportunity to learn. Every trend is a signal that must be assessed. Every clinical evaluation update must reflect the latest vigilance data.
This requires discipline. It requires collaboration. It requires a process that is documented, followed, and continuously improved.
But when done well, it strengthens the entire PMS system. It ensures that the clinical evaluation is based on complete data. It demonstrates that the manufacturer understands the real-world performance of the device. And it builds confidence that the device remains safe and effective throughout its lifecycle.
The question is not whether vigilance data should be linked to clinical evaluation. The question is whether your process makes that link clear, documented, and effective.
Next, we will look at how PMS outputs drive decisions about device modifications and labeling changes. Because surveillance is not just about collecting data. It is about acting on what you find.
– Regulation (EU) 2017/745 (MDR), Articles 61, 83-92
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-3: Guidance on Significant Changes Regarding the Transitional Provision under Article 120 of the MDR
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Deepen Your Knowledge
Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.