When adverse events tell a story you’re not reading
A notification arrives from your vigilance team. Three events this quarter. All resolved. Your clinical evaluation report states: “No concerning trends identified.” The Notified Body reviewer highlights the section and writes: “Inadequate trending analysis. Pattern not addressed.” You reread the events. They seem unrelated. That’s exactly the problem.
Most clinical evaluation reports treat adverse events as isolated incidents. Each event gets its case number, its resolution, its conclusion. The file closes. The CER updates a table. The count goes up.
But adverse events don’t exist in isolation. They emerge from underlying conditions: design weaknesses, use environment factors, patient population characteristics, or interaction patterns that the risk analysis didn’t fully capture.
The question isn’t whether you documented the events. The question is whether you recognized what they’re revealing about your device in the real world.
What MDR Actually Requires From Trending
MDR requires manufacturers to operate a systematic post-market surveillance system that collects, evaluates, and trends incident and complaint data (Articles 83-88 and Annex III). Article 61 and Annex XIV Part B require that data to feed back into the clinical evaluation continuously, throughout the device lifecycle.
The regulation doesn’t specify how to perform trending analysis. That silence creates confusion. Some manufacturers count events by type and call it trending. Others compare quarterly numbers and look for increases.
Neither approach addresses what reviewers actually need to see: whether patterns in the data reveal risks or performance issues that weren’t fully understood at the time of initial clinical evaluation.
This matters because trending is not about statistics. It’s about signal detection. It’s about recognizing when seemingly minor events point to something the original assessment didn’t anticipate.
Trending analysis is signal detection. The goal is not to count events but to detect patterns that challenge assumptions made during initial risk assessment and clinical evaluation.
The Patterns Reviewers Expect You to Recognize
When I review trending sections in clinical evaluations, I look for evidence that the manufacturer understands what patterns matter. Not all patterns trigger action, but certain patterns always require justification.
Pattern 1: Clustering by Use Context
Three pressure injuries from your support surface. Each reported from a different hospital. Each patient had different risk factors. The events look scattered.
Then you notice: all three hospitals recently expanded ICU capacity due to COVID pressures. All three increased patient-to-nurse ratios. All three modified repositioning protocols to reduce staff exposure time.
The device didn’t change. The instructions didn’t change. But the use environment shifted, and the device is now being used under conditions that fall outside the assumptions in your risk analysis.
This is a pattern. It requires action even if event frequency hasn’t increased. Because the pattern reveals that your device performance depends on conditions you didn’t control for and possibly didn’t assess.
Pattern 2: Severity Drift
You track catheter dislodgements. They happen. They’ve always happened. The frequency is stable. Your trending analysis shows no increase.
But when you look at the severity of consequences, something shifts. Two years ago, dislodgements were detected early, usually during routine checks. Now, several cases involve delayed detection, leading to treatment interruptions or secondary complications.
The device hasn’t changed. But clinical practice has. Monitoring frequency decreased. Staff experience levels shifted. The buffer that made early detection reliable is eroding.
The event count says everything is fine. The severity trend says your device depends on a clinical environment that’s changing.
A common weakness: trending sections that only analyze event counts without examining severity progression, time-to-detection changes, or consequence escalation patterns.
Pattern 3: User Error Repetition
The most dangerous phrase in vigilance files: “Root cause: user error.”
One user error is an error. Three user errors of the same type is a design problem.
I see this pattern repeatedly. A device has a setup sequence. Step five requires a specific action. Users skip it or perform it incorrectly. Events result. Each investigation concludes user error. Each corrective action involves retraining or clearer labeling.
If the same error happens across different users in different institutions, it’s not a user problem. It’s a usability problem. The design allows or invites the error. The interface doesn’t prevent it. The instructions don’t compensate for it.
This pattern requires action because it reveals that your human factors assessment underestimated the failure mode. MDR Annex I requires devices to be designed to minimize risks from foreseeable misuse. If the misuse keeps happening, it was foreseeable. Your design didn’t minimize it adequately.
What Action Actually Means
Here’s where many manufacturers stumble. They recognize the pattern, document it, and then write: “No design change required. Enhanced training implemented.”
Reviewers reject this for a simple reason: if the pattern reveals a gap in your original risk assessment or clinical evaluation, the action must feed back into those documents. Otherwise, you haven’t closed the loop.
Action doesn’t always mean device modification. But it always means reassessment.
The Reassessment Path
When trending reveals a pattern, the manufacturer must evaluate whether:
1. The risk analysis captured this scenario adequately. If not, update the risk analysis with the new use context or failure mode. Reevaluate severity and probability based on real-world data.
2. The clinical evaluation’s benefit-risk conclusion remains valid under the observed conditions. If the device performs differently in the real use environment than assumed in clinical evaluation, that changes the benefit-risk profile for specific patient groups or use contexts.
3. The IFU and training materials address the observed failure modes effectively. If they don’t, the question becomes whether modification of instructions is sufficient or whether design change is needed to eliminate reliance on user behavior.
This is what reviewers mean by “action.” Not reactive firefighting. Systematic reassessment that updates the knowledge base supporting your device’s safety and performance claims.
Action means closing the loop. Patterns detected in post-market surveillance must feed back into risk management, clinical evaluation, and design controls. Otherwise, the post-market surveillance system isn’t actually informing anything.
The Signal-to-Noise Problem
One challenge in trending is distinguishing meaningful patterns from random variation. Small numbers make this worse. You have five events this year, seven last year. Is that a trend?
Statistical significance testing isn’t the answer for medical device vigilance. With small denominators and rare events, you’ll rarely achieve statistical power.
Instead, focus on pattern coherence. Do multiple weak signals point in the same direction?
Example: You manufacture an infusion pump. This year you have:
- Two reports of flow rate deviations in cold environments
- One complaint about display readability in bright light
- A customer question about battery performance in air transport
None of these alone triggers alarm. But together, they suggest your device is being used in mobile or field environments that differ from your original intended use context, which assumed controlled hospital settings.
This coherence across weak signals is a pattern. It doesn’t require immediate design change. But it requires investigation into whether your performance specifications and risk analysis adequately cover these emerging use conditions.
How to Structure Trending Analysis for Review
When I write or review a clinical evaluation report, I expect the trending section to answer specific questions:
What data sources feed the trending analysis? Vigilance reports, complaints, PMCF feedback, literature surveillance. State the scope explicitly.
What analysis methods were applied? Don't just count. Describe how you looked for patterns by event type, severity, patient population, use context, time periods, geographic distribution. A minimal sketch of this kind of breakdown follows after this list of questions.
What patterns were identified? Even if you conclude no concerning trends, describe what you looked for and what you ruled out. This demonstrates systematic thinking.
For identified patterns: what is the underlying cause hypothesis? Every pattern requires an explanation. Device design, user interaction, environmental factors, patient characteristics, changes in clinical practice. State your hypothesis.
How does the pattern affect previous assessments? Risk analysis, clinical evaluation benefit-risk conclusion, equivalence claims if applicable. Show the connection.
What action was taken or is planned? Specify whether action involves design change, risk control measures, labeling updates, investigation, or additional data collection through PMCF.
Reviewers also flag trending sections that present data tables without interpretation, leaving them to figure out whether the manufacturer actually analyzed the patterns or just compiled numbers for compliance.
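To make the "analysis methods" point concrete: the mechanics of such a breakdown are simple enough to live in a spreadsheet or a few lines of script; what matters is that each cut corresponds to a question a reviewer will ask. The sketch below is purely illustrative, not a validated tool. The record fields, categories, and example events are all invented for the example.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    """One adverse event or complaint record (field names are illustrative)."""
    occurred: date
    event_type: str   # e.g. "dislodgement", "pressure injury"
    severity: str     # e.g. "no harm", "intervention required", "serious"
    use_context: str  # e.g. "ICU", "general ward", "home care"
    region: str

def quarter(d: date) -> str:
    """Bucket a date into a calendar quarter for time-period trending."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

def breakdown(events, *keys):
    """Count events by any combination of dimensions (clustering check).

    Each key is either an attribute name or a callable applied to the event.
    """
    def value(e, k):
        return k(e) if callable(k) else getattr(e, k)
    return Counter(tuple(value(e, k) for k in keys) for e in events)

# Hypothetical records; in practice these come from the vigilance/complaint system.
events = [
    Event(date(2024, 2, 3), "dislodgement", "no harm", "ICU", "DE"),
    Event(date(2024, 5, 17), "dislodgement", "intervention required", "ICU", "FR"),
    Event(date(2024, 9, 8), "dislodgement", "serious", "ICU", "DE"),
    Event(date(2024, 11, 21), "flow deviation", "no harm", "home care", "SE"),
]

# The same log, cut along the dimensions a reviewer expects to see examined:
print(breakdown(events, "use_context"))                  # clustering by use context
print(breakdown(events, "event_type", "severity"))       # severity drift within a type
print(breakdown(events, lambda e: quarter(e.occurred)))  # distribution over time
print(breakdown(events, "region"))                       # geographic distribution
```

Whatever tooling you actually use, the trending section should state which of these dimensions were examined and what each one showed, even when the answer is "no clustering."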
The Timing Problem
Trending analysis happens in the clinical evaluation update. That update might occur annually. But patterns emerge in real time. By the time you document them in the CER, the pattern might have evolved or worsened.
This creates tension. The CER captures a snapshot. But the Notified Body expects evidence that you’re monitoring continuously and acting when needed, not waiting for the annual update cycle.
The solution is making sure your post-market surveillance system has defined triggers for interim action. If vigilance trending reveals a pattern that challenges previous assumptions, that triggers risk management review immediately, not at the next CER update.
Document these triggers in your PMS plan. Show that trending feeds into an active decision-making process, not just into annual reporting.
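What such a trigger can look like in practice depends entirely on the device. The sketch below only shows the shape of the mechanism: an indicator checked against a threshold that starts an interim action between CER updates. Every indicator, threshold, and value in it is invented for illustration, not a recommendation; real indicators and thresholds must come from your risk analysis and be justified in the PMS plan.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Trigger:
    """A PMS-plan trigger: an indicator, a condition, and the action it starts."""
    name: str
    fires: Callable[[Dict[str, float]], bool]
    action: str

# Output of the latest quarterly trending run (numbers invented for the example).
quarterly = {
    "events_per_device_year": 0.9,
    "serious_share": 0.34,        # fraction of events classified as serious
    "repeat_use_error_types": 2,  # distinct use-error types seen three or more times
}

triggers = [
    Trigger("event rate above expected range",
            lambda q: q["events_per_device_year"] > 0.5,
            "convene interim risk management review"),
    Trigger("severity drift",
            lambda q: q["serious_share"] > 0.25,
            "reassess benefit-risk conclusion in the clinical evaluation"),
    Trigger("repeated use error",
            lambda q: q["repeat_use_error_types"] >= 1,
            "open a usability / human factors investigation"),
]

for t in triggers:
    if t.fires(quarterly):
        print(f"TRIGGERED: {t.name} -> {t.action}")
```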
When Trending Reveals Nothing
Sometimes trending analysis reveals no concerning patterns. Event rates are stable. No clustering. No severity drift. No repeated user errors.
Document that conclusion clearly. But also document what you specifically examined to reach it. Reviewers need to see that you performed actual analysis, not just recorded absence of obvious problems.
I’ve seen submissions where the trending section states: “No trends identified.” That tells me nothing. It could mean rigorous analysis found no issues, or it could mean no analysis was performed.
Instead, write: “Analysis of 23 adverse events over 18 months identified no clustering by patient population, use context, or time period. Severity distribution remained consistent with historical baseline. No repeated failure modes suggesting systematic design or usability issues. Event rate per device year shows stable performance within expected range based on pre-market clinical data.”
This demonstrates that you looked for patterns systematically and found none. That’s a valid conclusion when it’s supported by transparent methodology.
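A side note on the "event rate per device year" figure used in the example conclusion above: the denominator is cumulative exposure in the field, not the number of devices sold or calendar years elapsed. A minimal sketch, assuming you can approximate each device's time in use from installation and removal records (all names and numbers are invented):

```python
from datetime import date

# Hypothetical fleet: (installation date, removal date or None if still in use).
fleet = [
    (date(2023, 1, 10), None),
    (date(2023, 6, 1), date(2024, 3, 15)),
    (date(2024, 2, 20), None),
]

def device_years(fleet, as_of: date) -> float:
    """Cumulative exposure: the sum of each device's time in the field, in years."""
    total_days = sum(((end or as_of) - start).days for start, end in fleet)
    return total_days / 365.25

events_in_period = 3  # events captured by trending for the same observation window
exposure = device_years(fleet, as_of=date(2024, 12, 31))
rate = events_in_period / exposure

print(f"{exposure:.1f} device-years of exposure, {rate:.2f} events per device-year")
```

Normalizing by exposure is what makes the comparison against the pre-market expectation meaningful; with a growing installed base, a stable per-device risk would otherwise look like a rising event count.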
Linking Trending to PMCF
Trending analysis often reveals data gaps. You identify a pattern but lack the information to fully understand its significance. Patient demographics are incomplete. Use contexts are unclear. Outcome data is limited.
These gaps should feed directly into your PMCF strategy. If trending suggests your device performs differently in specific populations or settings, your PMCF should be designed to collect data that clarifies that pattern.
This connection between trending and PMCF demonstrates that your post-market surveillance system is integrated. You’re not collecting data for compliance. You’re collecting data to answer questions that emerged from real-world use.
Reviewers look for this connection. When they don’t find it, they question whether your PMCF is actually targeting the right clinical questions or just generating generic data.
Trending analysis and PMCF design should inform each other. Patterns detected through trending define the clinical questions your PMCF needs to address. This integration shows reviewers that your post-market system is responsive, not ritualistic.
The Reviewer’s Perspective
When I review trending analysis, I’m looking for evidence of critical thinking. Can this manufacturer recognize when their device is behaving differently than their pre-market assessment predicted? Do they understand the difference between noise and signal?
I’m not expecting perfection. Trending analysis involves judgment. Patterns aren’t always clear. But I need to see that the manufacturer applied systematic methodology and followed through when patterns emerged.
The submissions that fail are the ones that treat trending as documentation theater. They present tables and charts but show no evidence of interpretation, no hypothesis formation, no connection to decision-making.
The submissions that succeed are the ones where I can follow the thinking. The manufacturer describes what they looked for, what they found, what it means, and what they did about it. Even if I disagree with their conclusion, I can see their reasoning.
That transparency is what builds confidence that the manufacturer’s post-market surveillance system is actually functioning as the continuous learning system MDR envisions.
Trending analysis isn’t about proving there are no problems. It’s about proving you have the capability to detect problems when they exist and the judgment to distinguish problems from noise.
That capability is what keeps patients safe. And that’s what reviewers are ultimately assessing.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or its intended purpose, and otherwise at the frequency defined in the PMS plan. For class III and implantable devices, MDR requires the clinical evaluation to be updated with PMCF data at least annually.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Articles 61 and 83-88; Annexes I, III and XIV
– MDCG 2020-7: Post-market clinical follow-up (PMCF) plan template
– MDCG 2020-8: Post-market clinical follow-up (PMCF) evaluation report template
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under Regulation (EU) 2017/745.





