Why cardiovascular device CERs fail at the safety endpoint
I reviewed a clinical evaluation report last month for a drug-eluting stent. Three years of PMCF data. Positive outcomes. Good equivalence documentation. The Notified Body stopped at page 47. The reason? The safety analysis treated serious adverse events as binary checkboxes instead of time-dependent risks. The entire CER structure collapsed because the safety evaluation framework was fundamentally incompatible with how cardiovascular devices actually harm patients.
In This Article
- The temporal dimension of cardiovascular risk
- The equivalence trap in cardiovascular claims
- What safety endpoints actually mean in this space
- The state of the art for cardiovascular devices
- Building a safety-first clinical evaluation structure
- PMCF design for cardiovascular devices
- How reviewers actually assess cardiovascular device CERs
- Connecting safety evidence to benefit-risk conclusions
- What happens when safety evaluation is insufficient
This is not an isolated case. Cardiovascular devices live in a regulatory space where the consequences of failure are immediate and life-threatening. Yet many clinical evaluation reports approach safety assessment with the same methods used for lower-risk devices. The gap between what regulators expect and what manufacturers deliver is not about missing data. It is about misunderstanding what safety means in this context.
The temporal dimension of cardiovascular risk
Cardiovascular devices interact with dynamic biological systems. A vascular stent does not simply succeed or fail at implantation. It creates a cascade of risks that evolve: acute thrombosis in the first 24 hours, subacute thrombosis in the first month, late stent thrombosis beyond a year, neoatherosclerosis over multiple years.
Most CERs I review present adverse event tables that aggregate events across the entire follow-up period. They report percentages: 2.3% device-related serious adverse events. This tells you almost nothing about risk.
The critical question is not how many events occurred. It is when they occurred, under what conditions, and whether the risk profile changes over time. A 2% thrombosis rate concentrated in the first 30 days has completely different clinical implications than the same rate distributed evenly over five years.
Cardiovascular safety evaluation must address temporal distribution of risk. Kaplan-Meier curves, time-to-event analysis, and landmark analyses are not optional statistical embellishments. They are the minimum standard for demonstrating that you understand when your device poses risk.
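The point can be made concrete with a minimal Kaplan-Meier estimator. The sketch below uses only the standard library and invented follow-up data (days to event or censoring): two cohorts with the same aggregate 2-in-10 event count produce very different survival curves depending on when the events occur.

```python
def kaplan_meier(follow_up):
    """Kaplan-Meier survival estimate from (time_days, event) pairs.

    time_days: days to event or to censoring; event: True if the safety
    event (e.g. stent thrombosis) occurred, False if the patient was
    censored. Returns a list of (time, survival_probability) steps.
    Illustrative sketch only -- no ties handling, no confidence bands.
    """
    at_risk = len(follow_up)
    survival = 1.0
    curve = []
    # Walk through event/censoring times in chronological order
    for t, event in sorted(follow_up):
        if event:
            survival *= (at_risk - 1) / at_risk  # step down at each event
            curve.append((t, survival))
        at_risk -= 1  # subject leaves the risk set either way
    return curve

# Invented data: same overall event count, entirely different timing
early_cluster = [(5, True), (12, True)] + [(1825, False)] * 8
spread_out = [(300, True), (1400, True)] + [(1825, False)] * 8

print(kaplan_meier(early_cluster))  # both events inside 30 days
print(kaplan_meier(spread_out))     # same rate, different risk profile
```

An aggregate table would report 20% for both cohorts; the curves show that one device fails early and the other fails late, which is exactly the distinction a reviewer needs to see.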
The equivalence trap in cardiovascular claims
Equivalence claims for cardiovascular devices routinely fail because manufacturers mistake technical similarity for clinical equivalence. I see this pattern repeatedly: same material platform, same delivery system, similar radial force characteristics. The manufacturer concludes equivalence and relies heavily on literature data from the comparator device.
Then the questions arrive from the Notified Body. Why does your device have a different strut thickness? How does this affect endothelialization kinetics? What is the impact on flow dynamics at bifurcation points? Can you demonstrate that inflammatory response profiles are comparable during the critical first 90 days?
These are not theoretical concerns. Strut thickness differences of 30 micrometers have shown measurable impacts on neointimal hyperplasia. Drug release kinetics that differ by days can alter the local inflammatory environment enough to change thrombosis risk.
The technical and biological characteristics defined in Annex XIV of the MDR are not a checklist. They are a framework for understanding what actually drives cardiovascular device risk. For implantable cardiovascular devices, you cannot simply claim equivalence and lean on literature. You must demonstrate that the specific characteristics affecting biocompatibility, thrombogenicity, mechanical interaction with tissue, and temporal risk profile are sufficiently similar.
A common failure pattern is an equivalence demonstration that focuses on device design and materials but never addresses the biological interaction timeline. Reviewers will ask: where is your evidence that the early inflammatory response, the endothelialization process, and the long-term tissue remodeling match your equivalence device?
What safety endpoints actually mean in this space
Cardiovascular clinical evaluation demands clarity about what constitutes a safety endpoint and why it matters. I have reviewed CERs that list “no device-related deaths” as a primary safety conclusion. This is insufficient.
Death is an endpoint, but it is often too late and too insensitive for cardiovascular devices. The meaningful safety endpoints are the precursors: target lesion revascularization, stent thrombosis, myocardial infarction attributable to device failure, major bleeding requiring intervention, vascular complications at access sites.
Each of these endpoints has diagnostic criteria. Stent thrombosis, for instance, follows Academic Research Consortium definitions: definite, probable, possible. Your clinical evaluation must specify which definition you use, why, and how your data collection ensures reliable classification. If your PMCF plan collects “thrombosis events” without specifying ARC criteria and timing windows, you have not defined your endpoint.
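The ARC timing windows are simple enough to state as executable logic. The sketch below encodes only the standard timing categories (acute, subacute, late, very late); the evidence level (definite, probable, possible) cannot be derived from timing and still requires clinical adjudication:

```python
# ARC timing windows for stent thrombosis, in days since the index
# procedure. Timing classification sketch only: the ARC evidence level
# (definite / probable / possible) depends on angiographic or pathologic
# confirmation and must be adjudicated clinically.
ARC_TIMING = [
    (1, "acute"),      # 0 to 24 hours
    (30, "subacute"),  # >24 hours to 30 days
    (365, "late"),     # >30 days to 1 year
]

def arc_timing(days_post_implant):
    """Return the ARC timing category for a thrombosis event."""
    for limit, label in ARC_TIMING:
        if days_post_implant <= limit:
            return label
    return "very_late"  # beyond 1 year

print(arc_timing(0.5))  # acute
print(arc_timing(14))   # subacute
print(arc_timing(400))  # very_late
```

If your data collection cannot place every event into one of these windows with a defensible evidence level, your endpoint is not yet defined.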
This matters during review because regulators and Notified Bodies assess whether your safety conclusions are based on endpoints that can actually detect the risks your device creates. Vague endpoints produce vague conclusions. Vague conclusions fail review.
The state of the art for cardiovascular devices
State-of-the-art (SOTA) analysis is where many cardiovascular device CERs show their weaknesses most clearly. The manufacturer reviews current guidelines, summarizes published meta-analyses, and concludes that the device meets or exceeds current performance standards.
But SOTA is not about showing your device is acceptable. It is about demonstrating you understand what the current generation of devices achieves and where the field is moving. For drug-eluting stents, SOTA includes understanding that newer-generation devices have reduced very late stent thrombosis rates compared to earlier generations. It includes knowing that biodegradable polymer platforms are changing the risk-benefit calculation. It includes recognizing that dual antiplatelet therapy duration recommendations have evolved based on device-specific thrombosis data.
Your CER must position your device within this evolving landscape. If your device uses a durable polymer when the field is moving toward biodegradable options, you must address why this choice is justified. If your recommended DAPT duration differs from current guidelines for similar devices, you must provide evidence supporting this recommendation.
Reviewers read SOTA sections to see whether you know your field. They can identify immediately when a SOTA section is assembled from abstracts rather than constructed from deep understanding of how cardiovascular device safety has evolved over the past decade.
SOTA for cardiovascular devices must address device-generation evolution, procedure-related risk reduction strategies, and changes in clinical management protocols that affect safety outcomes. A 2018 device cannot be evaluated against 2008 standards simply because equivalent devices from that era exist in literature.
Building a safety-first clinical evaluation structure
A safety-focused CER for a cardiovascular device starts with hazard identification that is genuinely specific. Not generic lists from ISO 14971 templates. Specific failure modes for your device: what happens if your polymer coating degrades faster than expected? What are the consequences if your deployment mechanism creates edge dissection at a bifurcation? What occurs if your radiopaque markers migrate?
Each identified hazard must connect to a safety endpoint in your clinical data. If you identify “incomplete apposition” as a hazard but your clinical studies do not assess apposition through imaging follow-up, you have not evaluated the safety risk. The gap will be identified during review.
The clinical data section must present safety results with appropriate time stratification. Early safety (0-30 days), mid-term safety (1-12 months), and long-term safety (beyond 1 year) are not arbitrary divisions. They correspond to different biological processes and different risk mechanisms.
For each time period, your analysis must address whether observed event rates align with predicted hazard frequency. If your risk management file estimates 1.5% acute thrombosis risk but your clinical data shows 0.2%, you must address this discrepancy. It might mean your risk estimation was conservative. It might mean your follow-up was incomplete. It might mean your event detection was inadequate. Unexplained discrepancies between predicted and observed risk undermine reviewer confidence.
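One way to make such a discrepancy explicit is an exact binomial check: how likely is the observed event count if the predicted rate were true? The sketch below uses only the standard library and illustrative numbers (1 observed event in 500 patients against a predicted 1.5% rate), not figures from any real file:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Risk management file predicted 1.5% acute thrombosis; clinical data
# observed 1 event in 500 patients (0.2%). Illustrative numbers only.
predicted_rate, events, n = 0.015, 1, 500
p_low = binom_cdf(events, n, predicted_rate)
print(f"P(<= {events} events in {n} patients at {predicted_rate:.1%}) "
      f"= {p_low:.4f}")
```

Here the probability is well under 1%, so the gap between prediction and observation is not plausibly chance and must be explained: conservative estimation, incomplete follow-up, or inadequate event detection.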
PMCF design for cardiovascular devices
PMCF plans for cardiovascular devices cannot be generic registry protocols. They must be designed to answer specific safety questions that remain after pre-market clinical investigation.
I frequently see PMCF plans that propose to collect “real-world safety data” without specifying what safety questions remain unanswered. This is insufficient. Your PMCF plan must identify knowledge gaps: Are very late thrombosis rates in your device comparable to equivalent devices beyond three years? Does your device perform safely in heavily calcified lesions when used off-label? What is the reintervention rate in small vessel diameters?
Each knowledge gap should connect to a PMCF objective, which connects to specific endpoints, which connects to sample size justification and follow-up duration. If you cannot draw this chain of logic clearly in your CER, your PMCF plan is not adequately justified.
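The last link in that chain, sample size, can at minimum be sanity-checked with elementary probability: how many patients must complete follow-up so that a rare event, if it occurs at the assumed rate, is seen at all? A sketch under an illustrative assumed rate (0.5% over the follow-up window, not a benchmark):

```python
from math import ceil, log

def patients_needed(event_rate, confidence=0.95):
    """Minimum enrolment such that, if the true per-patient event rate
    over the follow-up window is event_rate, at least one event is
    observed with probability >= confidence.
    Solves (1 - event_rate)**n <= 1 - confidence for n."""
    return ceil(log(1 - confidence) / log(1 - event_rate))

# Knowledge gap: very late thrombosis at an assumed 0.5% rate over the
# follow-up window (illustrative figure only).
print(patients_needed(0.005))        # ~600 completed follow-ups
print(patients_needed(0.005, 0.80))  # smaller cohort, weaker claim
```

This is only a floor for detecting any signal; estimating the rate with useful precision, or comparing it against an equivalent device, requires a formal sample size calculation. But a PMCF plan whose cohort falls below even this floor cannot answer the question it claims to address.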
Reviewers will also assess whether your PMCF follow-up duration matches the temporal risk profile of your device. A permanent implant with known late complications requires long-term follow-up. Proposing 12-month PMCF follow-up for a device with known risks of very late stent thrombosis signals that you have not understood your own safety profile.
A related weakness is the PMCF plan that collects data but defines no success criteria. Your plan must specify what findings would trigger a safety review, what event rates would be considered unacceptable, and what evidence would demonstrate acceptable long-term safety.
How reviewers actually assess cardiovascular device CERs
When a Notified Body reviews a cardiovascular device CER, they are not checking boxes. They are assessing whether you demonstrate understanding of cardiovascular risk mechanisms and whether your evidence addresses those mechanisms.
They look at your patient population. If your clinical study enrolled only straightforward lesions but your IFU allows use in complex anatomy, they will identify this gap. If your study excluded patients on novel anticoagulants but these patients will use your device post-market, the risk evaluation is incomplete.
They examine your follow-up completeness. Lost to follow-up rates above 10% at one year raise concerns. If patients with adverse events are more likely to be lost to follow-up, your safety data is biased. If you cannot demonstrate that lost patients have similar baseline characteristics to completed patients, your conclusions are questionable.
They assess your event adjudication process. Were events classified by independent reviewers? Were imaging studies evaluated by core labs? Were clinical events adjudicated by cardiologists not involved in the procedures? Without independent adjudication, cardiovascular event data lacks credibility.
These are not formalities. They reflect whether your evidence can support the safety claims you make in the CER. Weak evidence produces weak conclusions, which produce additional questions, which delay certification.
Connecting safety evidence to benefit-risk conclusions
The benefit-risk analysis in cardiovascular device CERs often feels like an afterthought. It appears as the final section, briefly states that benefits outweigh risks, and moves to conclusions.
This approach misses the point. Benefit-risk analysis for cardiovascular devices is a structured evaluation of trade-offs. Your device might reduce restenosis but increase early thrombosis. It might improve deliverability but increase vascular access complications. It might extend durability but require longer antiplatelet therapy.
The analysis must acknowledge these trade-offs explicitly and demonstrate why the balance favors your device for the intended patient population. If your device is indicated for high-risk patients with few alternatives, the acceptable risk threshold differs from devices intended for stable patients with multiple treatment options.
Reviewers evaluate whether your benefit-risk reasoning matches clinical reality. If you claim benefits based on surrogate endpoints but risks based on clinical events, the comparison is invalid. If you compare your device’s benefits to no treatment rather than to alternative treatments, you have not demonstrated clinical value.
The benefit-risk section should be the core of your CER, not an appendix. It is where you demonstrate that you understand what your device achieves, what it costs in terms of risk, and why that trade-off makes clinical sense.
What happens when safety evaluation is insufficient
The consequence of inadequate safety evaluation is not simply a request for additional information. It is a fundamental question about whether sufficient evidence exists to support certification.
I have seen certification processes stall for 18 months because the safety evidence could not support the intended use claims. The manufacturer had data, but the data did not address the right questions. Additional studies were required. The cost in time and resources was substantial.
More concerning is the post-market consequence. A device certified with inadequate safety evaluation enters the market without proper risk characterization. When adverse events occur, the manufacturer lacks baseline data to assess whether rates are acceptable. PMCF becomes reactive rather than systematic. Regulatory authorities may impose additional restrictions or suspend certification.
The path forward is not more data for the sake of data. It is structured thinking about what cardiovascular safety means for your specific device, which risks matter most, and how your evidence addresses those risks with appropriate temporal resolution and clinical relevance.
Safety-focused clinical evaluation is not a regulatory burden. It is the foundation for understanding whether your cardiovascular device works as intended when it matters most—when patient lives depend on it.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Annex XIV on Clinical Evaluation
– MDCG 2020-6 on Sufficient Clinical Evidence for Legacy Devices
– MDCG 2020-13 on Clinical Evaluation Assessment Report Template
– MDCG 2022-21 on Periodic Safety Update Report (PSUR)