How long is long enough? The implantable follow-up trap

Hatem Rabeh

A manufacturer submits a clinical evaluation report for a hip implant with five years of follow-up data. The Notified Body issues a major non-conformity. Not because the data is incomplete. Not because adverse events were hidden. But because five years, in the reviewer’s view, is not long enough to demonstrate long-term safety. The manufacturer believed they had followed the guidance. The reviewer saw a critical gap.

This scenario repeats across different implantable device categories. Cardiac valves, spinal implants, breast implants, orthopedic prosthetics. The question is always the same: how long is long enough?

The problem is not that manufacturers avoid follow-up; most understand that implantables require extended monitoring. The problem is that duration alone does not satisfy reviewers. What matters is whether the follow-up period captures the clinically relevant failure modes and degradation mechanisms specific to that device.

Let me explain what reviewers actually look for, why generic timelines fail, and how to build a defensible rationale for follow-up duration.

What MDR and MDCG guidance actually say

MDR Article 61 requires clinical evidence throughout the lifecycle of the device. For implantables, this includes long-term safety and performance data. But the regulation does not specify a minimum duration in years.

MDCG 2020-6 offers more detail. It states that follow-up must be sufficient to identify long-term risks, including device degradation, biological responses, and late complications. It emphasizes that the duration should reflect the intended lifetime of the device and the nature of the risks.

Here is the gap. The guidance provides principles, not timelines. Manufacturers often translate this into round numbers. Five years. Ten years. Fifteen years. They assume that hitting a threshold satisfies the requirement.

Reviewers do not think in round numbers. They think in failure modes.

Key Insight
Reviewers evaluate whether your follow-up period captures the biological and mechanical lifecycle of your device, not whether it reaches a conventional milestone.

Why five years is not automatically sufficient

Many manufacturers default to five years for implantables. This duration appears frequently in published literature. It seems reasonable. It feels like a safe choice.

But consider what happens in the first five years for different devices.

For a coronary stent, the critical risks emerge early. Thrombosis, restenosis, and acute device failure occur primarily within the first year. Five years may be more than sufficient to capture relevant long-term outcomes.

For a hip prosthesis, the picture is different. Wear-related complications, osteolysis, and implant loosening often develop after seven to ten years. Five years of data may show good short-term integration but miss the failure modes that matter most for long-term performance.

For a silicone breast implant, the risk profile extends even further. Capsular contracture can develop over a decade. Implant rupture rates increase significantly after ten years. Late seroma is being identified at fifteen years and beyond. Five years captures the early phase, but not the lifecycle.

So the question is not whether five years is long enough in general. The question is whether five years captures the failure modes your device is most likely to experience based on its materials, mechanism of action, anatomical site, and interaction with tissue.

Common Deficiency
Justifying follow-up duration by citing literature for unrelated devices or by stating that “five years is standard in the field” without linking it to your specific risk profile.

What reviewers actually evaluate

When I review clinical evaluation reports, I look for a structured rationale that connects follow-up duration to device-specific risks. This rationale should address three elements.

First, the biological interaction. How does the device interact with tissue over time? Is there chronic inflammation, encapsulation, bone remodeling, or tissue ingrowth? What is the expected timeline for these processes to stabilize or degrade?

If your device relies on osseointegration, you need data that extends beyond the initial integration phase to capture late remodeling and potential loosening. If your device is encapsulated by fibrous tissue, you need to track whether that capsule remains stable or whether contraction, calcification, or rupture occurs over time.

Second, the material degradation. What are the material properties of your device? Does it degrade, corrode, wear, or fatigue? What is the expected lifespan based on bench testing and accelerated aging studies?

If your device contains polyethylene that is known to wear over time, you need clinical follow-up that extends to the point where wear particles could trigger osteolysis. If your device uses a bioresorbable polymer, you need to track performance beyond complete resorption.

Third, the clinical endpoint timeline. When do the primary safety and performance endpoints become clinically relevant? When do complications typically manifest in comparable devices or procedures?

If literature shows that revision rates for similar devices increase significantly after year eight, your follow-up should extend at least to that point. If late complications are documented in registries at fifteen years, stopping at ten years leaves a blind spot.
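The three elements above amount to a coverage check: for each device-specific failure mode, does the follow-up period extend past its expected onset window? The sketch below illustrates the logic only; the failure modes, onset years, and evidence sources are hypothetical placeholders, not guidance values.

```python
# Illustrative sketch only: the failure modes, onset windows (years), and
# evidence sources below are hypothetical placeholders, not clinical data.

FAILURE_MODES = {
    # mode: ((onset start, onset end) in years, evidence source)
    "implant loosening": ((7, 10), "registry data"),
    "polyethylene wear / osteolysis": ((5, 12), "published literature"),
    "late infection": ((1, 5), "published literature"),
}

def uncovered_modes(follow_up_years: float) -> list[str]:
    """Return failure modes whose typical onset extends beyond follow-up."""
    return [
        mode
        for mode, ((_start, end), _source) in FAILURE_MODES.items()
        if end > follow_up_years
    ]

# Five years of follow-up leaves the late-onset mechanical modes uncovered.
print(uncovered_modes(5.0))
```

Run against a planned follow-up of five years, the check flags exactly the wear- and loosening-related modes discussed above, which is the kind of gap a reviewer will also flag.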

Key Insight
The rationale for follow-up duration must be device-specific and tied to known or anticipated failure mechanisms, not generic timelines borrowed from other submissions.

The role of state of the art in setting expectations

Your state of the art analysis directly influences what reviewers expect for follow-up duration.

If your SOTA identifies that comparable devices have documented complications at ten years, reviewers will expect your clinical data to extend at least that far. If registry data shows increasing revision rates after twelve years, stopping at eight years creates a gap.

This is where many submissions fail. The SOTA presents long-term data from equivalent devices or competitor products, but the clinical evaluation stops short of the same duration. The manufacturer has effectively shown the reviewer what the benchmark is, then failed to meet it.

I have seen this repeatedly. A CER includes a detailed analysis of hip registry data showing revision rates at fifteen years. The manufacturer’s own clinical follow-up stops at seven years. The justification is that seven years is sufficient to demonstrate safety. But the SOTA has already set the standard higher.

The gap is not always obvious to the manufacturer. They see seven years as a significant investment. They see published data supporting safety at that point. But the reviewer sees the disconnect between the evidence standard set by the SOTA and the evidence provided by the manufacturer.

Common Deficiency
Presenting long-term data for equivalent or competitor devices in the SOTA, then stopping your own clinical follow-up well before that duration without a clear justification.

When shorter follow-up is defensible

Not all implantables require ten or fifteen years of data. Shorter follow-up can be justified if you can demonstrate that the relevant risks have been adequately captured.

This requires a clear argument based on device characteristics and clinical evidence.

For devices with short intended use duration, such as bioresorbable stents or absorbable sutures, follow-up should extend beyond complete resorption to confirm that no late inflammatory or structural complications occur. This may be two to three years, not ten.

For devices with well-characterized materials and extensive historical data, such as titanium alloy orthopedic plates, shorter follow-up may be acceptable if you can reference long-term performance data from similar devices with the same material and design features. But this only works if the equivalence is tight and the SOTA supports it.

For low-risk implantables with minimal tissue interaction, such as certain dental implants or fixation screws, shorter follow-up may be sufficient if the failure modes are known to occur early and the clinical evidence confirms stability within that period.

The key is that the justification must be explicit. It must reference the risk profile, the material science, the clinical literature, and the failure timeline. It cannot be implied.

Building a defensible rationale

The rationale for follow-up duration should be integrated into your clinical development plan and your PMCF plan. It should be stated clearly in your clinical evaluation report.

Start by identifying the critical risks for your device. Not the generic risks from ISO 14971. The specific failure modes tied to your materials, design, and intended use.

Then map those risks to a timeline. When do they typically manifest? What does the literature say? What does accelerated testing predict? What do registries show for similar devices?

Then justify your follow-up duration based on that mapping. If your critical risks emerge within five years and stabilize after that, state it clearly and support it with data. If your risks extend to ten or fifteen years, your follow-up must extend at least that far.

If you cannot provide long-term clinical data at the time of submission, outline your PMCF strategy to collect it. Be specific about timelines, sample sizes, and endpoints. Show that you understand the gap and have a plan to close it.
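The mapping described in the steps above can be reduced to a simple rule: the required follow-up is the latest documented onset among your critical risks, and any shortfall against your current data is the gap your PMCF plan must commit to closing. A minimal sketch, with hypothetical risk names and onset years:

```python
# Illustrative sketch: risk names and onset years are hypothetical,
# not derived from any real CER or registry.

def required_follow_up(critical_risks: dict[str, int]) -> int:
    """Minimum follow-up = latest documented onset among critical risks."""
    return max(critical_risks.values())

def pmcf_gap(critical_risks: dict[str, int], current_data_years: int) -> int:
    """Years of additional follow-up the PMCF plan must commit to collect."""
    return max(0, required_follow_up(critical_risks) - current_data_years)

risks = {"capsular contracture": 10, "rupture": 12, "late seroma": 15}
print(required_follow_up(risks))                  # -> 15
print(pmcf_gap(risks, current_data_years=7))      # -> 8
```

The point of the exercise is not the arithmetic but the documentation: each onset year in the mapping must be traceable to literature, registry data, or accelerated testing, so the resulting duration is defensible rather than asserted.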

Key Insight
A well-justified shorter follow-up period with a clear PMCF commitment is stronger than a generic claim of ten years without supporting rationale.

What happens when you get it wrong

If your follow-up duration does not align with the expected risk timeline, you will face questions during assessment. These questions will not be theoretical. They will require evidence.

The Notified Body may issue a non-conformity asking you to extend follow-up, provide additional data, or justify why your current data is sufficient. This can delay certification by months or years.

If you cannot provide a defensible answer, you may be required to conduct additional studies, which means waiting for data that could take five or ten more years to mature.

In some cases, the device may be approved with conditions. The PMCF plan becomes a binding commitment to collect long-term data within a specified timeframe. Failure to deliver that data can result in suspension or withdrawal of the certificate.

This is not rare. I have reviewed CERs where the manufacturer proceeded with submission based on optimistic assumptions about follow-up duration, only to face major deficiencies that could not be resolved without additional clinical work.

The cost is not just time. It is credibility. A weak rationale signals to reviewers that the clinical evaluation was not sufficiently rigorous. It raises questions about other sections of the CER.

What this means for your next submission

If you are preparing a clinical evaluation for an implantable device, do not default to a generic timeline. Build your rationale from the ground up.

Start with your risk analysis. Identify the failure modes that matter most for long-term safety. Map those failures to a timeline based on material science, clinical literature, and registry data.

Review your SOTA carefully. If the benchmark devices have ten years of follow-up, you need to meet or exceed that, or justify why your device is different.

If your follow-up is shorter than the expected risk timeline, address it explicitly. Outline your PMCF plan. Show that you understand the gap and have a strategy to close it.

And understand that this is not a one-time decision. Your follow-up duration must be reviewed and updated as new data emerges, as the SOTA evolves, and as your PMCF progresses.

The question is not how long is standard. The question is how long is defensible for your device, your risks, and your evidence.

That is what reviewers evaluate. That is what determines whether your submission moves forward or stalls.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

References:
– MDR 2017/745 Article 61
– MDCG 2020-6: Regulation (EU) 2017/745: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under Regulation (EU) 2017/745.