How Long Is Long Enough? The Implant Follow-Up Question
A manufacturer submitted a clinical evaluation for a hip implant with seven years of follow-up data. The Notified Body rejected it. Not because the data was poor. Not because the evidence was incomplete. But because the timeframe did not match the expected lifetime of the device. The submission treated seven years as sufficient. The reviewers saw it as a starting point.
This mismatch happens more often than you would expect. Manufacturers believe they have demonstrated long-term performance. Regulators see a gap. The question is not abstract. It shapes your clinical evaluation, your PMCF plan, and your entire post-market strategy.
The issue is not about collecting more data for the sake of it. It is about demonstrating that you understand what long-term really means for your specific device.
The Regulatory Expectation Is Not Static
MDR Annex XIV Part A requires clinical evidence that covers the expected lifetime of the device and its intended use. That sounds straightforward until you try to define what lifetime means for an implant that is designed to stay in the body for decades.
The regulation does not give you a fixed number. No table tells you that a hip implant needs fifteen years and a spinal cage needs twenty. The expectation is built around the biological reality of the device. If you market a device as permanent, the follow-up must reflect permanence. If you claim durability for the lifetime of the patient, your data must support that claim across time.
Here is where many technical files fall short. The manufacturer presents data that feels adequate from an engineering perspective but does not align with the clinical claim. A device designed to last indefinitely is supported by data that stops at five or seven years. The gap is not filled with a plan. It is left open, and the reviewers notice.
Manufacturers often justify short follow-up periods by pointing to the available literature, but they do not acknowledge that their own device requires longer observation to substantiate the claims made in the IFU and labeling.
What Duration Actually Satisfies Reviewers?
The answer depends on several factors. Device type. Indication. Patient population. Mechanism of action. Materials. Degradation profile if the device is absorbable. Risk class and level of invasiveness.
For non-absorbable orthopedic implants such as hip or knee prostheses, the expectation is typically ten to fifteen years of clinical follow-up. Some Notified Bodies expect data extending beyond fifteen years, especially for younger patients who will live with the device longer. For spinal implants, the threshold is often similar, sometimes longer depending on the fusion claim.
For cardiovascular implants like stents or heart valves, the timeline varies with the device design. Drug-eluting stents require follow-up that captures late thrombosis and restenosis patterns, often extending to five to ten years. Transcatheter heart valves face scrutiny around structural valve deterioration, which may not manifest until after seven to ten years.
For absorbable devices, the follow-up must extend well beyond the degradation period. If your device is fully absorbed in eighteen months, that is not the endpoint. You need to demonstrate that the tissue response remains stable, that no delayed complications emerge, and that the mechanical function is maintained after resorption. Expect follow-up extending to three to five years or more depending on the indication.
For implants in pediatric populations, the duration becomes even more critical. A device implanted in a child must be followed through growth phases. The mechanical environment changes. Bone remodeling continues. A five-year dataset in an adult population does not transfer to a pediatric claim without specific evidence.
The duration must match the biological claim, not the availability of data. If your device is marketed as a lifetime solution, five years is not long-term. It is mid-term at best.
The Role of Literature and Equivalence
Many manufacturers rely on equivalence to fill the gap. They present their own data up to a certain point, then supplement with literature on equivalent devices that extends further in time. This approach can work, but only if the equivalence demonstration is solid.
The problem is that equivalence rarely holds as tightly as manufacturers assume. Small differences in material, design, or fixation method can affect long-term performance. A hip stem with a different coating behaves differently over time. A spinal cage with altered porosity may integrate differently. The literature you cite may cover a device that looks similar on paper but diverges clinically after several years.
Reviewers scrutinize this carefully. They do not accept general statements about equivalence. They look for specific justification that the long-term failure modes, the biological response, and the mechanical behavior will be the same. If you cannot demonstrate that, the literature does not close the gap.
When equivalence is weak, you are left with a choice. Either extend your own follow-up, or acknowledge the limitation and commit to a robust PMCF plan that will generate the missing data. The second option is acceptable if the plan is credible. But too often, the PMCF plan is vague. It promises long-term follow-up without specifying how the data will be collected, how compliance will be maintained, and how the timeline will be managed.
Why Short Follow-Up Creates Downstream Problems
When you submit a clinical evaluation with insufficient follow-up duration, you create more than a deficiency. You create doubt. Reviewers start questioning whether the manufacturer understands the risks. Whether the clinical claims are grounded. Whether post-market surveillance will actually function.
Short follow-up also weakens your state-of-the-art (SOTA) analysis. If you claim your device is equivalent to others with longer data, but you do not have comparable follow-up yourself, the comparison becomes one-sided. You are comparing your device at five years to competitors at fifteen years. The SOTA stops being a balanced assessment and turns into a justification exercise.
This creates pressure on the PMCF plan. The plan must now compensate for the gap. But compensating for a ten-year gap in a PMCF plan is not straightforward. You need registries, long-term cohorts, and sustained engagement with clinical sites. Many manufacturers underestimate what that requires. They write a plan that sounds adequate but lacks the infrastructure to execute.
Reviewers see this pattern frequently. The clinical evaluation acknowledges the gap. The PMCF plan promises to fill it. But the timeline is unrealistic, the sample size is unclear, and the endpoints are not well defined. The result is another round of questions.
PMCF plans state that long-term follow-up will be conducted, but they do not specify the minimum duration, the follow-up intervals, or how patient retention will be managed over ten or fifteen years.
What Satisfies Regulators Is Not Just Duration
Duration alone is not enough. A fifteen-year follow-up that loses eighty percent of patients after three years does not demonstrate long-term performance. The quality of the data matters as much as the timeframe.
Reviewers look for consistent follow-up intervals. They want to see that patients were assessed regularly, not just at implantation and then once at the end of the study. They look for objective endpoints. Radiographic assessment. Functional scores. Revision rates. Complication incidence. Subjective patient-reported outcomes are valuable, but they need to be supported by objective measures.
They also look for completeness. If your study starts with three hundred patients and ends with forty at the ten-year mark, the attrition undermines the conclusions. High dropout rates introduce selection bias. The patients who remain in the study may not represent the broader population. Those who experienced problems may have left the study or been lost to follow-up.
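To make that arithmetic visible, here is a minimal sketch that computes retention at each follow-up interval and flags intervals where attrition is high. The enrollment figure and per-visit counts are hypothetical, chosen only to mirror the three-hundred-patient example above, and the twenty percent alert threshold is an illustrative assumption, not a regulatory limit.

```python
# Minimal sketch: quantify follow-up completeness interval by interval.
# Enrollment and per-visit counts are hypothetical, chosen to mirror the
# 300-patient example above; real figures come from your study records.

enrolled = 300
patients_seen = {  # follow-up year -> patients with a completed assessment
    1: 281,
    2: 262,
    5: 173,
    7: 112,
    10: 40,
}

ATTRITION_ALERT = 0.20  # illustrative threshold: flag >20% of the cohort lost

for year, seen in sorted(patients_seen.items()):
    retention = seen / enrolled
    lost = 1 - retention
    flag = "  <-- attrition likely undermines conclusions" if lost > ATTRITION_ALERT else ""
    print(f"Year {year:>2}: {seen:>3}/{enrolled} assessed "
          f"({retention:.0%} retention, {lost:.0%} lost){flag}")
```

Run against real visit data, the same calculation shows where the cohort erodes, interval by interval, instead of revealing the problem only at the final time point.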
Transparency about attrition is critical. If your follow-up is incomplete, acknowledge it. Explain the reasons. Describe the efforts made to maintain contact with patients. Show that you understand the limitation and that your PMCF plan addresses it.
How to Structure Your Approach
If you are developing a clinical evaluation for a long-term implant, start by defining what long-term means for your device. Look at the intended use. Look at the claims in the IFU. Look at what competitors have published. Then ask whether your data covers that timeframe.
If it does not, be explicit about the gap. Do not hide it in general statements. State clearly how long your data extends, what it demonstrates, and what remains uncertain. Then build a PMCF plan that credibly fills the gap.
The PMCF plan should specify the minimum follow-up duration. Not vague language like “long-term follow-up will be conducted.” Specify the exact timeframe. For a hip implant, commit to fifteen years. For a spinal cage, commit to ten or more depending on the indication. Define the follow-up intervals. Annual assessments are often expected for high-risk implants.
Describe how you will maintain patient engagement. Long-term studies fail when patients are lost to follow-up. What systems will you use to track patients? How will you manage site compliance? How will you handle patients who move or change healthcare providers?
Include interim analysis plans. Do not wait fifteen years to review the data. Plan interim reviews at five years, seven years, and ten years. Use those reviews to detect signals early and update your clinical evaluation progressively.
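One way to make those interim reviews concrete is to recompute revision-free implant survival at each planned review point. The sketch below uses a minimal, hand-rolled Kaplan-Meier estimator on hypothetical follow-up records (time in years, with revision as the event and all other patients censored at their last contact); it illustrates the idea, and is not a substitute for the prespecified statistical methods and validated tooling in your clinical investigation plan.

```python
# Minimal sketch: Kaplan-Meier estimate of revision-free implant survival
# at planned interim review points. All records are hypothetical:
# (years of follow-up, event), where event=True means revision and
# event=False means the patient was censored at that time.

records = [
    (1.0, False), (2.5, False), (3.2, True), (4.0, False), (5.5, False),
    (6.1, False), (7.3, True), (8.0, False), (9.4, False), (10.0, False),
    (10.0, False), (4.8, False), (6.9, False), (9.8, True), (10.0, False),
]

def kaplan_meier(records):
    """Return (event_time, cumulative survival) pairs as a step function."""
    survival, curve = 1.0, []
    for t in sorted({time for time, event in records if event}):
        at_risk = sum(1 for time, _ in records if time >= t)
        events = sum(1 for time, event in records if event and time == t)
        survival *= 1 - events / at_risk
        curve.append((t, survival))
    return curve

def survival_at(curve, horizon):
    """Survival probability at a given horizon (last step at or before it)."""
    prob = 1.0
    for t, s in curve:
        if t <= horizon:
            prob = s
    return prob

curve = kaplan_meier(records)
for review_year in (5, 7, 10):  # the interim reviews named above
    print(f"Year {review_year}: estimated revision-free survival "
          f"{survival_at(curve, review_year):.1%}")
```

If the five-year estimate already falls short of what the literature reports for equivalent devices, that is a signal to act on at the interim review, not a finding to defer to the final report.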
A credible PMCF plan is not a promise. It is a structured system with defined timelines, clear endpoints, realistic sample sizes, and a described process for maintaining compliance over many years.
The Real Question Is Not Just Duration
The real question is whether you understand the lifecycle of your device in the human body. Whether you have thought through what happens at five years, ten years, fifteen years. Whether your evidence base reflects that understanding.
Regulators are not setting arbitrary timelines to make your life harder. They are asking whether your clinical evidence matches the biological reality of your device. A device that stays in the body for decades needs evidence that covers decades. If you do not have that evidence yet, you need a clear plan to generate it.
This is not a documentation issue. It is a strategic issue. It affects your clinical studies, your post-market commitments, your resource planning, and your relationship with your Notified Body.
When you submit a clinical evaluation with insufficient follow-up, the message you send is that you have not fully considered the long-term risks. That you are relying on assumptions rather than evidence. That the clinical evaluation is compliance-driven rather than patient-centered.
When you submit a clinical evaluation with appropriate follow-up, or with a credible plan to achieve it, the message changes. You demonstrate that you understand the device. That you have thought through the risks. That your post-market strategy is real.
That difference is what satisfies regulators. Not just the duration of your data, but the depth of your understanding.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
– MDR 2017/745 Annex XIV Part A
– MDCG 2020-5 Clinical Evaluation – Equivalence
– MDCG 2020-13 Clinical Evaluation Assessment Report Template
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under MDR 2017/745.





