Why your clinical endpoints may not support your claims
A Class IIa device for wound healing had six clinical studies. All showed statistically significant results. The Notified Body still issued a major non-conformity. The reason? Every study measured time to wound closure. Not one addressed infection prevention, the primary claim on the label.
This happens more often than you think. Manufacturers gather impressive-looking data. Strong p-values. Large sample sizes. Well-conducted studies. Then they face rejection because the evidence does not actually address what the device is supposed to do.
The disconnect sits between what was measured and what needs to be demonstrated. This is the surrogate endpoint problem.
What MDR Actually Requires
MDR Article 61 and Annex XIV make clear that clinical evaluation must demonstrate safety and performance for the intended purpose. Not for some related outcome. Not for a proxy measure. For the actual claim.
MDCG 2020-6 on sufficient clinical evidence reinforces this. The evidence must address the benefit-risk profile relevant to the device’s intended use and target population. If your device claims to reduce complications, you need data on complications. If it claims to improve quality of life, you need quality of life measures.
This sounds obvious when stated directly. Yet I routinely see clinical evaluations built entirely on surrogate endpoints that never connect back to the actual clinical benefit claimed.
A surrogate endpoint is a measurement that substitutes for a direct measure of how a patient feels, functions, or survives. It might correlate with patient outcomes, but correlation is not demonstration.
The Surrogate Endpoint Trap
Consider a cardiac monitoring device that claims to reduce hospital readmissions for heart failure patients. The manufacturer provides studies showing the device accurately detects fluid retention changes. Sensitivity 95%. Specificity 92%. Technically impressive.
But detection accuracy is a surrogate endpoint. It measures device function, not patient outcome. The relevant question for the claim is different: Does using this device actually reduce readmissions?
That requires evidence showing that patients monitored with the device had fewer hospital readmissions compared to patients managed with standard care. You need intervention studies, not just technical performance data.
The gap between these two is where many clinical evaluation reports fail.
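To make the distinction concrete, here is a minimal sketch (all counts hypothetical, not from any real study) showing that detection accuracy and the readmission claim are computed from entirely different data: the first from a confusion matrix of device alerts, the second from outcome counts in two treatment arms.

```python
# Hypothetical confusion-matrix counts for a fluid-retention alert
# (illustrative numbers only, not from any real study).
tp, fn = 95, 5    # true events detected / missed
tn, fp = 92, 8    # non-events correctly ignored / false alarms

sensitivity = tp / (tp + fn)   # proportion of real events detected
specificity = tn / (tn + fp)   # proportion of non-events correctly ignored
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")

# These are device-performance metrics. Demonstrating the readmission
# claim requires outcome data from two arms instead, e.g.:
readmissions_device, n_device = 18, 200    # hypothetical monitored arm
readmissions_control, n_control = 30, 200  # hypothetical standard-care arm
risk_reduction = readmissions_control / n_control - readmissions_device / n_device
print(f"absolute risk reduction = {risk_reduction:.2f}")
```

Note that nothing in the first calculation constrains the second: a device can score 95% sensitivity and still produce zero reduction in readmissions if clinicians cannot act usefully on its alerts.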
Why Manufacturers Default to Surrogates
I understand the temptation. Surrogate endpoints are easier to measure. They require smaller sample sizes. They show results faster. They cost less to study.
Patient outcome studies are expensive and time-consuming. A study measuring actual reduction in myocardial infarction requires hundreds of patients and years of follow-up. A study measuring improvement in lipid profiles needs fifty patients and three months.
The regulatory pressure to get devices to market creates strong incentive to use the faster, cheaper option.
But if the surrogate does not reliably predict the patient outcome, the evidence is not sufficient for MDR purposes.
A common mistake is using technical performance data as if it were clinical benefit data. Accuracy, precision, and reliability specifications do not demonstrate that patients are better off using your device.
When Surrogates Can Work
Not all surrogate endpoints are inadequate. Some have strong established relationships with patient outcomes. But this relationship must be demonstrated, not assumed.
Take blood pressure measurement devices. Reduction in blood pressure is technically a surrogate endpoint. The real patient outcomes are stroke, myocardial infarction, heart failure, death. But decades of evidence have established that blood pressure control reduces these outcomes. The link is validated.
So for a blood pressure monitor, demonstrating measurement accuracy is clinically relevant. The surrogate is accepted because the connection to patient benefit is proven by extensive literature.
Here is the key question: Is the link between your endpoint and the patient outcome established in the medical literature?
If yes, you can reference that literature to bridge from your surrogate data to the claimed benefit. If no, you need direct patient outcome data.
The Bridging Requirement
When you use surrogate endpoints, your clinical evaluation report must explicitly make this connection. You must show that changes in the surrogate endpoint translate to changes in patient outcomes for your specific application.
This requires systematic literature review. Not just citation of a few papers. A structured search and appraisal demonstrating that the endpoint you measured is a validated predictor of the outcome you claim.
I have seen reports that measure biomarkers with no evidence that changing those biomarkers improves patient health. The assumption is made but never supported. That does not satisfy MDR requirements.
Validation of the surrogate-outcome relationship must be specific to your patient population, disease stage, and intervention type. General correlation is not sufficient if your context differs from the validating studies.
Patient-Relevant Outcomes
What makes an endpoint patient-relevant? It directly reflects how the patient feels, functions, or survives.
Mortality is patient-relevant. Symptom relief is patient-relevant. Return to work is patient-relevant. Quality of life scores are patient-relevant if properly validated.
Laboratory values are usually not patient-relevant unless the link to clinical benefit is established. Imaging findings are usually not patient-relevant unless they predict outcomes. Technical specifications are never patient-relevant on their own.
Your intended purpose statement should guide you. If the device claims to improve mobility, you need mobility measures. If it claims to reduce pain, you need pain measures. If it claims to extend survival, you need survival data.
The claim and the evidence must align directly.
The Regulatory Reviewer Perspective
When I review a clinical evaluation report, I map every claim in the intended purpose to the evidence provided. I ask: Where is the data that supports this specific statement?
If the device claims to reduce complications but only provides data on procedure time, there is a gap. Faster procedures might reduce complications. They might not. Without evidence showing the actual complication rate, the claim is not supported.
Notified Bodies approach this the same way. They will identify every claim element and look for corresponding evidence. If the evidence does not directly address the claim, they will issue a non-conformity.
You cannot reason your way out of this. You cannot argue that the surrogate logically implies the outcome. You must demonstrate it with data.
Another common mistake is relying on clinical reasoning to bridge from surrogate endpoints to claimed benefits without supporting literature or data. Reviewers require evidence, not argumentation.
Composite Endpoints and Multiple Measures
Some devices affect multiple patient outcomes. A diabetes management system might reduce hypoglycemic events, improve HbA1c, and enhance quality of life. You have several relevant endpoints.
This is appropriate if all these outcomes are part of your intended purpose. But be careful with composite endpoints that mix patient-relevant outcomes with surrogate measures.
I have seen studies report a composite endpoint combining mortality, hospitalization, and biomarker changes. The first two are patient outcomes. The third is a surrogate. If the composite shows benefit but the patient outcomes alone do not, the evidence is weaker than it appears.
Always report patient-relevant outcomes separately. Do not bury them in composites where their individual contribution is unclear.
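The masking effect described above is easy to illustrate numerically. In this sketch (all counts hypothetical, and the composite is counted crudely as a sum of component events rather than first events per patient, as a real trial would), the composite looks favorable even though the patient-relevant components are essentially flat:

```python
# Hypothetical two-arm study (illustrative counts only) showing how a
# composite endpoint can mask flat patient-relevant outcomes.
n = 100  # patients per arm

events = {
    #                    (device, control)
    "mortality":            (5, 5),    # patient-relevant: no difference
    "hospitalization":      (20, 21),  # patient-relevant: essentially flat
    "biomarker_worsening":  (10, 30),  # surrogate: large difference
}

for name, (dev, ctl) in events.items():
    print(f"{name}: device {dev/n:.0%} vs control {ctl/n:.0%}")

# Crude composite event rate (sum of component events per arm).
composite_dev = sum(dev for dev, _ in events.values()) / n
composite_ctl = sum(ctl for _, ctl in events.values()) / n
print(f"composite: device {composite_dev:.0%} vs control {composite_ctl:.0%}")
# The composite difference comes almost entirely from the surrogate
# component — which is exactly why components must be reported separately.
```

Reported separately, these numbers tell a very different story than the headline composite, which is the point of the reporting requirement above.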
Prioritizing Outcomes
Some outcomes matter more than others. Mortality and major morbidity are primary concerns. Symptom improvement and quality of life are important but secondary. Convenience and efficiency are tertiary.
Your clinical evidence should address the primary outcomes first. If your device affects serious health outcomes, that is where your evidence must be strongest. You cannot compensate for weak safety or efficacy data with strong usability or satisfaction data.
The hierarchy of evidence importance must match the hierarchy of clinical importance.
PMCF and Endpoint Selection
Post-market clinical follow-up must continue measuring the endpoints that matter. If your pre-market evidence relied on surrogate endpoints with literature bridging, your PMCF should aim to gather direct patient outcome data.
This is where you confirm that the predicted relationship actually holds in real-world use. That the surrogate-outcome link validated in controlled studies applies to your device in routine clinical practice.
PMCF is not just about collecting more of the same data. It is about addressing the gaps and uncertainties that remained after pre-market evaluation. If you relied on surrogates, confirming patient outcomes is a key gap to address.
The PMCF plan should explicitly identify which outcomes are being measured and why. What questions are you answering? What uncertainties are you reducing? What claims are you confirming?
PMCF provides the opportunity to transition from surrogate endpoints to patient outcomes over time. Use it strategically to strengthen your clinical evidence base and reduce reliance on proxies.
Practical Guidance for Choosing Endpoints
Start with your intended purpose statement. Write down every claim. For each claim, identify the patient outcome that directly reflects that claim.
Ask: What would a patient or clinician consider meaningful improvement or benefit? That is your target endpoint.
Then evaluate whether you have data on that endpoint. If not, assess whether a surrogate is acceptable. This requires literature review to validate the surrogate-outcome relationship.
If the literature does not support the surrogate for your specific context, you need to plan studies that measure the patient outcome directly. This might delay market entry, but it prevents later regulatory problems.
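The mapping exercise above can be kept as a simple traceability table: one row per claim, the endpoint that would directly demonstrate it, the endpoints actually on file, and whether any surrogate is literature-validated. A minimal sketch (all claims and field names hypothetical, for illustration only):

```python
# A minimal claims-to-evidence traceability check.
# All entries and field names are hypothetical, for illustration only.
claims = [
    {"claim": "reduces hospital readmissions",
     "required_endpoint": "readmission rate",
     "evidence_endpoints": ["detection sensitivity", "detection specificity"],
     "surrogate_validated_in_literature": False},
    {"claim": "lowers blood pressure",
     "required_endpoint": "blood pressure",
     "evidence_endpoints": ["blood pressure"],
     "surrogate_validated_in_literature": True},
]

def find_gaps(claims):
    """Return claims whose evidence neither measures the required
    endpoint directly nor rests on a literature-validated surrogate."""
    gaps = []
    for c in claims:
        direct = c["required_endpoint"] in c["evidence_endpoints"]
        if not direct and not c["surrogate_validated_in_literature"]:
            gaps.append(c["claim"])
    return gaps

print(find_gaps(claims))  # flags the readmission claim as unsupported
```

Every claim the check flags needs either a planned study on the direct outcome or a systematic literature review validating the surrogate before the clinical evaluation report can be considered complete.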
I know this is not always commercially convenient. But the MDR does not make exceptions for inconvenience.
Documenting the Rationale
Your clinical evaluation report must explain why the chosen endpoints are appropriate. This is not optional. Reviewers will question endpoint selection if it is not clearly justified.
If you use surrogate endpoints, document the validation evidence. Cite specific studies showing the relationship between the surrogate and patient outcomes. Explain why this relationship applies to your device and population.
If you use patient-relevant outcomes, explain how they reflect the claimed benefits. Show that the measures are validated and appropriate for your target population.
This documentation creates transparency. It shows you made deliberate, evidence-based choices rather than simply using whatever data was convenient.
When Evidence Gaps Exist
Sometimes you genuinely cannot obtain direct patient outcome data pre-market. The studies would take too long, cost too much, or require patient numbers you cannot access. This happens.
In these cases, you must acknowledge the gap. Explain why direct outcome data is not available. Justify the use of surrogate endpoints with whatever validation evidence exists. And commit to gathering patient outcome data post-market through PMCF.
This is not ideal, but it is honest. Reviewers can accept this if you are transparent about the limitation and have a clear plan to address it.
What they will not accept is pretending the surrogate data fully demonstrates patient benefit when it does not. That is where manufacturers get into serious trouble.
A related mistake is failing to acknowledge limitations in the clinical evidence. If your data has gaps, admit them and explain how they will be addressed. Silence on limitations raises more concern than transparency.
Final Considerations
The choice between surrogate endpoints and patient outcomes is not purely scientific. It is regulatory, clinical, and commercial. You must balance feasibility against evidence quality against regulatory acceptability.
But the MDR sets a clear standard. The evidence must demonstrate the claimed benefits for the intended purpose. If your endpoints do not directly reflect those benefits, you need to bridge that gap with validated relationships or acknowledge the limitation.
There is no shortcut around this. Technical performance is necessary but not sufficient. Clinical benefit must be demonstrated, not inferred.
When you build your clinical evaluation strategy, make endpoint selection an early priority. Map claims to outcomes. Identify which data you have and which gaps exist. Make deliberate choices about surrogates based on validation evidence, not just convenience.
This approach will strengthen your clinical evaluation and reduce the risk of late-stage regulatory issues. It aligns your evidence with what regulators actually need to see.
And it ensures that your clinical claims are grounded in evidence that matters to patients, not just evidence that is easy to collect.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDCG 2020-6: Regulation (EU) 2017/745: Sufficient clinical evidence for legacy devices
– MDCG 2020-13: Clinical evaluation assessment report template