Why your visual acuity endpoint keeps failing CER review
I reviewed a clinical evaluation report last month for an intraocular lens. The manufacturer claimed clinical benefit based on visual acuity of 20/40 or better in 85% of patients at six months. The Notified Body rejected it. The reason? The endpoint was clinically irrelevant for the intended use, and the manufacturer had no idea why.
This is not an isolated case. Ophthalmic devices face a particular challenge in clinical evaluation. Vision is measurable, but not every measurement proves clinical benefit. Many teams select outcome measures based on what the literature reports, not on what the MDR actually requires.
The result is a clinical evaluation that looks complete but does not demonstrate the safety and performance claims made in the technical documentation.
The regulatory expectation for ophthalmic devices
Ophthalmic devices span a wide range: refractive lenses, intraocular implants, surgical instruments, diagnostic equipment, and drug-device combinations. Each has different claims. Each claim requires specific clinical evidence.
Under MDR Article 61 and Annex XIV, the clinical evaluation must demonstrate that the device achieves its intended performance and that the clinical benefits outweigh the risks. For ophthalmic devices, this means you must link your outcome measures directly to the intended clinical effect stated in your instructions for use and labeling.
The challenge is that vision outcomes are often reported in ways that satisfy ophthalmologists but do not satisfy regulators. A visual acuity measurement may be standard in clinical practice, but if your device claims to improve contrast sensitivity or reduce glare, then visual acuity alone is not sufficient.
The outcome measure must match the claim. If your device claims improved night vision, you need mesopic and scotopic testing under specified conditions. If you claim reduced aberrations, you need wavefront analysis or contrast sensitivity data. Visual acuity is necessary but rarely sufficient on its own.
Most deficiencies I see in ophthalmic CERs come from a mismatch between what is claimed and what is measured. The manufacturer states that the device improves quality of vision, but the only data presented is Snellen acuity. That is not quality of vision. That is a single measure of resolving power under high contrast conditions.
What outcome measures actually matter
Ophthalmic clinical evaluation requires you to think in layers. You have anatomical outcomes, functional outcomes, and patient-reported outcomes. Each layer serves a different purpose in the clinical evaluation.
Anatomical outcomes include measurements like corneal thickness, intraocular pressure, endothelial cell count, or lens position. These are objective and repeatable, but they do not prove clinical benefit. They prove that the device behaves as intended at the tissue level.
Functional outcomes include visual acuity, contrast sensitivity, refractive error, aberrometry, and visual field testing. These are closer to clinical benefit, but they still require interpretation. A patient with 20/25 vision may have disabling glare. A patient with 20/40 vision may function perfectly well in their environment.
Patient-reported outcomes include quality of life measures, visual function questionnaires, and symptom scores. These capture what the patient experiences, not just what the clinician measures. For many ophthalmic devices, this is where clinical benefit becomes real.
Many manufacturers present visual acuity as the primary endpoint without justifying why this measure reflects the claimed benefit. If your device claims to reduce halos or improve mesopic vision, visual acuity under photopic conditions does not demonstrate that claim. Reviewers will ask for the missing data, and if it does not exist, your equivalence or clinical investigation strategy falls apart.
Which measures are required depends on the device and claim
For refractive devices, you need uncorrected and best-corrected visual acuity, manifest refraction, contrast sensitivity, and often patient-reported outcomes like visual disturbances and satisfaction. For posterior segment devices, you may need retinal imaging, optical coherence tomography, and disease-specific functional measures.
For diagnostic devices, sensitivity and specificity describe diagnostic performance, but clinical benefit is demonstrated by how those results change clinical decision-making or patient outcomes. You cannot claim clinical benefit from improved imaging resolution unless you show that the improved resolution leads to better diagnosis or treatment.
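To make the distinction concrete: diagnostic accuracy is easy to compute, but it says nothing by itself about patient benefit. A minimal sketch with purely hypothetical numbers, not drawn from any study:

```python
# Hypothetical 2x2 results for a diagnostic imaging device against a
# reference standard (illustrative figures only).
true_positive = 90   # disease present, device positive
false_negative = 10  # disease present, device negative
true_negative = 170  # disease absent, device negative
false_positive = 30  # disease absent, device positive

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
print(f"Specificity: {specificity:.2f}")  # 0.85
# These figures describe test accuracy only. Clinical benefit still
# requires evidence that the result changes diagnosis or treatment.
```

Even perfect accuracy figures remain an intermediate outcome until they are linked to a change in clinical management.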
The MDCG 2020-13 guidance on clinical evaluation clarifies that clinical benefits must be demonstrated in the context of the intended use and the medical condition. This means your outcome measures must be validated for the population and condition your device targets.
Why vision outcomes fail regulatory review
The most common reason is that the manufacturer selects outcomes based on convenience, not relevance. Visual acuity is easy to measure and widely reported. But if your device is intended to reduce optical aberrations, and you do not measure aberrations, the clinical evaluation is incomplete.
Another reason is that the manufacturer does not define the minimally important difference. An improvement of one line of visual acuity may be statistically significant, but is it clinically meaningful? For some devices and indications, yes. For others, no. You must justify the threshold you use.
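As a sketch of what "justify the threshold" looks like in practice, the check below compares a cohort's mean acuity change against a prespecified minimally important difference. One ETDRS line corresponds to 0.1 logMAR; the two-line (0.2 logMAR) threshold and the cohort values here are hypothetical assumptions that would need device- and indication-specific justification:

```python
# Prespecified minimally important difference (MID), in logMAR.
# Hypothetical threshold: two ETDRS lines (1 line = 0.1 logMAR).
MID_LOGMAR = 0.2

baseline_logmar = 0.30   # hypothetical cohort mean (~20/40)
followup_logmar = 0.05   # hypothetical cohort mean at follow-up

# Lower logMAR is better, so improvement is baseline minus follow-up.
improvement = baseline_logmar - followup_logmar
clinically_meaningful = improvement >= MID_LOGMAR

print(f"Mean improvement: {improvement:.2f} logMAR")
print(f"Meets prespecified MID: {clinically_meaningful}")
```

Statistical significance is assessed separately; exceeding the prespecified MID is what supports the claim of clinically meaningful benefit.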
A third reason is that the study design does not match the claim. If you claim long-term stability, but your pivotal study is six months, the claim is not supported. If you claim superiority over standard care, but your study has no comparator, the claim is not supported.
Notified Bodies and competent authorities expect to see a clear chain of reasoning: device characteristics lead to measured outcomes, measured outcomes lead to clinical benefits, clinical benefits outweigh residual risks. If any link in that chain is weak, the entire evaluation is questioned.
The clinical evaluation is not a summary of literature. It is a structured argument that connects device performance to patient benefit. Every outcome measure you cite must serve that argument. If it does not, it is noise.
How to select and justify vision outcome measures
Start with the intended use. What does the device claim to do? Improve visual acuity? Reduce refractive error? Restore accommodation? Enhance contrast sensitivity? Each claim requires specific evidence.
Next, identify the primary and secondary endpoints that directly measure that claim. For a multifocal intraocular lens claiming to reduce spectacle dependence, the primary endpoint might be uncorrected visual acuity at distance and near, and a secondary endpoint might be spectacle independence rate. For a corneal inlay claiming to improve near vision in presbyopes, defocus curves and patient-reported near vision function are critical.
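A defocus curve can also be summarized numerically, for example as the defocus range over which mean acuity stays at or better than a prespecified threshold. The curve values and the 0.2 logMAR threshold below are illustrative assumptions, not reference data:

```python
# Hypothetical monocular defocus curve for a presbyopia-correcting
# lens: mean logMAR acuity at each defocus level in diopters (D).
defocus_curve = {
     0.0: 0.00,   # distance focus
    -0.5: 0.02,
    -1.0: 0.06,
    -1.5: 0.10,
    -2.0: 0.18,   # ~50 cm
    -2.5: 0.24,   # ~40 cm (near)
    -3.0: 0.32,
}

# Hypothetical acceptance threshold for "useful" acuity.
THRESHOLD_LOGMAR = 0.20

# Defocus levels at which mean acuity meets the threshold.
useful_range = sorted(d for d, acuity in defocus_curve.items()
                      if acuity <= THRESHOLD_LOGMAR)
print(f"Defocus range meeting threshold: "
      f"{min(useful_range)} to {max(useful_range)} D")  # -2.0 to 0.0 D
```

A summary like this supports a near-vision claim only alongside patient-reported near function, as noted above.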
Then, ensure that the outcome measures are validated. Use standardized tests with known repeatability and clinical relevance. If you use a custom questionnaire, you must demonstrate its validity and reliability. Notified Bodies will ask for this.
Finally, define what constitutes success before you collect data. What is the threshold for clinical benefit? What is the acceptable rate of adverse events? These decisions must be made in advance and justified based on clinical context and existing standards.
Linking outcomes to safety and performance
Performance is what the device does. Safety is what the device does not do. Both must be demonstrated with appropriate measures.
For performance, you need outcomes that reflect the device’s intended effect. For safety, you need outcomes that capture the device’s potential harms. For ophthalmic devices, this includes adverse events like infection, inflammation, elevated intraocular pressure, retinal detachment, endothelial cell loss, or visual disturbances.
You must also address the severity and duration of adverse events. A transient increase in intraocular pressure may be acceptable if it resolves without intervention. A persistent increase requiring medication or surgery is a different risk profile.
The benefit-risk determination under MDR Annex I requires that benefits outweigh risks when the device is used as intended in the target population. This is not a binary judgment. It is a reasoned assessment based on the totality of clinical data.
Manufacturers often present pooled safety data from multiple studies without stratifying by device version, surgical technique, or patient population. This makes it impossible to assess the true risk profile of the device under evaluation. Reviewers will reject this and ask for device-specific data or a clear justification for pooling.
The role of patient-reported outcomes
Patient-reported outcomes are often undervalued in ophthalmic clinical evaluations. Many manufacturers treat them as secondary or exploratory, but for many devices, they are the only way to demonstrate real-world benefit.
Consider a premium intraocular lens that claims to provide high-quality vision across all distances. Visual acuity at fixed test distances may show good results, but does the patient experience glare, halos, or reduced contrast in daily life? A patient-reported outcome measure captures this.
The challenge is selecting a validated instrument. Generic quality-of-life questionnaires may not be sensitive enough to detect differences in visual function. Disease-specific or device-specific questionnaires are better, but they must be validated in the target population.
MDCG 2020-13 states that clinical benefits should be demonstrated in terms that are meaningful to patients and healthcare professionals. For many ophthalmic devices, this means including patient-reported outcomes as co-primary or key secondary endpoints.
When patient-reported outcomes are required
If your device claims to improve quality of life, reduce symptoms, or provide patient satisfaction, you must measure those claims directly. You cannot infer quality of life from visual acuity. You cannot assume satisfaction from refractive outcomes.
For devices where clinical benefit depends on subjective experience—such as multifocal lenses, accommodating lenses, or devices intended to reduce visual disturbances—patient-reported outcomes are not optional. They are part of the core evidence required to support the claim.
Practical steps for clinical evaluation teams
Before you finalize your clinical evaluation plan, map every claim in your intended use to a specific outcome measure. If you cannot identify a validated measure for a claim, either remove the claim or design a study to generate the data.
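One lightweight way to enforce that mapping is a traceability table that flags any claim without a supporting measure. The claim wording and endpoint names below are hypothetical; in a real plan each mapping would cite a validated instrument:

```python
# Hypothetical claim-to-endpoint traceability map for a multifocal
# intraocular lens. All names are illustrative.
claim_to_endpoints = {
    "improved uncorrected distance vision": ["UDVA (logMAR)"],
    "reduced spectacle dependence": [
        "spectacle independence rate",
        "patient-reported near vision function",
    ],
    "high-quality vision across distances": [],  # gap: no measure yet
}

# Flag claims with no supporting outcome measure: either remove the
# claim or plan a study to generate the data.
gaps = [claim for claim, endpoints in claim_to_endpoints.items()
        if not endpoints]

print("Unsupported claims:", gaps)
```

The point is not the tooling but the discipline: every claim either has a validated measure behind it or is flagged for removal or further study.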
Review your literature search to ensure that the studies you cite actually measure the outcomes you claim. Do not assume that a study on a similar device with different endpoints supports your device. Equivalence requires comparable clinical data, not just comparable technology.
Work with clinical experts who understand both the regulatory requirements and the clinical context. An experienced ophthalmic surgeon can tell you which outcome measures are clinically meaningful and which are artifacts of study design.
Document your rationale for every decision. Why did you choose these endpoints? Why is this threshold clinically meaningful? Why is this follow-up duration sufficient? Notified Bodies expect to see reasoning, not just results.
The clinical evaluation report is not a compliance document. It is a scientific argument. Every section must build toward a clear conclusion: the device is safe, the device performs as intended, the benefits outweigh the risks. Outcome measures are the foundation of that argument.
What this means for your next submission
If you are preparing a clinical evaluation for an ophthalmic device, start by questioning your outcome measures. Are they aligned with your claims? Are they validated? Are they sufficient to demonstrate clinical benefit?
If you are responding to a Notified Body query about endpoints, do not defend weak measures. Acknowledge the gap and explain how you will address it—whether through additional literature, post-market data, or a clinical investigation.
If you are conducting a clinical investigation, design your study with regulatory review in mind. Select endpoints that will satisfy both clinical and regulatory scrutiny. Define success criteria in advance. Plan for long-term follow-up if your claims depend on durability.
The devices that succeed in regulatory review are not always the ones with the best clinical results. They are the ones where the clinical results clearly support the claims, and the outcome measures are appropriate to the intended use.
Vision outcomes are not interchangeable. They are specific, and they must be justified. That justification is what separates a defensible clinical evaluation from one that collapses under review.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on post-market clinical follow-up (PMCF) findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDR Annex I: General Safety and Performance Requirements
– MDCG 2020-13: Clinical evaluation assessment report template
– MDCG 2020-5: Clinical evaluation equivalence guidance