When your SOTA benchmarks don’t actually benchmark anything
I’ve seen clinical evaluation reports with 15 pages labeled “State of the Art.” Tables full of competitors. Lists of standards. Summaries of technologies. And when you ask, “What is the benchmark for your device?” the team goes silent. Because the SOTA section wasn’t written to set benchmarks. It was written to fill space.
The State of the Art (SOTA) analysis has become one of the most misunderstood requirements in clinical evaluation under MDR. Teams know they need it. Reviewers expect it. But very few submissions actually use it for what it’s designed to do.
The purpose of a SOTA analysis isn’t to prove you researched the market. It’s not a competitor overview. It’s not a literature dump. It exists to define the performance and safety benchmarks against which your device will be evaluated throughout its lifecycle.
And when it doesn’t do that, the entire clinical evaluation starts on unstable ground.
What the regulation actually requires
Annex XIV Part A of MDR 2017/745 requires a state of the art analysis as part of the clinical evaluation. Specifically, it’s meant to demonstrate knowledge of the current state of the art, including similar devices available on the market.
MDCG 2020-6 clarifies this further. The SOTA is expected to describe what is currently accepted as best practice for the treatment or diagnosis your device is intended for. It should include technological alternatives, clinical benchmarks, and where applicable, performance thresholds recognized in the field.
But here’s what gets lost.
The SOTA analysis isn’t just a requirement to check off. It’s the foundation for comparison. Without clear benchmarks, you have no reference to say your device is safe and performs as intended. You can’t claim equivalence. You can’t interpret clinical data. You can’t evaluate complaints or trends in PMCF. The benchmarks you set here ripple through every section of your technical documentation.
“The SOTA section lists competitors but never states what performance or safety level is considered acceptable in the field. Reviewers cannot determine if your device meets, exceeds, or falls short of current standards.”
What goes wrong in most submissions
The first problem is scope confusion. Teams confuse describing the state of the art with listing everything tangentially related to the device. You get pages about the disease. Pages about anatomy. Pages summarizing guidelines. None of it answers the core question: What are the benchmarks?
I’ve reviewed SOTA sections that included five different imaging modalities when the device was a single-use instrument. The logic was that all of them are part of modern care. Technically true. Clinically irrelevant. None of those modalities set a benchmark for the device in question.
The second problem is failing to distinguish between technological alternatives and clinical benchmarks. A SOTA analysis might list three competitor devices, describe their features, and move on. But features aren’t benchmarks. What’s the complication rate for these devices? What’s the diagnostic sensitivity? What’s the mean time to result? Those are benchmarks. Those are what your device will be judged against.
The third problem, and the most damaging, is writing the SOTA in isolation. It’s often assigned to someone unfamiliar with the clinical evaluation plan or the risk management file. So the SOTA describes one set of devices, the equivalence discussion references another set, and the clinical data comes from a third population. Reviewers see the disconnect immediately.
Why this happens
In many projects, the SOTA is written late. The clinical evaluation is nearly complete. The literature review is done. Someone realizes the SOTA section is missing and fills it quickly. At that point, it becomes a summary exercise rather than a foundational analysis.
Or it’s written early, but in isolation. The SOTA author doesn’t know which claims the device will make. Doesn’t know which endpoints matter. Doesn’t know how clinical data will be structured. So they write broadly, cover everything, and hope it fits later.
Neither approach works.
The SOTA analysis must be written with the clinical evaluation endpoints already defined. If you don’t know what you’re measuring, you can’t identify relevant benchmarks.
How benchmarks should actually be set
The starting point is the intended purpose and the claims you plan to make. Those determine which benchmarks matter. If your device is a diagnostic, you need sensitivity and specificity benchmarks from current practice. If it’s a therapeutic device, you need complication rates, healing times, or functional outcomes from established treatments.
The benchmarks must be specific and measurable. Saying “current devices are safe and effective” is meaningless. Saying “current endoscopic procedures for this indication report perforation rates between 0.5% and 2% in published case series” is a benchmark. Now you have a reference. You can interpret your data. You can communicate your safety profile. You can update your risk analysis.
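To make that concrete, here's a minimal sketch in Python (all numbers hypothetical) of what using a measurable benchmark actually looks like: compute your observed event rate with a confidence interval, then check it against the published SOTA range. The Wilson interval is just one reasonable choice here; your biostatistician may prefer another.

```python
import math

def wilson_interval(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for an observed event proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# Entirely hypothetical numbers: 4 perforations in 312 procedures,
# compared against the published SOTA range of 0.5%-2%.
events, n = 4, 312
sota_low, sota_high = 0.005, 0.02

rate = events / n
low, high = wilson_interval(events, n)
print(f"Observed rate: {rate:.2%} (95% CI {low:.2%}-{high:.2%})")

if high < sota_low:
    print("Performs better than the SOTA benchmark range.")
elif low > sota_high:
    print("Sits above the SOTA range: explain and mitigate in the CER.")
else:
    print("Consistent with the state of the art.")
```

The point isn't the statistics. The point is that only a numeric benchmark makes this comparison possible at all.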
Next, you need to document where those benchmarks come from. Clinical guidelines. Published systematic reviews. Real-world evidence from registries. Competitor IFUs if publicly available. Regulatory databases like MAUDE if the pattern is clear and documented. The source matters because benchmarks must be credible and current.
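If it helps to think about it structurally, every benchmark should carry its provenance with it. Here's a rough sketch, with illustrative field names rather than any prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Benchmark:
    """One SOTA benchmark with the traceability a reviewer expects.
    Field names are illustrative, not a required format."""
    endpoint: str       # what is measured (e.g., perforation rate)
    value: str          # the accepted threshold or range
    source: str         # guideline, systematic review, registry, IFU...
    source_date: date   # how current the evidence is
    rationale: str      # why this source sets the benchmark

benchmarks = [
    Benchmark(
        endpoint="Perforation rate",
        value="0.5%-2% per procedure",
        source="Published case series (hypothetical citation)",
        source_date=date(2023, 6, 1),
        rationale="Largest recent series for the same indication",
    ),
]
```

Whether you keep this in a table or a database doesn't matter. What matters is that every benchmark can answer: where did you come from, and are you still current?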
But here’s what most people miss.
Benchmarks are not static. The state of the art evolves. A benchmark that was appropriate at the time of your initial submission may not be appropriate five years later. That’s why your SOTA analysis must be reviewed and updated as part of PMCF. If the benchmark moves, your safety and performance evaluation changes with it.
When there is no established benchmark
Sometimes you’re working with a novel device. There is no equivalent technology. No clear comparator. No published threshold for performance. This doesn’t mean you skip the SOTA. It means you document the gap.
You describe the current standard of care. You explain why no direct benchmark exists. And you propose a threshold based on clinical reasoning, risk-benefit considerations, and input from your clinical experts. That becomes your benchmark, and you justify it.
Notified Bodies understand this scenario. What they don’t accept is the absence of reasoning. If you have no benchmark and no explanation for how you’ll evaluate safety and performance, you’re not ready for clinical evaluation.
“No benchmark is provided for the primary endpoint. The CER states the device is ‘comparable to existing devices’ but does not specify the performance threshold that defines comparability.”
How this connects to the rest of the CER
Once you have benchmarks, they must be threaded through the entire clinical evaluation. In the appraisal of clinical data, you compare your results to the benchmarks. If your device performs better, you document that. If it performs similarly, you document that. If it underperforms, you explain why and what mitigation is in place.
In the risk-benefit analysis, benchmarks inform acceptable risk levels. A complication that would be unacceptable in a low-risk diagnostic might be tolerable in a high-risk intervention if the benefit exceeds that of current alternatives. But you can’t make that determination without knowing what current alternatives deliver.
In PMCF, benchmarks define what you’re monitoring. If the state of the art moves, your PMCF plan should detect that. If your device trends toward the edge of acceptable performance, PMCF should trigger investigation. But that only works if you defined acceptable performance in the first place.
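As a sketch of what that monitoring logic can look like inside a PMCF plan (the thresholds and data below are hypothetical, and the 80% internal margin is my assumption, not a regulatory figure):

```python
def pmcf_trigger(observed_rate: float, benchmark_high: float,
                 margin: float = 0.8) -> str:
    """Flag PMCF action as the observed rate approaches the benchmark ceiling.
    `margin` is an illustrative internal early-warning threshold."""
    if observed_rate > benchmark_high:
        return "Above SOTA benchmark: open investigation, update risk analysis."
    if observed_rate > margin * benchmark_high:
        return "Trending toward benchmark ceiling: investigate per PMCF plan."
    return "Within acceptable performance: continue routine surveillance."

# Hypothetical yearly PMCF data against a 2% benchmark ceiling.
for year, rate in [(2023, 0.011), (2024, 0.017), (2025, 0.021)]:
    print(year, pmcf_trigger(rate, benchmark_high=0.02))
```

None of this works if "acceptable performance" was never written down. That's the SOTA's job.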
This is why the SOTA analysis isn’t a standalone section. It’s infrastructure. Everything else depends on it.
What reviewers look for
When a Notified Body reviews your SOTA, they're checking three things. First, did you identify the right benchmarks? Not just any devices or studies, but the ones that matter for your intended purpose and claims.
Second, are the benchmarks current? If your SOTA references studies from 15 years ago and there's been significant progress since, that's a gap. The state of the art isn't historical. It's present tense.
Third, did you actually use the benchmarks? If your SOTA defines a complication rate but your risk-benefit section never references it, the analysis is decorative. Reviewers notice when sections don't connect.
They also notice when the benchmarks are too convenient. If every benchmark you cite makes your device look favorable, and you didn’t acknowledge any data that challenges your position, the analysis loses credibility. The SOTA should be objective. It’s not advocacy. It’s context.
A credible SOTA analysis acknowledges when competitors outperform your device on specific endpoints. That doesn’t disqualify your device. It informs the risk-benefit discussion and focuses your PMCF.
Practical steps to get this right
Start the SOTA analysis early, but finalize it after your clinical evaluation plan is stable. You need to know what you’re claiming and what endpoints matter before you can identify relevant benchmarks.
Work with your clinical experts. They know what thresholds are considered acceptable in practice. They know what guidelines recommend. They know what patients and clinicians expect. The SOTA shouldn’t be a regulatory exercise. It should reflect clinical reality.
Document your search strategy. How did you identify the benchmarks? What sources did you consult? What inclusion criteria did you use? This doesn’t need to be a full systematic review, but it should be traceable and reproducible.
Be explicit about the benchmarks you’re setting. Don’t bury them in paragraphs. State them clearly. Use tables if needed. Make sure anyone reading your CER can immediately see what performance and safety levels are considered acceptable.
And finally, review your SOTA regularly. At a minimum, this should happen during your periodic safety update report (PSUR) and as part of your CER updates. If the state of the art has changed and your benchmarks are outdated, your entire evaluation is at risk.
Why this matters more under MDR
Under the Medical Device Directive, clinical evaluation was often lighter. The SOTA was less emphasized. Equivalence could be loosely argued. That’s over.
MDR expects rigor. Annex XIV makes the SOTA a formal requirement. MDCG guidance explains what it should contain. Notified Bodies are trained to challenge vague or incomplete SOTA sections. And if your SOTA doesn’t set clear benchmarks, the rest of your CER becomes difficult to defend.
This isn’t bureaucracy. It’s logic. You can’t evaluate a medical device in a vacuum. You need reference points. The SOTA provides them. And when it’s done properly, it doesn’t just satisfy a requirement. It makes your clinical evaluation stronger, clearer, and more defensible.
The question isn’t whether you need a SOTA analysis. You do. The question is whether your SOTA actually does what it’s supposed to do. Does it set benchmarks that guide your evaluation, your risk management, and your post-market surveillance? Or does it just fill space?
Most submissions I review fall into the second category. That’s the gap I see most often. And it’s fixable.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Annex XIV Part A
– MDCG 2020-6: Regulation (EU) 2017/745: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC
– MDCG 2020-13: Clinical Evaluation Assessment Report Template
Related Resources
Read our complete guide: State of the Art (SOTA) Analysis under EU MDR
Or explore Complete Guide to Clinical Evaluation under EU MDR