State of the Art: What Reviewers Actually Want to See

Written by Hatem Rabeh, MD, MSc Ing
I see manufacturers spend weeks compiling literature searches, organizing hundreds of references, and presenting them in detailed tables. Then the Notified Body comes back: “The state of the art section does not establish the benchmark.” The file had literature. It had citations. What was missing?

The confusion around State of the Art (SOTA) runs deeper than most manufacturers realize. It is treated as a literature review exercise when it should be treated as a benchmark-setting exercise.

This is Part 3 of the series on Mastering the Clinical Evaluation Report. After covering equivalence and clinical data gaps, we now address what may be the most misunderstood section of the entire CER: the State of the Art.

The regulatory expectation is clear in MDCG 2020-6 and MEDDEV 2.7/1 Rev 4. The SOTA must define what is currently accepted in clinical practice for the medical condition or treatment your device addresses. It is not about listing what exists in literature. It is about establishing what the standard of care actually is.

Yet most SOTA sections fail this test.

Why Literature Dumps Are Not State of the Art

Manufacturers often approach SOTA as a literature collection task. They search databases, extract studies, summarize findings, and present them in tables. The assumption is that more references equal better coverage.

This approach misses the regulatory purpose entirely.

Reviewers are not looking for proof that you can search PubMed. They are looking for proof that you understand the clinical context in which your device will be used. They want to see that you know what treatments are available, what outcomes are expected, what safety standards apply, and how clinical decisions are made in practice.

A literature dump gives none of that.

Common Deficiency
“The SOTA section contains multiple references but does not synthesize what the current benchmark is for safety, performance, or clinical outcomes. It is unclear what standard the device is being compared against.”

This is the most common feedback I see. The file has content, but it does not establish a position. It lists options without defining what is accepted. It presents data without interpreting standards.

Reviewers cannot evaluate your device if they do not know what the reference point is.

What the Benchmark Actually Includes

When MDCG 2020-6 requires you to describe the state of the art, it is asking for a structured analysis of the current medical standard. This includes four core elements.

Clinical Guidelines and Protocols

What do the professional societies recommend? What are the treatment pathways? What are the diagnostic criteria? If your device is used in cardiovascular intervention, what do the ESC or ACC/AHA guidelines say about the procedure? If it is a diagnostic device, what are the current imaging protocols?

This is not optional context. It defines the clinical environment your device enters.

If the guidelines recommend a specific approach and your device deviates from it, that deviation must be addressed. If the guidelines are silent on the technology your device uses, that gap must be acknowledged. Either way, the benchmark must be stated.

Alternative and Comparator Treatments

What else is available for the same condition? What do clinicians currently use? What are the outcomes, risks, and limitations of those alternatives?

This is where manufacturers often hesitate. They worry that describing alternatives will weaken their case. The opposite is true. Reviewers expect you to know the competitive landscape. If you do not describe it, they assume you are unaware of it.

The SOTA must include what is standard, what is emerging, and what has been tried and abandoned. Each alternative sets a piece of the benchmark.

Key Insight
Describing alternatives does not weaken your device. It proves you understand the clinical decision-making context and can position your device within it.

Safety Expectations

What adverse events are considered acceptable for this type of intervention? What complication rates are reported in clinical practice? What are the known risks of existing treatments?

This is one of the most critical benchmarks. Your device will be judged against what is already tolerated in practice. If current surgical approaches have a 2% infection rate, and your device is meant to reduce invasiveness, reviewers will expect infection rates below 2%. If your device's rate is equal or higher, you must explain why that is acceptable.

The SOTA defines the safety threshold your device must meet or exceed.

Outcome Standards

What clinical outcomes are considered successful? How are they measured? What follow-up duration is standard? What endpoints matter to clinicians and patients?

If your device is a joint implant, the SOTA must describe survivorship rates at 5, 10, and 15 years. If it is a wound dressing, it must define healing time benchmarks. If it is a monitoring device, it must clarify what detection accuracy is expected.

Without outcome standards, there is no way to interpret your clinical data.

How Reviewers Read the SOTA Section

Reviewers do not read the SOTA to learn about the disease. They read it to assess whether you understand the clinical context well enough to evaluate your own device.

They ask three questions.

First: Does the manufacturer know what the current standard of care is?

Second: Does the manufacturer understand what safety and performance levels are expected?

Third: Can the clinical data for this device be interpreted against a defined benchmark?

If any of those answers is unclear, the SOTA has failed its purpose.

This is why synthesis matters more than volume. A well-structured SOTA might reference 15 key sources and still be stronger than one that lists 150 without interpretation.

Common Deficiency
“The SOTA section does not explain how the described treatments and outcomes relate to the intended use of the device. The benchmark is unclear.”

This feedback signals a disconnect. The manufacturer presented information but did not connect it to the device evaluation. The reviewer cannot tell whether the manufacturer understands the implications of what they cited.

Every element in the SOTA must be there for a reason. If it does not help define the benchmark, it does not belong.

What Happens When SOTA Is Weak

A weak SOTA creates cascading problems throughout the CER.

If the benchmark is not defined, the clinical data section cannot show where your device stands relative to it. Claims about safety or performance have no reference point. Equivalence arguments lack context. Benefit-risk conclusions become subjective.

Reviewers will send the file back, often with vague feedback, because the foundation is missing.

I have seen manufacturers spend months generating new clinical data to address deficiencies, only to realize later that the real issue was not the data. It was the missing benchmark. The SOTA never established what the data should be compared against.

This is expensive. It delays certification. It frustrates teams.

But it is avoidable.

What a Strong SOTA Looks Like in Practice

A strong SOTA is concise, structured, and purposeful. It does not try to cover everything. It focuses on what matters for evaluating the specific device.

It starts with the medical condition and the current treatment paradigm. It describes what clinicians do, what outcomes they expect, and what risks they accept.

It then covers alternatives, not as competitors to dismiss, but as reference points to understand. It explains what each approach offers and where the gaps are.

It defines safety benchmarks based on real-world practice and published standards. It sets outcome expectations based on clinical guidelines and literature consensus.

Finally, it connects everything back to the device. It shows how the device fits into the clinical context and what benchmark it will be measured against.

This is not a literature review. It is a clinical landscape analysis.

Key Insight
The SOTA should read like a clinical briefing for a reviewer who understands medicine but does not specialize in your specific field. It should prepare them to evaluate your device intelligently.

How to Build This Section Correctly

Start by asking: What does a clinician need to know to use this device appropriately?

Then ask: What does a reviewer need to know to evaluate whether this device is safe and performs as intended?

The answers to those questions define your SOTA scope.

Search clinical guidelines first. Professional societies publish treatment recommendations that define the standard. These are your primary references.

Then look at systematic reviews and meta-analyses that describe treatment outcomes and safety profiles. These give you the statistical benchmarks.

Include real-world evidence where guidelines are outdated or silent. Registries, observational studies, and post-market data from established devices show what actually happens in practice.

Do not include every study you find. Include the ones that help define the benchmark.

Then synthesize. Do not just list findings. Explain what they mean. State the benchmark clearly.

If current guidelines recommend a specific imaging resolution, state it. If the complication rate for standard surgical approaches is 3-5%, state it. If the 10-year survivorship for existing implants is 92%, state it.

Make it impossible for the reviewer to misunderstand what the reference point is.
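The steps above can be illustrated with a minimal sketch: stating each benchmark as an explicit, sourced value makes the reference point unambiguous for the reviewer. Everything here is a hypothetical placeholder, not real clinical data or a regulatory tool; the structure simply mirrors the examples in the text (complication rate, survivorship).

```python
# A minimal sketch of "state the benchmark clearly": each SOTA benchmark is
# an explicit, sourced value, so the device comparison cannot be misread.
# All figures and sources are illustrative placeholders, not real data.
from dataclasses import dataclass

@dataclass
class Benchmark:
    metric: str       # what is measured
    sota_value: str   # the accepted standard, stated explicitly
    source: str       # the guideline, meta-analysis, or registry it comes from

    def compare(self, device_value: str) -> str:
        # One line per benchmark: device result next to the SOTA reference
        return (f"{self.metric}: device = {device_value}, "
                f"SOTA = {self.sota_value} (source: {self.source})")

# Hypothetical benchmarks mirroring the examples in the text above
benchmarks = [
    Benchmark("Complication rate", "3-5% (standard surgical approach)",
              "systematic review (placeholder)"),
    Benchmark("10-year survivorship", ">= 92%",
              "national joint registry (placeholder)"),
]

for b in benchmarks:
    print(b.compare("device-specific result"))
```

The point of the structure is that no benchmark exists without a stated value and a stated source, which is exactly what reviewers look for in the SOTA section.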

Why This Matters for the Rest of Your CER

The SOTA is not just another section to complete. It is the foundation that makes every other section interpretable.

Your appraisal of clinical data depends on having a defined benchmark. Your benefit-risk analysis depends on knowing what is acceptable in practice. Your PMCF plan depends on understanding what outcomes need long-term monitoring.

Without a solid SOTA, the rest of the CER is built on unclear ground.

Reviewers know this. That is why they focus on it.

When the SOTA is strong, the rest of the evaluation flows logically. When it is weak, everything downstream becomes questionable.

This is not about writing more. It is about defining the standard clearly so your device can be evaluated fairly against it.

Next in this series, we will cover how to appraise clinical data once the benchmark is set. Because having a SOTA is only useful if you know how to use it.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-7 (PMCF Plan), MDCG 2020-8 (PMCF Evaluation Report), and MDCG 2020-13 (CEAR Template). MEDDEV 2.7/1 Rev 4 remains the core methodological reference for clinical evaluation.

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

Peace, Hatem

Your Clinical Evaluation Partner

Follow me for more insights and practical advice.

References:
– MDR 2017/745 Annex XIV Part A
– MDCG 2020-6: Clinical Evaluation and Post-Market Clinical Follow-up
– MEDDEV 2.7/1 Rev 4: Clinical Evaluation

Related Resources

Read our complete guide: State of the Art (SOTA) Analysis under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR