Clinical Decision Support Software: Where Evidence Gaps Close Fast
I reviewed a clinical evaluation for CDS software that suggested treatment protocols. The manufacturer cited three publications. All three described algorithms. None validated clinical outcomes. The Notified Body stopped the review in round one.
This is not rare. Clinical decision support software carries a specific burden that many manufacturers underestimate until they face a Major Non-Conformity.
The issue is not the algorithm. It is the claim. When your software influences clinical decisions, you are entering a domain where evidence requirements accelerate. Not because regulators are strict. Because the risk is direct.
What Makes CDS Different
Clinical decision support software sits at the intersection of information and action. It does not just present data. It interprets. It recommends. It guides.
Under MDR Article 2, the qualification as a medical device depends on the intended purpose. If your software provides information used to make clinical decisions about diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease, you are in scope.
But the real differentiation starts when you assess the level of influence. Does your software provide options for the clinician to consider? Or does it drive the decision?
The more your software narrows the clinical pathway, the higher the evidence bar. A suggestion is not the same as a directive. Reviewers know the difference.
MDCG 2019-11 clarifies the qualification criteria. It also reminds us that the intended purpose defines everything. If your marketing materials, instructions for use, or clinical claims suggest your software improves outcomes, you must demonstrate it.
Not theoretically. Not through algorithm validation alone. Through clinical evidence that the use of your software leads to the claimed benefit in real clinical conditions.
Where Manufacturers Stumble
I see the same pattern in submission after submission. Manufacturers build solid technical documentation. They validate the algorithm. They test accuracy against reference datasets. They run sensitivity and specificity analyses.
Then they submit a clinical evaluation that stops at algorithm performance.
The Notified Body asks: Where is the evidence that using this software improves clinical outcomes? Where are the studies showing that clinicians who follow your recommendations achieve better patient results than those who do not?
The manufacturer responds with more algorithm data. The loop continues until someone realizes the question was never about the software’s calculations. It was about clinical impact.
The common mistake: confusing algorithm validation with clinical validation. Proving that your software calculates correctly does not prove it helps patients. Reviewers see this gap immediately.
This is where evidence requirements accelerate. For many medical devices, you can build a clinical evaluation on surrogate endpoints, bench testing, and equivalence arguments. For CDS software that claims to improve clinical decisions, the evidence must close the loop between recommendation and outcome.
The Evidence Hierarchy for CDS
When I build a clinical evaluation for CDS software, I structure the evidence in layers. Each layer addresses a specific question that reviewers will ask.
First layer: Does the underlying clinical knowledge base reflect current state of the art? This means clinical practice guidelines, consensus statements, peer-reviewed literature supporting the recommendations your software generates. If your software suggests antibiotic choices, which guidelines does it follow? Are they current? Are they recognized by relevant clinical societies?
Second layer: Does the algorithm accurately implement this knowledge? This is where technical validation belongs. Sensitivity, specificity, positive and negative predictive values. Performance against reference standards. But this layer only proves the software does what it was designed to do.
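To make the second layer concrete, here is a minimal sketch in Python of the metrics a technical validation report typically presents. The confusion-matrix counts are hypothetical; in a real submission they come from your reference-standard dataset.

```python
# Minimal sketch: second-layer performance metrics from a 2x2 confusion
# matrix. All counts are hypothetical illustration values.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic performance metrics for a binary classifier."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical validation run against a labelled reference standard.
print(diagnostic_metrics(tp=85, fp=10, tn=190, fn=15))
```

Note that every number this produces speaks only to the second layer. It says nothing about whether clinicians using the output make better decisions.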
Third layer: Does using the software in clinical practice lead to the claimed benefit? This is the layer most manufacturers miss. It requires clinical studies or real-world evidence showing that clinicians who use your software achieve better outcomes than those who do not.
Or at minimum, evidence that your software does not lead to worse outcomes while providing the claimed efficiency or consistency benefits.
If you cannot demonstrate clinical benefit in the third layer, you must reconsider your claims. Notified Bodies will not accept algorithm performance as a proxy for clinical effectiveness when you claim to support clinical decisions.
The MDCG 2020-1 Reality
MDCG 2020-1 addresses clinical evaluation of medical device software under the MDR. It rests on a principle that applies to all medical devices, software included: clinical data must be relevant to the intended purpose and the claims made.
For CDS software, this means your clinical evaluation must address how the software is used in the clinical workflow. Not in isolation. Not as a theoretical tool. In the hands of real clinicians treating real patients.
I reviewed a CDS application for oncology treatment planning. The algorithm was impressive. The clinical evaluation cited twenty publications about treatment protocols and survival outcomes. None of them studied the software. None of them compared outcomes with and without the tool.
The manufacturer argued that the algorithm implemented evidence-based protocols, so the evidence supporting those protocols was sufficient.
It was not. Because the claim was not just that the protocols work. The claim was that using this software to apply the protocols improves outcomes or reduces variability or increases efficiency.
That claim requires evidence about the software itself.
Another common mistake: citing literature about clinical methods without demonstrating that your software implementation of those methods delivers the claimed benefit. Reviewers distinguish between validating the science and validating the tool.
When Equivalence Arguments Collapse
Some manufacturers try to build equivalence arguments. They identify another CDS software with similar intended purpose and claim equivalence.
This rarely survives review. CDS software is inherently specific. The algorithm logic, the user interface, the integration into clinical workflow, the intended user population—all of these factors influence clinical impact.
Small differences in how information is presented can change clinical decisions. Different integration approaches can affect adoption and adherence. Different user populations may interpret recommendations differently.
Equivalence requires demonstrating that these differences do not affect clinical performance. For CDS software, that demonstration usually requires clinical data specific to your device.
Not impossible. But not straightforward.
What Accelerated Evidence Looks Like
So what does adequate clinical evidence for CDS software actually look like?
In an ideal world, you conduct a prospective clinical study. You compare outcomes in patients treated by clinicians using your software versus patients treated by clinicians using standard practice. You control for confounders. You measure clinically relevant endpoints.
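As a rough illustration of what that comparison looks like at the analysis stage, here is a minimal sketch with hypothetical counts: a binary endpoint (say, guideline-adherent treatment choice) compared across a software-assisted arm and a standard-practice arm. A real study needs a pre-specified protocol, a power calculation, and confounder control; this only shows the shape of the primary analysis.

```python
# Minimal sketch of a two-arm comparison on a binary clinical endpoint.
# All counts are hypothetical; a real analysis follows a pre-specified
# statistical analysis plan.
from scipy.stats import chi2_contingency

# Each arm: [endpoint met, endpoint not met].
software_arm = [172, 28]   # 200 patients, clinicians using the CDS tool
control_arm  = [148, 52]   # 200 patients, standard practice

chi2, p_value, dof, _ = chi2_contingency([software_arm, control_arm])
print(f"endpoint rate: {software_arm[0]/200:.0%} vs {control_arm[0]/200:.0%}")
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```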
In reality, many manufacturers cannot afford this before market entry. So they build phased evidence generation strategies.
Initial clinical evaluation relies on literature supporting the clinical knowledge base, algorithm validation studies, usability testing, and pilot data showing feasibility and preliminary safety. The claims are carefully limited. The risk class may be lower if the software provides information rather than directives.
Then PMCF generates real-world evidence. Registry data. Observational studies. Comparative effectiveness research. Over time, the evidence base strengthens. The claims can expand. The clinical evaluation becomes more robust.
But the key is planning this from the beginning. Not treating clinical evidence as an afterthought. Not assuming algorithm validation is sufficient.
PMCF for CDS software must measure clinical impact, not just technical performance. Your PMCF plan should specify how you will assess whether clinicians achieve better outcomes using your software. Design this before you launch.
The Notified Body Perspective
When a Notified Body reviews a CDS clinical evaluation, they are looking for a logical thread from claim to evidence. They ask: What does the manufacturer claim this software does? What benefit does the clinician or patient receive? What evidence supports that benefit?
If your claim is that the software increases diagnostic accuracy, they want studies showing that clinicians using your software diagnose more accurately.
If your claim is that the software reduces treatment variability, they want data showing that variability decreases when clinicians use your tool.
If your claim is that the software improves patient outcomes, they want outcome data.
They do not accept proxies. They do not accept assumptions. They do not accept algorithm validation as clinical validation.
This is not rigidity. This is logic. The claim defines the evidence requirement. If you claim clinical benefit, prove clinical benefit.
A related trap: writing broad claims to make the product sound more valuable, then struggling to find evidence for them. Start with the evidence you have. Write claims you can defend. Expand later as the evidence grows.
Practical Implications for Clinical Affairs
If you are preparing a clinical evaluation for CDS software, here is what changes in your approach:
First, your literature search must target studies of similar CDS tools, not just studies of the clinical methods your software implements. You need evidence that software-based decision support works in your clinical domain. Not just that the clinical protocols work.
Second, your appraisal must address usability and integration. Clinical effectiveness depends on whether clinicians actually use the tool correctly. Usability data becomes clinical data. Training needs become part of the clinical evaluation.
Third, your PMCF plan must include clinical endpoints, not just technical metrics. User satisfaction is not enough. Adoption rates are not enough. You need data on patient outcomes, clinical workflow impact, error rates, adherence to recommendations.
Fourth, your state-of-the-art assessment must go beyond clinical guidelines. You must review other CDS tools, published studies of decision support interventions, implementation science literature. The state of the art is not just the clinical science. It is the practice of clinical decision support.
This requires more work. It requires broader expertise. It requires collaboration between clinical specialists, software engineers, and clinical evaluation experts who understand the regulatory expectations.
But it is the only path to a clinical evaluation that survives review.
The Risk-Benefit Tension
One more dimension complicates CDS clinical evaluation: the risk-benefit balance is dynamic. As clinicians become more reliant on the software, the potential harm from incorrect recommendations increases. As the software influences more critical decisions, the evidence bar rises.
A CDS tool that flags potential drug interactions carries different risk than a CDS tool that recommends chemotherapy regimens. The evidence expectations scale accordingly.
This means your clinical evaluation is not static. As your software evolves, as your claims expand, as adoption increases, the evidence requirements evolve. Your PMCF must track this. Your clinical evaluation updates must address it.
Manufacturers sometimes treat the first clinical evaluation as the last. They update technical documentation as the software changes. They forget that clinical claims also change. New features imply new benefits. New benefits require new evidence.
Notified Bodies catch this during surveillance audits. They compare your current marketing materials to your clinical evaluation. They ask: Where is the evidence for these new claims?
If you cannot answer, you have a non-conformity.
Where We Go From Here
Clinical decision support software is not going away. The pressure to improve clinical outcomes, reduce costs, and standardize care is increasing. CDS tools will proliferate.
The regulatory framework is adapting. MDCG guidance is clarifying expectations. Notified Bodies are developing expertise. The evidence bar is rising.
Manufacturers who understand this early will build evidence generation into their development process. They will design studies before launch. They will collect the right data from day one. They will write defensible claims based on available evidence and expand as the evidence base grows.
Manufacturers who treat clinical evaluation as a compliance exercise will face delays, non-conformities, and market access barriers.
The choice is not about more or less regulation. The choice is about understanding what evidence your claims require and planning to generate it.
For CDS software, that planning cannot start at submission. It starts at concept development. Because once you claim to support clinical decisions, the evidence requirements accelerate.
And the only way to keep pace is to anticipate them.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For software devices, MDCG 2019-11 (qualification and classification) and MDCG 2020-1 (clinical evaluation of software) are equally essential.
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
References
– MDR 2017/745, Article 2 (definition of a medical device)
– MDCG 2019-11: Guidance on Qualification and Classification of Software (MDR/IVDR)
– MDCG 2020-1: Guidance on Clinical Evaluation (MDR) / Performance Evaluation (IVDR) of Medical Device Software
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the requirements under Regulation (EU) 2017/745.