When training becomes a risk control, evidence rules change
You mark training as a risk control measure in your risk analysis. The reviewer asks: where is the clinical evidence that users can actually apply this training under real conditions? Most technical files have no answer. They assume training is inherently effective. It is not.
Training appears in almost every risk management file I review. It shows up as a mitigation measure for use errors, misuse scenarios, and situations where device design alone cannot fully control the risk.
But here is what happens next. The Notified Body or competent authority reviewer looks at that training measure and asks a simple question: do you have evidence that this training actually works?
Most manufacturers are not prepared for this question. They treat training as a checkbox. They assume that providing instructions, holding a session, or issuing a certificate is enough to claim risk reduction.
It is not. When you declare training as a risk control, you are making a claim about user performance. And claims require evidence.
Training as a Risk Control: What It Actually Means
Under MDR Annex I, manufacturers must implement risk control measures according to a hierarchy. The first priority is to eliminate or reduce risks through safe design and manufacture. The second is protective measures in the device itself or in the manufacturing process. The third is information for safety and, where appropriate, training for users.
Training falls at the bottom of that hierarchy. It is the least reliable form of risk control because it depends entirely on human behavior in unpredictable environments.
When you list training as a mitigation measure, you are stating that after this training, users will perform specific actions correctly, recognize critical situations, and avoid errors that could lead to harm.
That is a performance claim. And under MDR Article 61 and MDCG 2020-6, performance claims require clinical evidence.
The moment you link training to risk reduction in your risk management file, you convert it from an administrative activity into a claimed risk control measure. That claim must be substantiated with evidence showing that the training achieves the intended reduction in real use conditions.
This is where most files fail. The risk analysis says training reduces probability or severity. But the clinical evaluation report does not address whether users trained according to the IFU or training program actually demonstrate competence in practice.
What Evidence Actually Looks Like
Evidence for training effectiveness is not the training material itself. It is not the agenda of a workshop. It is not a certificate of attendance.
Evidence is data showing that after completing the training, users perform the critical tasks correctly in conditions that reflect real clinical use.
This can come from several sources. Formative usability testing during development can show whether users understand instructions and apply them correctly. Summative human factors validation can demonstrate that trained users meet acceptable performance thresholds for critical tasks.
Post-market surveillance data can reveal whether trained users in real settings make the errors your training was supposed to prevent. Complaint analysis, serious incident reports, and field observations all contribute.
If your device requires specialized training beyond what a typical user in that setting would already know, you need data showing that your training program actually bridges that gap.
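To make the idea of a performance threshold concrete, here is a minimal sketch of the kind of pre-defined acceptance check a summative protocol might apply to trained users. Every number, the task, and the threshold are hypothetical; a real protocol would define its criteria and statistics up front with a human factors specialist.

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.645) -> float:
    """One-sided 95% lower confidence bound (Wilson score) on a success proportion."""
    if n == 0:
        return 0.0
    p_hat = successes / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / (1 + z**2 / n)

# Hypothetical summative result: 28 of 30 trained users performed the
# critical task (say, a correct alarm response) without a use error.
successes, n = 28, 30
acceptance_threshold = 0.85  # hypothetical value, pre-defined in the protocol

lower = wilson_lower_bound(successes, n)
print(f"Observed success rate: {successes / n:.2%}")  # ~93%
print(f"95% lower bound:       {lower:.2%}")          # ~82%
print("PASS" if lower >= acceptance_threshold else "FAIL: criterion not met")
```

Note what the toy numbers show: the observed 93% looks comfortable, but with only 30 participants the lower confidence bound falls below the 85% criterion. Small samples can make a training claim look stronger than the data actually supports.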
The typical failure looks like this: the risk management file lists “user training” as a mitigation measure but provides no evidence that the training changes user behavior or reduces error rates. The clinical evaluation report does not reference any usability data, human factors studies, or post-market evidence related to training effectiveness. The manufacturer assumes the training works because they provided it.
I see this pattern constantly. The risk file says training mitigates risk ID 27. The clinical evaluation discusses device performance and clinical outcomes. But nowhere does it address whether users, after training, actually avoid the specific error that risk ID 27 was meant to control.
How Reviewers Assess Training Claims
When a reviewer sees training listed as a risk control, they follow a logical sequence.
First, they check the nature of the risk. Is this a high-severity risk? Is the residual risk still significant even after design controls? If yes, the reliance on training becomes a major concern.
Second, they assess the user population. Are these highly trained specialists in controlled settings, or are these general practitioners, patients, or caregivers in variable environments? The more variable the user and the setting, the less reliable training becomes.
Third, they look for evidence. They go to the clinical evaluation report. They check whether human factors testing included trained users. They look at post-market data to see if the errors the training was supposed to prevent are still occurring.
If the evidence is missing, they issue a deficiency. The deficiency will ask you to either provide evidence that the training is effective, or revise your risk management to acknowledge that the training does not sufficiently reduce the risk.
This is not theoretical. I have worked on files where the entire approval timeline stalled because the manufacturer could not demonstrate that users followed the training instructions in real practice.
When Training Cannot Be a Sole Risk Control
There are situations where training alone is not acceptable as a risk control, regardless of the evidence you provide.
If the potential harm is death or serious irreversible injury, and the only thing preventing that harm is user adherence to training, most Notified Bodies and competent authorities will not accept that approach.
This is especially true when the error is foreseeable and the user is under time pressure, distracted, or fatigued. Training does not reliably prevent errors in those conditions. Design controls, forcing functions, and interlocks do.
MDCG 2020-6 on clinical evaluation makes this clear. When a risk control relies on user behavior, the manufacturer must justify why that behavior can be expected in the intended use environment. If the justification is weak, the risk control is insufficient.
I have reviewed files where manufacturers tried to mitigate a critical alarm misinterpretation risk with a training module. The reviewer rejected it outright and required a design change to make the alarm unambiguous. No amount of training evidence would have changed that outcome.
Training becomes less acceptable as a risk control when the harm is severe, the user is not a specialist, the environment is unpredictable, or the task is performed under stress. In those cases, reviewers expect the risk to be controlled by design, not by hoping the user remembers a training session.
How to Structure Evidence for Training Controls
If your risk analysis includes training as a mitigation measure, the clinical evaluation report must explicitly address it. This means a dedicated section or subsection discussing user competence after training.
Start by defining what success looks like. What specific tasks must the user perform correctly? What errors must they avoid? What threshold of performance is acceptable?
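One way to keep those definitions explicit and traceable is to record them as structured data rather than burying them in prose. A minimal sketch, with entirely hypothetical risk IDs, tasks, and thresholds (RISK-027 below only echoes the risk ID 27 example above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingAcceptanceCriterion:
    """Ties a claimed training control to a measurable user-performance target."""
    risk_id: str             # entry in the risk management file
    critical_task: str       # task the trained user must perform correctly
    error_to_avoid: str      # use error the training is claimed to prevent
    min_success_rate: float  # pre-defined acceptance threshold

criteria = [
    TrainingAcceptanceCriterion(
        risk_id="RISK-027",
        critical_task="Respond to a high-priority alarm within 60 seconds",
        error_to_avoid="Silencing the alarm without assessing the patient",
        min_success_rate=0.90,
    ),
    TrainingAcceptanceCriterion(
        risk_id="RISK-031",
        critical_task="Verify the line connection before starting infusion",
        error_to_avoid="Starting infusion with a misconnected line",
        min_success_rate=0.95,
    ),
]

for c in criteria:
    print(f"{c.risk_id}: '{c.critical_task}' must succeed for "
          f"at least {c.min_success_rate:.0%} of trained users")
```

However you store it, the point is the same: each training claim in the risk file should map to a named task, a named error, and a numeric threshold that the evidence can be tested against.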
Then present the evidence. Describe the usability studies that tested trained users. Provide data on task success rates, time to complete critical steps, and error rates. Reference post-market surveillance findings that show whether trained users are making the errors your training was designed to prevent.
If you conducted training effectiveness studies, include them. If you relied on literature showing that similar training programs for similar devices work, cite it. If you have real-world data from trained users in your post-market surveillance, analyze it.
The goal is to show a logical chain: the risk exists, training is part of the mitigation strategy, evidence demonstrates that trained users perform as expected, and post-market data confirms that the risk remains controlled.
Without that chain, the claim is unsupported.
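That chain is, at bottom, a traceability exercise, and even a rough script can expose the gaps before a reviewer does. The sketch below assumes deliberately simplified stand-ins for the two documents; the structure and all identifiers are hypothetical:

```python
# Hypothetical, simplified extracts: every risk that claims training as a
# control should be matched by effectiveness evidence addressed in the CER.
risk_file_training_controls = {
    "RISK-027": "Alarm-response training module",
    "RISK-031": "Line-connection training module",
}

cer_training_evidence = {
    "RISK-027": ["Summative usability study HF-112", "PMS complaint trend, Q1-Q4"],
    # RISK-031 has no corresponding evidence section in the CER
}

gaps = [
    risk_id
    for risk_id in risk_file_training_controls
    if not cer_training_evidence.get(risk_id)
]

for risk_id in gaps:
    print(f"GAP: {risk_id} claims training as a risk control, "
          f"but the CER presents no effectiveness evidence for it")
```

This is essentially the consistency check a reviewer performs manually, and exactly the kind of gap that triggers a deficiency.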
In practice, the gap looks like this: the clinical evaluation report includes a generic statement that “users will be trained” but provides no analysis of training effectiveness, no usability data involving trained users, and no post-market evidence showing that trained users avoid the predicted errors. That gap is flagged immediately during review.
What This Means for PMCF
If training is a risk control, your PMCF plan must track whether that training continues to work over time.
This means monitoring for errors that the training was supposed to prevent. It means analyzing complaints and incidents to see if trained users are still making mistakes. It means checking whether training programs in the field are being delivered as intended, or whether sites are skipping steps, abbreviating sessions, or substituting their own materials.
I have seen cases where the manufacturer’s training program was excellent, but hospitals were not using it. They had developed their own shortcuts. The risk control was not functioning. The manufacturer only discovered this through a root cause analysis after a serious incident.
Your PMCF activities should include periodic assessments of training delivery and user competence. This can be done through surveys, site audits, incident analysis, or follow-up usability studies with users who have been in practice for months or years after initial training.
If the data shows that training effectiveness is declining, you have a corrective action obligation. Either the training must be improved, or the risk management must be revised to add design-based controls.
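As a sketch of what declining effectiveness can look like in data, consider trending the rate of the specific error the training targets against an action limit pre-defined in the PMCF plan. Every figure below is invented for illustration:

```python
# Hypothetical quarterly PMS data for the error the training was supposed
# to prevent: (error reports, estimated number of uses).
quarterly = {
    "2024-Q1": (2, 10_000),
    "2024-Q2": (3, 11_000),
    "2024-Q3": (7, 10_500),
    "2024-Q4": (9, 10_200),
}
action_limit = 5e-4  # hypothetical action limit from the PMCF plan

for quarter, (errors, uses) in quarterly.items():
    rate = errors / uses
    flag = "  <-- exceeds action limit" if rate > action_limit else ""
    print(f"{quarter}: {rate:.2e} errors per use{flag}")
```

A trend like the second half of that table is the trigger for the corrective action described above: improve the training, or move the control into the design.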
Final Considerations
Training is not inherently ineffective. In many cases, it is a necessary and appropriate part of risk control, especially when combined with strong design controls and appropriate user selection.
But it is not automatic. The moment you rely on it to reduce risk, you take on the burden of proving it works.
This means planning your usability studies to include training. It means structuring your clinical evaluation to address training effectiveness. It means designing your PMCF to track whether training continues to function as intended in the field.
Most importantly, it means being honest in your risk management. If a risk is high and the user population is variable, and you are relying on training alone, expect the reviewer to challenge you. Be prepared with evidence, or be prepared to redesign.
When I review a technical file, one of the first things I check is consistency between the risk management file and the clinical evaluation report. If the risk file says training controls a risk, and the clinical evaluation is silent on training effectiveness, I know the manufacturer is not ready for submission.
Do not let that be your file.
Next in this series, I will address how post-market data should feed back into usability and human factors claims, and what happens when real-world use contradicts your validation results.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex I
– MDCG 2020-6: Guidance on sufficient clinical evidence for legacy devices
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under Regulation (EU) 2017/745.





