Clinical Evaluation Report (CER) Under MDR: Comprehensive Guide and Common Deficiencies
1. Introduction
Every medical device marketed in the EU must undergo a clinical evaluation as part of its regulatory approval under the Medical Device Regulation (MDR, EU 2017/745). This process is essential to demonstrate that the device achieves its intended purpose safely and effectively. The findings are documented in a Clinical Evaluation Report (CER), which is a pivotal component of the device’s technical documentation.
In fact, MDR Article 61 explicitly mandates that each device’s clinical evaluation be documented in a CER as evidence of conformity. Compared to the previous Directives, the MDR places a much greater emphasis on clinical evaluation. Notified Bodies (NBs) scrutinize CERs rigorously, so manufacturers must ensure their clinical evaluation process is thorough, up-to-date, and compliant.
A key concept in MDR clinical evaluations is the “state of the art.” Manufacturers are expected to compare their device’s performance and safety to the current standards of therapy and technology. MDR and guidance documents from the Medical Device Coordination Group (MDCG) repeatedly stress that benefit-risk assessments for a device must be made in light of the generally acknowledged state of the art in medicine.
In this blog post, we will walk through the clinical evaluation process under MDR – from understanding state-of-the-art requirements to preparing the Clinical Evaluation Plan and Report – and highlight common pitfalls (with real-world examples) identified by NBs. This comprehensive guide is structured to provide clear, actionable insights for manufacturers preparing or updating a CER.
2. Understanding the State of the Art
What is “state of the art”? In the context of clinical evaluation, state of the art (SOTA) refers to the current accepted best practices and technologies available for a given medical condition or treatment. It does not necessarily mean the newest or most high-tech solution, but rather what is generally acknowledged as good practice today.
Notably, the MDR uses the term “state of the art” numerous times (over a dozen) without formally defining it.
Instead, regulators point to internationally accepted definitions – for example, the IMDRF defines state of the art as the “developed stage of current technical capability and/or accepted clinical practice… based on the relevant consolidated findings of science, technology, and experience.”
In simpler terms, state of the art encompasses the current standard of care, prevailing treatment guidelines, and the benchmark to which new devices will be compared.
MDCG guidance on state of the art:
MDCG documents (such as MDCG 2020-6) echo this concept, emphasizing that “state-of-the-art embodies what is currently and generally accepted as good practice in technology and medicine”.
Manufacturers should be careful not to confuse “state of the art” with cutting-edge innovation – a device can be state-of-the-art without being the most advanced on the market, as long as it aligns with what is widely considered effective and safe practice.
MDCG guidance also sometimes uses the phrase “generally acknowledged state of the art” to underline this point.
Regulatory requirements for state of the art:
Under the MDR, demonstrating compliance with safety and performance requirements inherently involves comparing the device’s benefits and risks against the current state of the art. For instance, Annex I (General Safety and Performance Requirements) requires that the device’s risks be acceptable “when weighed against the benefits to the patient and considering the state of the art.” Likewise, Annex XIV on clinical evaluation planning calls for “parameters to be used to determine, based on the state of the art in medicine, the acceptability of the benefit-risk ratio for the various indications and the device’s intended purpose.”
In practice, this means a clinical evaluation must include an up-to-date literature review of available alternative treatments or diagnostics and how the new device measures up. The MDR explicitly expects manufacturers to consider available alternative treatment options as part of confirming that a device’s benefit/risk profile is favorable.
Why keeping SOTA up-to-date is essential:
The state of the art is a moving target – new studies, technologies, or clinical guidelines can shift the standard of care over time. Manufacturers must maintain an up-to-date understanding of the medical field related to their device. An outdated view of the state of the art can lead to an inaccurate benefit-risk assessment and regulatory non-compliance. For example, if a new therapy becomes standard care and significantly improves patient outcomes, a device offering inferior results might no longer have an acceptable benefit-risk profile. Regulators and NBs will expect the CER’s state-of-the-art section to reflect the latest clinical knowledge and standards. It should include a review of current treatment methods, relevant medical guidelines, and any known gaps or unmet needs in the field.
Maintaining a current state-of-the-art analysis is also crucial for post-market surveillance – as new information (e.g. adverse event trends or literature) emerges, the clinical evaluation should be updated to ensure the device still meets safety and performance benchmarks.
In summary, “state of the art” in MDR is about knowing the current benchmark for clinical care and continuously measuring your device against that benchmark. A thorough, systematic state-of-the-art review lays the foundation for a compliant clinical evaluation, influencing everything from how you design clinical studies to how you justify your device’s risks and benefits.
3. The Clinical Evaluation Plan (CEP)
What is a Clinical Evaluation Plan? The Clinical Evaluation Plan (CEP) is a document that defines the strategy and scope of the clinical evaluation for a medical device. It is essentially the roadmap for how you will gather and assess clinical data to demonstrate your device’s safety and performance. Under the MDR, establishing and maintaining a CEP is a legal requirement (per Annex XIV, Part A). Manufacturers must create a detailed plan covering the objectives, data sources, and methods for the clinical evaluation.
In other words, the CEP describes what clinical evidence will be collected or consulted, how it will be assessed, and why those approaches are appropriate. Having a well-structured CEP is not just best practice, it’s explicitly required: “Manufacturers are required to document a clinical evaluation plan to meet the requirements of MDR Annex XIV Section 1a.”
Notified Body auditors often ask to see the CEP as proof that the clinical evaluation was properly planned from the outset – in fact, absence of a proper CEP has been a common MDR finding (more on that in Section 5).
Key components of a CEP:
MDR Annex XIV (Part A, Section 1) lays out elements that must be included in the Clinical Evaluation Plan. Below are the critical components to cover in your CEP (and what regulators expect for each):
Scope and objectives: Clearly state the intended purpose of the device, its intended target patient groups, and any specific indications or contraindications. The CEP should include a “clear specification of intended target groups with clear indications and contraindications.” This narrows the clinical evaluation to the relevant patient populations and uses of the device. Broad or vague intended use statements are not acceptable under MDR – be specific about the medical condition, disease stage/severity, and patient population your device is meant for. These defined indications will drive the rest of the evaluation (e.g. what literature to search, which outcomes matter). The CEP should ensure the device’s label and IFU (Instructions for Use) align with these specifications.
Regulatory requirements mapping: Identify the relevant General Safety and Performance Requirements (GSPRs) from Annex I of the MDR that will be addressed through clinical data. Annex XIV requires an “identification of the relevant GSPRs that require support from clinical data”. In practice, this means analyzing which safety and performance requirements (e.g. clinical performance, clinical safety, benefit-risk) need clinical evidence, then planning to collect evidence for each. For devices transitioning from MDD to MDR, this often involves a gap analysis: ensuring that any new or stricter requirements in the GSPRs (versus the old Essential Requirements) are covered by your clinical evidence. Listing the GSPRs in the CEP helps show the NB that you’ve proactively planned to meet each relevant requirement via the clinical evaluation.
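As a rough illustration of what such a GSPR-to-evidence mapping might look like in working form, here is a hypothetical sketch; the GSPR descriptions and evidence entries are invented examples, not a prescribed format or official Annex I wording:

```python
# Hypothetical sketch of a GSPR-to-clinical-evidence mapping, as might be
# tabulated in a CEP. Entries are illustrative examples only.
gspr_map = {
    "GSPR 1 (performance as intended)": [
        "clinical investigation XYZ-01",
        "literature review, CER Section 9.2",
    ],
    "GSPR 8 (risks acceptable vs. benefits)": [
        "benefit-risk analysis, CER Section 10",
    ],
    "GSPR 6 (device lifetime)": [],  # gap: no clinical evidence planned yet
}

# The mapping doubles as a gap analysis: requirements with no planned
# evidence source stand out immediately.
gaps = [gspr for gspr, evidence in gspr_map.items() if not evidence]
print("GSPRs lacking planned clinical evidence:", gaps)
```

Presented as a table in the CEP itself, the same structure shows the NB at a glance which requirement is supported by which data source, and where further evidence is still needed.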
Clinical background and state of the art: Provide a concise description of the medical condition and current treatment landscape (state of the art). This sets the context for your device’s clinical benefits and risks. According to MDCG guidance, the CEP should include identification of the standard of care and alternative therapies available. This context will later inform what level of benefit-risk is acceptable. For example, if the state of the art for a condition has a certain success rate or safety profile, your device’s data will need to be assessed against those benchmarks. By including state-of-the-art considerations in the CEP, you ensure your evaluation criteria (like success thresholds or key outcomes) are grounded in reality. Tip: Use the PICO framework (Patient, Intervention, Comparator, Outcome) when planning literature searches for state of the art – this helps define the question your literature review will answer (e.g. how does the current standard of care perform in terms of outcomes X, Y, Z).
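To make the PICO tip concrete, the following sketch assembles a boolean literature-search string from PICO elements; the clinical condition and all search terms are purely hypothetical placeholders:

```python
# Minimal sketch: building a boolean search string from PICO elements.
# The condition and synonyms below are hypothetical placeholders.
pico = {
    "patient":      ["knee osteoarthritis", "degenerative knee"],
    "intervention": ["total knee arthroplasty"],
    "comparator":   ["conservative treatment", "physiotherapy"],
    "outcome":      ["pain score", "revision rate"],
}

def build_query(pico: dict) -> str:
    """OR together synonyms within each PICO element, AND across elements."""
    groups = []
    for terms in pico.values():
        groups.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(groups)

print(build_query(pico))
```

Documenting the search string in this structured way in the CEP makes the later literature review reproducible, which is exactly what NBs check for.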
Intended clinical benefits and outcomes: Define the clinical benefits your device is supposed to deliver and the clinical outcome measures that will be used to demonstrate those benefits. MDR requires a “detailed description of intended clinical benefits to patients with relevant and specified clinical outcome parameters” to be included in the CEP. For instance, if the device is a surgical tool, a clinical benefit might be reduced operative time or improved healing, measured by a specific outcome (like % reduction in procedure time or wound healing scores). It’s important to pre-specify how you will measure success. Regulators expect manufacturers to set acceptance criteria for these outcomes upfront – essentially, what constitutes an acceptable result that demonstrates the device’s benefit/risk is positive. These acceptance criteria should be based on state-of-the-art data or clinical guidelines. For example, if current therapy has a 90% success rate, you might set a performance goal that your device should meet or exceed that rate. A common NB finding is that manufacturers fail to define such criteria ahead of time, which makes the CER’s conclusions seem arbitrary. Thus, include in your CEP the target performance/safety levels (or ranges) that will signify success for your device’s claims.
Methods for clinical data collection and appraisal: Describe how you will gather data (e.g. literature search, clinical studies, databases) and how you will appraise and analyze that data. The CEP should outline the systematic literature review strategy: databases to be searched, search terms (ideally using structured methodologies like PICO or MOOSE), inclusion/exclusion criteria for selecting relevant studies, and the approach for evaluating the quality of each data source. MDR (Annex XIV) expects the manufacturer to specify “the methods to be used for the examination of qualitative and quantitative aspects of clinical safety and performance” – in other words, your criteria for judging the weight and validity of each piece of evidence. This could involve predefined appraisal checklists or levels of evidence. By planning this in advance, you commit to an unbiased, reproducible process (and NBs do check if the literature search and analysis in the CER was done according to a pre-specified method). Don’t forget to plan for inclusion of unfavorable data as well – MDR insists that all relevant data, positive or negative, be accounted for.
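To illustrate what a pre-specified, reproducible selection process can look like, here is a minimal sketch of rule-based screening with a documented reason for every decision; the field names and thresholds are illustrative assumptions, not MDR-mandated values:

```python
# Minimal sketch: applying pre-specified inclusion/exclusion criteria to
# literature search results. Fields and thresholds are illustrative only.
RECORDS = [
    {"id": 1, "population": "adult",     "device_relevant": True,  "n": 120},
    {"id": 2, "population": "pediatric", "device_relevant": True,  "n": 40},
    {"id": 3, "population": "adult",     "device_relevant": False, "n": 200},
]

def screen(record, min_n=50):
    """Return (included, reason) so every exclusion is documented."""
    if not record["device_relevant"]:
        return False, "device/indication not relevant"
    if record["population"] != "adult":
        return False, "population outside intended target group"
    if record["n"] < min_n:
        return False, f"sample size below pre-specified minimum ({min_n})"
    return True, "meets all inclusion criteria"

for r in RECORDS:
    included, reason = screen(r)
    print(r["id"], included, reason)
```

Recording a reason for every excluded record mirrors what NBs look for: evidence that studies were screened against criteria fixed in the CEP, not silently dropped after the fact.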
Equivalence strategy (if applicable): If you intend to use data from an equivalent device to support your device (instead of or in addition to your own clinical data), the CEP must detail this approach. This includes identifying the candidate equivalent device(s) and laying out how you will demonstrate equivalence (covering all three aspects: technical, biological, and clinical characteristics – see Section 4). The plan should specify what evidence will be provided to show the devices are comparable (e.g. side-by-side tables of specifications) and whether you have access to the necessary information for the equivalent device. Given the MDR’s stringent equivalence criteria, failing to thoroughly plan an equivalence justification is risky. If it’s a competitor’s device, note that for high-risk devices you will need a contract to obtain their full technical data – the CEP should acknowledge if that’s in place or if another strategy is needed. Often, if true equivalence is not feasible, the CEP might pivot to planning a new clinical study instead. Equivalence is a valid pathway but only with a robust plan.
Need for clinical investigations: Based on the current evidence and risk class of the device, determine if new clinical studies are required. The CEP should include a clinical development plan covering any Clinical Investigations you plan to conduct pre-market (or even post-market). MDR Annex XIV specifically mentions that the CEP should cover “the rationale for the appropriateness of the clinical evaluation, including whether and what type of clinical investigations are needed” (paraphrasing indents 6-7 of Section 1a). For example, if your device is high-risk (Class III or implantable) and no acceptable equivalence route exists, a pre-market clinical investigation will be needed per Article 61(4) – the CEP should then reference a separate Clinical Investigation Plan (CIP) or at least outline the study design. If no new study is planned, the CEP should justify why existing data are sufficient (e.g. device is a well-established technology with ample literature). Regulators appreciate seeing this reasoning explicitly. ISO 14155 (the standard for GCP in medical device trials) and MDCG 2020-6 (guidance on sufficient clinical evidence) are key resources when considering investigations: for instance, MDCG 2020-6 provides a decision framework for when additional clinical evidence is required for legacy or well-known devices. We’ll discuss investigations more in Section 4, but the CEP is where you commit to if, when, and how you’ll run clinical studies.
Post-market clinical follow-up (PMCF) plan link: The CEP should not stop at the point of CE marking – it should also outline how you will continue to gather clinical data after market launch. MDR Annex XIV Part A requires that the clinical evaluation plan include or reference a PMCF plan (Annex XIV Part B) or a justification if PMCF is not deemed necessary. In many cases, especially higher-risk devices, some questions might remain at the time of approval (e.g. long-term performance, rare complications) and the PMCF plan is designed to address those. Make sure your CEP indicates how PMCF results will feed into future updates of the CER. This shows a lifecycle approach: clinical evaluation is continuous and will be regularly revisited with post-market data.
Structuring the CEP:
A well-structured CEP makes it easier for everyone (engineers, clinical experts, regulators, NBs) to understand the game plan. There is no one “correct” format, but a logical flow could be: introduction (device description, regulatory status), scope (intended use/indications and clinical claims), applicable MDR requirements (relevant GSPRs), state of the art summary, clinical evaluation questions/objectives, data identification (literature search strategy, databases, date ranges, etc.), data appraisal and analysis methodology, equivalence strategy (if used), plans for clinical investigation (with synopsis of study design or reference to CIP), plans for PMCF, and conclusion. Bullet points and tables (e.g. a table mapping GSPRs to sources of clinical evidence, or a table of equivalence criteria) can be very effective in a CEP for clarity.
Remember that the CEP is a living document. It should be updated as needed – for example, if during the evaluation you discover new risks that require expanding the literature search, or if you decide to add an indication, the plan should be revised. Under MDR, clinical evaluation is not a one-time task but a continual process, so maintaining an up-to-date CEP ensures that any new clinical evidence or changes in device usage are accounted for systematically.
Regulatory tip: Including references to authoritative guidance in your CEP (such as MEDDEV 2.7/1 rev.4 for literature search methods, or ISO 14155 for clinical study design) can signal to the NB that you are following recognized best practices. For example, MEDDEV 2.7/1 rev.4’s Appendix A7 provides helpful details on defining clinical benefits and outcome measures, which align with MDR requirements – you might cite this in the CEP when describing how you chose your endpoints. Just ensure that wherever MEDDEV (which was written for the old directives) is used, you cross-check it against MDR requirements and newer MDCG guidance.
4. The Clinical Evaluation Report (CER)
What is a Clinical Evaluation Report?
The Clinical Evaluation Report (CER) is the comprehensive document where you compile all the findings from the clinical evaluation and make the case that your device is safe and performs as intended. In essence, the CER is the output of executing the Clinical Evaluation Plan. It includes the data you gathered (e.g. literature, clinical study results, post-market data) and your analysis of those data, leading to conclusions about the device’s clinical safety and performance. The CER is a crucial part of the technical documentation for MDR compliance and is required for devices of all classes (Class I to III). Even low-risk devices must have a CER, though the depth of data may differ. Regulators view the CER as evidence that the manufacturer has systematically assessed all relevant clinical information and confirmed that the device meets the applicable GSPRs.
According to one definition, an effective CER “describes a structured appraisal and analysis of all available clinical evidence to assess the safety and performance of a medical device.”
It’s essentially a detailed scientific report. While the clinical evaluation is an ongoing process over the device’s life, the CER is a snapshot of that process at a given time (typically prepared for initial CE-marking and updated periodically or when new information dictates).
Regulatory significance: The CER directly supports the declaration of conformity for the device. Per MDR Annex XIV, “the results of the clinical evaluation shall be documented in a Clinical Evaluation Report which shall support the assessment of the conformity of the device.” Notified Bodies use the CER as a primary resource to decide whether a manufacturer has enough clinical evidence for the device’s intended use. If the CER is inadequate or shows gaps in evidence, the NB will issue nonconformities – which can delay or prevent certification. Therefore, producing a high-quality CER is not just an academic exercise; it is central to obtaining and maintaining market approval.
What information must be included in a CER:
Unlike the CEP, which is about planning, the CER is about execution and results. While MDR Annex XIV doesn’t spell out an exact table of contents, it implicitly requires the CER to cover all the steps and findings of the clinical evaluation. Based on MDR, MEDDEV guidance, and NB expectations, a typical CER should include:
Device description and context: A brief description of the device, its intended purpose, classification, how it works, and what it’s used for. Include identifying information (model numbers, etc.) and a summary of its regulatory status (new device or legacy device with prior approvals). This section sets the stage for the reader.
Scope of the CER: State which device and indications are covered by this evaluation. If the CER is covering a family of devices or variants, clarify that. Also mention the date of the literature search/data cutoff so it’s clear how current the evidence is.
Clinical Evaluation Plan summary: It’s wise to summarize the key points of the CEP in the CER – such as the clinical questions you set out to answer, the criteria for data inclusion, and whether equivalence or new studies were part of the strategy. You can even reference the CEP document. This assures the reader that the evaluation was planned and not retrospective cherry-picking. For example, if you pre-defined an objective to show non-inferiority to standard of care, state that in the CER.
State of the art: A dedicated section that provides the clinical background. This includes the nature of the condition the device addresses, current treatment methods, known risks/benefits of those alternatives, and any medical guidelines or consensus documents. It essentially answers: “What is the gold-standard or usual care for this medical issue, and what are its limitations?” This section should draw on literature and data (cited) to establish benchmarks. Importantly, do not just stop at a general background – make sure to highlight quantitative benchmarks (e.g. “Standard treatment X has an 80% success rate and 5% complication rate.”). NBs have noted that sometimes the state-of-the-art section is treated as a formality and not used in later analysis. A strong CER will later compare the device’s clinical data to these state-of-the-art benchmarks in the discussion and conclusions.
Clinical data (literature and other sources): This is typically the largest section. It includes the results of your systematic literature search and any other data sources:
- Literature review results: Describe the search strategy (databases, keywords, time frame) and the selection process (how many articles found, how many included/excluded and why). Present the relevant clinical studies or reports on the device (or equivalent device, if applicable). For each included study or data set, provide a summary of the methodology and key outcomes. Organize this logically, e.g. by outcome or by study type. It’s often helpful to tabulate the evidence. Also include any unfavorable or contradictory evidence (regulators want to see that you didn’t ignore negative results). If the device is new and no direct literature exists, you’ll be focusing on equivalent device literature or possibly analogous device data.
- Clinical investigation data: If you performed clinical studies on the device, summarize their protocol (or reference the full clinical investigation report) and present the results and statistical analyses. Ensure to cover safety outcomes (adverse events) and performance outcomes.
- Post-market data: For devices already on the market (legacy devices or those with some field experience), include an analysis of post-market surveillance (PMS) data. This could be complaint data, vigilance/adverse event reports, registry data, published real-world evidence, etc. MDR requires integrating post-market clinical data into the evaluation. For example, if the device was sold under the MDD, summarize the PMS and any Post-Market Clinical Follow-up (PMCF) results available. Even for new devices, you might include any usability studies or observational data from pilot programs here.
Appraisal of data and analysis: After listing out the data, the CER must appraise it – essentially, assess the quality and relevance of each data set – and then analyze what it all means for the device. An appraisal subsection might discuss the level of evidence (e.g. randomized trial vs. case series), bias or limitations in the data, and how confident we can be in the results. For example, you might use a scoring system or grading (some manufacturers use adaptations of academic evidence grading to show they critically evaluated each study). Then, the analysis part synthesizes the data: Does the totality of clinical evidence demonstrate the device’s performance claims? Is the safety profile acceptable compared to alternatives? This is where equivalence data would be explicitly brought in, if used: you must show that the equivalent device’s data are relevant and applicable to your device (because you have sufficiently proven the equivalence). According to MDR, you must “appraise all relevant clinical data by evaluating their suitability for establishing the safety and performance of the device”. MDCG 2020-13 (the Clinical Evaluation Assessment Report template used by NBs) expects to see a clear, unbiased appraisal – so consider structuring this section clearly (perhaps with subheadings for “Safety Data Appraisal” and “Performance Data Appraisal”).
Benefit-Risk analysis: This critical section explicitly weighs the device’s benefits against its risks, in light of the state of the art. Summarize the proven clinical benefits (with supporting data) and the known risks/side effects (with rates or frequencies observed). MDR requires that the CER include conclusions about the benefit-risk profile. The analysis should discuss whether the benefits outweigh the risks for the intended patient population and indications. If there are different indications or sub-populations, do a benefit-risk assessment for each. NBs often expect to see a table or clear narrative that, for each key benefit, lists relevant supporting data, and for each risk, notes its severity/frequency, culminating in a reasoned conclusion that overall the device achieves a positive benefit-risk balance. This section should also compare the device’s outcomes to the state-of-the-art: for instance, “Device A reduced pain scores by 50% whereas the standard treatment typically achieves ~30%; complication rates were similar to standard of care.” This explicitly addresses whether the device is on par with, or better than, existing options (or if not, why it’s still justified). If the device is essentially equivalent in performance to existing devices, you might justify it by other considerations (e.g. if it’s cheaper or easier to use – though those are usually not primary clinical considerations for regulators, it’s mainly about clinical performance and safety). Also include here any risk mitigations or warnings that will be communicated (linking to labeling if needed) to ensure risks are as low as possible.
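The benchmark comparison described above ultimately reduces to simple arithmetic against pre-specified acceptance criteria. The sketch below reuses the hypothetical pain-reduction figures from the text; the complication-rate margin is an assumed acceptance criterion for illustration, not a regulatory value:

```python
# Illustrative benefit-risk check against state-of-the-art benchmarks,
# using the hypothetical figures from the text (50% vs ~30% pain reduction).
sota   = {"pain_reduction_pct": 30.0, "complication_rate_pct": 5.0}
device = {"pain_reduction_pct": 50.0, "complication_rate_pct": 5.2}

def meets_benchmarks(device, sota, complication_margin_pct=1.0):
    """Benefit must meet or exceed SOTA; complications may not exceed SOTA
    by more than a pre-specified margin (an assumed acceptance criterion)."""
    benefit_ok = device["pain_reduction_pct"] >= sota["pain_reduction_pct"]
    safety_ok = (device["complication_rate_pct"]
                 <= sota["complication_rate_pct"] + complication_margin_pct)
    return benefit_ok and safety_ok

print(meets_benchmarks(device, sota))  # → True
```

The point is not the arithmetic itself but that the thresholds are fixed in the CEP before the data are analyzed, so the CER’s positive benefit-risk conclusion follows from pre-specified criteria rather than post-hoc judgment.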
Conclusions: Finally, the CER should end with a conclusion section that clearly states whether the clinical evidence is sufficient to declare conformity with relevant MDR requirements, and summarize how the device’s safety and performance have been established. This should tie back to the GSPRs you listed in the CEP. It’s good practice to explicitly mention that the device meets the criteria of Annex I based on the clinical evidence presented. For example, you might conclude that the device achieves its intended clinical benefits, that the performance objectives set out were met, and that any risks are acceptable and comparable to state-of-the-art alternatives. If there are any uncertainties or gaps, mention how they will be addressed (often through PMCF). Also, confirm that a PMCF plan is in place if required, to collect longer-term or additional data. Essentially, this section is your final argument that the benefits outweigh the risks and the device is fit for the market. Make it succinct and strong. NBs will read this carefully to see if you’ve ticked all the boxes.
Paths for clinical evaluation:
Now, within a CER, the type of clinical evidence used can come via different pathways. MDR recognizes a few paths to demonstrating clinical evidence for a device:
Equivalence pathway: This path involves demonstrating that your device is equivalent to another device that already has clinical data (and often, already CE marked). If equivalence is established, you can leverage the existing clinical data of that similar device to support your own. The MDR allows using equivalent device data, but the bar for equivalence is high. You must show equivalence in technical characteristics, biological characteristics, and clinical characteristics (per MDR Annex XIV, Part A, 3). MDCG 2020-5 guidance “Clinical Evaluation – Equivalence” goes into detail on this, highlighting differences from the older MEDDEV criteria. Under MDR, any differences between the devices must not be clinically significant, and you need to justify equivalence for each of the three aspects. For example, devices should be of similar design and use, made of similar materials (especially if in contact with the body), and have the same clinical purpose.
One major challenge of the equivalence route is access to data. MDR Article 61(5) and MDCG guidance state that if you are claiming equivalence to a device not owned by your company (i.e. a competitor’s device), and your device is Class III or an implant, you must have a contract in place to obtain the competitor device’s full technical documentation and clinical data. This is rarely feasible in practice – most companies will not share proprietary data. As a result, demonstrating equivalence to a competitor’s high-risk device is nearly impossible unless there is some agreement (e.g. licensed technology or a partnership). For lower-risk devices, you may use literature on competitor devices without full tech file access, but you still need enough publicly available information to justify equivalence on all three characteristics. Because of these hurdles, many manufacturers opting for equivalence use their own predicate device (e.g. an earlier generation product) as the equivalent device, since they have full access to its data.
Requirements and challenges: If you pursue equivalence, be prepared to provide a thorough equivalence demonstration in your CER. This usually includes a side-by-side comparison table of your device vs. the equivalent device, covering design, materials, principles of operation, indications, performance specifications, etc., with justifications that any differences do not adversely affect safety or performance. MDCG 2020-5 provides a template of sorts for how to structure such comparisons. All equivalence criteria from both MDR and MEDDEV 2.7/1 rev.4 need to be met. A common pitfall is to claim equivalence but provide insufficient evidence or reasoning – NBs will reject such attempts. For example, claiming equivalence to a device that has a different intended use or a different material without a strong justification would likely fail. Another challenge is that if the equivalent device is a competitor product, the NB might question how you obtained certain detailed information (lack of evidence could be a showstopper).
Equivalence ≠ skipping clinical evaluation: It’s important to note (and MDCG 2020-5 emphasizes) that using an equivalence strategy does not eliminate the need to perform a full clinical evaluation of your device. Equivalence just means you are allowed to incorporate the other device’s clinical data into your analysis. You still must critically evaluate that data as if it were your own. You also need to address any gaps – for example, if some risks are specific to your device (maybe due to a design difference) that weren’t present in the equivalent device’s data, you may need additional evidence or testing for those. In many cases, Post-Market Clinical Follow-up (PMCF) studies are expected when equivalence is used, especially for high-risk devices. MDR Article 61(4) essentially mandates that if you didn’t do your own pre-market clinical study for a new Class III or implant device (because you used equivalence), you must conduct a PMCF study after CE mark to further confirm safety and performance.
In summary, the equivalence pathway can be a viable route to gather clinical evidence without duplicating clinical trials, but it requires meticulous documentation and often additional post-market commitments. Manufacturers should follow MDCG 2020-5 guidance closely and be prepared for intense NB scrutiny on whether equivalence is truly justified.
Clinical investigations (clinical trials): This is the most direct pathway: generating new clinical data through a clinical investigation (or more than one) on your device. Under MDR, conducting a clinical investigation is expected for many new devices, particularly those in higher risk classes. Article 61(4) of MDR states that for Class III and implantable devices, a clinical investigation shall be performed unless the device falls under certain exceptions. Those exceptions (in Article 61(6)) mainly cover devices that have been CE marked before (legacy devices) or that can claim equivalence to such devices, or are well-established technologies (WET) – we’ll discuss these in “Exceptions” below. For novel Class III/implant devices, you should plan a clinical study to collect data on safety and clinical performance, because literature alone is unlikely to suffice. Similarly, for innovative lower-risk devices where no relevant equivalent data exists, a targeted clinical study may be needed (even if not strictly required by 61(4)).
When and why a clinical investigation is needed: Essentially, if you cannot fully satisfy the MDR’s requirement for “sufficient clinical evidence” using existing data, you need to do a clinical investigation. NBs will look for clinical investigations especially when a device involves new technology, new clinical claims, or high risk to patients. For example, a new implantable device for which only preclinical (lab and animal) testing exists must undergo a human clinical study to gather data on safety (e.g. complication rates) and performance (clinical outcomes). The purpose is to generate device-specific evidence of how it works in the intended patient population. Even if not mandated by class, any time there is a significant clinical question unanswered by literature (like “does the device actually improve outcome X in patients?”), a clinical investigation is the most robust way to answer it.
Designing the clinical investigation: MDR has its own requirements for how clinical investigations are to be conducted (see MDR Article 62-80 and Annex XV for detailed provisions). Additionally, ISO 14155:2020 is the internationally recognized standard for GCP (Good Clinical Practice) in device trials, and compliance with it is effectively expected. When planning a clinical study, manufacturers should develop a Clinical Investigation Plan (CIP) that meets Annex XV and ISO 14155 guidelines. Key elements include:
- Study design – clearly define if it’s randomized, controlled, blinded, etc., or an observational study, and why that design is appropriate.
- Study endpoints – what clinical outcomes will be measured to demonstrate performance/benefit, and what safety endpoints will be monitored (e.g. adverse event rates). These should align with the device’s intended clinical benefits and the state of the art (for example, use endpoints common in the literature for that medical field).
- Patient population – inclusion/exclusion criteria defining the target population. Ensure it matches the intended use (e.g. if device is intended for adults, don’t only study it in healthy volunteers or a different group). MDR also expects that subjects represent the European population if the device is for EU use. That could mean including European study sites or justifying that data from elsewhere (e.g. a US trial) is applicable to Europe.
- Sample size – how many subjects will be enrolled and the statistical rationale for that number. An underpowered study can be a big problem, so statistical justification is critical.
- Follow-up duration – the length of time patients will be followed to observe outcomes. This should correspond to the device’s risks and expected life; for instance, an implant might need 12-month or longer follow-up to capture healing and any complications, whereas a diagnostic test might only need short-term follow-up.
- Study locations – where the study will be conducted. If outside the EU, you should justify that the clinical practice and population are relevant to the EU context.
- Data analysis plan – statistical methods to be used, success criteria (e.g. hypothesis tests, non-inferiority margins, etc.), and handling of missing data.
- Overall ethical and GCP compliance – statements on following ISO 14155, obtaining ethics committee approvals, informed consent, etc.
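Several of the bullets above (sample size, non-inferiority margins, success criteria) ultimately come down to a statistical power calculation. As a purely illustrative sketch – the rates, alpha, and power below are assumed numbers, not regulatory requirements, and a real CIP needs a biostatistician – here is a normal-approximation sample-size estimate for showing that a device's event rate differs from a literature benchmark:

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def sample_size_one_proportion(p_device, p_benchmark, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sample, two-sided test
    that the device's event rate differs from a fixed benchmark rate.
    Illustrative only; exact or simulation-based methods may be preferred."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)          # power term
    numerator = (z_a * math.sqrt(p_benchmark * (1 - p_benchmark))
                 + z_b * math.sqrt(p_device * (1 - p_device))) ** 2
    return math.ceil(numerator / (p_device - p_benchmark) ** 2)

# Assumed figures: expected device complication rate 2% vs. a 5% benchmark,
# 5% two-sided alpha, 80% power.
n = sample_size_one_proportion(0.02, 0.05)
print(f"Enroll at least {n} evaluable patients (before dropout allowance)")
```

Whatever method is used, the computed figure is a floor: expected dropout must be added on top, and the follow-up-duration bullet determines when each enrolled patient actually counts as evaluable.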
In the CER, you will include the results of any such investigation. But for MDR compliance, the quality of the clinical investigation matters: NBs often check the CIP or a summary of it. In fact, MDCG has been working on templates for CIPs and clinical investigation reports to standardize this. As a manufacturer, demonstrating that your trial was well designed and executed per ISO 14155 gives confidence in the data. MDCG 2020-6 (which deals with clinical evidence for legacy devices) also reminds us that new clinical data might be needed to meet MDR’s “sufficient clinical evidence” standard, and it provides a hierarchy of evidence to consider. If your existing evidence falls short, a new clinical investigation sits at the top of that hierarchy as the most robust way to close the gap.
Using investigation data in the CER: Ensure that the CER includes a summary of the clinical investigation results and a critical appraisal of that data. Highlight how the study outcomes support the device’s safety/performance and clinical claims. If the study had any shortcomings (e.g. a missed endpoint or an adverse event occurred), discuss how that is addressed or mitigated. Remember, even with a clinical study, the MDR expects you to consider if the data proves the device’s benefits in the context of current therapy.
Well-established technologies and other exceptions: The MDR provides some exceptions where extensive new clinical evidence might not be required, primarily for certain devices deemed well-established or for legacy devices that were already on the market under the old directives. Let’s break these down:
Legacy devices: These are devices that were CE marked under the previous directives (MDD/AIMDD) and are now transitioning to MDR. MDR Article 61(6)(a) basically says that if a device has been marketed under the old directives and there are no significant changes in its design or intended use, it may not require a new clinical investigation for MDR – provided that sufficient clinical data exist. However, manufacturers must still perform a clinical evaluation of the device against MDR’s requirements, using the data they have (e.g. years of post-market data, published studies, etc.). MDCG 2020-6 is dedicated to guiding such manufacturers on how to demonstrate sufficient clinical evidence for these legacy devices so they can meet MDR standards without starting from scratch. The key is that the evidence that supported the device under MDD needs to be reassessed: Is it still current? Does it cover all new MDR aspects (like clinical benefits, risk/benefit, etc.)? Often, legacy devices have a lot of post-market data that can be leveraged. The CER for a legacy device should focus on compiling the historical clinical data (pre-market studies, post-market surveillance, any literature) and showing that this body of evidence satisfies MDR’s criteria. If there are gaps (for instance, maybe under MDD the device never had a formal clinical study because it was low-risk, but MDR now expects more), the manufacturer might need to collect some new data or perform a literature search to update the state of the art. In any case, legacy devices aren’t exempt from clinical evaluation; they just might rely more on existing data, and MDCG 2020-6 helps interpret what is “sufficient” in that context.
Well-established technology (WET) devices: The MDR uses the term “well-established technologies” in Article 61(6)(b) as a subset of devices that might be exempt from the requirement of conducting clinical investigations, even if they are Class III or implantable. However, “well-established technology” is not explicitly defined in MDR – it’s generally interpreted to mean devices that have been used in clinical practice for many years with well-known safety and performance characteristics. Examples might include things like surgical sutures, hypodermic needles, or bone screws – devices that aren’t novel or high-risk in the sense that their clinical function is well understood and has been proven over time. According to MDCG 2020-6, the term WET is not strictly defined by MDR, but such devices are expected to have a long history of safe use and possibly a wealth of literature available. If your device qualifies as a WET, you may not need to perform a new clinical study if you can rely on clinical data from similar devices and other sources to demonstrate safety and performance. For instance, a manufacturer of a basic surgical instrument could gather published clinical data on that type of instrument (perhaps from many years of surgeries documented in literature) as evidence.
However, being a well-established technology does not remove the need for a CER or clinical data. You still have to compile the evidence that shows the device conforms. The difference is that regulators acknowledge that for WET devices, a lower level of clinical evidence might be justified – meaning you might not need as large or as new a data set as you would for an innovative device. For example, bench testing and decades of literature could suffice to confirm a certain device’s safety, where a brand new device would require fresh clinical trials. But caution: manufacturers should not self-declare their device as “well-established” without solid reasoning. The NB will expect an explanation of why the device is considered WET (e.g. “Device X has been in clinical use since the 1980s with only incremental changes, and its clinical performance is documented in 50+ publications”). Even then, the clinical evaluation must be done and documented. Often, NBs will scrutinize WET device CERs to ensure that any claim of not doing new studies is backed up by strong existing evidence.
Other special cases: MDR Article 61(10) allows, in exceptional cases for devices other than Class III/implants, that clinical data might not be required at all if the manufacturer can justify that demonstration of conformity based on clinical data is not deemed appropriate (for example, devices based on purely physical principles where bench testing yields all necessary information). However, this is rarely invoked and requires a very solid justification in the CER and risk management file. Also, devices that incorporate medicines or biologics have their own additional evidence requirements (but those are beyond our scope here).
In the CER, if you are using any exception (legacy device using prior data, WET device using literature, etc.), be explicit about it and cite the MDR clause (e.g. “This device qualifies for the exception in Article 61(6)(b) as a well-established technology. Therefore, no new clinical investigation was conducted; instead, the clinical evidence consists of literature and PMS data.”). Then, prove that the evidence is sufficient. MDCG 2020-6 Appendix III provides a “hierarchy of clinical evidence” which can help in arguing sufficiency – e.g. high-quality randomized trials on similar devices might be nearly as good as having data on your own device. Always err on the side of more evidence and more analysis, because even legacy and WET device CERs under MDR have frequently been found lacking if they rely on outdated or inadequate data.
Relevant guidelines for CER preparation:
In addition to MDR and MDCG guidance, manufacturers still often refer to MEDDEV 2.7/1 rev.4 (the clinical evaluation guidance from under the old directives) for practical instruction on how to write a CER. MEDDEV rev.4 provides a recommended outline for a CER and methods for literature review and appraisal. Many of those principles remain valid under MDR (systematic approach, appraisal of data quality, etc.). However, keep in mind MEDDEV has not been updated for MDR, so where there are differences (for example, MEDDEV’s equivalence criteria were slightly looser than MDR’s), MDR requirements prevail. Another useful document is MDCG 2020-13, which is the Clinical Evaluation Assessment Report (CEAR) template used by NBs. This template basically lists all the points an NB auditor will check in your CER. Reviewing MDCG 2020-13 while writing your CER can be very helpful to ensure you didn’t miss anything expected. For instance, the CEAR template asks if the CER considered state-of-the-art, if it addressed each GSPR that needed clinical data, if PMS data were included, etc. Aligning your CER with these points can preempt many NB questions.
To summarize, the CER is where all your planning and data collection come together to demonstrate your device’s compliance from a clinical perspective. Whether through new clinical trials, equivalence to an existing product, or literature on a well-established technology (or most likely a combination of these), the CER must convincingly show that the device is at least as safe and effective as the current state of the art. It should be a stand-alone document that regulators and NBs can read and understand the device’s clinical profile without needing to ask for a lot of additional explanation. Clarity, completeness, and correctness (with evidence to back every claim) are the hallmarks of a good CER.
5. Notified Body Expectations and Common Deficiencies
Notified Bodies have been reviewing MDR Clinical Evaluation Reports for a few years now, and several common deficiencies have emerged. Understanding these can help you avoid the same pitfalls. Below is a list of frequent issues NBs find in CERs, along with explanations and real-world examples of NB feedback:
Absence of a Clinical Evaluation Plan or inadequate CEP: One of the top findings is that manufacturers either did not have a Clinical Evaluation Plan at all, or the plan lacked key elements. NBs expect to see evidence of proper planning (some will ask for the CEP document in submissions). A missing CEP, or a CEP lacking critical details (such as clinical benefits, acceptance criteria, or identification of the risks to be addressed), is considered non-compliant. Real-world example: An NB auditor reviewing a CER might note: “No Clinical Evaluation Plan was provided to demonstrate that the clinical evaluation was planned in accordance with MDR Annex XIV. The submission lacks predefined clinical objectives and criteria for success.” In some cases, the CEP exists but fails to meet MDR requirements – for instance, not specifying the GSPRs or not outlining how data gaps will be managed. To avoid this, always produce a CEP and ensure it is kept up to date and aligned with the points in MDR Annex XIV (Part A). If auditors see that the CER references a robust CEP, it immediately gives them confidence in the process.
Inadequate state-of-the-art analysis: NBs frequently criticize CERs for weaknesses in the state-of-the-art section. Two common issues are: (a) not using a systematic methodology to establish the state of the art, and (b) not leveraging the state-of-the-art information in the actual device analysis. For example, an NB might find that the manufacturer wrote a few general paragraphs about the disease but did not conduct a proper literature review of current treatments or cite recent clinical guidelines – indicating the SOTA section is not robust. Even more frequently, NBs see state-of-the-art sections that are well written, but then the CER’s discussion and conclusions fail to compare the device’s outcomes to SOTA benchmarks. An auditor may comment: “The CER provides background on current therapies but does not utilize this information in the benefit-risk assessment. Please compare the device’s safety and performance to the current standard of care (state of the art) and include this in the conclusions.” Manufacturers should ensure the SOTA is comprehensive (covering standards, guidelines, competitor devices, epidemiology, etc.) and systematically derived (with a cited search strategy). They must also explicitly reference those SOTA findings when evaluating whether the device’s performance is acceptable (for instance: “Our device’s 1% complication rate is below the ~2% rate reported for existing therapies.”).
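When making a quantitative SOTA comparison like that, it helps to report a confidence interval rather than a bare rate – a point estimate below the benchmark can still have an interval that crosses it, which is exactly the kind of nuance NB reviewers probe. A minimal sketch (the counts and the ~2% benchmark are hypothetical, not from any real submission):

```python
from statistics import NormalDist  # Python 3.8+ standard library

def rate_with_ci(events, n, conf=0.95):
    """Observed event rate with a Wilson score confidence interval.
    Illustrative helper for benchmarking a device's complication rate
    against a state-of-the-art figure taken from the literature."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5)
    return p, max(0.0, centre - half), centre + half

# Hypothetical post-market data: 3 complications in 300 treated patients,
# compared against an assumed ~2% benchmark from the SOTA literature.
p, lo, hi = rate_with_ci(3, 300)
print(f"device rate {p:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```

In this made-up example the point estimate (1%) sits below the 2% benchmark, but the upper confidence bound (~2.9%) does not – so the CER should either draw on a larger dataset or temper the comparative claim accordingly.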
Equivalence not properly justified: Many CERs have been flagged for claiming equivalence without meeting MDR’s strict criteria. Common sub-issues include:
- Failing to provide data access or a contract for a competitor device used as the equivalent (for Class III or implants): NBs will flat-out reject an equivalence claim if you cannot demonstrate that you have permission to use the competitor’s data. One NB finding example: “The manufacturer claims equivalence to Device X (Competitor) but has not provided evidence of a contract or access to the technical documentation of Device X. In absence of such access, the equivalence claim is unsubstantiated.”
- Not thoroughly comparing technical/biological differences: If your equivalent device has any differences (and almost all do), NBs expect a detailed justification that these differences are not significant clinically. A superficial statement like “Device A is similar to Device B in design and materials” is not enough – you need specifics (dimensions, materials composition, mechanism of action, etc.) and rationale. A typical NB comment might be: “Equivalence: The CER lacks a comparative table of the subject device and the equivalent device detailing technical, biological, and clinical characteristics. Provide a full comparison and justify how differences (e.g., the subject device’s shaft length and coating) do not affect clinical performance or safety.”
- Depending solely on regulatory clearance elsewhere (e.g., FDA clearance) as proof of equivalence. This is a mistake some make – e.g., including a U.S. FDA 510(k) substantial equivalence chart instead of an MDR-focused equivalence analysis. NBs will note that FDA’s criteria differ from MDR’s; you must re-frame it to MDR terms.
- Using non-CE-marked or non-EU market devices as equivalents: This adds complexity because you’d have to justify the relevance of their data even more. NBs often question these; as NAMSA notes, claiming equivalence to a device not sold in the EU can present “additional challenges.”
Overall, a lot of NB findings boil down to: “Insufficient evidence to support equivalence claim – please conduct a clinical study or provide own-device data.” Manufacturers are learning that if equivalence is shaky, NBs will push back. The safest route when equivalence is questionable is to gather some clinical data on your own device (even if small-scale) to strengthen the evidence.
Literature search deficiencies: Because literature reviews form the backbone of many CERs, NBs pay close attention to how they were conducted. Common deficiencies include:
- No or poorly documented literature search strategy: The CER should include an appendix or description of the databases searched, date ranges, keywords, and results. NBs often find that manufacturers did not document this, making the CER non-transparent. A finding could be: “The method for identifying literature is not provided. A systematic literature search per MEDDEV 2.7/1 rev.4 should be performed and documented, including search terms and selection criteria.” To avoid this, include a clear methodology (some CERs attach the actual search strings and a flow diagram of study selection).
- Not including all device versions/accessories: If your device has multiple sizes or versions, the literature should address each, or you should justify why data on one version covers the others. NBs sometimes cite manufacturers for ignoring certain device variants in the literature search.
- Lack of critical appraisal of the literature: NBs expect not just a summary of each article, but an appraisal of its quality and relevance. If a CER simply lists findings without critiquing study robustness, an NB might say: “The CER does not appraise the quality of the clinical data (e.g., level of evidence, bias). Please include an appraisal per Annex XIV 1(c) and (d).”
- Language bias / missing unfavorable data: If all included papers are, say, positive or from one geography, NBs might question whether the search was truly broad. They also check that unfavorable studies (if any exist) were not omitted. A robust CER should mention and explain any negative or contradictory results.
In short, the literature review should be systematic, reproducible, and unbiased. Using a recognized methodology (PICO definitions, PRISMA-style flow diagrams, etc.) and referencing MEDDEV 2.7/1 rev.4 Section A.5 (on literature search methods) is helpful. One positive trend: more CERs now include a literature search report as part of the CER or technical file, which NBs appreciate because they can follow the trail of evidence.
Poor integration of Post-Market Surveillance/PMCF data: Under MDR, the clinical evaluation is supposed to continuously incorporate post-market findings. NBs have reported that many manufacturers have immature PMS and PMCF processes, and this shows in the CER. Common issues:
- No PMCF plan or report referenced: Especially for devices that were CE marked via the equivalence route, NBs expect to see a PMCF study plan and any interim results. If a CER doesn’t mention PMCF at all, that’s a red flag. An NB might note: “The CER does not discuss post-market clinical follow-up. Given that the device did not undergo a new clinical investigation, a PMCF plan is required (per Article 61(4) for Class III/implantable devices relying on equivalence). Provide the PMCF plan and describe how its results will be used to update the clinical evaluation.”
- Ignoring available post-market data: If the device (or similar devices) has been on the market, the CER should summarize adverse events, complaint rates, etc. NBs have found CERs where manufacturers failed to include known issue data (for example, if there were recalls or FSCA notices related to the device or similar devices, that context needs to be in the CER). As MDCG 2020-6 stresses, clinical evaluation must include relevant post-market data to be sufficient.
- No plan to update CER: NBs want to see that the CER is not a one-and-done. If a CER doesn’t state when it will be reviewed next or what triggers an update, they might ask for that. A good practice is to state, for example, “This CER will be updated at least annually (or biennially depending on class) and whenever new clinical evidence from PMCF or PMS significantly changes the benefit-risk assessment.” Some NBs even ask for the manufacturer’s procedure on clinical evaluation updates.
Essentially, show the NB that you have a robust post-market surveillance system that feeds into clinical evaluation. If you decided not to do PMCF (which is rare for anything but the lowest-risk well-known devices), you need a strong justification documented (Annex XIV Part B allows exceptions if you can argue why PMCF isn’t necessary).
Unclear or unsupported clinical benefit and claims: Auditors often hone in on the claims made in the CER versus what is proven. Deficiencies include:
- Undefined or broad claims: If a manufacturer claims something like “improves patient quality of life” without specifics, NBs will question it. The CER needs concrete, measurable claims (linked to clinical benefits as defined in MDR Article 2(53)). And each claim should be backed by evidence. A finding example: “The clinical benefit ‘improves mobility’ is stated, but no specific outcome measure or acceptance criterion was defined to support this claim. Please define how this benefit is measured (e.g., improvement in walking distance, pain score reduction, etc.) and provide corresponding data.”
- Acceptance criteria not pre-defined: If the CER concludes “device meets performance requirements” but never stated what the target was, NBs are unhappy. They expect that you set an acceptance threshold in the CEP and then showed in the CER that you met it. Without a pre-defined target, any claim of success can seem arbitrary. Auditors have called out CERs for doing post-hoc justifications.
- Claims beyond the evidence: Sometimes marketing creeps into CERs – e.g., claiming the device is “gold standard” or “significantly better than competitors” without solid evidence. NBs will strike this down. Keep CER language scientific and grounded in data. Also ensure you’re not claiming indications that haven’t been studied. If your device was only tested for, say, moderate disease, don’t claim it works for severe cases without data.
A good approach is to list the device’s key claims/benefits in one section and map evidence to each. Also, have a medical writer or clinical expert ensure that any statement in the CER is backed by a citation or data point. NBs love to play “find the evidence for this sentence” – make it easy for them with references and logical flow.
General safety and performance requirements (GSPRs) not fully addressed: Under MDR, the CER should be seen in context of demonstrating conformity to the GSPRs that rely on clinical data. NBs often perform a GSPR check: For each relevant GSPR (like those related to clinical performance, benefit-risk, labeling of residual risks, etc.), is there evidence in the CER addressing it? A common deficiency is a gap in addressing one of these. For instance, GSPR 8 requires evaluating undesirable side-effects and weighing them against benefits – the CER’s benefit-risk section needs to clearly do this. GSPR 14 (for implants) might require specific clinical data on minimizing risks, etc. If a CER is written without explicitly considering the GSPR list, some requirements might be missed. An NB comment could be: “The CER does not explicitly confirm that the benefit-risk ratio is acceptable in accordance with GSPR 1 and 8. Please provide a statement and analysis fulfilling that requirement.” To avoid such issues, manufacturers increasingly include a GSPR compliance matrix in the technical file where they reference sections of the CER (or other docs) for each GSPR. Also ensure the CER’s conclusions explicitly mention conformity with relevant GSPRs.
Editorial and organizational issues: Although content is king, the presentation of the CER can also lead to NB questions if not clear. Some examples:
- Poor organization or missing sections: If information is hard to find or not where expected, NBs might assume it’s missing. Following a familiar structure (like the MEDDEV-recommended one or a logical variant of it) helps the auditor locate what they need. NAMSA recommends structuring the CER clearly per MEDDEV and MDR, which indeed can “minimize potential comments and queries related to not finding the required information.”
- Lack of sign-off by qualified personnel: MDR requires that the clinical evaluation is done by suitably qualified individuals (with knowledge of the device’s clinical field etc., as per Annex XIV). NBs sometimes check if the CER states the qualifications of the evaluators or is signed by a medical expert. If not, they might ask who wrote/reviewed it and what their credentials are. It’s good practice to have a section naming the evaluators (or at least the fact that it was reviewed by clinical experts) and maybe append CVs in the tech file.
- Not referencing MDCG guidance or current standards: If an NB finds that you are unaware of a pertinent MDCG guidance, they may recommend it. For example, MDCG 2020-5 for equivalence or MDCG 2020-6 for sufficient clinical evidence – if you didn’t follow these where relevant, expect questions. An NB might say “Please consider MDCG 2020-5 requirements in your equivalence demonstration” if they see something misaligned. Similarly, if a harmonized standard or clinical guideline exists for your device’s evaluation, not mentioning it is a missed opportunity and could be a question mark (e.g. “did you follow the XYZ guideline for clinical assessment of this device type?”).
Insufficient clinical evidence / failure to demonstrate “sufficient clinical evidence”: Ultimately, the biggest deficiency is when the NB is not convinced that the device has enough clinical evidence to support safety and performance. This could be due to small sample sizes, lack of long-term data for an implant, or ignoring certain risk aspects. MDCG 2020-6 Appendix III provides a hierarchy that NBs consider – they will look at the highest levels of evidence first (like RCTs and well-documented studies). If your evidence is mostly low-level (case reports, expert opinions), you need a strong justification why that is sufficient, or plans to gather more. NBs have been known to issue deficiencies like: “The clinical evidence provided is not sufficient to demonstrate conformity. The patient population in the studies does not fully cover the intended user group (e.g., no data in elderly patients). Provide additional clinical data or broaden the literature search to include XYZ.” Or: “Only 10 patients were followed for 6 months in the clinical investigation, which is not sufficient to assess long-term safety of this implant with an expected 10-year lifetime. Extended follow-up data or a PMCF plan to gather this data is required.” These kinds of comments can be tough, because gathering new data takes time and resources. The best way to avoid them is to proactively identify potential evidence gaps during planning (in the CEP) and address them either through pre-market studies or a robust PMCF strategy that you communicate to the NB.
Practical tips to meet NB expectations:
Use MDCG 2020-13 (the NB CEAR template) as a checklist: This template is essentially the NB’s review form. Go through each of its sections and ensure your CER addresses it. Items include: proper definition of indications, a clear literature search strategy, a summary of clinical development (if any), analysis of each GSPR requiring clinical data, etc.
Reference guidance and standards in your CER: If you followed certain guidance (MDCG or MEDDEV) or standards (ISO 14155, disease-specific clinical guidelines), mention that. It demonstrates awareness of “state of the art” in doing the evaluation itself and can head off questions. E.g., stating “The clinical investigation was designed in accordance with ISO 14155:2020 and Annex XV of MDR”, or “Literature review methodology followed MEDDEV 2.7/1 rev.4 recommendations.”
Have fresh eyes review the CER: Preferably someone with NB or regulatory experience. They can spot ambiguous statements or logical gaps that an NB would likely question. A peer review can catch things like inconsistent data, missed references, or overly optimistic claims.
Provide traceability: Make it easy for the NB to trace each claim to evidence. Use lots of citations in the CER (just as we have cited regulatory sources in this blog). When you say “the device is safe and effective,” back it up with something like, “with no device-related serious adverse events reported in a 100-patient study.” If an NB can verify every key point via a reference you’ve given (and that reference is in your submission package), they’ll have fewer objections.
Learn from NB feedback loops: If one of your CERs (say for another device) got NB comments, apply those lessons to the next. NBs are somewhat aligned on these expectations due to MDCG guidance and exchanges, so common deficiencies are well-known (as listed above).
In conclusion, NBs expect CERs under MDR to be exhaustive, evidence-based, and clearly aligned with regulatory requirements. Manufacturers who approach CER writing as a rigorous scientific and regulatory exercise – rather than a marketing or paperwork formality – tend to fare better. By planning thoroughly (CEP), executing systematically, and self-critiquing the CER against the known NB concerns, you can significantly reduce the chances of receiving costly nonconformities related to clinical evaluation.
6. Summary and Key Takeaways
Conducting a successful clinical evaluation under the MDR is a multifaceted task. Let’s recap the essential steps and best practices:
Start with a strong plan: A Clinical Evaluation Plan (CEP) is the foundation of your clinical evaluation. Define your device’s intended purpose, indications, and clinical claims clearly. Map out how you will gather and assess clinical data, referencing state-of-the-art benchmarks and regulatory requirements. Include specific objectives, acceptance criteria for success, and identify any need for clinical studies or PMCF. A well-thought-out CEP ensures you address all MDR demands methodically. Remember, planning isn’t optional – it’s mandated by MDR and scrutinized by auditors.
Keep the state of the art at the forefront: Always evaluate your device in the context of current medical practice. Maintain an up-to-date state-of-the-art section through systematic literature reviews. Use it to justify your device’s benefit-risk profile – showing that your device performs at least as well as, or better than, existing solutions. Update the state-of-the-art analysis regularly, as new treatments or data emerge, to remain compliant.
Gather robust clinical evidence: Whether through literature, equivalence, or new clinical investigations – collect sufficient and high-quality clinical data to support every claim. If leveraging literature, do it systematically and appraise each source. If using equivalence, ensure all three aspects (technical, biological, clinical) are convincingly demonstrated and be mindful of the strict conditions (like needing access to competitor data for high-risk devices). If generating new data via a clinical study, follow GCP (ISO 14155) and design the study to directly answer safety and performance questions relevant to your device. For well-established or legacy devices, capitalize on existing data but critically assess if it truly meets MDR’s “sufficient clinical evidence” standard.
Compile a thorough Clinical Evaluation Report: Your CER should tell the complete story of your device’s clinical profile. Include device context, state-of-the-art, methods, results, and most importantly a clear analysis that ties it all together. Ensure the CER addresses each relevant MDR requirement:
- Demonstrate how the device fulfills the pertinent safety and performance requirements with clinical evidence.
- Provide a balanced benefit-risk assessment, explicitly weighing benefits against risks and comparing to the state of the art.
- Document any residual risks or uncertainties and how they will be followed up (e.g. via PMCF).
- Verify that all types of data (pre-market, post-market, literature, etc.) have been considered.
- Ensure traceability – link each conclusion back to data. A reader (or auditor) should be able to verify claims via references in the CER.
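The traceability point lends itself to a simple matrix: every claim in the CER maps to one or more evidence sources, and any claim with an empty mapping is a gap to close before submission. The sketch below illustrates the idea; the claims and reference IDs are invented for illustration.

```python
# Sketch only: a minimal claim-to-evidence traceability check.
# Claims and reference IDs below are invented examples.

claims_to_evidence = {
    "No device-related serious adverse events": ["Study-001 (n=100)"],
    "Performance comparable to state of the art": ["Lit-Ref-12", "Lit-Ref-15"],
    "Implant survival at 10 years": [],  # gap: to be addressed via PMCF
}

# Flag any claim that has no supporting evidence reference.
unsupported = [claim for claim, refs in claims_to_evidence.items() if not refs]
for claim in unsupported:
    print(f"Unsupported claim - needs evidence or a PMCF plan: {claim}")
```

Maintained as a living table alongside the CER, such a matrix lets both you and the NB verify every conclusion against a cited source.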
Anticipate Notified Body expectations: Be proactive in avoiding the common CER pitfalls that NBs have identified. Always include a CEP; be explicit and quantitative in your state-of-the-art and benefit-risk discussions; justify equivalence thoroughly if you use it; describe your literature search methods; incorporate post-market data and plans for ongoing evaluation. Conduct an internal audit of your CER against an NB checklist (like MDCG 2020-13) before submission. In short, sweat the details in the CER – clarity and completeness are your friends.
Maintain a lifecycle approach: Clinical evaluation is not a one-time event for the initial CE mark. MDR expects it to be a continuous process. Post-market clinical data (from PMS reports, PMCF studies, registries, user feedback, etc.) should be continually collected and fed back into updated CERs. Set a schedule for periodic CER review (e.g. annually for high risk, or as per your PMS plan) and stick to it. Also, if there’s a significant change in the device or its risk profile, update the clinical evaluation accordingly. This ongoing vigilance helps ensure the device remains safe and effective throughout its market life and keeps you in compliance with MDR’s requirements for continuous oversight.
Document everything and stay current: Good documentation and version control for your CEP, literature searches, data analyses, and CER revisions is vital. Regulatory compliance is as much about providing evidence of your processes as it is about the outcomes. Also, keep abreast of new MDCG guidances, standards, and NB feedback trends. MDR is an evolving regulatory landscape (for instance, new guidance on clinical investigation reports or on specific device types may emerge) – integrating the latest expectations will strengthen your clinical evaluation.
A structured and well-documented clinical evaluation is your key to MDR compliance and a solid understanding of your device’s safety and performance. It’s not just about meeting regulatory requirements—it’s about ensuring your device is backed by strong clinical evidence that supports risk management, product improvement, and patient safety.
To stay ahead, focus on strategic planning, thorough literature analysis, and clear, objective reporting. Treat your Clinical Evaluation Report (CER) as a scientific dossier, not just a regulatory checkbox. When done right, this approach strengthens your submission, reduces regulatory pushback, and helps you maintain market access with confidence.
Want more expert insights on MDR compliance and clinical evaluation?
Subscribe to my newsletter for free, practical guidance.