Aligning with Last Year's CSA Guidance: 10 Essential Questions for Device Firms
In January, we broke down FDA's new CSA draft guidance. Now we're focusing on the questions you can use to conduct a gap analysis.
In September of last year, the FDA released a draft guidance outlining its recommended approach to software assurance for computers and automated data processing systems used as part of medical device production or the quality system.
Once finalized, it will supplement the “General Principles of Software Validation” guidance and supersede Section 6, which covers the validation of automated process equipment and quality system software.
In the January issue of this newsletter, we broke down this guidance in detail. If you’re a device professional, be sure to watch and read that breakdown first if you haven’t already.
Today, we’d like to revisit this important guidance and identify some of the questions device teams can use to conduct a gap analysis against it.
First, a brief refresher on what the guidance put forth.
Guidance summary
The 2022 draft guidance introduces computer software assurance as a risk-based approach to establishing confidence in the software used in production or quality systems. Again, it’s intended to supplement, and in some cases supersede, the FDA's existing "General Principles of Software Validation" guidance.
More specifically, the guidance applies to software used directly in production or the quality system, and software that supports these systems. It does not address software that is a part of the medical device itself.
Risk-based approach to CSA: The guidance emphasizes a risk-based approach to software assurance, considering the potential impact on device safety and quality if the software fails. The aim is to apply assurance effort proportionate to that risk, following a least-burdensome principle. FDA differentiates between 'high process risk' and 'not high process risk' software, guiding manufacturers to allocate resources and effort accordingly.
Assurance activities and testing: The guidance suggests various assurance activities depending on the risk associated with the software, ranging from unscripted approaches such as error-guessing and exploratory testing to more robust scripted testing. It also encourages manufacturers to use risk-based testing to manage, select, and prioritize those activities.
Documentation and record keeping: The guidance suggests maintaining sufficient documentation and records to demonstrate that software is assessed and performs as intended. This includes detailing the intended use, risk determination, testing conducted, and outcomes. It highlights the increasing role of digital technology and electronic records in documentation, suggesting a move away from manual or paper-based records.
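To make the relationship between risk level, assurance activities, and record-keeping a little more concrete, here’s a minimal sketch in Python. Everything in it — the category labels, the activity lists, and the record fields — is our own illustration drawn from the summary above, not a data model prescribed by FDA.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative mapping of process risk to assurance activities, loosely based on
# the activities named in the draft guidance; a firm's actual selection would
# come from its own risk analysis, not from this table.
ASSURANCE_ACTIVITIES = {
    "high process risk": ["scripted testing"],
    "not high process risk": ["unscripted testing", "error-guessing", "exploratory testing"],
}

@dataclass
class AssuranceRecord:
    """Minimal assurance record: intended use, risk determination, testing
    conducted, and outcome -- the elements the guidance says to capture."""
    software_name: str
    intended_use: str
    risk_determination: str        # e.g. "high process risk"
    activities_performed: List[str]
    outcome: str                   # e.g. "passed", "issues found"
    performed_on: date = field(default_factory=date.today)

def plan_assurance(software_name: str, intended_use: str, risk: str) -> AssuranceRecord:
    """Select assurance activities proportionate to the determined risk level."""
    return AssuranceRecord(
        software_name=software_name,
        intended_use=intended_use,
        risk_determination=risk,
        activities_performed=ASSURANCE_ACTIVITIES[risk],
        outcome="not yet executed",
    )

# Example: a maintenance-scheduling tool judged not high process risk
record = plan_assurance(
    "Maintenance Scheduler",
    "schedule preventive maintenance for production equipment",
    "not high process risk",
)
print(record)
```

The point isn’t the code itself; it’s that the intended use, the risk determination, the activities selected, and the outcome should all be captured and traceable to one another, whatever system you use to record them.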
The guidance includes examples demonstrating how the principles can be applied in practical scenarios, offering insight into how companies might adapt their processes.
As we mentioned in our initial analysis in January, this guidance has some practical implications for device teams. Companies need a thorough understanding of the intended use of their software applications and should categorize them based on the level of risk they pose. They should adopt a flexible approach to software assurance activities, tailoring their efforts to the risk level of each application.
Documentation and record-keeping practices should be updated to align with the guidance, and digital solutions can be leveraged for efficiency and compliance. The guidance implies a need for ongoing monitoring and updating of software assurance practices as technologies and software applications evolve.
Questions for self-assessment
Below, we’ve identified a few questions that device teams can use to run a general gap analysis against this guidance. Talk to us if you’d like to discuss a more tailored assessment and possible remediation engagement.
Understanding of Software Categories:
1. Do we have a clear understanding of which software applications fall under the 'high process risk' and 'not high process risk' categories?
2. How do we determine the risk level of each software application in our production and quality systems?
Differentiating between 'high process risk' and 'not high process risk' software is vital. This classification drives the level of validation and testing needed. For example, software used in the automated manufacturing of a critical medical device component will almost certainly be 'high process risk' due to its direct impact on device safety. Conversely, software used for scheduling equipment maintenance might be 'not high process risk' because its impact is indirect.
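For teams that capture this classification in their own tooling, the decision might look something like the sketch below. The function name and the single yes/no criterion are our own simplification of the failure-impact question the guidance asks; a real determination would rest on a documented risk analysis of the software's intended use.

```python
def determine_process_risk(directly_impacts_device_safety_or_quality: bool) -> str:
    """Classify production or quality system software by process risk.

    A single yes/no flag is a deliberate oversimplification; in practice the
    determination rests on a documented analysis of the software's intended
    use and of what happens if it fails.
    """
    if directly_impacts_device_safety_or_quality:
        return "high process risk"
    return "not high process risk"

# Examples mirroring the scenarios above
print(determine_process_risk(True))   # automation of a critical manufacturing step
print(determine_process_risk(False))  # equipment maintenance scheduling
```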
Taking a Risk-Based Approach: