
Applying AI in the Life Sciences with MasterControl's Matt Lowe

We sat down with Matt Lowe, Chief Strategy Officer at MasterControl, to explore the promising applications — and potential pitfalls — of AI in the life sciences industry.

In a recent episode of the Life Science Rundown podcast, The FDA Group’s Nick Capman spoke with MasterControl’s Chief Strategy Officer, Matt Lowe, about the rapidly evolving role of AI in quality management and manufacturing within the life sciences sector.

MasterControl is on a mission to bring life-changing products to more people sooner. The company offers leading solutions for product quality, helping highly regulated companies ensure quality and compliance in their life sciences operations. Its quality management system is the industry's most established QMS, used by organizations including the FDA, CDC, and ORA.

With 18 years at the company and prior experience in R&D in the medical device industry, Matt brings firsthand experience to the discussion on AI implementation. He sheds light on potential AI applications, from drug discovery to analyzing customer feedback — and discusses a risk-based approach to AI adoption, highlighting the importance of maintaining human oversight to address potential errors and support regulatory compliance efforts.

Apple | Spotify | YouTube | Web + Others


Summary, Key Points, and Practical Takeaways

This interview has been edited for clarity and length.

Nick Capman: From your perspective, what adds to the level of interest in AI for life science companies?

Matt Lowe: Well, I think it's hard not to be at least AI-curious. As much as it's in the news, everybody's talking about AI in any kind of business media that you look at today. How's it going to revolutionize your business? Is it going to replace us as humans? How much easier is life going to be with it? I think everyone is hearing that, and they're trying to process: Number one, where does it even make sense in my business? Where should I use it, and where shouldn't I use it? And probably most importantly, what is my competition doing if I don't use it? How will that impact my ability to stay competitive in the marketplace? I know that at MasterControl, we're looking at all kinds of use cases in our own internal business: How do we use AI? How do we enable our customer base to use AI through the solutions that we provide to the industry?

Can you explain where AI has applications in life sciences?

We're seeing it all across the spectrum, including some very forward-thinking use cases. We're headquartered in Salt Lake City, Utah, and there's a company here called Recursion Pharmaceuticals that utilizes algorithms and AI to generate compounds of interest. So they're using it in the hard science, pure discovery part of the process. And then completely on the opposite end of the spectrum, you've got sentiment analysis once a product is out in the marketplace: looking at what's happening with your product, maybe combing through customer complaints or adverse events, things of that nature. So you go all the way from the cradle end to the grave end of the product lifecycle and everything in between. We're looking at some very specific use cases, and it plays into this idea of where you should use it and where you should not. How should you decide where those use cases lie in your company? In the life sciences industry, we're very accustomed to taking a risk-based approach to really everything we do. And what I'm proposing is: why should it be any different with AI? Should we not take a risk-based approach there?

What are the different pitfalls and considerations when implementing AI?

The thing we see about AI is that when it gets things wrong, it can get them really wrong. So you need a way to mitigate that risk, and probably more importantly, you need to get your arms around what it's doing. So, we look for low-risk, high-value use cases. An example of that within our own solution set: There is an employee training component within our quality management system. We have all these SOPs and work instructions that constantly change, and our staff needs to be trained on those changes and revisions. MasterControl can track all of that. Well, we know that the regulators like to see objective evidence that folks are actually trained on the changes that are occurring, and so oftentimes, folks will use an exam-based approach. If you think about the level of effort and the resourcing required to create comprehension exams for every revision of an SOP or a work instruction, pretty soon you've got a person whose full-time job it is to create those exams. We saw an early application of generative AI there: we could utilize it to create those comprehension exams. And now something that took two, three, maybe five hours on the high end by the time you get through everything can be done in a matter of minutes. This is high value because it reduces the man-hours required. But it's also fairly low risk because you get a draft back and you keep a human in the loop reviewing it. And now they say, 'Oh, no, that one's goofy. I don't know where it got that.' So this is a great example, I think, of taking a risk-based approach where you start with your low-risk, high-value use cases and then progress into things that may have more risk associated with them.
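To make that workflow concrete, here is a minimal sketch of how a generative model could draft a comprehension exam from a revised SOP, with the draft held for human review rather than published directly. The OpenAI client, model name, prompt, and file name are illustrative assumptions; the interview doesn't describe MasterControl's actual implementation.

```python
# Minimal sketch: draft a comprehension exam from an SOP revision.
# The client, model name, prompt, and file are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_exam(sop_text: str, num_questions: int = 5) -> str:
    """Ask a general-purpose LLM for a *draft* comprehension exam."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write training comprehension exams for quality teams."},
            {"role": "user",
             "content": (f"Write {num_questions} multiple-choice questions, with "
                         f"an answer key, covering this SOP revision:\n\n{sop_text}")},
        ],
    )
    return response.choices[0].message.content

# The human-in-the-loop step: a trainer reviews and edits the draft before it
# enters the QMS, which is what keeps this use case low risk.
draft = draft_exam(open("sop_rev_b.txt").read())
print(draft)
```

The design choice that matters is the last step: the model only ever produces a draft, and a person decides what becomes the exam of record.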

Can you give an example of a high-risk AI application?

Going to the opposite end of the spectrum would be leveraging this in the manufacturing environment, where the machine is making recommendations as to things you should change to increase yield, reduce cost, improve throughput, or whatever it may be. And it's a great use case, because if you look at all the data collected during a production run, whether you're in the device industry with your device history records or on the pharma side with batch records, all of that data coming together over a long period of time, the machine is going to be really good at picking out patterns and trends and applying those to the current batch or lot that you're running. At the same time, if you don't understand what it's doing and what the implications of its recommendations may be, you could have a pretty disastrous outcome. It could produce a bad batch, or even worse, it could produce a bad batch that you don't know about. So, stepping into that through a series of lower-risk use cases, you learn more and more about it, your business becomes more accustomed to it, and folks understand what to look out for.
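One hedge against exactly that failure mode is to build the recommendation step so it never touches the process directly. Here is a deliberately simple sketch along those lines; the CSV file, column names, and correlation threshold are invented for illustration, and the single correlation stands in for whatever pattern mining a real system would do:

```python
# Sketch: surface a yield pattern from historical batch records as a
# recommendation only; nothing is ever written back to the process.
# The file, column names, and threshold are illustrative assumptions.
import pandas as pd

batches = pd.read_csv("batch_records.csv")  # e.g., batch_id, temp_c, yield_pct

# Stand-in for real pattern mining: correlate one parameter with yield.
corr = batches["temp_c"].corr(batches["yield_pct"])

if abs(corr) > 0.7:
    direction = "Raising" if corr > 0 else "Lowering"
    print(f"RECOMMENDATION (needs human approval): {direction} temp_c appears "
          f"to track yield (r = {corr:.2f}). Review before changing anything.")
# The human gate is the point: no setpoint is adjusted automatically.
```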

What are some other low-risk, high-reward AI applications?

Another low-risk, high-value situation we see is summarization. This is something generative AI is really good at today. So, think about your SOPs and work instructions going through revisions. It's very easy to leverage generative AI to create a summary of the changes. Previously, it took somebody understanding the content and then crafting it into words to tell someone, 'Hey, here's what's changed in this document.' Now you just say, 'Hey, here are two documents; tell me what's changed.' And it's very good at that. Another area where we see applicability is the auto-categorization of incoming quality events. Maybe I've got adverse events, customer complaints, nonconformances, whatever it may be. The machine can look through the content of those, compare it to the body of all the other events you've ever received and how you categorized them, and then recommend a category for the incoming events. Again, I think many of these cases are low-hanging fruit that simultaneously pose low risk. Then there's a point I think we'll get to, but we're not there yet, and it would be more of a high-risk, albeit high-value, use case: I've got a CAPA, and I'm going to let the machine perform the investigation, recommend the corrective and preventive action, implement it, and track and trend it. I think that's pushing too far right now. It's a lot of depth.
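For a sense of what that categorization step can look like, here is a self-contained sketch that recommends a category for a new event by comparing its text to previously categorized ones. TF-IDF similarity is a stand-in for whatever a production system would actually use (embeddings, an LLM classifier), and all events and categories are made up:

```python
# Sketch: recommend a category for an incoming quality event by comparing it
# to the body of previously categorized events. TF-IDF nearest-neighbor is a
# stand-in for a real model; all events and categories here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    ("Shipment arrived with a damaged, unreadable label", "labeling"),
    ("Customer reported a rash after using the device", "adverse_event"),
    ("Incoming raw material failed dimensional inspection", "nonconformance"),
]
texts, categories = zip(*history)

vectorizer = TfidfVectorizer().fit(texts)
new_event = "Patient developed a rash after device use"
scores = cosine_similarity(vectorizer.transform([new_event]),
                           vectorizer.transform(texts))[0]
best = scores.argmax()

# The output is a recommendation; a quality specialist confirms the category.
print(f"Suggested category: {categories[best]} (similarity {scores[best]:.2f})")
```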

Can you explain the "3% problem"?

Yeah, that goes back to the 3% of the time that it gets it wrong. That margin of error is closing, and I think it will keep closing. The thing about AI is that you get to 98 or 99%, and that last 1% is really, really hard to close. So whether it's 3%, 1%, or 10%, what do you do when you're in that margin of error? Again, I think that comes back to this idea of the human in the loop: making sure you're able to pick that up and catch it, and not moving to a point of automation where you blow right through it, bypass it, and end up in a really unfortunate situation... Several years ago, at one of our customer events, we had Ken Jennings come to talk to us. Do you remember Ken from Jeopardy? Obviously, this was before ChatGPT, before AI was a normal word in the mainstream. But he talked about when he played against Watson, and he said it was really good, scary good. This was probably 10 or 12 years ago. He came back to this idea of the 3% problem: he said when it gets it wrong, it's so wrong. And that's actually helpful because it makes it easier for us to spot. So, if you think about it in terms of a traditional risk management and FMEA framework, your detectability is high in that case, right? With generative AI, I think that changes a little bit. The detectability goes down because the output can be so believable. This idea of hallucinations is very real, and I think it's going to get harder and harder to detect.
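Matt's detectability point maps onto the standard FMEA risk priority number, RPN = severity × occurrence × detection, where the detection score rises as a failure gets harder to catch. A quick illustrative calculation (all scores are invented for the example) shows how a believable hallucination inflates risk even when nothing else changes:

```python
# Sketch: the standard FMEA risk priority number, RPN = S x O x D, with each
# factor scored 1-10. A HIGHER detection score means HARDER to detect.
# All scores below are invented for illustration.
def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

# Watson-style error: so wrong it's obvious, hence easy to detect.
print(rpn(severity=8, occurrence=2, detection=2))  # -> 32

# Generative-AI hallucination: same failure rate, but believable and hard to spot.
print(rpn(severity=8, occurrence=2, detection=8))  # -> 128
```

Same severity, same occurrence; the only factor that moved is detectability, and the risk priority quadrupled.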

What would you like to leave the audience with?

Don't be afraid. AI, I think, is a wonderful tool in our toolbox. I don't believe it's going to replace us. I think it's going to make us better and more effective. I think it's going to allow us to create higher quality, lower-cost products. And I think that's exactly what the healthcare system worldwide needs. It's just understanding where we should use it, where we shouldn't, and what we should look out for, and hopefully, we can all figure that out together.


Matt’s key takeaways:

  • Apply a risk-based approach to AI implementation, similar to other processes in the life sciences industry. Start with low-risk, high-value use cases before progressing to more complex applications.

  • Keep humans in the loop when implementing AI solutions. This helps catch errors, ensures regulatory compliance, and maintains control over critical processes.

  • AI can be applied across the product lifecycle, from early-stage drug discovery to post-market sentiment analysis of customer complaints and adverse events.

  • Focus on applications like automated exam generation for employee training on SOPs and work instructions, which can significantly reduce man-hours while maintaining quality.

  • Consider leveraging AI for summarizing changes in documents like SOPs and work instructions, which can save time and improve efficiency in quality management processes.

  • Use AI for auto-categorizing incoming quality events, such as adverse events, customer complaints, or nonconformances, to streamline quality management processes.

  • Be cautious with high-risk applications, such as using AI for manufacturing recommendations. Ensure thorough understanding and validation of AI-generated suggestions before implementation.

  • Be aware that even highly accurate AI systems can make errors. Implement safeguards to catch and mitigate these errors, especially in critical processes.

  • Understand that AI can produce convincing but incorrect information. Develop strategies to detect and manage these "hallucinations" in AI outputs.

  • Start with simpler, lower-risk AI applications and gradually progress to more complex use cases as your organization gains experience and confidence.

  • Consider how AI implementation might affect your competitive position in the market. Balance the potential benefits against the risks of adoption.

  • Always make sure any AI implementation aligns with regulatory requirements. Be prepared to provide evidence of AI system validation and human oversight to regulators.

  • View AI as a tool to enhance human capabilities rather than a replacement for human expertise. Use it to make processes more efficient and effective while maintaining critical human judgment.

The FDA Group helps life science organizations rapidly access the industry's best consultants, contractors, and candidates.

Need expert help planning and executing compliance projects or connecting with a quality and compliance professional? Get in touch with us. If we haven’t worked together yet, watch our explainer video below or head to our company introduction page to see if we could help you execute your projects now or in the future.

Our service areas:

Quality Assurance | Regulatory Affairs | Clinical Operations | Commissioning, Qualification, and Validation | Chemistry, Manufacturing, and Controls (CMC) | Pharmacovigilance | Expert Witness

Our engagement models:

Consulting Projects | Staff Augmentation | FTE Recruitment | Functional Service Program

Our podcast:

Apple | Spotify | YouTube | Web + Others
