Subscribe to our podcast, The Life Science Rundown, if you haven’t already.
The FDA Group’s Nick Capman sat down with Maria Vassileva, PhD, Chief Science & Regulatory Officer at DIA (Drug Information Association), for a clear-eyed look at how regulators are weaving AI into drug, biologic, device, and diagnostic review: what’s working now, what’s coming next, and how trust and safety stay front and center.
Drawing on her role leading DIA’s Global Science Team and her background across nonprofit executive leadership, clinical research, and public health, Maria walks through concrete regulatory AI use cases, how agencies are structuring validation and oversight, and why “human in the loop” is not going away. She also highlights global trends and the emerging expectation that AI systems be auditable, risk-based, and designed for equity and privacy from the start.
This discussion is especially useful for Regulatory Affairs, Quality, Clinical, Data Science, and R&D leaders who are either building AI-enabled tools or trying to understand how regulators themselves are using AI (and what that means for submissions, inspections, and long-term compliance).
Apple Podcasts | Spotify | YouTube | Web + Others
If your time is short
Here are a few stand-out insights from this discussion:
AI is already in use inside regulatory agencies, by design. Regulators are using AI for document retrieval, summarization, label comparisons, protocol-review support, and early safety-signal detection. These tools are meant to increase efficiency, not replace scientific judgment. Humans remain fully responsible for high-impact regulatory decisions.
Validation must cover the entire workflow, not just the model. Maria emphasizes that credibility comes from validating the full end-to-end process: defining the specific context of use, testing accuracy for the intended task, and putting safeguards in place to catch errors or hallucinations. AI outputs must always be checked and documented before informing regulatory decisions.
Regulators are adopting risk-based tiers and lifecycle control. High-risk AI uses that influence approvals, labeling, or safety require strict validation, monitoring, and documented human oversight, while lower-risk administrative tasks may be more automated. Frameworks like the Total Product Life Cycle (TPLC) approach and Predetermined Change Control Plans (PCCPs) ensure agencies can oversee model updates, data changes, and performance over time.
No single “AI regulator” exists, but themes are converging. Multiple authorities (privacy, competition, employment, finance, etc.) are asserting that their rules apply to AI. Globally, regulators are exploring risk categorization, explainability expectations, and targeted AI use for heavy document tasks. Common threads include documentation, risk-based control, and human accountability.
Ethics, equity, and privacy are foundational, not optional. Maria stresses that bias and privacy directly affect scientific validity and public trust. Regulators expect representative data, subgroup performance checks, documented data provenance, and strong governance for sensitive health data—standards applying both to industry submissions and to regulators’ own AI systems.
Collaboration is essential to responsible progress. DIA’s neutral-convening role enables regulators, sponsors, academia, and technology developers to align on shared taxonomies, assurance expectations, and validation templates. This collaborative approach helps harmonize understanding and accelerates responsible adoption across the ecosystem.
The near future will likely bring more AI use paired with more transparency and oversight. Expect expanded agency use of targeted AI tools, greater public communication about when and how AI is used, and ongoing emphasis on human-in-the-loop oversight for high-stakes decisions. The trajectory is toward a more efficient, more interconnected regulatory system that preserves rigor and trust.
One thing to bring back to your team
Pick one AI use case you’re considering (or already piloting) and map it to a risk tier. Then draft a simple, TPLC-style plan that answers:
What is the specific context of use?
What evidence shows the tool is fit for that purpose?
How will performance, bias, and data drift be monitored over time?
Where, exactly, does human review or approval remain required?
Even if you’re not yet submitting this to a regulator, this mindset aligns with how agencies are thinking and helps future-proof your program.
Maria Vassileva, PhD, is Chief Science and Regulatory Officer at DIA (Drug Information Association) and a member of the organization’s Executive Leadership Team. She oversees the Global Science Team, research initiatives and partnerships, publications (including TIRS and DIA Global Forum), global scientific content for events and executive roundtables, and relationships with stakeholders worldwide.
She previously served as Senior Vice President of Science Strategy at the Arthritis Foundation, held senior roles in clinical research and public health at Navitas Clinical Research and Social & Scientific Systems, and founded MD Science Alliance Consulting to support nonprofits, biotech startups, CROs, and pharma on strategy, partnerships, and clinical research.
Maria holds a PhD in Biochemistry and Molecular Biology from the Johns Hopkins Bloomberg School of Public Health (with a Vaccine Science and Policy certificate) and a bachelor’s degree in Biology from Concord University, with minors in Chemistry and Mathematics. Her work has focused on immunity and inflammation, musculoskeletal and cardiometabolic disease, obesity and neurological conditions, and on advancing diversity, equity, and inclusion in clinical workforce development and trial participation.
Connect with her on LinkedIn here.
Who is The FDA Group?
The FDA Group helps life science organizations rapidly access the industry's best consultants, contractors, and candidates. Our resources assist in every stage of the product lifecycle, from clinical development to commercialization, with a focus on staff augmentation, auditing, remediation, QMS, and other specialized project work in Quality Assurance, Regulatory Affairs, and Clinical Operations.
With over 3,250 resources worldwide, over 250 of whom are former FDA, we meet your precise resourcing needs through a fast, convenient talent selection process supported by a Total Quality Guarantee.
Here’s why 17 of the top 20 life science firms access their consulting and contractor talent through us:
Resources in 75 countries and 48 states.
26 hours average time to present a consultant or candidate.
Exclusive life science focus and expertise.
Dedicated account management team.
Right resource, first time (95% success).
97% client satisfaction rating.
Talk to us when you're ready for a better talent resourcing experience and the peace of mind that comes with a partner whose commitment to quality and integrity reflects your own.
Subscribe to The Life Science Rundown:
Apple | Spotify | YouTube | Web + Others