Although elementary AI arguably emerged in the 1950s with the Turing Test, its influence on pharmaceuticals has accelerated only recently.
The sector has “historically collected data from manufacturing and clinical operations to inform decisions and monitor the state of control”, says digital life sciences advisor Michael Louie. Now, however, there is a “shift towards using near real-time data to proactively adjust manufacturing parameters” – a role where AI’s capabilities are set to expand.
In clinical development, AI is already helping optimise inclusion and exclusion criteria in trial protocols, which can reduce patient dropout rates. Yet in manufacturing, it is arriving in an environment where established frameworks such as quality by design (QbD) and process analytical technology (PAT) are already deeply embedded. The question for the industry now is not whether these tools remain relevant, but how they are changing – and whether AI can help drive their next phase.
Introducing QbD and PAT
In 2002, the US Food and Drug Administration (FDA) released ‘Pharmaceutical Current Good Manufacturing Practices for the 21st Century’. The initiative advanced QbD principles by promoting more scientific, risk-based regulatory approaches.
PAT, meanwhile, is a regulatory framework and systematic approach to designing, analysing and controlling manufacturing processes, ensuring real-time product quality and consistency. So how can the pharmaceutical industry fuse QbD and PAT with an increasingly important AI? Nikolai Makaranka, formerly of Bristol Myers Squibb (BMS), who launched a pharma AI start-up after seeing a gap in the market, recommends starting with the similarities between QbD, PAT and AI. Fundamentally, says Makaranka, who focuses on AI-powered quality management solutions for life sciences, all three rely on similar foundational capabilities: robust digital infrastructure, specialised technical talent and a data-driven culture.
“Each of these frameworks is model- and data-dependent, whether you’re building a design space for a formulation, controlling real-time process parameters, or training AI algorithms to identify deviations,” says Makaranka.
So, where are they distinct? For one, AI is more accessible. Unlike QbD or PAT, which often require regulatory engagement, validated systems and complex instrumentation, AI experimentation can begin with something as simple as a ChatGPT prompt. AI is also more affordable, with little upfront cost and applicability across functions. This low barrier to entry makes it easy for teams to start piloting solutions.
“However, this ease has created misconceptions,” Makaranka points out. Even outside GxP (good practice regulations) contexts, making AI work requires clean, labelled, representative data; robust infrastructure for storage and governance; and clear success metrics and risk management frameworks. So, embrace AI’s capabilities, but expect complexities.
Learning from past mistakes
Systematic gaps can cause hold-ups, argues Makaranka – when companies attempt to develop AI in-house, they encounter blockers: fragmented data systems and disconnected platforms; inadequate data quality and metadata structures; and a lack of interoperability between existing tools and AI frameworks. These mirror the issues that plagued QbD and PAT rollouts. Grace Cronin, senior director of MS&T systems and engineering at BMS, believes the industry must embrace cross-functional collaboration (citing alignment between MS&T, PD and BI&T as an example) and regulatory engagement early in AI’s life cycle.
In January 2025, the FDA published ‘Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products’. BMS has joined with BioPhorum and other pharmaceutical companies to review the guidance and provide feedback.
“Similar initiatives like BioPhorum’s PAT roadmap and ILM/RTR frameworks have been effective in the past,” Cronin notes, “as they offer valuable guidance on building business cases, managing model validation and aligning with regulatory expectations.”
BMS is integrating AI across drug discovery, clinical trials, manufacturing and regulatory operations. Challenges to date have included fragmented or unstructured data, and multiple initiatives running in parallel on technologies that are not necessarily compatible.
“We understand the speed of change in the technology and in our understanding of its capability. To start overcoming these challenges, we set a vision and developed a clear roadmap with a three-year horizon to identify milestones,” Cronin explains. Louie argues that the main barrier to success is often poor data quality. For effective AI, particularly in predictive and generative applications, data must be “standardised, curated and governed from the outset”. To maximise impact, companies should “prioritise use cases that are high-impact yet low-risk, address clearly defined operational pain points, have measurable KPIs and offer potential for scale”.
Successful adoption also depends on engaging cross-functional stakeholders early, including scientific, manufacturing, quality, regulatory and digital representatives. Louie feels organisational change management and upskilling teams in AI concepts are critical to supporting adoption and driving long-term transformation. It is also essential to establish strong foundations of compliance with data integrity, information security and data protection standards, particularly for AI systems intended for use in GxP-regulated environments and subject to health authority oversight.
Issues include variability in data formats, a lack of standardisation, incomplete information and inconsistent definitions. Often, there aren’t adequate tools or processes to clean and prepare datasets, nor a clear data governance structure to ensure accountability for bringing data up to internal corporate standards. Additionally, organisations often don’t allocate resources to go back to source systems and correct underlying data quality issues. Louie warns this becomes especially problematic when the objective is to train models that provide insights and predictions based on specialised, historical or domain-specific content.
Just as data must be carefully prepared before system migrations, he recommends incorporating “data readiness stages” into AI projects. This should include selecting appropriate tools to analyse data quality, identify gaps or inconsistencies, and make corrections before data is used for AI training or reference. This is particularly important when planning to use generative AI models, such as large language models (LLMs).
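To illustrate what such a “data readiness stage” might involve, here is a minimal sketch in Python (using pandas) of the kind of automated quality check Louie describes: profiling a dataset for gaps and inconsistencies before it is used for AI training. The column names, thresholds and batch-record example are hypothetical, not drawn from any specific company’s systems.

```python
# A minimal sketch of a "data readiness" check of the kind described above;
# the column names, thresholds and example batch records are hypothetical.
import pandas as pd

def data_readiness_report(df: pd.DataFrame, required_cols: list[str]) -> dict:
    """Summarise gaps and inconsistencies before a dataset is used for AI training."""
    return {
        # Columns the model needs but the extract does not contain
        "missing_columns": [c for c in required_cols if c not in df.columns],
        # Share of missing values per column
        "null_fraction": df.isna().mean().round(3).to_dict(),
        # Exact duplicate records, often a sign of double extraction
        "duplicate_rows": int(df.duplicated().sum()),
        # Text columns that are mostly numeric values stored as strings
        "numeric_stored_as_text": [
            c for c in df.select_dtypes(include="object").columns
            if pd.to_numeric(df[c], errors="coerce").notna().mean() > 0.5
        ],
    }

if __name__ == "__main__":
    # Hypothetical batch-record extract showing typical quality issues
    batches = pd.DataFrame({
        "batch_id": ["B001", "B002", "B002", "B004"],  # one duplicated record
        "ph": ["6.8", "7.1", "7.1", None],             # numbers stored as text, one gap
        "temperature_c": [37.0, 36.5, 36.5, 37.2],
    })
    print(data_readiness_report(batches, ["batch_id", "ph", "temperature_c", "yield_pct"]))
```

In practice, a report like this would feed the “corrections” step Louie mentions: fixing the source systems, not just the extract.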
Cronin acknowledges hesitation can stem from concerns about regulatory uncertainty and a lack of experience in implementing transformative tools in GxP environments, alongside fear of disruption, changing workflows, retraining teams and rethinking legacy systems. Addressing these requires “clear vision, communication and accelerated pilot programmes”, she suggests.
Missed opportunities
Cronin says that despite early adoption, the full potential of QbD and PAT remains unrealised owing to technical, cultural and regulatory challenges. Using regression models to predict shelf life and speed time to file, for instance, was previously limited by a lack of integration between predictive models and regulatory filings. Similarly, PAT technologies like pH control are widely used, but broader deployment across unit operations was hindered by validation complexity and regulatory uncertainty. However, QbD principles such as process understanding, risk management and continuous improvement remain important and are increasingly supported by digital and AI tools, which may help realise that potential. As Cronin recounts, AI can quickly analyse large amounts of data to find patterns, improving process understanding and the prediction of outcomes. It can also automate data collection and analysis, saving time and reducing mistakes.
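As a concrete illustration of the shelf-life regression Cronin refers to, the sketch below fits a simple degradation line to stability data and extrapolates to the point where it crosses the specification limit. The figures and limit are invented; a real filing would follow ICH Q1E and use confidence bounds rather than the point estimate shown here.

```python
# A minimal sketch of a shelf-life regression on stability data.
# All data points and the specification limit are illustrative only.
import numpy as np

months = np.array([0, 3, 6, 9, 12, 18])                       # stability time points
assay_pct = np.array([100.1, 99.4, 98.9, 98.2, 97.6, 96.3])   # % of label claim
spec_limit = 95.0                                              # lower specification limit

# Ordinary least-squares line: assay = intercept + slope * time
slope, intercept = np.polyfit(months, assay_pct, deg=1)

# Extrapolate to the time at which the fitted line reaches the spec limit
predicted_shelf_life = (spec_limit - intercept) / slope
print(f"Degradation rate: {slope:.3f} % per month")
print(f"Predicted shelf life: {predicted_shelf_life:.1f} months")
# A regulatory estimate would use the one-sided confidence bound on the fit,
# not this point estimate.
```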
During development and scale-up, AI can optimise different process settings, making it easier to find the best ways to make products. AI tools can organise and share research data, making knowledge more accessible and streamlining regulatory documentation. In manufacturing, AI could monitor processes in real time, spot problems early and make corrections.
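As a rough illustration of that real-time monitoring idea, the sketch below flags a process reading that drifts outside control limits derived from recent history. The parameter, window size and sigma threshold are illustrative assumptions, not a description of any vendor’s PAT tooling.

```python
# A minimal sketch of real-time process monitoring: alarm when a new reading
# deviates from recent in-control history. Parameters are illustrative only.
from collections import deque
import statistics

class ProcessMonitor:
    def __init__(self, window: int = 30, n_sigma: float = 3.0):
        self.history = deque(maxlen=window)  # recent in-control readings
        self.n_sigma = n_sigma

    def check(self, value: float) -> bool:
        """Return True if the new reading looks out of trend."""
        if len(self.history) >= 10:          # need some history before alarming
            mean = statistics.fmean(self.history)
            sd = statistics.stdev(self.history)
            if sd > 0 and abs(value - mean) > self.n_sigma * sd:
                return True                  # anomalous readings are not added to the baseline
        self.history.append(value)
        return False

monitor = ProcessMonitor()
for reading in [7.01, 7.02, 6.99, 7.00, 7.03, 6.98, 7.01, 7.00, 6.99, 7.02, 7.45]:
    if monitor.check(reading):
        print(f"Alert: pH reading {reading} is out of trend")
```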
Overall, says Cronin, AI makes QbD “more practical and effective by automating tasks, improving understanding and enabling real-time quality control, helping companies build quality into products from the start and keep improving”.
Louie points out that QbD and PAT are primarily applied during the early stages of a product’s life cycle. While QbD establishes a design space (scientifically and statistically justified ranges for process parameters), PAT enables the direct collection of real-time data from manufacturing operations. QbD is also used to define test methods for in-process analysis, quality control, product release and stability testing.
Over the past 15 years, Louie notes, these approaches have successfully streamlined the regulatory review process by enabling efficient communication between applicants and regulators, and allowing flexible management of process variations within the established design space.
However, he argues many organisations “fail to allocate sufficient resources to extend QbD and PAT practices beyond initial drug registration”. Largely, this is down to the difficulty of curating and maintaining process and testing knowledge, which is often distributed across multiple outsourced entities, such as contract development and manufacturing organisations and testing laboratories, and is rarely supported by robust knowledge management tools. Much of this information remains unstructured, residing in technical reports prepared for technology transfer and regulatory submissions. Recently, data science techniques have been employed to improve the curation and retrieval of this information. With the integration of AI and machine learning (ML), it’s now possible to model complex data relationships, correlate diverse data sources and develop comprehensive knowledge management systems. These tools can be used to generate insights, simulate manufacturing conditions and predict potential failures.
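As one simple example of how such unstructured reports can be made retrievable, the sketch below indexes a handful of invented report snippets with TF-IDF and ranks them against a free-text question. A production knowledge management system would use richer embeddings, document controls and validation, but the principle of correlating diverse text sources is the same.

```python
# A minimal sketch of making unstructured technical-report text searchable;
# the report snippets and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Tech transfer report: granulation endpoint sensitive to binder addition rate.",
    "Stability summary: assay decline correlates with storage humidity excursions.",
    "Deviation investigation: pH drift traced to probe calibration interval.",
]

# Build a simple TF-IDF index over the report text
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(reports)

# Rank the reports against a plain-language question
query = "what caused the pH drift during manufacturing?"
scores = cosine_similarity(vectoriser.transform([query]), matrix)[0]
best = scores.argmax()
print(f"Most relevant report: {reports[best]} (score {scores[best]:.2f})")
```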
The future looks promising
Louie is particularly keen to see AI/ML applied to the management and control of neurodegenerative disorders. On what the sector could achieve, he says: “As advances in medical sciences continue to extend life expectancy, preserving quality of life becomes increasingly important. Neurodegenerative disorders remain major causes of memory loss and dementia, yet the underlying causes, diagnoses and early detections remain unsatisfactory. Especially promising is the correlation between Alzheimer’s progression and retinal imaging.”
At its manufacturing sites, BMS is investing in AI to help enable digital twins, real-time release testing and advanced analytics for biologics and small molecules. The vision includes AI-assisted hybrid models of its manufacturing processes and predictive maintenance. BMS has also established joint Centres of Excellence in Process Data Analytics and Process Modelling.
Cronin acknowledges the rapidly changing technological landscape, and feels success hinges on “upskilling teams, establishing clear governance, and embedding AI into existing workflows”, plus “investing in domain-specific AI tools”.
She predicts AI will become a foundational layer across biopharma, with “increased use of generative models, real-time analytics and autonomous process control”. Cronin is most excited about AI’s “potential to accelerate innovation while improving quality and compliance”, and the opportunity to “move from reactive to proactive decision-making and use the extensive data” BMS collects. The current focus is efficiency, but ultimately she expects AI to change how BMS develops and manufactures its products.
Cronin would tell manufacturers that AI isn’t a “plug-and-play solution”, but requires “robust data infrastructure, cross-functional buy-in, and continuous validation”. “Proactive engagement and transparent documentation can pave the way for broader adoption.”
How AI and ML can avoid the problems that plagued QbD and PAT

Nikolai Makaranka advises companies to:
- start with narrow, focused use cases that solve specific pain points
- define clear evaluation criteria and success metrics before deployment (see the sketch after this list)
- invest in data infrastructure, since algorithms are largely commoditised and it is clean, contextualised data that makes AI useful
- build cross-functional teams combining AI expertise with deep domain knowledge – those who understand both data and process
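As a minimal sketch of the second point, the snippet below fixes success criteria (precision and recall against historically confirmed deviations) before a hypothetical deviation-detection model goes live. The labels, figures and thresholds are illustrative only, not taken from Makaranka’s own tooling.

```python
# A minimal sketch of defining evaluation criteria before deployment.
# The labelled deviations and thresholds are hypothetical.
def evaluate(predicted_flags: list[bool], actual_deviations: list[bool]) -> dict:
    """Compare model flags against historical, human-confirmed deviations."""
    tp = sum(p and a for p, a in zip(predicted_flags, actual_deviations))
    fp = sum(p and not a for p, a in zip(predicted_flags, actual_deviations))
    fn = sum(a and not p for p, a in zip(predicted_flags, actual_deviations))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": round(precision, 2), "recall": round(recall, 2)}

# Agreed success criteria, fixed before any model goes live
SUCCESS_CRITERIA = {"precision": 0.90, "recall": 0.80}

metrics = evaluate(
    predicted_flags=[True, False, True, True, False],
    actual_deviations=[True, False, False, True, False],
)
passed = all(metrics[k] >= v for k, v in SUCCESS_CRITERIA.items())
print(metrics, "PASS" if passed else "FAIL")
```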