AI in Bio-Manufacturing: Innovation and Emerging Threats
Teo Chee Loong*
*Corresponding author: anthony1109@hotmail.my
The increased reliance on AI in bio-manufacturing introduces critical risks to safety, product quality, and security. While AI systems revolutionize processes through enhanced speed and precision, their growing autonomy creates vulnerabilities that may not be fully understood. The 2024 AI Regulatory Landscape Report emphasizes that AI-powered biotechnology systems require stringent oversight to prevent catastrophic failures, particularly when handling hazardous materials; mitigating catastrophic risks from AI-enabled Chemical, Biological, Radiological, and Nuclear (CBRN) hazards has become a top global priority. The report highlights CBRN hazards as among the most immediate and severe risks posed by AI, because AI is already capable of lowering the barriers to developing biological and chemical weapons, a problem expected to escalate in the near future (Cheng & McKernon, 2024).
The landscape of biological risks has undergone significant changes over the past century, shaped by rapid scientific advancements, evolving geopolitical dynamics, and the rise of disruptive technologies like artificial intelligence (AI). In the past, the development and deployment of biological weapons were hindered by substantial technical and logistical challenges. However, with the increasing availability of advanced biotechnological tools—many of which are now enhanced by AI—these barriers are gradually diminishing, raising concerns about the potential for misuse and the urgent need for strengthened biosecurity measures (Wheeler, 2025).
One notable example of AI’s dual-use risks in biotechnology is its application in drug discovery. While AI-driven platforms are designed to identify therapeutic compounds, they can also be repurposed to generate harmful molecules. A study demonstrated that by modifying an AI system intended for drug discovery, researchers were able to produce 40,000 candidate molecules for chemical warfare—including both known and novel compounds—within just six hours. This highlights the potential for AI to be misused in developing hazardous substances, reinforcing the critical need for stringent oversight and regulatory safeguards (Wikipedia, 2024).
The integration of AI into bioengineering brings both groundbreaking advancements and significant risks, particularly due to its dual-use nature. While these technologies are designed to drive innovation in medicine, agriculture, and synthetic biology, they also have the potential to be misused for harmful purposes. AI lowers technical barriers in bioengineering, making sophisticated biotechnological tools more accessible and increasing the risk of exploitation by malicious actors. This growing concern highlights the urgent need for proactive safeguards and regulatory measures to ensure that AI-driven bioengineering is developed and applied responsibly (Wheeler, 2025).
AI Safety Challenges in Bio-Manufacturing: Errors, Cybersecurity, and Biases
Governments, international organizations, and media reports have increasingly highlighted concerns about the potential risks artificial intelligence poses to biosecurity. AI tools are recognized as facilitators that reduce informational barriers, accelerate the design of novel biological threats, and expand the capabilities of malicious actors. As AI continues to advance, its role in enabling biothreats underscores the urgent need for stringent oversight and comprehensive risk mitigation strategies (Batalis, 2023). Beyond deliberate misuse, errors in AI models can lead to incorrect calculations, potentially resulting in unsafe biological products. AI also relies on vast datasets that may contain inaccuracies or biases, ultimately influencing decision-making in unintended and potentially harmful ways.
Cybersecurity is another critical concern. AI-driven bio-manufacturing systems are vulnerable to hacking, which could expose sensitive information or even enable the development of dangerous bio-agents. The 2024 AI Regulatory Landscape Report warns that AI-controlled biological research systems, if misused, could significantly lower the barrier to developing biological weapons; according to the report, AI is already capable of lowering the barriers to creating biological and chemical weapons, and this risk is expected to escalate in the near future (Cheng & McKernon, 2024). This concern aligns with broader discussions on Chemical, Biological, Radiological, and Nuclear (CBRN) threats, as AI-driven automation in synthetic biology and genetic engineering may inadvertently increase these risks.
AI plays a crucial role in safeguarding digital health infrastructure from security threats, especially as cyberattacks on health systems continue to rise. Global losses from cybercrime are projected to reach USD 10.5 trillion annually by 2025, underscoring the urgent need for robust cybersecurity measures. Cybercriminals are increasingly leveraging AI to identify and exploit system vulnerabilities, making healthcare networks more susceptible to breaches. To counter these threats, health systems can adopt AI-driven security strategies used in other industries, enabling early threat detection, rapid response, and enhanced protection against cyber intrusions (OECD, 2024).
One of the greatest dangers arises when humans become overly reliant on AI and fail to adequately monitor its decisions. Undetected errors could lead to severe consequences, such as dangerous genetic mutations or contaminated pharmaceuticals. Even minor flaws in AI-generated biological designs may result in unsafe medical treatments, particularly as synthetic biology advances through the genetic modification of individual cells or organisms and the manufacture of synthetic DNA or RNA strands, known as synthetic nucleic acids (Cheng & McKernon, 2024). These risks emphasize the need for rigorous safety protocols, strict regulatory oversight, and careful AI integration in bio-manufacturing.
Another notable risk associated with AI in bio-manufacturing is its potential to introduce biases during the development of targeted medicines. Biases can emerge at multiple stages, from research and development to large-scale medicine production, leading to treatments that may be ineffective or even unsafe for certain patient groups. This issue highlights the importance of ensuring data integrity and implementing robust validation processes to mitigate unintended disparities in healthcare outcomes (Bjerregaard et al., 2024).
Machine learning AI relies on vast amounts of data to identify patterns and generate predictions, but if the data used for training are unrepresentative of the target population or of poor quality, the resulting models may introduce bias and lead to inaccurate, harmful, or even discriminatory outcomes. For instance, an AI system trained primarily on data from male patients may produce unreliable or unsafe recommendations when applied to female patients. Additionally, a lack of transparency and explainability in AI decision-making can erode trust, making healthcare professionals and patients reluctant to rely on its recommendations, ultimately limiting its effectiveness and adoption (OECD, 2024).
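The sampling problem described above can be sketched in a few lines of Python. The numbers below are purely illustrative (not clinical data): a naive threshold classifier is fitted to biomarker readings from one patient group, then applied to a second group whose healthy baseline simply differs.

```python
import random

random.seed(42)

# Hypothetical biomarker values: the "healthy" ranges differ between
# two patient groups (all numbers are illustrative, not clinical data).
group_a_healthy = [random.gauss(50, 5) for _ in range(500)]   # training group
group_a_sick    = [random.gauss(70, 5) for _ in range(500)]
group_b_healthy = [random.gauss(62, 5) for _ in range(500)]   # unseen group

# "Train" a naive threshold classifier on group A only:
# classify as sick if the biomarker exceeds the midpoint of the class means.
mean_healthy = sum(group_a_healthy) / len(group_a_healthy)
mean_sick    = sum(group_a_sick) / len(group_a_sick)
threshold = (mean_healthy + mean_sick) / 2   # roughly 60 for these parameters

def classify(x):
    return "sick" if x > threshold else "healthy"

# Accuracy on the group the model was trained on is high...
acc_a = sum(classify(x) == "healthy" for x in group_a_healthy) / 500

# ...but healthy members of group B are frequently misclassified as sick,
# because their baseline distribution was never represented in training.
false_alarms_b = sum(classify(x) == "sick" for x in group_b_healthy) / 500

print(f"group A healthy correctly classified: {acc_a:.0%}")
print(f"group B healthy falsely flagged sick: {false_alarms_b:.0%}")
```

A real model would be far more complex, but the failure mode is the same: high accuracy on the training population says nothing about populations the data never covered.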
AI Failures in Bio-Manufacturing: Errors, Bias, and Quality Control Gaps
AI failures in bio-manufacturing have already led to serious real-world challenges, demonstrating the risks of over-reliance on automated systems. This section highlights three major categories of past mistakes:
(1) training data errors leading to toxic by-products,
(2) biases in AI systems affecting decision-making, and
(3) AI’s susceptibility to undetected errors.
First, flawed training data has led to AI miscalculations that produced toxic by-products and unsafe biological products. In one case, an AI-driven fermentation process failed to account for unexpected chemical interactions, leading to the recall of an entire batch of biologics. Second, biases in AI models have contributed to errors in bio-manufacturing: an AI system trained on incomplete or skewed datasets can misinterpret process conditions, yielding inconsistent production quality. Finally, AI's susceptibility to undetected errors poses a major challenge. Automated quality control systems have failed to detect contamination because human operators placed excessive trust in AI-driven monitoring, assuming it was infallible. These cases underscore the necessity of maintaining human oversight, robust validation processes, and continuous AI model refinement to prevent costly and dangerous failures.
Bias in AI systems is another critical concern in bio-manufacturing, particularly in the development of targeted medicines, where AI-driven decision-making can inadvertently reinforce existing disparities. One common form of bias arises from imbalanced training data, where AI models are trained on datasets that do not fully represent diverse patient populations. For example, if an AI system used to optimize biologic drug formulations is primarily trained on genetic data from one demographic group, the resulting treatments may be less effective or even unsafe for other populations.
Another issue is bias in process optimization algorithms, where AI prioritizes cost efficiency over patient safety. For instance, some AI-driven drug discovery platforms have been found to favor formulations that are cheaper to produce but may not offer the same therapeutic efficacy across all patient groups. Additionally, bias in quality control systems can lead to the exclusion of valid data points that do not conform to an AI model’s expectations, potentially resulting in the rejection of effective treatments or approval of sub-optimal products. These risks highlight the urgent need for diverse, high-quality training data, rigorous validation processes, and ongoing human oversight to ensure AI systems in bio-manufacturing remain fair, effective, and aligned with patient safety standards (Dobson & Lee, 2024).
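The quality-control failure mode described above, rejecting valid data that falls outside the model's expectations, can be illustrated with a minimal, hypothetical sketch: a simple k-sigma gate calibrated on historical batches rejects every reading from a perfectly valid new formulation whose operating point has shifted. All numbers are invented for illustration.

```python
import statistics

# Hypothetical in-process potency readings (illustrative numbers only).
# An AI QC gate was calibrated on historical batches centered near 100.
historical = [98.2, 101.5, 99.8, 100.4, 97.9, 102.1, 100.9, 99.1]
mu = statistics.mean(historical)
sigma = statistics.pstdev(historical)

def qc_gate(reading, k=2.0):
    """Accept a reading only if it lies within k sigma of the
    historical mean -- a common, but brittle, outlier rule."""
    return abs(reading - mu) <= k * sigma

# A new, perfectly valid formulation runs slightly hotter (around 106):
new_batch = [105.8, 106.3, 105.1, 106.9]
accepted = [r for r in new_batch if qc_gate(r)]

# The gate rejects every reading, not because the product is bad,
# but because the model's "expectations" never included this regime.
print(f"accepted {len(accepted)} of {len(new_batch)} valid readings")
```

A production system would recalibrate such gates whenever a validated process change is introduced; the sketch simply shows how a static statistical "expectation" encodes bias toward historical operating conditions.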
Even when AI is used to optimize bio-manufacturing, it remains susceptible to errors. AI-driven microbial fermentation processes have occasionally led to lower product quality or unexpected chemical by-products, posing risks to both safety and efficacy. These examples illustrate that while AI enhances bio-manufacturing, it is not infallible. Continuous human oversight is essential to identify and correct potential errors, ensuring the safety and reliability of AI-driven bio-manufacturing.
The Struggle to Regulate AI in Bio-Manufacturing: Legal and Ethical Dilemmas
Current biosecurity and biosafety regulations require clearer definitions and stronger enforcement to effectively address emerging risks. AI-enabled biological designs exist as digital predictions and pose no immediate physical threat until they are materialized in laboratory settings. While gain-of-function research—where pathogens are modified to become more dangerous—is already subject to existing policies, these regulations lack precise criteria for identifying what constitutes high-risk research. This ambiguity makes it challenging to interpret and implement safeguards effectively, highlighting the need for a standardized framework to govern AI-driven advancements in bio-manufacturing (Batalis, 2023).
In addition, regulating AI in bio-manufacturing is a complex challenge, as existing laws have struggled to keep pace with rapidly evolving technologies. Most current bio-manufacturing regulations were designed for traditional processes and fail to comprehensively address AI-driven risks such as algorithm errors, data bias, and automated decision-making failures.
According to the 2024 AI Regulatory Landscape Report, there is no dedicated global framework for AI safety in biology and chemistry, leaving many potential hazards unregulated. Moreover, the report judges existing and upcoming legislation on AI and CBRN hazards to be wholly insufficient given the scale of potential risks. Notably, neither the EU nor China currently imposes specific binding requirements on the development of AI models that could facilitate the creation of CBRN weapons, a critical regulatory gap in safeguarding AI-driven biological systems (Cheng & McKernon, 2024). The absence of clear legal guidelines allows companies to deploy AI-driven systems with minimal oversight. It also raises critical accountability questions: who should be held responsible when AI makes a harmful mistake? Should the blame fall on AI developers, system operators, or biotechnology companies?
One pressing challenge in AI regulation is ensuring data integrity in AI-driven analytical methods. In June 2022, BioPhorum identified data integrity as a major concern, particularly in AI-powered analytical techniques such as next-generation sequencing (Global AI Research Institute, 2023). Flaws in AI interpretation could compromise research findings, leading to inaccurate results and unsafe bio-manufacturing practices. This underscores the urgent need for updated regulatory frameworks that account for the unique risks posed by AI applications in the field.
Beyond regulatory gaps, AI in bio-manufacturing also presents significant ethical concerns. AI is increasingly used for genetic modifications, synthetic biology, and bio-production, but these technologies can be misused. AI-driven bio-manufacturing could enable unethical practices, such as illegal genetic enhancements or even bioterrorism. Additionally, the lack of global regulatory alignment makes it difficult to establish universal safety standards. Without international cooperation, AI safety enforcement will remain inconsistent, heightening the risk of AI failures in bio-manufacturing. Addressing these issues requires a coordinated global effort to implement robust policies, ensuring both innovation and safety in AI-driven bio-manufacturing.
Conclusion
AI is revolutionizing bio-manufacturing, enhancing efficiency and precision, but it also introduces risks such as algorithmic errors, data bias, cybersecurity threats, and ethical concerns. Real-world failures highlight the dangers of over-reliance on AI without human oversight, while regulatory gaps leave accountability and safety unaddressed. Without global cooperation, AI-driven bio-manufacturing remains vulnerable to misuse, including bioterrorism and unethical genetic modifications. To balance innovation and risk, robust regulations, stringent validation, and ethical safeguards are essential. AI must be a tool for progress, guided by responsible oversight to ensure safety, security, and sustainability in bio-manufacturing.
To ensure AI remains a force for good in bio-manufacturing, the author believes in upholding the five pillars of AI ethics: fairness, robustness, explainability, transparency, and privacy. These principles are essential in making AI systems more trustworthy, accountable, and secure, helping to minimize risks, prevent harmful outcomes, and build public confidence in AI-driven bio-manufacturing. In the next discussion, we will explore how these ethical pillars can be effectively implemented to strengthen AI safety, enhance regulatory compliance, and promote responsible innovation in the field.
References
- Cheng, D., & McKernon, E. (2024). 2024 state of the AI regulatory landscape. Convergence Analysis, 67–75. Retrieved from https://convergenceanalysis.com/new-report-2024-state-of-the-ai-regulatory-landscape
- Wheeler, E. F. (2025). Responsible AI in biotechnology: Balancing discovery, innovation, and biosecurity risks. Frontiers in Bioengineering and Biotechnology, 13, 1537471. https://doi.org/10.3389/fbioe.2025.1537471
- Wikipedia. (2024). Existential risk from artificial intelligence. Wikipedia, The Free Encyclopedia. Retrieved from https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence
- Batalis, S. (2023). AI and biorisk: An explainer. Center for Security and Emerging Technology (CSET), Georgetown University. Retrieved from https://cset.georgetown.edu/
- OECD. (2024). AI in health: Huge potential, huge risks. Organisation for Economic Co-operation and Development (OECD).
- Bjerregaard, L. M., O'Neill, P. C., & Madsen, S. J. (2024). Exploring bias risks in artificial intelligence and targeted medicines manufacturing. BMC Medical Ethics, 25(7). https://bmcmedicalethics.biomedcentral.com/articles/10.1186/s12910-024-00892-3
- Dobson, J. T., & Lee, K. P. (2024). Artificial intelligence in the biopharmaceutical industry: Treacherous or transformative? BioProcess International, 22(3), 45–60. https://www.bioprocessintl.com/information-technology/artificial-intelligence-in-the-biopharmaceutical-industry-treacherous-or-transformative/
- Global AI Research Institute. (2023). The global picture of AI in the biomanufacturing industry. Retrieved from https://www.globalairesearch.com/reports/the-global-picture-of-ai-in-biomanufacturing