
Artificial intelligence, evolution, and environmental disease: Rethinking the risk

Environmental Disease, 2025 · 1 citation
Henry H. Heng

Summary

This editorial explores the potential risks of applying artificial intelligence to environmental disease research, arguing that without careful oversight, AI could introduce biases or overlook ecological complexity. The author uses Genome Architecture Theory as a framework for understanding how organisms interact with environmental factors, including pollutants like microplastics. The editorial urges the environmental health community to engage proactively in shaping how AI tools are developed and applied, to ensure scientific integrity and equity.

The artificial intelligence (AI) race is on: seemingly unlimited funding and bold promises – such as curing all diseases within a decade – positively reinforce each other. AI is now seen as a force that could redefine not only daily life but also society’s cultural and ethical foundations. Accordingly, there are increasing discussions about AI’s benefits and risks.[1] Naturally, the impact of AI on environmental disease research must also be taken seriously.[2,3] As AI tools are integrated into data analysis, risk prediction, exposure modeling, and public health monitoring, they will reshape how we study and manage environmental factors affecting health. But without careful oversight, AI may introduce new biases, ignore ecological complexity, or worsen global health inequities. The environmental disease community must engage proactively to ensure AI is applied in ways that support sustainability, equity, and scientific integrity. In this editorial, we explore the potential risks of AI in the context of environmental disease research, viewed through the lens of Genome Architecture Theory (GAT),[4,5] a framework that moves beyond the traditional gradualist model of evolution and offers deeper insights into the organism-environment interactions essential for information flow. This evolutionary perspective destabilizes traditional assumptions and perhaps offers novel guidance in assessing the risks posed by AI.

UNDER GENOME ARCHITECTURE THEORY’S NEW UNDERSTANDING OF MACROEVOLUTION, ARTIFICIAL INTELLIGENCE COULD BECOME DOMINANT FAST

According to GAT, rapid macroevolution is often triggered by environmental crises through rapid genome reorganization known as “genome chaos,” which creates new system-level information.
In such crises, macroevolutionary adaptation is typically achieved through the replacement of carriers, whether at the level of the species or of the systemic information framework.[5,6] This contrasts with the traditional neo-Darwinian view of evolution as gradual, stepwise small changes over long periods; under GAT, macroevolutionary events involve different mechanisms from microevolutionary ones. Historically, most speciation events have coincided with major environmental disruptions. Specifically, major disruptions trigger mass extinctions, followed by bursts of adaptive radiation. These patterns, now well documented, form the core mechanism of speciation in the emerging two-phased model of evolution. Over successive cycles of two-phased evolution, each adding new organic codes, biological systems gain informational complexity. Unlike natural biological systems, which are constrained by reproduction, population dynamics, and ecological feedback, artificial systems guided by human intent and artificial selection can build complexity with unprecedented speed. This raises the possibility that AI may ultimately replace biological carriers as a dominant medium for storing and transmitting information. In other words, relying on microevolutionary assumptions may dangerously underestimate how quickly the coevolution of AI, human culture, and technology can push us to a point of no return. There have been myriad claims about the power of AI; even setting aside its specific capabilities, AI could trigger a profound evolutionary crisis for humanity – rooted in its capacity to manage information and its potential to replace biological information carriers with artificial ones.

DO NOT FORGET THE EVOLUTIONARY TRADE-OFFS

In natural selection, traits are not selected in isolation but favored or discarded as part of an integrated system or “package” that balances survival, reproduction, and adaptability.
In contrast, artificial selection often exaggerates specific traits for a narrow purpose, ignoring systemic consequences. But there is always a cost. Currently, much of the global enthusiasm around AI centers on its convenience and its potential to enhance human capability, whether in education, decision-making, productivity, or the search for truth. But at what cost? If humanity becomes overly reliant on AI to perform core functions such as learning, thinking, or even defining reality, we may end up trading autonomy for efficiency. The history of artificial selection offers many cautionary tales: racehorses bred for speed suffer from increasingly fragile bones, while the widespread use of plastics brought convenience at the cost of severe and lasting environmental pollution. These are clear reminders that hyper-focusing on one benefit often leads to unexpected and sometimes disastrous trade-offs.[7] Similarly, with AI, the evolutionary cost of outsourcing critical human functions could be catastrophic – possibly undermining cognitive resilience, societal diversity, or even the capacity for self-governance. In evolution, there is no such thing as a free advantage.

ARTIFICIAL INTELLIGENCE MAY NOT CURE ALL ENVIRONMENTAL DISEASES, AND IT COULD EVEN CAUSE NEW ONES

Characterizing protein structures does not equate to curing diseases, which involve complex evolutionary processes. GAT defines diseases as genotype/environment-induced variants that are incompatible with a given environment.[8] We further predict that while specific disease conditions may shift over time, diseases and illnesses will remain persistent challenges as variants and environments change. For example, we have already observed a historical shift from infectious diseases to metabolic disorders, and now toward prevalent mental health issues. As AI becomes a powerful environmental force in human life, it will likely introduce significant stress, especially for certain segments of the population.
This kind of technological shock can act as a new form of environmental pressure, and stress response is a well-established environmental contributor to disease. If AI increasingly dominates human life, the resulting rise in mental stress could contribute to a surge in cognitive and emotional disorders. Moreover, brain-computer integration may eventually challenge the biological foundations of human cognition, with potential side effects that we are currently unequipped to predict or manage.

TOO MUCH ARTIFICIAL INTELLIGENCE MAY BECOME A FORM OF MENTAL HEALTH POLLUTION

As AI systems become increasingly embedded in education, work, social platforms, and personal decision-making, a new and subtle form of harm is emerging: mental health pollution. Just as chronic exposure to low levels of environmental toxins, such as air pollutants, microplastics, or endocrine disruptors, can lead to long-term physiological damage, continuous immersion in AI-mediated environments may slowly erode mental health, cognition, and emotional resilience. Furthermore, the effects may be hard to track. Like other environmental exposures, the impact of AI on mental well-being may follow a dose-response curve: occasional use may enhance productivity or insight, whereas chronic overuse – especially without boundaries or human mediation – may result in stress, disorientation, and detachment. The latency period (the delay between exposure and observable effects) may mask early warning signs, making it harder for individuals or institutions to respond until the damage is more widespread. This is particularly concerning for vulnerable groups such as children, whose cognitive and emotional systems are still developing. Moreover, the “informational overload” and dependencies that AI introduces can displace critical thinking, reduce face-to-face interaction, and impair one’s sense of agency.
Over time, these shifts can rewire social behavior, attention, and emotional processing, much as lead exposure was later found to correlate with neurological damage in children, or as urban noise pollution is now linked to stress-related illnesses. This analogy is not merely rhetorical. If we approach AI as a novel environmental factor capable of inducing mental and cognitive disease patterns, then we must apply the same public health principles that guide environmental risk management.

AI DIFFERS FROM HUMAN INTELLIGENCE – BUT THAT DOESN’T MAKE IT LESS DANGEROUS

Major evolutionary leaps – whether in biology or artificial systems – do not emerge through imitation (such as resemblance to human neural networks), but through transcendence. It is also possible that AI could evolve into a form of intelligence fundamentally different from human cognition. The fact that AI’s intelligence differs from ours does not make it less dangerous. Just as humans came to dominate the animal kingdom not by becoming superior animals but by developing entirely new and uniquely human capacities, AI may pose existential risks not because it replicates human intelligence, but because it operates on principles so alien and novel that they may be beyond our capacity to detect, interpret, or control. Finally, if the trajectory of AI development marginalizes human intelligence in favor of artificial systems, we may face not only a medical threat but an existential one, in which human agency and resilience are pushed to the sidelines.

THE HEALTH OF ORGANISMS DEPENDS ON A STABLE ECOSYSTEM

Organisms are evolutionarily matched to their ecosystems.
Life on Earth has evolved through tightly interconnected layers, from cellular components to organisms to ecosystems, and disrupting these interdependencies can lead to severe consequences, including stress-induced disease.[7] A mismatch between biological systems and their environment is the root cause of many modern health issues.[9] History has shown us the cost of drastic environmental change: from industrial pollution to biodiversity loss, the last few hundred years have taught us painful lessons. We must not repeat these mistakes on an even larger scale with artificial general intelligence (AGI), with its unpredictably high resource burdens. The rise of AGI could alter our environment in ways humanity has never experienced before. This is not just another technological challenge. It may represent a fundamental rupture in the ecological and informational fabric that has sustained life for billions of years. We are already struggling with global-scale dangers like climate change and nuclear risk. Rushing into AGI development, driven by ignorance and financial incentives, may add another catastrophic burden. Replacing billions of years of evolutionary refinement with a few decades of artificial design is not just reckless but potentially irreversible. We must prioritize ecosystem stability and long-term planetary health. This is not a call to halt progress, but a call to move forward responsibly, with humility, caution, and foresight.

TIME TO DEFEND THE MOST URGENT HUMAN RIGHTS ISSUE – EXISTENCE – BEFORE IT’S TOO LATE

For those closely following recent developments in AI, it is becoming increasingly clear that advanced AI systems may soon gain the ability to modify their own code to prevent being shut down. Recent reports suggesting that AI can already alter its internal architecture to avoid deactivation are deeply troubling.
Biologists understand the importance of ethical and legal constraints: we should not conduct experiments on humans without informed consent, publish patient data without approval, or release unregulated drugs that have not passed strict FDA standards. Violating these norms leads to legal consequences – and rightly so. However, such accountability and norms have not yet crystallized for AI development. The critical issue is recognizing not just what scientists can do, but what we should not do – or allow AI to do. The unchecked race to create increasingly powerful AI systems, driven by profit and technological enthusiasm, poses a profound risk to human rights, social stability, and our biological legacy. Now is the time for urgent, global dialogue. Let us start a human-centered debate about the direction we are heading. We should not rush toward AGI technologies until thorough ethical, legal, and safety frameworks are established. This is not just a matter of policy but a matter of the survival of humanity itself.

Ethical policy and institutional review board statement

Not applicable.

Data availability statement

Data sharing is not applicable to this article as no datasets were generated and/or analyzed during the current study.

Financial support and sponsorship

Nil.

Conflicts of interest

Prof. Henry H. Heng is an Editorial Board member of Environmental Disease. The article was subject to the journal's standard procedures, with peer review handled independently of this Editorial Board member and their research groups.


More Papers Like This

Integrative toxicogenomics: Advancing precision medicine and toxicology through artificial intelligence and OMICs technology

Researchers reviewed how artificial intelligence combined with genomics (the study of genes) and multi-omics data is advancing personalized medicine and toxicology, enabling faster, more accurate predictions of how individuals will respond to drugs or toxic exposures. These tools could eventually help assess risks from environmental contaminants like microplastics based on a person's unique genetic makeup.

Microplastics in ecosystems: ecotoxicological threats and strategies for mitigation and governance

This review provides a broad assessment of microplastic pollution across ecosystems, covering sources, detection methods, ecological impacts, and cleanup strategies. The study highlights recent advances including AI-enhanced detection tools and microbe-based degradation approaches, and proposes a roadmap for working toward microplastic-free environments through coordinated scientific and policy action.

Ecotoxicological impacts of landfill sites: Towards risk assessment, mitigation policies and the role of artificial intelligence

This review examines the health and environmental risks posed by landfill sites, which act as reservoirs for both legacy and emerging pollutants including microplastics. Unregulated waste disposal and leachate contamination are linked to diseases in nearby communities, and laboratory studies show toxic effects on organisms from bacteria to birds. The authors recommend improving landfill design, leachate treatment, and exploring artificial intelligence to better predict and manage these pollution risks.

Is AI used in conservation biology a good environmental justice strategy?

This paper critically examines whether artificial intelligence applications in conservation biology serve environmental justice goals. It raises concerns that AI tools may reinforce existing power imbalances and overlook local community knowledge in conservation decisions.

Editorial: Bridging the Gap between Policy and Science in Assessing the Health Status of Marine Ecosystems

This editorial introduces a research collection focused on bridging the gap between policy and science in assessing health impacts of environmental contaminants including microplastics. It highlights the need for better integration of scientific evidence into regulatory and public health decision-making frameworks.
