AI-prompted apocalypse? Why humanity will endure

By Life In Humanity Analysis Desk

In an era where artificial intelligence promises both unprecedented progress and potential catastrophe, a single question haunts scientists, ethicists, and policymakers alike: could AI truly end humanity? From the doomsday warnings of researchers Eliezer Yudkowsky and Nate Soares, whose new book chillingly declares “If Anyone Builds It, Everyone Dies,” to the cautionary forecasts of Geoffrey Hinton and Google DeepMind, the specter of an intelligence surpassing our own has captured global attention.

Yet amid these fears, Life In Humanity offers a perspective both sobering and steadfast: humanity’s story may be far from over, despite such dire predictions. Guided by millennia-old prophecies and the unyielding adaptability of our species, this analysis explores why, even in the face of super-intelligent AI, the complete annihilation of humankind remains not merely improbable but fundamentally impossible.

AI’s unpredictable power and the limits of human control

Eliezer Yudkowsky. “Illustration by TIME; reference image courtesy of Eliezer Yudkowsky,”—Time. He is one of the most prominent voices warning about the dangers of artificial general intelligence (AGI), and is co-founder and senior researcher at the Machine Intelligence Research Institute (MIRI), a nonprofit dedicated to ensuring that future superintelligent AI is aligned with human values. Born in the United States in 1979, he is self-taught, with no formal higher education, and has been writing about AI alignment, rationality and existential risk since the early 2000s.

“For AI researchers Eliezer Yudkowsky and Nate Soares, authors of the new, unambiguously titled book If Anyone Builds It, Everyone Dies, it’s time to freak out about the technology,” reports the news website Semafor in its 12 September 2025 article “Researchers give doomsday warning about building AI too fast”.

“Humans have a long history of not wanting to sound alarmist,” Yudkowsky said in an interview with Semafor before the book’s publication scheduled for this week. “Will some people be turned off? Maybe. Someone, at some point, just has to say what’s actually happening and then see how the world responds.”

Semafor further reports “What is happening, according to the book, is that most of the big technology companies and AI startups like Anthropic and OpenAI are building software they don’t understand (the authors argue it’s closer to alchemy than science).

At some point, if these firms continue along their current path, large language models will grow powerful enough that one of them will break free from human control. Before we even realize what’s happening, humanity’s fate will be sealed and the AI will devour earth’s resources to power itself, snuffing out all organic life in the process.”

The debate around artificial intelligence often balances optimism with caution. Yet some voices push the conversation to its most extreme edge, insisting that even cautious development is unacceptable. Semafor points out “With such a dire and absolute conclusion, the authors leave no room for nuance or compromise. Building the technology more slowly, or building something else, isn’t put forth as an option. Even companies like Safe Superintelligence, started by former OpenAI executive Ilya Sutskever, should shut down, according to Yudkowsky and Soares.”

The Cambridge Dictionary defines “alchemy” as a type of chemistry, especially in the Middle Ages, that focused on trying to find a way to change ordinary metals into gold and on trying to find a medicine that would cure any disease. The dictionary also defines the word in the literary sense as a process so effective that it seems like magic.

The context in which the word has been used by the authors corresponds to the first definition, signifying a process that appears mysterious, not fully understood, and bordering more on magical transformation than on established scientific method.

Nate Soares, the President of MIRI—Machine Intelligence Research Institute. Credit: Minding Our Way.

The comparison underscores the authors’ skepticism: just as alchemists never succeeded in turning ordinary metals into gold, today’s AI may fall short of the ambitions placed upon it, instead engendering results never pursued, such as an AI turning on humans and exterminating them all. Similarly, the quest for a universal cure remains elusive, reinforcing the notion that some goals, however technologically or scientifically ambitious, can remain beyond reach.

The alchemists’ pursuits of the Middle Ages, a period spanning roughly 500 AD to 1500 AD, remain unrealized even today. Despite tremendous advancements in medicine, no institution can claim to have discovered a cure for every disease. By invoking “alchemy”, the authors highlight not just the mystery surrounding AI development, but also the limits of human mastery over complex systems.

Understanding the inner workings of advanced AI is reportedly becoming increasingly elusive, even for the experts who build it. Yudkowsky and Soares illustrate this complexity through parables, highlighting how AI can behave in ways that defy current human comprehension and control. Semafor states “In attempting to make these concepts relatable to a broad audience, Yudkowsky and Soares use a series of parables to illustrate their logic, which leads to this: deep within the billions of neurons that control large language models, for reasons computer scientists can’t currently grasp, something is happening to make the models behave in unintended ways.

AI companies deal with this problem by ‘aligning the models’ with a series of techniques, from reinforcement learning to fine-tuning to system prompts. At some point, those techniques will no longer work, the authors argue, and an AI model will grow powerful enough to ignore those instructions, pursuing a different agenda that we can’t predict.

The world has become so networked, so computerized, that there is no possible ‘kill switch’ that could stop a rogue AI model. Even if a data center were buried in a vault and air-gapped, AI models would find ways to manipulate humans into unknowingly (and maybe even knowingly, in some cases) providing an escape route.”
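For readers unfamiliar with the “system prompts” mentioned in the passage above, here is a minimal, purely illustrative sketch in Python; the message structure mirrors the conversation format commonly used by chat models, and none of it is drawn from the book or the Semafor article.

```python
# Illustrative only: a system prompt is a standing instruction prepended to every
# conversation, one of the alignment levers described in the quoted passage.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant. Refuse requests that could cause harm."},
    {"role": "user",
     "content": "Summarize today's AI safety news."},
]

# A chat model receives the whole list; the system message is meant to steer its
# behaviour, though, as Yudkowsky and Soares argue, such instructions offer no
# guarantee once a model grows capable enough to ignore them.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```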

AI is rapidly reshaping the world, but its promises come with profound risks. From potentially catastrophic economic inequality to discrimination, the dangers are prompting even the experts who create the technology to sound urgent warnings. Semafor states “Besides its safety implications, AI will also have negative economic consequences for most people globally, computer scientist Geoffrey Hinton recently predicted: ‘It’s going to create massive unemployment and a huge rise in profits.

‘It will make a few people much richer and most people poorer’. Discrimination is a bigger threat to humanity than the potential for a mass extinction event, then-European Commissioner for Competition Margrethe Vestager argued in 2023.”

According to Semafor, just a small number of AI researchers predict human extinction by AI. “Only 5% of AI researchers believe the technology will lead to human extinction, based on a survey of nearly 3,000 of them conducted by the research project AI Impacts in late 2023.”

AI. Credit: Pexels/ Tara Winstead.

The debate over AI’s potential risks remains far from settled, with thoughtful critics highlighting limitations and imperfections inherent in any intelligent system. Semafor says “Room for Disagreement. This essay published on Medium by an anonymous, self-described computational physicist gives a comprehensive list of arguments against Yudkowsky’s predictions of AI dominance: ‘It’s true that an AI could correct its own flaws using experimentation. This cannot lead to perfection, however, because the process of correcting itself is also necessarily imperfect.

For example, an AI Bayesian who erroneously believes with ~100% certainty that the earth is flat will not become a rational scientist over time, they will just start believing in ever more elaborate conspiracies. For these reasons, I expect AGI [Artificial General Intelligence] to be flawed, and especially flawed when doing things it was not originally meant to do, like conquer the entire planet.’”

AI, extinction, human resilience—a RAND scientist’s lens

Scientific American, founded in 1845, is the oldest continuously published magazine in the U.S. and has released articles by more than 200 Nobel Prize winners. It reports on science, health, technology, the environment and society. It ran a 6 May 2025 opinion headlined “Could AI Really Kill Off Humans?”, written by Michael J.D. Vermeer.

Vermeer is a senior physical scientist at RAND whose science and technology policy research relates to homeland security, criminal justice, the intelligence community and the armed forces. His specialty lies in assessing the opportunities and security risks associated with emerging technologies.

He begins his opinion with these words: “Many people believe AI will one day cause human extinction. A little math tells us it wouldn’t be that easy.” He goes on, saying “In a popular sci-fi cliché, one day artificial intelligence goes rogue and kills every human, wiping out the species.

Could this truly happen? In real-world surveys, AI researchers say that they see human extinction as a plausible outcome of AI development. In 2024 hundreds of these researchers signed a statement that read: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’”

While fears of AI-driven human extinction capture headlines, some experts urge a more measured perspective. According to Vermeer, the RAND Corporation scientist, tangible threats like pandemics and nuclear war remain far more immediate, and his team’s research hypothesis suggests that humanity’s adaptability makes an AI apocalypse highly unlikely. “Pandemics and nuclear war are real, tangible concerns, more so than AI doom—at least to me, a scientist at the RAND Corporation, where my colleagues and I do all kinds of research on national security issues. My co-workers and I take big threats to humanity seriously, so I proposed a project to research AI’s potential to cause human extinction.

Michael J. D. Vermeer. “Senior Physical Scientist,”—RAND.

My team’s hypothesis was this: no scenario can be described in which AI is conclusively an extinction threat to humanity. Humans are simply too adaptable, too plentiful and too dispersed across the planet for AI to wipe us out with any tools hypothetically at its disposal. If we could prove this hypothesis wrong, it would mean that AI might pose a real extinction risk.”

As fears of AI-fueled catastrophe intensify, experts are grappling with both extreme and nuanced scenarios. The RAND research team took a rigorous, skeptical approach—setting out not to ask whether AI would destroy humanity, but whether it could, analyzing its potential interaction with nuclear war, pandemics, and climate change. “Many people are assessing catastrophic hazards related to AI. In the most extreme cases, some people assert that AI will become a superintelligence with a near-certain chance of using novel, advanced tech such as nanotechnology to take over Earth and wipe us out.

Forecasters have tried to estimate the likelihood of existential risk from an AI-induced disaster, often predicting there is a 0 to 10 percent chance that AI will cause humanity’s extinction by 2100. We were skeptical of the value of predictions like these for policymaking and risk reduction. Our team consisted of a scientist (me), an engineer and a mathematician. We swallowed our AI skepticism and—in very RAND-like fashion—set about detailing how AI could actually cause human extinction,” says Vermeer.

“A simple global catastrophe or societal collapse was not enough for us. We were trying to take the risk of extinction seriously, which meant we were interested only in a complete wipeout of our species. We weren’t trying to find out whether AI would try to kill us; we asked only whether it could succeed in such an attempt. It was a morbid task. We went about it by analyzing exactly how AI might exploit three major threats commonly perceived as existential risks: nuclear war, biological pathogens and climate change.”

Even in a worst-case scenario, AI-triggered nuclear war is unlikely to wipe out humanity entirely, thanks to our sheer numbers and global dispersion, according to the RAND scientist. “It turns out it will be very hard—though not completely out of the realm of possibility—for AI to get rid of us all. The good news, if I can call it that, is that we don’t think AI could eliminate humans by using nuclear weapons. Even if AI somehow acquired the ability to launch all of the 12,000-plus warheads in the nine-country global nuclear stockpile, the explosions, radioactive fallout and resulting nuclear winter would most likely still fall short of causing an extinction-level event.

AI. Photo from Pexels/ This Is Engineering.

Humans are far too plentiful and dispersed for the detonations to directly target all of us. AI could detonate weapons over the most fuel-dense areas on the planet and still fail to produce as much ash as the meteor that wiped out the dinosaurs, and there are not enough nuclear warheads in existence to fully irradiate all the planet’s usable agricultural land. In other words, an AI-initiated nuclear Armageddon would be cataclysmic, but it would probably not kill every human being; some people would survive and have the potential to reconstitute the species.”

Vermeer stops short of predicting any event that would annihilate all of humanity, even as his team deemed pandemics a plausible extinction threat. “We did deem pandemics a plausible extinction threat. Previous natural plagues have been catastrophic, but human societies have soldiered on. Even a minimal population (a few thousand people) could eventually revive the species. A hypothetically 99.99 percent lethal pathogen would leave more than 800,000 humans alive. We determined, however, that a combination of pathogens probably could be designed to achieve nearly 100 percent lethality, and AI could be used to deploy such pathogens in a manner that assured rapid, global reach.

The key limitation is that AI would need to somehow infect or otherwise exterminate communities that would inevitably isolate themselves when faced with a species-ending pandemic. Finally, if AI were to accelerate garden-variety anthropogenic climate change, it would not rise to an extinction-level threat. We would seek out new environmental niches in which to survive, even if it involved moving to the planet’s poles. Making Earth completely uninhabitable for humans would require pumping something much more potent than carbon dioxide into the atmosphere.”
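Vermeer’s survivor figure follows from simple arithmetic. A minimal sketch, assuming a world population of roughly 8 billion (a figure not stated in the quoted passage):

```python
# Rough arithmetic behind the "more than 800,000 humans alive" figure.
# The world population value is an assumption, not taken from Vermeer's piece.
world_population = 8_000_000_000   # assumed current world population
lethality = 0.9999                 # hypothetical 99.99 percent lethal pathogen

survivors = world_population * (1 - lethality)
print(f"Estimated survivors: {survivors:,.0f}")  # prints roughly 800,000
```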

Vermeer notes that extremely powerful greenhouse gases exist that could, in principle, threaten humanity on a global scale. In a worst-case scenario, according to him, if AI were to bypass international safeguards and produce these gases at industrial levels, these super-persistent chemicals could render Earth uninhabitable for humans. “The bad news is that those much more powerful greenhouse gases exist.

They can be produced at industrial scales, and they persist in the atmosphere for hundreds or thousands of years. If AI were to evade international monitoring and orchestrate the production of a few hundred megatons of these chemicals (an amount that is less than the mass of plastic that humans produce every year), it would be sufficient to cook Earth to the point where there would be no environmental niche left for humanity.”

Vermeer adds that AI-induced human extinction is not something that could happen by accident, highlighting that executing such a scenario would be extraordinarily complex. He further states that their analysis shows a hypothetical superevil AI would need precise goals, control over critical physical systems, the ability to manipulate humans, and the capacity to operate independently long enough to complete the task.

He explains “To be clear: none of our AI-initiated extinction scenarios could happen by accident. Each would be immensely challenging to carry out. AI would somehow have to overcome major constraints. In the course of our analysis, we also identified four things that our hypothetical superevil AI would require to wipe out humankind: it would need to somehow set an objective to cause extinction.

It also would have to gain control over the key physical systems that create the threat, such as the means to launch nuclear weapons or the infrastructure for chemical manufacturing. It would need the ability to persuade humans to help and hide its actions long enough for it to succeed. And it would have to be able to carry on without humans around to support it, because even after society started to collapse, follow-up actions would be required to cause full extinction.”

When humanity meets technology (AI): a delicate meeting between two worlds that many voices warn against. Picture sourced from Pexels/Cottonbro.

The debate over artificial intelligence often swings between two extremes—unbridled optimism about its potential and deep anxiety about its risks. This passage cuts to the heart of that tension, weighing the possibility of extinction against humanity’s unwillingness to abandon the benefits AI promises. “Our team concluded that if AI did not possess all four of these capabilities, its extinction project would fail. That said, it is plausible that someone could create AI with all these capabilities, perhaps even unintentionally. Developers are already trying to build agentic, or more autonomous, AI, and they’ve observed AI that has the capacity for scheming and deception.

But if extinction is a possible outcome of AI development, doesn’t that mean we should follow the precautionary principle and shut it all down because we’re better off safe than sorry? We say the answer is no. The shut-it-down approach makes sense only if people don’t care much about the benefits of AI. For better or worse, people do care a great deal about the benefits it is likely to bring, and we shouldn’t forgo them to avoid a potential but highly uncertain catastrophe, even one as consequential as human extinction,” points out Vermeer.

The prospect of AI wiping out humanity is chilling, but the truth is that our species already faces multiple self-inflicted threats, as Vermeer underlines. These words, with which he concludes his opinion, evoke that safeguarding the future isn’t only about AI safety; it also involves addressing nuclear, environmental, and health dangers we’ve long known exist. “So will AI one day kill us all? It is not absurd to say it could. At the same time, our work shows that we humans don’t need AI’s help to destroy ourselves.

One surefire way to lessen extinction risk, whether from AI or some other cause, is to increase our chances of survival by reducing the number of nuclear weapons, restricting globe-heating chemicals and improving pandemic surveillance.  It also makes sense to invest in AI-safety research even if you don’t buy the argument that AI is a potential extinction risk. The same responsible AI-development approaches that can mitigate risk from extinction will also mitigate risks from other AI-related harms that are less consequential but more certain to occur.”

Will AI really result in human extinction?

“Geoffrey Hinton said humans will be like toddlers compared with the intelligence of highly powerful AI systems. Photograph: Pontus Lundahl/TT News Agency/AFP/Getty Images,”—The Guardian.

Other credible sources, including The Guardian quoting AI pioneer Geoffrey Hinton, and Google DeepMind’s own research, also warn that artificial intelligence could one day wipe humanity out. Hinton has raised the probability of human extinction from AI to as high as 20%, while DeepMind predicts AGI might emerge by 2030 with the potential to “permanently destroy humanity.” Yet, despite these grave warnings, Life In Humanity does not expect AI to bring about human extinction, for two essential reasons it will explain below.

Vermeer, as noted, has said that forecasters often predict a 0 to 10 percent chance that AI will result in humanity’s extinction by 2100. The Guardian, however, in its 27 December 2024 story titled “‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years”, reports “Geoffrey Hinton says there is 10% to 20% chance AI will lead to human extinction in three decades, as change moves fast.”

The Guardian continues “The British-Canadian computer scientist often touted as a ‘godfather’ of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is ‘much faster’ than expected. Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a ‘10% to 20%’ chance that AI would lead to human extinction within the next three decades.

Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity. Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: ‘Not really, 10% to 20%.’”

Hinton’s analogy, as reported by The Guardian, underscores the profound challenge humanity faces: if controlling an intelligence greater than our own has almost no precedent, then the belief that we will effortlessly manage super-intelligent AI sounds dangerously naive. By highlighting the rare mother–infant dynamic, Hinton emphasized just how exceptional such cases are, leaving us with the unsettling implication that human control over advanced AI may prove equally elusive. “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before. And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby.”

Hinton stressed how rare it is for something less intelligent to control something more intelligent. As an illustration, he pointed out that while infants can influence their mothers’ behavior, such cases are exceptional, which renders the idea of humans successfully controlling a far smarter AI even more doubtful: “But that’s about the only example [mother and baby] I know of.”

Image credit: Pexels/Kindel Media.

London-born Hinton, a professor emeritus at the University of Toronto, underscored that humans would become like toddlers compared with the intelligence of highly powerful AI systems. “I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds.”

The Guardian, noting that AI can be loosely defined as computer systems performing tasks that typically require human intelligence, reports “Last year, Hinton made headlines after resigning from his job at Google in order to speak more openly about the risks posed by unconstrained AI development, citing concerns that ‘bad actors’ would use the technology to harm others. A key concern of AI safety campaigners is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.”

Reflecting on where he had expected AI development to be when he first started working on the technology, Hinton said, according to The Guardian, “I didn’t think it would be where we [are] now. Because the situation we’re in now is that most of the experts in the field think that sometime, within probably the next 20 years, we’re going to develop AIs that are smarter than people. And that’s a very scary thought.”

Hinton declared that the pace of development was “very, very fast, much faster than I expected” and called for government regulation of the technology. “My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation.”

NDTV published a story on 7 April 2025 featuring Google DeepMind itself forecasting an AI doomsday just five years away. In the story entitled “AI Could Achieve Human-Like Intelligence By 2030 And ‘Destroy Mankind’, Google Predicts”, NDTV says “Human-level artificial intelligence (AI), popularly referred to as Artificial General Intelligence (AGI), could arrive as early as 2030 and ‘permanently destroy humanity’, a new research paper by Google DeepMind has predicted.”

“Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm,” the study highlights, adding that existential risks that “permanently destroy humanity” form clear examples of severe harm.

“In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm.”

“A new research paper by Google DeepMind has predicted the doomsday scenario,”—NDTV.

The paper, co-authored by DeepMind co-founder Shane Legg, doesn’t explain how AGI may cause mankind’s extinction, according to NDTV. It instead addresses preventive measures that Google and other AI companies should implement to reduce AGI’s threat.

The research breaks down the dangers posed by advanced AI into four key groups: misuse, misalignment, mistakes and structural risks.

“In February, Demis Hassabis, CEO of DeepMind stated that AGI, which is as smart or smarter than humans, will start to emerge in the next five or 10 years. He also batted for a UN-like umbrella organisation to oversee AGI’s development,” NDTV highlights.

In his own words, he stated “I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible.

You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects and sort of deal with those. And finally, some kind of supervening body that involves many countries around the world that input how you want to use and deploy these systems. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN.”

Life In Humanity’s stance

Life In Humanity acknowledges that the prospect of developing artificial intelligence to the point where it equals—or even surpasses—human intelligence represents one of the most, if not the most, profound and dangerous threats in human history. The reason is twofold. First, it risks devaluing human life itself, reducing a human being—endowed with dignity and purpose by their Creator—into something that can be replaced or overshadowed by a machine. Second, it undermines the very balance of control and accountability upon which human civilization rests, introducing into the world a force that humanity may neither fully understand nor be able to stop.

Image credit: Pexels/Tara Winstead.

Unlike nuclear weapons, which remain inert until activated by human decision, a super-intelligent AI could act independently, setting its own objectives beyond human control. This makes it potentially more dangerous than even the deadliest arsenal, since nuclear weapons cannot operate themselves, but AI could. In such a scenario, humanity would face not a passive tool, but an autonomous force capable of pursuing its agenda relentlessly.

This justifies why the warnings issued by researchers such as Yudkowsky and Soares, echoed by voices like Geoffrey Hinton and even Google DeepMind’s own study, cannot be dismissed. When experts describe advanced AI as “closer to alchemy than science,” they underscore that its development is not grounded in the transparency and rigor of established science, but in experimentation with consequences humanity may not survive. If AI were to become more intelligent than humans, the dynamic described by Hinton—that less intelligent beings rarely control more intelligent ones—implies that humanity would, in effect, hand over its stewardship of creation to an artificial entity.

For Life In Humanity, this is not merely a technical or scientific concern; it constitutes a civilizational and spiritual one. To elevate machines beyond human intelligence is to risk subordinating human worth, blurring the line between the created and the Creator’s design. In this light, the race to develop super-intelligent AI does not only constitute a grave threat to society; it also represents a profound challenge to the very meaning of being human.

We bring in religion-related aspects here not to teach religion or proselytize, but to draw wisdom from these texts as a framework for interpreting current existential questions—especially since their astonishing, literal fulfillment across centuries makes them impossible to dismiss. Far from being abstract or symbolic alone, these ancient prophecies have repeatedly aligned with real-world historical and technological developments, giving them a unique authority as we wrestle with challenges like artificial intelligence. In this sense, they provide not only spiritual insight but also a profoundly relevant perspective on humanity’s future.

Yet still, Life In Humanity contends that no event shall be able to completely destroy humanity, for two reasons, addressed below as promised, rooted in the Holy Bible—acknowledged by many as infallible truth, though others see it differently.

The first reason is the “Explosion of Knowledge” prophesied by Daniel (Chapter 12:4) nearly 2,500 years ago, a reality now unfolding before our eyes. Nobody can deny that knowledge has expanded beyond human comprehension, fueling innovations such as artificial intelligence itself. This acceleration of knowledge is not random; it forms part of an almost 2,500-year-old divine blueprint that ensures humanity’s story does not end prematurely.

Holy Bible. Photograph found on Pexels/Pixabay.

The second reason lies in the Revelation of John, particularly the prophecy of the “Mark of the Beast.” Our present world already holds the technological capacity to enforce this system of control— a mechanism indissociable from the Beast. If the Bible foretells these milestones in human history, then extinction before their occurrence is simply not possible. Indeed, nowhere in Scripture is it written that humanity will vanish entirely; rather, the texts describe trials, tribulations, and transformations, but not total annihilation.

For those who question biblical authority, a note of caution is in order. Isn’t it both stunning and unsettling to realize that a prophecy spoken nearly 2,500 years ago has come to fulfillment in our own age—especially since the 1700s, when the Industrial Revolution ignited? For thousands of years before the 1500s, humanity lived in nearly the same manner: farming the same way, fighting wars with the same strategies, moving goods and people by the same means, and advancing only in tiny increments. But with the Industrial Revolution, the pace of knowledge erupted, and today we stand in a world radically transformed—exactly as Daniel foresaw.

And what about John’s Revelation, written nearly 2,000 years ago? Is it not astonishing that his prophecy, which speaks of a time when a vast portion of humanity would perish yet not all, could now be fulfilled with the capacities of modern warfare and technology? Here is that prophecy (Revelation 9:13–21), the only place in the Bible foretelling the death of such a great number of humans, yet still sparing mankind from extinction:

“Then the sixth angel sounded: and I heard a voice from the four horns of the golden altar which is before God, saying to the sixth angel who had the trumpet, ‘Release the four angels who are bound at the great river Euphrates.’ So the four angels, who had been prepared for the hour and day and month and year, were released to kill a third of mankind. Now the number of the army of the horsemen was two hundred million; I heard the number of them. And thus I saw the horses in the vision: those who sat on them had breastplates of fiery red, hyacinth blue, and sulfur yellow; and the heads of the horses were like the heads of lions; and out of their mouths came fire, smoke, and brimstone.

By these three plagues a third of mankind was killed—by the fire and the smoke and the brimstone which came out of their mouths. For their power is in their mouth and in their tails; for their tails are like serpents, having heads; and with them they do harm. But the rest of mankind, who were not killed by these plagues, did not repent of the works of their hands, that they should not worship demons, and idols of gold, silver, brass, stone, and wood, which can neither see nor hear nor walk. And they did not repent of their murders or their sorceries or their sexual immorality or their thefts.”

The inevitability of these developments stands as evidence that humanity will endure. While artificial intelligence may threaten livelihoods, dignity, or even the balance of power in society, it cannot erase the very species that these ancient writings show to have a future. In this light, Life In Humanity affirms: no matter how formidable AI becomes—even surpassing nuclear weapons in autonomy and danger—it cannot bring about the extinction of humankind.

More comprehensively, Life In Humanity insists that the Holy Bible establishes a crucial distinction between the instruments of human agency and the sovereign action of the Creator. Scripture consistently portrays God as the ultimate Judge and Sustainer of life; even the final destruction of the wicked/sinners is described as God’s work, not the spontaneous prerogative of anything humanity fashions. Thus, however powerful a created thing may become—even an artificial general intelligence with terrifying capacities—it remains a contingent, dependent entity: it did not bring itself into existence. It requires energy, infrastructure and human-made supply chains to function, and it cannot assume divine authority to bring humanity to its terminal close.

This theological point is also practically persuasive. Prophecies such as Daniel’s vision of an explosion of knowledge and John’s Revelation suggest human continuity long enough for their symbolic milestones to occur. If Scripture foretells events that require ongoing human societies—global systems, instruments of control, and the very “mark” described in Revelation—then those prophecies themselves argue against the notion of an abrupt, total extinction brought about by technology before those markers unfold. In other words: the prophetic script requires humans to still be here for its final acts to be staged, and that is a powerful constraint on any claim that AGI must inevitably erase humanity.

Finally, the moral logic matters. The Bible says God will judge sin and injustice—He will execute final justice in His time and by His means. That truth reframes our response to AI: our responsibility is to act as wise stewards—building safeguards, regulations, and ethical guardrails around AI—while also remembering that we are strongly urged to produce creations that remain under human control, so as to avoid building instruments that could endanger us. Above all, no human invention, however advanced, will ever possess the power to wipe out all humanity. The final destiny of mankind rests in the hands of the Almighty alone, never in the works of our own creation.
