We have already analyzed the existential risks to humanity arising from other threats; in this article we address the risks posed by artificial intelligence (AI).
In 1965, the computer scientist I. J. Good raised the possibility of creating an “ultra-intelligent machine”, which “could design even better machines” and would lead to an “explosion of intelligence”, leaving human intelligence far behind. In 1993, the computer scientist and science fiction writer Vernor Vinge framed this possibility as a coming “technological singularity” and predicted that “within thirty years, we will have the technological means to create superhuman intelligence.”
Most people, including elites and political decision-makers, still conceive of AI as just another technological instrument, not realizing that these systems are autonomous agents able to learn, adapt and evolve on their own, make decisions and generate new ideas without direct human intervention.
The stated goal of many leading AI companies is the development of general AIs and, above all, superintelligent AIs that can significantly outperform humans in virtually all cognitive tasks.
In March 2023, the Future of Life Institute (FLI) issued an open letter, signed by a large number of experts and public figures, asking AI companies to “pause large-scale AI research.” The underlying concern was: “Should we develop non-human minds that could eventually surpass us, make us obsolete, and replace us? Should we risk losing control of our civilization?” Two months later, hundreds of prominent figures signed a one-sentence statement declaring that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” On October 22, 2025, the FLI issued a new report on the risks of developing superintelligent AI.
Policymakers tend to view these concerns as exaggerated and speculative. Despite the focus on AI safety at international AI summits in 2023 and 2024, this year’s AI Action Summit in Paris largely set those concerns aside. It is important that policymakers, policy analysts and AI researchers devote more time and energy to the risks underlying AI development. And it is crucial that policymakers in the major powers understand the nature of the existential threat and recognize that, as we move toward AI systems that surpass human intelligence, steps must be taken to keep AI under control and to adopt measures that protect human security.