AI May Be to Blame for Our Failure to Make Contact with Alien Civilizations
In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years. That's roughly the time between our becoming able to receive and broadcast interstellar signals (1960) and the estimated emergence of ASI on Earth (2040). This is alarmingly short when set against the cosmic timescale of billions of years.
This estimate, when plugged into optimistic versions of the Drake equation – which attempts to calculate the number of active, communicative extraterrestrial civilizations in the Milky Way – suggests that, at any given time, there are only a handful of intelligent civilizations out there. Moreover, like us, their relatively modest technological activities could make them quite challenging to detect.
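To see why such a short lifetime is so limiting, consider a minimal sketch of the Drake equation with L set to 100 years. The values chosen for the other factors are deliberately optimistic, purely illustrative assumptions made for this sketch – they are not figures from the research itself:

```latex
% Drake equation: expected number of detectable, communicative civilizations.
% All factor values below are illustrative, optimistic assumptions for this
% sketch only; L = 100 yr reflects the short lifetime discussed above.
\[
N = R_{*}\, f_{p}\, n_{e}\, f_{l}\, f_{i}\, f_{c}\, L
\]
\[
N \approx \underbrace{1\ \mathrm{yr}^{-1}}_{R_{*}} \times
          \underbrace{1}_{f_{p}} \times
          \underbrace{0.2}_{n_{e}} \times
          \underbrace{1}_{f_{l}} \times
          \underbrace{0.5}_{f_{i}} \times
          \underbrace{0.5}_{f_{c}} \times
          \underbrace{100\ \mathrm{yr}}_{L}
  \approx 5
\]
```

Even with every other factor set generously, a communicative lifetime of only a century keeps N in the single digits – which is why such civilizations would be so hard to find.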
Wake-Up Call
This research is not simply a cautionary tale of potential doom. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.
This is not just about preventing the malevolent use of AI on Earth; it’s also about ensuring the evolution of AI aligns with the long-term survival of our species. It suggests we need to put more resources into becoming a multiplanetary society as soon as possible – a goal that has lain dormant since the heady days of the Apollo project, but has lately been reignited by advances made by private companies.
As the historian Yuval Noah Harari noted, nothing in history has prepared us for the impact of introducing non-conscious, super-intelligent entities to our planet. Recently, the implications of autonomous AI decision-making have led prominent leaders in the field to call for a moratorium on the development of AI until a responsible form of control and regulation can be introduced.
But even if every country agreed to abide by strict rules and regulations, rogue organizations would be difficult to rein in.
The integration of autonomous AI into military defense systems must be an area of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because they can carry out useful tasks far more rapidly and effectively without human intervention. Governments are therefore reluctant to regulate in this area given the strategic advantages AI offers, as has been recently and devastatingly demonstrated in Gaza.
This means we already edge dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law. In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be obliterated.
Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization, or succumb to the challenges posed by our own creations.
Using Seti as a lens through which we can examine our future development adds a new dimension to the discussion on the future of AI. It is up to all of us to ensure that when we reach for the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope – a species that learned to thrive alongside AI.
Michael Garrett is Sir Bernard Lovell chair of Astrophysics and Director of Jodrell Bank Centre for Astrophysics, University of Manchester. This article is published courtesy of The Conversation.