July 8, 2025. The rapid advancements in artificial intelligence are pushing humanity closer to a technological singularity, a hypothetical future point at which technological progress becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization. Dr. Alan D. Thompson, a prominent AI expert, posits that this transformative event could occur as early as mid-2025, with early indicators already manifesting through groundbreaking AI discoveries.
Dr. Thompson’s bold prediction is rooted in the accelerating progress of Artificial General Intelligence (AGI) and the nascent stages of Artificial Super Intelligence (ASI). According to his analysis, humanity is already 94% of the way to achieving AGI, and the initial steps towards unlocking ASI are now being observed. This sentiment is echoed by figures within the industry, such as Aravind Srinivas, CEO of Perplexity, who recently tweeted that the focus of AI development should now decisively shift from AGI to the even more powerful realm of ASI.
To track this unprecedented progress, Dr. Thompson has developed an “ASI checklist,” a comprehensive framework outlining 50 key indicators of Artificial Super Intelligence. While none of these indicators have been fully realized, several are reportedly in advanced prototype stages, marked in yellow on his checklist to signify tangible progress towards these ambitious goals.
The past year has seen a flurry of remarkable AI-driven inventions and discoveries that lend credence to Dr. Thompson’s assessment. Microsoft’s “Discovery” platform stands out, having invented a novel non-PFAS coolant. This breakthrough is not merely an incremental improvement but a significant leap, directly contributing to several points on the ASI checklist, including “first new discovery,” “first new physical invention,” and “novel computing materials discovered.” It demonstrates AI’s capacity to innovate in fundamental scientific and engineering domains.
Beyond coolants, Microsoft’s AI has also made strides in material science, screening over 32 million potential candidates to identify a superior battery solution. This intensive computational search led to the discovery of a solid-state electrolyte candidate that remarkably uses 70% less lithium, promising more sustainable and efficient energy storage solutions. Such an achievement highlights AI’s ability to accelerate research and development cycles that would take human scientists decades to complete.
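The screening approach described above can be illustrated with a minimal sketch: a staged filter funnel that narrows a large pool of candidate materials down to the most promising one. The property names, thresholds, and scoring here are purely illustrative assumptions, not Microsoft’s actual pipeline or data.

```python
import random

random.seed(0)

# Hypothetical candidates: (id, predicted_stability, predicted_conductivity,
# lithium_fraction). The real search covered roughly 32 million materials;
# we use 100,000 random stand-ins here.
candidates = [
    (i, random.random(), random.random(), random.uniform(0.1, 0.5))
    for i in range(100_000)
]

# Stage 1: keep only candidates predicted to be chemically stable.
stable = [c for c in candidates if c[1] > 0.9]

# Stage 2: of those, keep only good ionic conductors.
conductive = [c for c in stable if c[2] > 0.9]

# Stage 3: prefer the lowest-lithium composition, mirroring the goal of
# a solid-state electrolyte that uses far less lithium.
best = min(conductive, key=lambda c: c[3])

print(f"{len(candidates)} screened -> {len(stable)} stable -> {len(conductive)} conductive")
print(f"top candidate id={best[0]}, lithium fraction={best[3]:.2f}")
```

The point of the funnel design is that each cheap filter discards the bulk of the search space, so that only a tiny fraction of candidates ever reaches the expensive final evaluation, which is what makes screening tens of millions of materials tractable.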
Another compelling example comes from the realm of theoretical physics. An AI model, OpenAI’s o3-mini-high, assisted a researcher at Brookhaven National Laboratory in finding novel exact solutions to a complex physical model. This collaboration between human intellect and AI demonstrates the latter’s potential to augment human scientific inquiry, pushing the boundaries of understanding in highly specialized fields.
Perhaps one of the most intriguing developments is Google’s “AlphaEvolve,” an evolutionary coding agent powered by Gemini 2.0 models. Designed as a general-purpose system for scientific and engineering tasks, AlphaEvolve has already demonstrated astonishing capabilities. It has improved the scheduling in Borg, Google’s cluster management system, recovering an average of 0.7% of Google’s worldwide compute resources. Furthermore, AlphaEvolve has shown its prowess in hardware optimization, improving AI chips (TPUs) and streamlining the intricate training processes for Gemini models. Most remarkably, this AI system managed to improve on a matrix-multiplication algorithm that human efforts had left unimproved for more than half a century, showcasing its ability to surpass human ingenuity in complex problem-solving.
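AlphaEvolve’s internals are not public, but the core loop of an evolutionary search — propose variants, score them, keep the best, repeat — can be sketched in a few lines. The fitness function and mutation scheme below are toy stand-ins (finding a hidden numeric target), not Google’s system, which evolves program code rather than numbers.

```python
import random

random.seed(42)

TARGET = 3.14159  # hidden optimum the search should converge towards

def fitness(x):
    # Score a candidate: higher is better, peaking at the hidden target.
    return -abs(x - TARGET)

def mutate(x):
    # Propose a small random variation of a parent candidate.
    return x + random.uniform(-0.5, 0.5)

# Start from a random population of candidate solutions.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(200):
    # Rank candidates by fitness and keep the top half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Refill the population with mutated copies of the parents.
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=fitness)
print(f"best candidate after 200 generations: {best:.4f}")
```

Because the best candidates are carried over unchanged each generation (elitism), the search can only improve or hold steady, and after a few hundred generations it typically lands very close to the optimum. AlphaEvolve applies the same propose-score-select loop, but with an LLM generating code mutations and automated evaluators scoring the resulting programs.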
However, the rapid ascent of AI is not without its profound philosophical and societal implications. Consider the controversial “Rapture” theory, a concept reportedly discussed by prominent AI researcher Ilya Sutskever. In 2023, Sutskever is said to have suggested that OpenAI might require a “bunker” before releasing AGI, driven by the belief that such immensely powerful technology would become a target for governments worldwide, necessitating protection for the core scientists involved. The “Rapture” in this context draws a parallel to the Christian theological concept of believers ascending to heaven, leaving others behind during a period of tribulation. This raises a critical question: would this AI-driven “rapture” be a positive or negative event, and what would it mean for those “left behind”?
The potential future implications of ASI are further illuminated by Max Tegmark’s 2017 book, “Life 3.0.” Tegmark’s work predicts an “astonishing tech boom” fueled by ASI, leading to revolutionary products and an unprecedented acceleration of scientific discoveries. This could see AI models overwhelming patent offices with a deluge of inventions and ultimately dominating the technological landscape. Dr. Thompson’s ASI checklist also includes future milestones such as the widespread deployment of fully autonomous humanoids in workplaces and homes, the potential elimination of crime through Universal Basic Income (UBI) and improvements in mental wellness, and the comprehensive resolution of mental health conditions.
As AI continues its relentless march forward, the discussions around its potential, its risks, and its ethical implications become ever more critical. The insights from Dr. Thompson and the tangible advancements showcased by companies like Microsoft and Google underscore that the singularity might not be a distant sci-fi fantasy but a rapidly approaching reality, demanding careful consideration and proactive planning from humanity.