From the Centre for Data Ethics and Innovation (CDEI)
Divination, fortune telling, and oracles have long fascinated mankind. The enduring success of Nostradamus' "The Prophecies" suggests that human beings want reassurance about what tomorrow will bring. Trading visions and crystal balls for data and facts, Dr Bruce Tsai and Dr Ben Smith invite us to dive into a systematic, interdisciplinary and holistic study of possible scenarios of social and technological advancement.
The development of artificial intelligence is advancing at a pace that demands our immediate attention. Current trends suggest we may achieve AI systems capable of replacing remote workers by the early 2030s, potentially as soon as 2027,[i] which may radically increase economic growth. More striking still is the possibility that AI research itself could be automated — imagine millions of AI researchers working at ten times human speed, exponentially accelerating development.
However, significant bottlenecks could constrain this rapid growth. The energy demands alone are staggering, exemplified by the recent restart of a Pennsylvania nuclear plant to power AI data centres. Other limiting factors include the costs of training runs,[ii] chip manufacturing capacity, data availability, and fundamental algorithmic limitations. Finally, it is hard to predict with certainty which problems AI can help solve and which will remain unsolved.
The pace of AI advancement has consistently surprised experts in the field. AI systems now match or exceed top-percentile human performance across many domains,[iii] making it increasingly difficult for researchers to design benchmarks these systems cannot solve. The Center for AI Safety's "Humanity's Last Exam", a project to create the most difficult public AI benchmark in existence, highlights this challenge.
Even the most credentialled skeptics have so far consistently underestimated AI's potential. Yann LeCun, a Turing Award winner, argued in 2022 that getting a text-based model to reason about physical interactions with the real world was "basically hopeless" no matter how powerful the machine, not even "GPT-5000". That exact example (and questions of this nature) is now trivially answerable by GPT-4. On questions about AI progress, forecasters have consistently underestimated the rate at which AI progress happens.[iv]
Increased AI capabilities also come with corresponding risks. One example is that it is difficult to ensure that AIs remain "aligned", i.e. have behaviours that match the intended objectives or principles of their human creators or users. Researchers from MIT Future Tech have created an AI risk repository, which catalogues more than 700 different risks cited in the AI literature and classifies them in different ways.[v] And while AI systems have already jeopardised the health of millions of people in concrete ways, as algorithms become more powerful, and broader in scope and uptake, experts are also raising more speculative concerns.
Geoffrey Hinton, another recipient of the Turing Award, and more recently the Nobel Prize in Physics, resigned from Google in 2023 in order to speak freely on the risks of AI — calling it an existential threat. This was an opinion shared by other AI pioneers, hundreds of AI researchers, chief executives of leading AI companies, and government officials.[vi] Paul Christiano, a pioneer of AI safety, inventor of RLHF, and head of the US AI Safety Institute, believes that there is a ~1 in 5 chance of human extinction within 10 years of human-level AI.[vii] A survey of almost 3000 AI authors asking about a wide range of AI outcomes showed that almost a third of respondents put at least a 10% chance on extreme outcomes as bad as human extinction.[viii]
These numbers seem far too high to ignore, even if these risks are currently seen as speculative. In the commercial airline industry, the chance of boarding a plane that fatally crashes is on the order of 1 in 1-10 million[ix]. Even if Christiano were 100,000 times too pessimistic, the estimated likelihood of these potentially catastrophic risks would still be higher than that of a fatal plane crash, and far more is at stake. In a recent legislative debate about regulating AI, over 100 current and former employees of the top AI labs joined SAG-AFTRA in calling for safeguards against severe risks that the most powerful AI models may soon pose.
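To make that comparison concrete, here is the rough arithmetic behind the claim, using the ~1 in 8 million per-flight figure from note [ix]; these are illustrative order-of-magnitude numbers, not precise risk estimates:

$$
\frac{1/5}{100{,}000} \;=\; \frac{1}{500{,}000} \;=\; 2\times10^{-6}
\qquad \text{vs.} \qquad
\frac{1}{8{,}000{,}000} \;\approx\; 1.3\times10^{-7}
$$

Even after discounting Christiano's estimate by a factor of 100,000, the implied risk is still roughly 15 times the per-flight chance of a fatal crash; with the more conservative 1-in-1-million figure, it is still about twice as high.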
Often when discussing these potential risks from AI, skeptics dismiss them as belonging solely to the realm of science fiction. But AI does not need to be super-intelligent, misaligned, or embodied in order to be a risk. For example, even if you thought AI could pose a threat but were very confident that it would remain aligned, AI would still be joining a very short list of technologies able to pose a threat to civilisation (e.g. nuclear weapons). We simply do not have many of these lying around, and adding one to the category seems to warrant caution even if we were confident about keeping it aligned and controlled. And even if AI never surpassed roughly top human levels of intelligence, and could only take actions via the internet[x] or computers, it would still be very worrying if a group of highly intelligent agents that we could not control, numbering in the many millions (and possibly growing rapidly), had the goal of taking down humanity, even under these constraints.
There is a track record of the tech world being overconfident about how quickly all of the amazing benefits will materialise, and some of this hype contributes to a messier, noisier space that makes it more difficult to evaluate the true benefits and risks of AI and make tradeoffs accordingly. Despite this, it is critical we think about the possible ways AI can continue the trend of many innovations improving wellbeing for humanity as a whole.
In this article, we have drawn inspiration from Dario Amodei (Anthropic co-founder), Leopold Aschenbrenner (former OpenAI employee), Holden Karnofsky (Open Philanthropy co-founder and Director of AI Strategy), and research from Epoch, a top research institute focused on investigating key trends relevant to the trajectory and governance of AI. Some of these authors we find insightful; some we have profound disagreements with and cite mainly because of their influence.[xi] We make no claims to originality.
Biologists have been skeptical of AI's potential due to the field's complexity and the belief that AI can't produce better data—"garbage in, garbage out." However, this perception is changing. Recent Nobel Prizes went to scientists like David Baker, who used AI for protein design, and the developers of AlphaFold, which predicts protein structures from amino acid sequences. As data generation becomes cheaper[xii] and more sophisticated, AI models may revolutionise our understanding of disease processes.
Many breakthroughs in biology come from new techniques or tools,[xiii] and the rate of these discoveries might be largely constrained by intelligence.[xiv] Effective deployment of AI could compress decades of progress into years, potentially leading to reliable prevention and treatment of most infectious diseases, significant reductions in cancer and noncommunicable diseases, and cures for genetic disorders.[xv]
The impact on global health equity remains uncertain. While AI research might initially benefit the wealthy nations that fund it, historical patterns suggest hope for broader benefits. Global health initiatives have achieved remarkable successes, with innovations in healthcare, infrastructure, and agriculture benefiting developing nations. Some developing countries have progressed faster than developed ones,[xvi] and global economic inequality has decreased over the last four decades.[xvii]
While skepticism about top-down approaches like AI-driven economic planning is understandable,[xviii] significant improvements can occur without them. Progress in global health and the spread of technologies made possible by AI will likely both boost productivity and economic growth.[xix], [xx]
AI presents a complex dynamic for climate change. Its enormous energy demands (projected to increase)[xxi] are already reshaping the energy and utilities sector, with growing power requirements putting pressure on existing emissions targets and net-zero commitments.
However, AI could also help accelerate the development of technologies crucial for a carbon-neutral economy: innovations in clean energy and carbon capture,[xxii] as well as increased energy efficiency across multiple sectors. This makes AI potentially both one of the largest drivers of new energy demand and emissions and one of the most useful tools for keeping us below 1.5°C of warming (which is looking increasingly unlikely given current carbon budget estimates).[xxiii]
As we approach futures where AI capabilities match or exceed human performance, we face fundamental questions about the meaning of work and resource allocation. Our current economic system, where most people support themselves through work, may need significant reshaping to ensure AI benefits translate into broader prosperity rather than exploitation or dystopia. But this may be more speculative and further away than many assume. People routinely find meaning in pursuits despite not being the best in the world at them, even when AI is already better; chess is an excellent example. Meaning and fulfilment can come from pursuits that are largely unrelated to contribution to the economy. And if we do focus on the economy, even if AI becomes better at 90% of the tasks in any given job, it does not necessarily follow that we will lose 90% of human jobs.
Liberal democracies have provided many of us with fundamental freedoms — political freedoms, universal suffrage, civil liberties, and separation of state powers. However, AI may pose challenges to democracy — AI is already aiding authoritarian control through surveillance, and its use in political marketing points to its potential as a propaganda tool. The massive computing power and investment required for AI systems are reshaping global energy politics, potentially strengthening the influence of the Gulf petrostates.[xxiv]
There is currently a very active debate in the United States about its approach towards China in particular, its closest competitor in AI. Opinions range from very hawkish to more collaborative. On the one hand, it is argued that "The Free World must do everything in its power to prevail and secure a decisive strategic and military advantage with AI". On the other hand, it is argued that avoiding an AI arms race is critical, because a race could undermine AI safety efforts, especially as we are already building relationships and making progress on global AI Safety cooperation. Ultimately, there are real empirical uncertainties: how different approaches may affect conflicts today (including the risk of nuclear escalation); whether "The Free World" can maintain a lead, deploy safely, and enforce norms; what this might prompt adversaries to do leading up to that point; and what happens in futures where they take the lead.
The dilemma is real for Aotearoa, where we both value liberal democracy and benefit from a peaceful global environment that is cooperative rather than competitive. While global powers risk an arms race in AI capabilities, what is clear is that New Zealand can and should support efforts to cooperate on AI Safety – that is, an international consensus to ensure that AI is developed responsibly in order to avoid catastrophic and existential risks.
Bruce Tsai is an independent Researcher, previously collaborating with the Future of Humanity Institute, University of Oxford.
Ben Smith is an interdisciplinary research scientist in responsible AI currently based at a leading US technology company.
[i] Some estimates are shorter, while others are longer (though the most up-to-date OWID data is from 2022, which means it's probably reasonable as an upper bound given the very rapidly changing space)
[ii] doubling about every 9 months, with Microsoft and OpenAI planning a $100 billion data centre project for 2027
[iii] For GPT-4, this includes medical licensing tests, bar exams (though possibly overstated), and biology olympiad tests. For o1, this includes the International Maths Olympiad.
[iv] On this particular question, forecasters underestimated AI progress by 18 years in 2020, 7 years in 2021, 4 years in 2022, and 2 years in 2023.
[v] Via a Domain taxonomy, e.g. Discrimination / Misinformation / Malicious actors & misuse, as well as a Causal one (Human vs AI, intentional vs unintentional, pre vs post deployment)
[vi] as well as experts across a broad range of fields and sectors, like Bill Gates, Bill McKibben, Angela Kane, and Laurence Tribe
[vii] he gives a similar probability to the likelihood of humanity eventually losing control of AI
[viii] Though this has decreased somewhat from the 2022 survey
[ix] 5 fatal crashes per ~38 million flights is ~1 in 8 million. Another source (pg 17) suggests an incident rate of ~1 in 250,000, which, after excluding nonfatal crashes (~96% of passengers in crashes survive), becomes around ~1 in 6 million.
[x] Though importantly, this includes recruiting and purchasing services from people who could act without these constraints
[xi] Especially on points around international governance by Dario and Leopold, but they serve as useful starting points for looking at positions held by people at the frontier of AI development.
[xii] Note log axis
[xiii] like CRISPR, genome sequencing, mRNA vaccines, optogenetic techniques, various microscopy advancements
[xiv] These discoveries are generally made by a tiny number of researchers, and often the same people, suggesting skill and not random search. They often, in retrospect, could have been made years earlier than they were — CRISPR had been known for 25 years before people realised it could be repurposed for general gene editing. Many successful projects were scrappy, poorly funded afterthoughts (in many cases even delayed by many years due to lack of support from the scientific community). These suggest progress is also not being primarily driven by massive resource concentration, but that intelligence and ingenuity may play a large role.
[xv] And while there is unavoidable latency, e.g. in the clinical trial pipeline (often taking as long as 10-15 years), a sizable portion of this slowness comes from the difficulty of rigorously evaluating drugs that either ambiguously work or come with important tradeoffs (e.g. expensive new cancer therapies increasing survival by a few months with significant side effects). This requires huge studies for statistical power and leaves regulatory agencies with many competing interests to manage. When effect sizes are larger, as in the case of mRNA vaccines for COVID, approval happened in 9 months.
[xvi] Both contemporaneously and during the same phase of development, most notably in South and East Asia.
[xvii] though it's not obvious that this comparatively recent trend will persist
[xviii] the 20th century is littered with structural adjustment programmes that have caused significant harms
[xix] The world coordinated through a 20-year campaign that ended with the eradication of smallpox, going from tens of millions of cases a year in the mid-20th century to zero just two decades later. Polio cases have dropped by 99%, going from being endemic in 125 countries in 1988 to just two countries over the course of ~35 years.
[xx] Advances in fertilisers, pesticides, and automation drastically increased crop yields in the 20th century; similar advancements, as well as in other areas like genetic engineering, weather forecasting and increasing supply chain efficiencies may lead to continued improvements in food security.
[xxi] AI data centres may consume up to 25% of America's power by 2030.
[xxii] likely necessary to achieve net-zero GHG emissions; see Climate Change 2023 (IPCC AR6 Synthesis Report, B.5.1)
[xxiii] Climate Change 2023 (IPCC AR6 Synthesis Report, B.5.2) suggests a ~500 Gt CO2 budget remains for a 50% chance of staying below 1.5°C.
[xxiv] For these countries, AI is an opportunity to diversify an oil-dependent economy, to find technologies that allow the continuation of petroleum production, and to find opportunities to exchange abundant energy and petrodollars for a seat at the AI table.