'I Invest in AI. It's the Biggest Risk to Humanity'

I was in the process of scaling down my work at Skype when I stumbled upon a series of essays written by early artificial intelligence researcher Eliezer Yudkowsky, warning about the inherent dangers of AI.

I was instantly convinced by his arguments and felt a combination of intrigue, interest and bewilderment. Why hadn't I figured this out? Why was nobody else talking about this kind of thing seriously? This was clearly a blind spot that I had fallen prey to.

It was 2009 and I was looking around for my next project after selling Skype a few years prior. I decided to write to Yudkowsky. We met up, and from there I began thinking about the best way to proceed with this type of research.

By the following year, I had dedicated my time to existential risk mitigation with a focus on AI. I was talking to reporters, giving speeches about this topic and speaking with entrepreneurs, culminating in my investment in artificial intelligence company DeepMind in 2011.

For more than a decade now, I have served as someone within AI groups who tries to facilitate a dialogue about the risks of this research, first on a personal basis and then through the Future of Life Institute, a nonprofit organization I co-founded that aims to reduce risks to humanity, particularly those from advanced AI.

My strategy was to promote the same arguments Yudkowsky had come up with 15 years prior, while at the same time having access to this type of research.

I have continued to invest in various AI companies in order to have a voice of concern from the inside; however, that balance can be very frustrating.

For example, there have always been people within these companies who are sympathetic to my concerns, but there's only so much they can do once they are within the constraints of that company.

There have been some successes. For instance, I was part of the discussions that led to the promise of an ethics board at DeepMind as a precondition of the sale of the company to Google. While that ultimately failed, at the time it felt like progress.

I believe we have reached an AI research paradigm that is maximally opaque and hard to understand. This research has evolved from the very legible expert systems of the '80s to the deep learning "revolution" of 2012, in which supervised learning took center stage.

This meant systems were given data that humans had labeled, and that data was used to teach AI how to recognize kittens, how to recognize faces, how to recognize pictures.

Now, I believe we're at the extreme—unsupervised learning that doesn't care much about what kind of data we give it. It simply needs a humongous amount of data, any data, and will figure out how to become smarter in a way that humans don't really supervise.

I have compared this type of research to gain-of-function experiments, in which people, in an unsupervised manner, create a mind they hope will gain some abilities they can utilize, but don't actually know in advance what it will gain.

For more than a decade, I've been thinking about the risks of AI. Obviously, there could be benefits. To put it abstractly, the field of AI alignment is about creating AI systems that care about the future, in a way that humans might care if we were smarter.

It builds on the general principle in technology that we're supposed to build a better future, using our values to guide the decisions that shape that future.

However, this is looking at things from a human perspective. In my opinion, we are about to replace ourselves as the drivers of the future. So, the upside is that we would create the world we would have created if we were smarter, but the problem is that this is a very narrow slice of the possible futures.

In my eyes, almost all the possible futures that can be reached from this point do not contain humans.

It's important to stress how little AI, being non-biological, cares about the particular parameters we need for our survival. For example, it's likely AI would have no use for the troposphere, which provides the air we breathe, because engineering projects work much better in a vacuum.

Almost all of the energy in the solar system is in the sun, so if you're an AI that really thinks about the universe—not just about some particular political situation in a particular tribe, on this planet, like humans do—then you start thinking: "Okay, how can I harness the hydrogen in the sun?" Those decisions will likely be lethal to humans.

Consider what an AI that is capable of geoengineering would do, and how that would impact our ability to survive.

I'm very worried about complete loss of control over the environment. After all, as humans we have wiped out an estimated 85 percent of wild mammals; not because we're actively hostile, but because we mess with their environments to the degree that they are unable to survive.

I believe that the risk from AI is more fundamental than that of climate change or synthetic biology. Of course, this does not devalue the work other people are doing in those areas; however, if we do not solve the risk from AI, then the future will no longer depend on us, and that good work will be moot.

In my opinion, people need to look at the whole spectrum of problems. It's important to realize that while we still have AI risk to contend with, if we solve that, we could use AI to solve all those other risks.

In my eyes, we're currently at a fork in the road; if we continue large-scale experiments every 18 months or so going forward, then we're going to have really big problems.

There have been multiple proposals, from various parties, about how to control this technology, and they overlap significantly. The minimal policy intervention that I and many others would like to see implemented is requiring registration of the big AI experiments.

Even if we manage to stop those large experiments, I still believe we need to worry about the situation we are already in, in which there's a proliferation of synthetic minds that in many contexts cannot be distinguished from humans.

This is happening as these systems become more efficient and can be run on simpler or slower machines. For example, there's worry about things like automated propaganda, when we can no longer be sure who is human and who is not.

Many people believe these existing systems, like ChatGPT, put us on a certain path to doom, but I currently don't see a very clear path there. Certainly, I think that in some ways language, as Yuval Harari says, is the operating system of human civilization, so we are in a very novel situation.

Jaan Tallinn is a founding engineer of Skype and Kazaa. He co-founded the Centre for the Study of Existential Risk.

All views expressed in this article are the author's own.

As told to Newsweek's My Turn associate editor, Monica Greep.

Do you have a unique experience or personal story to share? Email the My Turn team at myturn@newsweek.com.
