Opinion:

The Kavli Prizes: Professor Stuart J. Russell gives his speech during the 2024 government banquet in Oslo City Hall.

What will happen if we succeed with artificial intelligence?

OPINION: "We should have to expect the machines to take control," concluded Alan Turing in 1951. In this text, one of the world's leading AI researchers, Professor Stuart J. Russell, discusses the benefits and dangers of AI research.


Professor Stuart J. Russell was one of the keynote speakers at the government banquet in Oslo City Hall in connection with the awarding of the Kavli Prizes. Here is the main portion of his speech:

For centuries, science has been regarded — often, it must be said, by scientists — as an unequivocal good.

It is, in its best form, the search for truth. Its existence and progress ennoble humanity, for surely knowledge is better than ignorance, understanding better than confusion and delusion.

Science has supported the development of technologies that have, over two centuries, led to a thirteenfold increase in global income per capita and a hundredfold increase in global output. As a result, hundreds of millions enjoy an objectively splendid style of life.

So it is right and proper that we honour those outstanding scientists whose contributions were enumerated in the Prize ceremony today. Congratulations again to the Laureates.


And we thank the Kavli Foundation for its support of science around the world and of the prizes awarded today, which raise awareness of these disciplines of nanoscience, neuroscience, and astrophysics, much as some other prizes awarded in Stockholm have done for chemistry and biology.

On the other hand, billions of people do not participate in this objectively splendid style of life: some because of poor health, which the sciences can help to alleviate, but most because of massive structural inequality, both within and between nations, about which science can do little.

And then there is the question of misuse. As science’s founding philosopher, Francis Bacon, observed in 1609, “The mechanical arts are of ambiguous use, serving as well for hurt as for remedy.”

We see this “hurt” in climate change, which began with thermodynamics and the quest for energy, and in nuclear weapons, which sprang uninvited from basic questions of physics.

The concern for societal hurt has also led to long-established restrictions on gene editing and potential restrictions on neurotechnology.

These are just some of the questions being considered by two new Kavli Centers for Ethics, Science, and the Public, at Berkeley and Cambridge.

The prize winners are escorted down the stairs and welcomed by the banquet guests.

The Kavli Prize 2024

 On Tuesday, September 3rd, the Kavli Prize for 2024 was awarded to the following researchers:

  • Astrophysics: David Charbonneau (Canada/USA) and Sara Seager (Canada/USA)
  • Nanoscience: Robert S. Langer (USA), Armand Paul Alivisatos (USA) and Chad A. Mirkin (USA)
  • Neuroscience: Nancy Kanwisher (USA), Winrich Freiwald (Germany) and Doris Tsao (USA)

The Kavli Prize honors scientists for breakthroughs in astrophysics, nanoscience, and neuroscience – transforming our understanding of the big, the small and the complex.

(Source: https://www.kavliprize.org/)

And in my own area of artificial intelligence: For 80 years, we have been trying to create superhuman machines. What if we succeed? How can we retain power, forever, over entities far more powerful than ourselves?

Alan Turing, the founder of computer science, considered this question in 1951 and concluded that “we should have to expect the machines to take control.” Hurt, in other words, and lots of it.

The argument is simple enough: AI systems are designed to pursue objectives, and if their objectives are misaligned with those of humanity, we lose.

On the other hand, perfectly aligning the objectives seems impossible, as you soon find out if a genie grants you three wishes. Your third wish is always “Please undo the first two wishes! I’ve ruined the universe.”

There is, however, another way: build machines whose only objective is to further human interests, but who know that they don’t know what those interests are.

They can learn more through observation and communication, but they remain fundamentally humble. This humility can be translated into mathematical guarantees, including the all-important guarantee that the machine is always willing to be turned off.
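This humility argument can be made concrete with a small toy calculation, a minimal sketch in the spirit of the "off-switch" analysis Russell alludes to (the Gaussian belief and the numbers below are illustrative assumptions, not part of the speech): a machine that is uncertain whether its proposed action actually serves the human does better, by its own estimate, by deferring to a human who is free to switch it off.

```python
# Toy illustration (illustrative assumptions, not from the speech).
# A machine is uncertain about the human's utility u for its proposed action.
# It compares three options:
#   act now           -> expected value E[u]
#   switch itself off -> 0
#   defer to the human, who allows the action only when u > 0 -> E[max(u, 0)]
# Since E[max(u, 0)] >= max(E[u], 0), uncertainty makes deferring (and hence
# accepting the off switch) the machine's preferred choice.

import random

random.seed(0)

# The machine's belief about the unknown utility u, here a simple Gaussian.
samples = [random.gauss(0.2, 1.0) for _ in range(100_000)]

act_now = sum(samples) / len(samples)                     # E[u]
switch_off = 0.0                                          # utility of doing nothing
defer = sum(max(u, 0.0) for u in samples) / len(samples)  # E[max(u, 0)]

print(f"act now:    {act_now:.3f}")
print(f"switch off: {switch_off:.3f}")
print(f"defer:      {defer:.3f}")  # highest whenever the belief has real spread
```

In this sketch the "defer" option scores highest whenever the machine's belief leaves genuine room for doubt about the human's interests; if the machine were certain, the advantage would vanish, which is exactly why the humility matters.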

One can draw a lesson from this for science more generally. Like the AI systems, we scientists should be humble. We need to learn from humanity more about what humanity wants and doesn’t want. Not from corporations, who normally seek to maximize profit, nor from governments, who sometimes want methods of control and destruction, but instead from human beings.

Thank you.
