How not to destroy the world with AI
I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that its technical aspects are solvable if we replace current definitions of AI with a version based on provable benefit to humans.
The talk will be viewable online via Zoom: https://zoom.us/j/99449844950?pwd=Y1BQdFVkTHFyWjYxNFNjN3h3bVBxdz09
(Meeting ID: 994 4984 4950 ; Passcode: russell)
Stuart Russell is a Professor of Computer Science at UC Berkeley and Adjunct Professor of Neurological Surgery at UC San Francisco. His research includes major contributions to machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, and computer vision. He founded the Center for Human-Compatible AI and is active in the movement to ban autonomous weapons. He serves on the boards of the Future of Life Institute, the Centre for the Study of Existential Risk, and the World Economic Forum's Council on AI and Robotics. He has (co)authored several books, including the field's main reference textbook, Artificial Intelligence: A Modern Approach (1995), and the recent Human Compatible: Artificial Intelligence and the Problem of Control (2019). His numerous awards include the IJCAI Computers and Thought Award, the NSF Presidential Young Investigator Award, and the ACM and AAAI Outstanding Educator awards.