

Pause AI:
A Personal Appeal


Note: This is a personal appeal from Mark John Holland MD. Forgive me if it seems unprofessional or hysterical. Those of you who know me can attest that I am not a hysteric, and that I NEVER push causes on people, above all not on patients. Until now. Please give this a bit of your time.


Mark Holland MD, April 17, 2023

If I may be forgiven the indulgence of rambling off-topic for a bit: AI is terrifying to me. It is terrifying precisely because it is so incredibly good at what I spent college, a post-graduate degree, and a career learning to do: thinking about medicine and communicating it to other people. Now, in this otherwise mundane year of 2023, suddenly and seemingly without warning, a technology has arrived that can do what took me sixty years to develop, and it can do it in seconds, or at most minutes. In a few weeks of training, a machine that starts out knowing nothing can learn to write like a scholar and absorb a significant fraction of the knowledge that has taken our species eleven thousand years to accumulate. And GPT-4 is just the beginning. What happens when we develop a 'generalized AI' that can do everything we can do, everything that anyone can do or ever could do, and do ALL of it perfectly? What happens when my job is replaced by an AI? What happens when our children are taught by an AI? What happens when we decide that our children need not bother with education at all, because AI can already do and think anything they ever will?


If you don't know it yet, you need to know it. Alien intelligence has arrived on Earth. Soon it will be so much smarter than we are that none of us will be able to understand why or how it acts. And soon we may give up even trying. And "soon" means SOON. As in a few years, or God forbid, a few months or weeks. I fear that I may live to see people surrender the long race for understanding that has brought our species and our planet so much wealth and so much pain. If we do that, we may remain the 'masters' of our creations for a bit longer, but soon and inevitably we will become their slaves. And this isn't off in some far future. This is now. RIGHT NOW.


Pausing AI 


Elon Musk, Max Tegmark, many Nobel Laureates, and many of the founders of modern AI have just signed a public letter pleading for a worldwide pause of at least six months on giant AI experiments, the training of systems more powerful than GPT-4, because they too appreciate the danger. It is through many of these people that we got here, and through them that we must now act. And it CAN be done. All governments have an interest in ensuring that AI does not destroy civilization. All people share that interest. We did it with nuclear weapons, and we can do it with AI.


AND we can enforce a pause. Training giant AI models requires massive, gigawatt-consuming data centers that can be seen from space. Any government attempting to violate a ban would become known immediately. The human race can do this. We really can.


And we really must.


And a pause doesn't mean stopping progress. It means properly harnessing and regulating it. It won't take long to craft the rules; it can be done in six months. And it can work. It HAS worked for nuclear weapons.


Humanity has twice arrived at civilization-ending technology. The first time was in 1945. The second time is now.


Sign the AI Pause Letter: 


https://futureoflife.org/open-letter/pause-giant-ai-experiments/


Signatories List:

Yoshua Bengio, Founder and Scientific Director at Mila, Turing Award winner and professor at University of Montreal

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach"

Elon Musk, CEO of SpaceX, Tesla & Twitter

Steve Wozniak, Co-founder, Apple

Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem.

Emad Mostaque, CEO, Stability AI

Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship

John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks

Valerie Pisano, President & CEO, MILA

Connor Leahy, CEO, Conjecture

Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute

Evan Sharp, Co-Founder, Pinterest

Chris Larsen, Co-Founder, Ripple

Craig Peters, Getty Images, CEO

Tom Gruber, Siri/Apple, Humanistic.AI, Co-founder, CTO, Led the team that designed Siri, co-founder of 4 companies

Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute

Anthony Aguirre, University of California, Santa Cruz, Executive Director of Future of Life Institute, Professor of Physics

Sean O'Heigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk

Tristan Harris, Executive Director, Center for Humane Technology

Rachel Bronson, President, Bulletin of the Atomic Scientists

Danielle Allen, Harvard University, Professor and Director, Edmond and Lily Safra Center for Ethics

Marc Rotenberg, Center for AI and Digital Policy, President

Nico Miailhe, The Future Society (TFS), Founder and President

Nate Soares, MIRI, Executive Director

Andrew Critch, Founder and President, Berkeley Existential Risk Initiative, CEO, Encultured AI, PBC; AI Research Scientist, UC Berkeley.

Mark Nitzberg, Center for Human-Compatible AI, UC Berkeley, Executive Director

Yi Zeng, Institute of Automation, Chinese Academy of Sciences, Professor and Director, Brain-inspired Cognitive Intelligence Lab, International Research Center for AI Ethics and Governance, Lead Drafter of Beijing AI Principles

Steve Omohundro, Beneficial AI Research, CEO

Meia Chita-Tegmark, Co-Founder, Future of Life Institute

Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute

Emilia Javorsky, Physician-Scientist & Director, Future of Life Institute

Mark Brakel, Director of Policy, Future of Life Institute

Aza Raskin, Center for Humane Technology / Earth Species Project, Cofounder, National Geographic Explorer, WEF Global AI Council

Gary Marcus, New York University, AI researcher, Professor Emeritus

Vincent Conitzer, Carnegie Mellon University and University of Oxford, Professor of Computer Science, Director of Foundations of Cooperative AI Lab, Head of Technical AI Engagement at the Institute for Ethics in AI, Presidential Early Career Award in Science and Engineering, Computers and Thought Award, Social Choice and Welfare Prize, Guggenheim Fellow, Sloan Fellow, ACM Fellow, AAAI Fellow, ACM/SIGAI Autonomous Agents Research Award

Huw Price, University of Cambridge, Emeritus Bertrand Russell Professor of Philosophy, FBA, FAHA, co-founder of the Cambridge Centre for the Study of Existential Risk

Zachary Kenton, DeepMind, Senior Research Scientist

Ramana Kumar, DeepMind, Research Scientist

Jeff Orlowski-Yang, The Social Dilemma, Director, Three-time Emmy Award Winning Filmmaker

Olle Häggström, Chalmers University of Technology, Professor of mathematical statistics, Member, Royal Swedish Academy of Science

Michael Osborne, University of Oxford, Professor of Machine Learning

Raja Chatila, Sorbonne University, Paris, Professor Emeritus AI, Robotics and Technology Ethics, Fellow, IEEE

Moshe Vardi, Rice University, University Professor, US National Academy of Science, US National Academy of Engineering, American Academy of Arts and Sciences

Adam Smith, Boston University, Professor of Computer Science, Gödel Prize, Kanellakis Prize, Fellow of the ACM

Marco Venuti, Director, Thales group

Erol Gelenbe, Institute of Theoretical and Applied Informatics, Polish Academy of Science, Professor, FACM FIEEE Fellow of the French National Acad. of Technologies, Fellow of the Turkish Academy of Sciences, Hon. Fellow of the Hungarian Academy of Sciences, Hon. Fellow of the Islamic Academy of Sciences, Foreign Fellow of the Royal Academy of Sciences, Arts and Letters of Belgium, Foreign Fellow of the Polish Academy of Sciences, Member and Chair of the Informatics Committee of Academia Europaea

Andrew Briggs, University of Oxford, Professor, Member Academia Europaea

Laurence Devillers, Sorbonne Université/CNRS, Professor of AI, Légion d'honneur (2019)

Nicanor Perlas, Covid Call to Humanity, Founder and Chief Researcher and Editor, Right Livelihood Award (Alternative Nobel Prize); UNEP Global 500 Award

Daron Acemoglu, MIT, professor of Economics, Nemmers Prize in Economics, John Bates Clark Medal, and fellow of National Academy of Sciences, American Academy of Arts and Sciences, British Academy, American Philosophical Society, Turkish Academy of Sciences.

Christof Koch, MindScope Program, Allen Institute, Seattle, Chief Scientist

Gaia Dempsey, Metaculus, CEO, Schmidt Futures Innovation Fellow

Henry Elkus, Founder & CEO: Helena

Gaétan Marceau Caron, MILA, Quebec AI Institute, Director, Applied Research Team

Peter Asaro, The New School, Associate Professor and Director of Media Studies

Jose H. Orallo, Technical University of Valencia, Leverhulme Centre for the Future of Intelligence, Centre for the Study of Existential Risk, Professor, EurAI Fellow

George Dyson, Unaffiliated, Author of "Darwin Among the Machines" (1997), "Turing's Cathedral" (2012), and "Analogia: The Emergence of Technology beyond Programmable Control" (2020)

Nick Hay, Encultured AI, Co-founder

Shahar Avin, Centre for the Study of Existential Risk, University of Cambridge, Senior Research Associate

Solon Angel, AI Entrepreneur, Forbes, World Economic Forum Recognized

Gillian Hadfield, University of Toronto, Schwartz Reisman Institute for Technology and Society


Pause Artificial Intelligence Research


https://futureoflife.org/open-letter/pause-giant-ai-experiments/
