Last month, hundreds of well-known people in the world of artificial intelligence signed an open letter warning that A.I. could one day destroy humanity.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the one-sentence statement said.
The letter was the latest in a series of ominous warnings about A.I. that have been notably light on details. Today’s A.I. systems cannot destroy humanity.
The scary scenario
One day, the tech industry’s Cassandras warn, companies, governments or independent researchers could deploy powerful A.I. systems to handle everything from business to warfare.
Those systems could behave in ways we do not want. And if humans tried to interfere or shut them down, they might resist or even replicate themselves so they could keep operating.
Are there signs A.I. could do this?
Not exactly. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
The idea is to give the system goals like “create a company” or “make some money.” Then it keeps looking for ways of reaching that goal, particularly if it is connected to other internet services.
A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it can actually run those programs. In theory, this is a way for AutoGPT to do almost anything online: retrieve information, use applications, create new applications, even improve itself.
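The loop described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of an AutoGPT-style agent, not the actual AutoGPT project: the “language model” here is a hard-coded stub, and the step that would execute real actions (web searches, API calls, running generated programs) is left as a comment.

```python
# Illustrative sketch of an AutoGPT-style agent loop.
# The real project is far more elaborate; this stub only shows the shape:
# ask a model for the next action, carry it out, repeat until done.

def fake_language_model(goal, history):
    """Stand-in for a real model: returns the next 'action' as text."""
    steps = ["search the web for ideas", "draft a plan", "finish"]
    return steps[len(history)] if len(history) < len(steps) else "finish"

def run_agent(goal, max_steps=10):
    """Repeatedly ask the model what to do next until it says 'finish'."""
    history = []
    for _ in range(max_steps):
        action = fake_language_model(goal, history)
        if action == "finish":
            break
        # In a real system, this is where the generated text would be
        # turned into an actual operation: a search, an API call,
        # or running a program the model wrote.
        history.append(action)
    return history

print(run_agent("create a company"))
```

The `max_steps` cap is the kind of guardrail such loops need: without it, an agent that never emits “finish” would run indefinitely.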
Where do A.I. systems learn to misbehave?
- A.I. systems like ChatGPT are built on neural networks, mathematical systems that can learn skills by analyzing data.
- Around 2018, companies like Google and OpenAI began building neural networks that learned from massive amounts of digital text culled from the internet.
- By pinpointing patterns in all this data, these systems learn to generate writing on their own, including news articles, poems, computer programs, and even humanlike conversation.
- The result: chatbots like ChatGPT.
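The pattern-finding idea behind those systems can be shown with a toy example: a bigram model that counts which word tends to follow which in a text, then predicts the next word from those counts. Real neural networks learn vastly richer patterns from vastly more data, but the core task, predicting the next token, is the same. The corpus and function names here are invented for illustration.

```python
# Toy "pattern pinpointing": count word-to-word transitions in text,
# then predict the most likely next word from those counts.
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, how often each following word appears."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

A chatbot works on the same principle, except the “counts” are billions of learned neural-network parameters, and the predictions cover whole conversations rather than single words.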
Who are the people behind these warnings?
In the mid-2000s, a young internet writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers. Called rationalists or effective altruists, this community became enormously influential in academia, government think tanks and the tech industry.
Mr. Yudkowsky’s writings significantly influenced both OpenAI and DeepMind, an artificial intelligence lab that Google acquired in 2014. And many from the community of “EAs” worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.