
The dangers of tomorrow: A look at the possibilities of a superintelligent threat


By Nickolas Dinos

From the moment the first human beings set foot on this planet, technology and a vast variety of tools made the survival of our ancient ancestors possible. The ability to light a fire and a detailed understanding of material resources, combined with the unique depth of humanity’s vocal communication, led to the rise of civilization as we know it. Technological advances have always been of immense importance in humanity’s journey on planet Earth and a major part of everyday life. As of now, technology remains under the control of the masters that created and maintain it. But will there come a point in time when technology’s power proves too difficult to control?

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes “superintelligent”, then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The likelihood of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk. As AI systems become more powerful and more general, they may surpass human performance in many domains. If this occurs, it could be a transition as transformative economically, socially, and politically as the Industrial Revolution. This could lead to extremely positive developments, but it could also pose catastrophic risks from accidents (safety) or misuse (security).

Ford is tapping four-legged robots at its Van Dyke Transmission Plant (Source: Ford Media Center)

Humanity’s current systems often go wrong in unpredictable ways. There are several difficult technical problems related to the design of accident-free artificial intelligence. Aligning current systems’ behaviour with our goals has proved difficult and has resulted in unpredictable and negative outcomes. Accidents caused by more powerful systems would be far more destructive.

Advanced AI systems could be key economic and military assets. Were these systems in the hands of bad actors, they might be used in harmful ways. If multiple groups competed to develop them first, that competition could have the destabilising dynamics of an arms race. Mitigating risk and achieving the global benefits of AI will present unique governance challenges and will require global cooperation and representation.

There is great uncertainty and disagreement over timelines for the development of advanced AI systems. But whatever the speed of progress in the field, it seems like there is useful work that can be done right now. Technical machine learning research into safety is now being led by teams at OpenAI, DeepMind, and the Centre for Human-Compatible AI. AI governance research into the security implications is developing as a field.

Still, it seems that scientists and political leaders all over the world find the problem rather challenging to address. When it comes to an artificial entity that demonstrates something resembling “superintelligence”, the danger seems grand. “Artificial Intelligence: A Modern Approach”, the standard undergraduate AI textbook, assesses that superintelligence “might mean the end of the human race”. It states: “Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself. Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems.” First, the system’s implementation may contain initially unnoticed but catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring. Second, no matter how much time is put into pre-deployment design, a system’s specifications often result in unintended behaviour the first time it encounters a new scenario. For example, Microsoft’s Tay behaved inoffensively during pre-deployment testing but was too easily baited into offensive behaviour when interacting with real users.

“Robodog” in Singapore enforces safe distancing among people (Source: Mirror)

AI systems uniquely add a third difficulty: the problem that, even given “correct” requirements, bug-free implementation, and initial good behaviour, an AI system’s dynamic “learning” capabilities may cause it to “evolve into a system with unintended behaviour”, even without the stress of new, unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful but no longer maintains the human-compatible moral values pre-programmed into the original AI. For a self-improving AI to be completely safe, it would not only need to be “bug-free”, it would also need to be able to design successor systems that are likewise “bug-free”.

A superintelligent machine would be as alien to humans as human thinking processes are to cockroaches. Such a machine may not have humanity’s best interests at heart; it is not obvious that it would even care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence’s goals to conflict with basic human values, then AI poses a risk of human extinction. A “superintelligence” (a system that exceeds the capabilities of humans in every relevant endeavour) can outmanoeuvre humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.

History has proven, more often than not, that whenever humanity had to face a great threat, it was able to survive and overcome it. But the threat posed by a superintelligent machine is far greater than any physical disaster or any kind of event, political or military, that humanity has ever faced. And in a world where technology is evolving exponentially, on a far larger scale than ever before, the significance of studying and understanding the risks and threats of our own intellectual inventions seems greater than at any other time.




Nickolas Dinos
Nickolas was born in 1998 in Athens, Greece and he is a graduate of the Media, Communications and Culture Department of Panteion University. He has done some radio work at Panteion’s public station as a radio producer. He likes writing, playing music and sports.