
Experts Warn Tech Sector of Rapid AI Development

Researchers from the Association for Mathematical Consciousness Science (AMCS) have written an open letter emphasizing the urgent need for accelerated research in consciousness science in light of rapid advancements in artificial intelligence (AI) technologies. 

It is the latest in a series of letters calling for caution on AI. In March, Elon Musk joined over a thousand other tech leaders and researchers in calling for a pause in AI development. And last month the Association for the Advancement of Artificial Intelligence warned of the risks that AI could pose.

The AMCS letter, titled “The Responsible Development of AI Agenda Needs to Include Consciousness Research,” highlights the potential implications of AI systems achieving consciousness and the importance of understanding and addressing the ethical, safety, and societal ramifications associated with artificial general intelligence.

“The increasing computing power and capabilities of new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their ‘alignment’ with human values,” the letter states.

As AI systems continue to develop at an unprecedented pace, the potential for them to achieve human-level consciousness becomes increasingly plausible. The researchers argue that science must now establish whether AI is, or can become, conscious, and that this understanding is vital for the public, societal institutions, and governing bodies to make informed decisions about the future of AI and its potential impact on society.

“AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains,” the letter continues. “In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness.”

The letter cites the growing international community of researchers in the field of consciousness science, including organizations such as AMCS and the Association for the Scientific Study of Consciousness, as key players in tackling questions of AI consciousness. However, the researchers stress that substantial support and resources are needed to ensure that consciousness science can align with advancements in AI and other brain-related technologies.

The open letter calls on the tech sector, the scientific community, and society as a whole to take seriously the need to accelerate research in consciousness science to ensure that AI development delivers positive outcomes for humanity.

The AMCS letter comes as AI pioneer Dr. Geoffrey Hinton wrote in a May 1 article for the New York Times that he has quit his job at Google so that he can freely speak about the risks of AI.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton told the newspaper. And speaking to the BBC, he elaborated that “you can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.” The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Read the open letter at AMCS

Kylie Bielby
Kylie Bielby has more than 20 years' experience in reporting and editing a wide range of security topics, ranging from geopolitical and policy analysis to international and country-specific trends and events. Before joining GTSC's Homeland Security Today staff, she was an editor and contributor for Jane's, and a columnist and managing editor for security and counter-terror publications.
