
PERSPECTIVE: Confronting the Potential Security Threats from Inevitable Artificial General Intelligence Development

While we will be able to program the motivations of initial AGIs, we can’t control the motivations of the people or even the corporations that own them.

From digital voice assistants to smart home devices, artificial intelligence (AI) has quickly become an integral part of our everyday lives – a fact that has not escaped the notice of military establishments, both here and abroad.

Recognizing the role AI potentially can play in changing the nature of war, the U.S. Department of Defense recently called for proposals to imagine the way in which AI will change decision-making on the battlefield. NATO, meanwhile, has already published a set of short stories on the future of warfare, while both China and Russia are making significant investments in AI for national security purposes.

All of this suggests that AI is likely to play a critical role in future conflicts. But what about the next phase of AI: artificial general intelligence (AGI)? Defined as the ability of an intelligent agent to understand or learn any intellectual task that a human can, AGI potentially could do everything from driving a car to performing a complex surgery at a consistently higher level than humans. It could also represent a huge national security risk if it were to fall into the wrong hands or be developed by a rogue nation.

While some experts in the field predict AGI might take hundreds of years to emerge, if it emerges at all, I believe AGI is not only inevitable, but likely to occur within the next decade. All that is really needed is the insight to understand how human-like intelligence works. To get there, though, researchers must shift their focus away from using ever-expanding datasets to support AI and concentrate instead on a more biologically plausible structure that enables AI to exhibit the same kind of contextual, common-sense understanding that humans do.

To that end, consider how the human brain works. The structure of the neocortex (the part of the brain we use to think) is specified by an amount of DNA that may come to as little as 7.5 megabytes of data. We already produce massive programs that can process and analyze vast amounts of data faster and more accurately than humanly possible, so creating an AGI program as small as 7.5 megabytes is eminently doable. Similarly, the part of the brain responsible for muscular coordination has been largely duplicated in today’s best robotic systems, suggesting that a few microprocessors (along with the insight as to how the brain works) can do the work of billions of neurons.
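To see why a single-digit-megabyte figure is plausible, here is a rough back-of-the-envelope sketch (not from the original article): the genome size and two-bits-per-base encoding are well-established figures, but the fraction of the genome assumed to describe neocortical structure is purely an illustrative assumption.

```python
# Back-of-the-envelope estimate of how much data could specify the neocortex.
# Genome size and 2-bits-per-base are well-established; the 1 percent
# "blueprint" fraction is an illustrative assumption, not a figure from the article.

GENOME_BASE_PAIRS = 3.2e9        # approximate size of the human genome
BITS_PER_BASE = 2                # four bases (A, C, G, T) -> 2 bits each

genome_mb = GENOME_BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(f"Entire genome: ~{genome_mb:.0f} MB")             # ~800 MB

BLUEPRINT_FRACTION = 0.01        # ASSUMPTION: ~1% encodes neocortical wiring rules
blueprint_mb = genome_mb * BLUEPRINT_FRACTION
print(f"Neocortex blueprint: ~{blueprint_mb:.0f} MB")    # ~8 MB
```

Under that assumption, the blueprint for the thinking part of the brain comes out to roughly 8 megabytes – the same order of magnitude as the 7.5-megabyte figure above.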

In short, we likely already have more than enough computer power to achieve AGI. All that is missing is the insight into how the human brain works, and many, many researchers are working on this question – not necessarily for AGI, but for the medical advances it will spin off.

Today’s AI can already exceed human mental performance in some domains but still falls short in areas where any 3-year-old can excel – areas such as understanding object persistence, three-dimensional environments, cause and effect, and the passage of time. This has been a sticking point for most AGI research to date. Let’s face it, even if researchers had the insight to replicate the way in which the consciousness of a 3-year-old works, 3-year-olds typically need another 20 years or so before they have the knowledge and understanding to be fully functional and reasonably intelligent persons. Researchers simply don’t want to wait 20 years.

That explains why AGI emergence is likely to be gradual. Each step along the way will create capabilities that seem like good ideas and, as such, are individually marketable. An advance helps Siri or Alexa make fewer speech-recognition errors, and that development is rushed to market. Another advance produces better vision, enabling self-driving cars to be safer, and that development is rushed to market.

While each of these developments is marketable on its own, if they are built on a common underlying data structure and attached to each other, they can begin to interact and build a broader context. Doing so will enable us to gradually approach AGI. At some point we’re going to get close to the human-level threshold, then equal that threshold, then exceed that threshold. At that point, we’re going to have machines that are obviously superior to human intelligence and people will begin to agree that, yes, maybe AGI does exist.

AGIs will necessarily be goal-driven systems. In an ideal world, the people responsible for these thinking machines would ensure that the goals set for an AGI include adequate safeguards so that its subsequent operation is safe. But what if the first owner of a powerful AGI system is a world power such as Russia or China – or, worse, a rogue nation such as North Korea or Iran? While we will be able to program the motivations of initial AGIs, we can’t control the motivations of the people or even the corporations that own them.

This is an especially scary proposition. In a military context, computers will be in a position to recommend strategies, propose weapons systems, and evaluate competitive weaknesses. While it is unlikely computers would have absolute control over weapons systems, it is just as unlikely they will be out of the loop on any significant decision. In decisions that require balancing large amounts of information and weighing predictions across multiple variables, the computers’ abilities will make them superior strategic decision-makers.

While such a scenario is alarming enough, I believe the greater threat comes from our computers’ ability to sway opinion or manipulate markets. We have already seen efforts to control elections through social media, and AGI systems will make such efforts vastly more effective. We already have markets at the mercy of programmed trading, and AGIs will amplify this issue. We already have hackers making cybersecurity a significant concern, and AGIs could do a much more efficient job of attacking lesser computer systems as well.

To paraphrase concepts from nuclear deterrence and the NRA, the best way to stop a bad AGI is with a good AGI. Because AGI is inevitable, and likely to arrive sooner than most people think, I see it as imperative that the West put a great effort into broader AGI development rather than the haphazard approach of hoping that narrow AI systems will someday “grow together” into general intelligence. For national security reasons alone, it is incumbent on us to be first to create AGI.


The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected].

Charles Simon
Charles Simon, BSEE, MSCs, is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence,” and the developer of Brain Simulator II, an AGI research software platform, and Sallie, a prototype software and artificial entity that learns in real time with vision, hearing, speaking, and mobility.
