
PERSPECTIVE: Artificial Intelligence Is Risky, but Federal Agencies Need It to Get Ahead

At NASA, the Sensor Web Project combines software with a network of internet-linked space, terrestrial, and airborne sensors to monitor the world’s 50 most active volcanoes. Within the Army, maintenance crews at the Logistics Support Activity office are teaming up with IBM’s “Watson” supercomputer to analyze an estimated 5 billion sensor data points from Jeeps, drones, and other military assets, predicting when parts will break down so the crews can fix them before those assets are sent out to support missions.
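To make the predictive-maintenance idea concrete, here is a minimal sketch in Python – not the Army’s Watson pipeline, and built entirely on simulated, hypothetical sensor fields – showing how a model trained on telemetry such as temperature, vibration, and operating hours could flag parts likely to fail:

```python
# Illustrative sketch only: a toy predictive-maintenance model on simulated
# sensor readings. Field names, thresholds, and the failure rule are
# hypothetical, not the Army/IBM Watson pipeline described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
n = 10_000

# Simulated telemetry: engine temperature, vibration, and operating hours.
X = np.column_stack([
    rng.normal(90, 10, n),    # engine_temp_c
    rng.normal(0.5, 0.2, n),  # vibration_g
    rng.uniform(0, 5000, n),  # hours_since_overhaul
])

# Toy failure rule: hot, high-vibration, high-mileage parts fail more often.
risk = 0.0004 * X[:, 2] + 2.0 * X[:, 1] + 0.02 * (X[:, 0] - 90)
y = (risk + rng.normal(0, 0.5, n) > 2.5).astype(int)  # 1 = part failed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# In practice, maintenance crews would inspect the assets the model flags as
# high risk before those assets are sent out to support missions.
```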

Once viewed as futuristic fantasy, artificial intelligence (AI) is now very real and very much in demand: the global AI market is projected to reach $58.98 billion by 2025, up from $5.97 billion two years ago, according to a forecast from Grand View Research.

Federal agencies are buying in, and for good reason: conservatively speaking, automation and AI can make tasks roughly 20 percent faster, which would save the U.S. government an estimated $3.3 billion. At the higher end – with task speeds doubling – the projected savings rise to $41.1 billion.

But there are cautionary sentiments about this still-emerging technology, as 86 percent of cybersecurity professionals are concerned about the use of AI in cyberattacks, according to survey research from Webroot. More than three out of five of these professionals believe that criminal hackers will increasingly weaponize the technology this year, according to another survey from Cylance.

While failing to leverage the technology would put the government behind the technological curve, it’s critical that the federal government first collaborate with industry to fully understand AI, and that private-sector companies self-regulate and change the way they release new AI technologies. The race to get a new product to market has led many companies to take shortcuts in testing, relying heavily on beta testing and putting the onus of security on end users. There needs to be a significant increase in internal testing, with scenario-based implications front of mind. Instead of asking, “Does my product do what it claims?” companies need to ask, “Is it possible to make my product do anything other than what it’s supposed to, and what are we doing to prevent that?”

Industry also needs to realize that the way the federal government employs AI may be very different from how AI is leveraged in the commercial space – whether because of unique needs, the sheer size of the federal government, or the fragmented and distributed nature of government data. For the government to move ahead with AI safely and effectively, agencies need to be willing to communicate to industry the challenges they’re facing, recognizing that some of them can be resolved through AI but others will require different solutions.

It’s important that agencies share enough information for industry to take effective action – in most scenarios, AI requires large volumes of training data. If the government doesn’t have that data, or isn’t willing to share it, AI won’t be much help. While AI vendors certainly need to do their part to make their products secure and understand how to modify their products to meet government needs, the government also needs to take inventory of opportunities for improvement and drive the conversation with industry. In doing so, the government can enable industry to make sure their products not only operate securely and as designed, but also answer the more specific question, “Will my product do what the government needs and could it be sabotaged for ill intent?”
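As a rough illustration of why training data volume matters, the sketch below (using an arbitrary public dataset and model, not any government system or data) shows how a model’s accuracy typically improves as it sees more training examples – and, by extension, how little a vendor’s AI can do if the relevant data is never shared:

```python
# Illustrative sketch: how much training data a model sees usually drives
# how useful it is. The dataset and model choice here are arbitrary examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 200, 1000):
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} samples -> test accuracy "
          f"{model.score(X_test, y_test):.2f}")

# Accuracy climbs as the training set grows: without enough shared data,
# the model has little to learn from.
```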

Fortunately for federal agencies, there are also “safety nets” in the form of government standards to help them make better AI acquisition decisions. One of them, the 140 series of Federal Information Processing Standards (FIPS) from the National Institute of Standards and Technology (NIST), sets cryptography requirements for hardware and software (including AI products) introduced into the government space. Another, the Security Technical Implementation Guides (STIGs) from the Defense Information Systems Agency (DISA), assigns severity categories to security findings to flag risk levels. If a finding is classified as CAT I – meaning the weakness could allow primary security protections to be bypassed – it must be corrected before the system is authorized on the network.
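As a rough illustration of how that severity gate might be applied, the sketch below triages hypothetical STIG-style findings and blocks deployment while any CAT I finding remains open. The record format and field names are invented for illustration; real STIG checklists are XML files reviewed with DISA’s own tooling.

```python
# Illustrative sketch: triaging STIG-style findings by severity category.
# The finding records and field names are hypothetical, not DISA tooling.
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: str   # "CAT I", "CAT II", or "CAT III"
    status: str     # "open" or "closed"

findings = [
    Finding("SV-0001", "CAT I", "open"),    # e.g. security protections could be bypassed
    Finding("SV-0002", "CAT II", "open"),
    Finding("SV-0003", "CAT III", "closed"),
]

open_cat1 = [f for f in findings if f.severity == "CAT I" and f.status == "open"]

if open_cat1:
    print("Deployment blocked: open CAT I findings must be corrected first:")
    for f in open_cat1:
        print(f"  {f.rule_id}")
else:
    print("No open CAT I findings; proceed with the authorization process.")
```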

In pursuing AI innovation, government IT decision-makers could benefit by selecting from solutions that have already proven FIPS and STIG compliance. The alternative, less prudent path would be to purchase products on the open market, including popular open-source AI solutions. Open source brings a higher level of risk since anyone – well-intentioned programmers and malicious hackers alike – can contribute code. And gaining regulatory approval for such uncertified AI will likely require a significant investment, delaying implementation and deployment and calling into question not just the security of such a strategy, but also its efficacy.

That said, as AI and other technologies continue to shift, there’s a need for adaptive compliance adjustments – regulation that balances the needs of federal agencies with the benefits and risks of new technology like AI. While legislation is in the early phases, with the “Future of Artificial Intelligence Act of 2017” having only been introduced in the House, there are three key requirements that any future legislation should encompass:

  • Usability and Practicality – when it comes to putting new tech on federal networks, the government needs to take a strict and practical approach. Will AI bring measurable benefits if used? Will AI enable a new capability that was previously unavailable? And, it’s important to consider AI in terms of applications and use cases – it doesn’t have to be introduced to the entire organization, just where it makes sense. In addition, there should be an anticipated measurable benefit – if the benefit doesn’t reach a minimum threshold, then it doesn’t qualify.
  • Outcome-Based Regulation – trying to regulate the manner in which AI is developed is impractical and guaranteed to stifle innovation. Instead, regulation should center on the outcomes of the development. For instance, if AI is developed with the intent of correlating incidents on the network with security risks, but in testing it’s determined that the data the AI collects can be easily stolen, then the AI can’t be approved for network use until it’s fixed.
  • Testing – both previous points allude to the need for a collaborative testing environment. If the federal government doesn’t want to fall behind China, Russia, and others, it needs to regulate through testing requirements. Commercial IT environments are very different from federal environments, however, so the government will need to collaborate to create an environment that industry can use to develop and test AI applications that are not only beneficial to the government, but also proven secure.

AI has its risks, but it also offers incredible benefits. The biggest risk of AI, however, is inaction that results in the failure to adopt the technology. Debating the humanistic appropriateness of using AI – in the commercial space, in the government, and especially in the military – is often viewed as a way of mitigating the risks, but the reality is that such political deliberation simply shifts the risk from the office cubicle to the soldier on the ground. It is in the best interest of the United States to strategically pursue AI technologies, arming our forces and defenses with the best technological advances possible.

 

The views expressed here are the writer’s and are not necessarily endorsed by Homeland Security Today, which welcomes a broad range of viewpoints in support of securing our homeland. To submit a piece for consideration, email [email protected]. Our editorial guidelines can be found here.

Robert Schofield
Robert Schofield is a Director of Enterprise Solutions with a BS in Information Technology and an MS in Information Systems. He holds more than 15 technical certifications, including those for Microsoft and VMware products and ITIL v3. He has 20 years’ experience supporting the DoD at the enterprise level, including eight years of active duty in the Armed Forces. He was the technical Program Manager supporting several customers, including US Army NETCOM, JSP, CIA, DIA, the Secretary of Defense, and others. Mr. Schofield has worked for NetCentrics since 2007, most recently supporting the management of worldwide deployments of Enterprise Management (Microsoft System Center) capabilities to the United States Coast Guard.