House Oversight and Government Reform Subcommittee on Information Technology Chairman Will Hurd (R-Texas) and Ranking Member Robin Kelly (D-Ill.) today released a white paper on artificial intelligence (AI), presenting lessons learned from the subcommittee’s oversight and hearings on AI and setting forth key recommendations for moving forward.
Beginning in February 2018, the Subcommittee on Information Technology held a series of hearings on artificial intelligence. In connection with those hearings, committee staff met with leading experts from academia, industry, and government, and reviewed multiple reports on the subject.
- The paper details how near-term AI advancement could lead to job losses through AI-driven automation.
- To address this issue, the paper recommends federal, state, and local agencies “engage more with stakeholders on the development of effective strategies for improving the education, training, and reskilling of American workers to be more competitive in an AI-driven economy.”
- The federal government is also encouraged to “lead by example by investing more in education and training programs that would allow for its current and future workforce to gain the necessary AI skills.”
- The paper finds AI technologies rely heavily on computer algorithms that often require vast amounts of personal data and raise legitimate privacy concerns.
- To address this challenge, the paper recommends federal agencies “review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction, and – where necessary – update existing regulations to account for the addition of AI.”
- The paper describes how AI systems are increasingly used to make consequential decisions about people, and the harm that can occur when those systems rely on biased data sets.
- To better account for this problem, the paper recommends that when federal, state, and local agencies use AI-type systems to make decisions about people, these agencies “should ensure the algorithms supporting these systems are accountable and inspectable.”
- The paper highlights how AI’s growing computing power heightens the risk of cyberattacks, making it even more likely that attackers will exploit vulnerabilities in the computer networks of public- and private-sector entities.
- The paper recommends the government address this challenge by taking more active steps to “consider the ways in which [AI] could be used to harm individuals and society, and prepare for how to mitigate these harms.”