For the past decade, artificial intelligence has been used to recognize faces, rate creditworthiness and predict the weather. At the same time, cyberattacks have escalated, growing more sophisticated and harder to detect. The combination of AI and cybersecurity was inevitable as both fields sought better tools and new uses for their technology. But there’s a massive problem that threatens to undermine these efforts and could allow adversaries to bypass digital defenses undetected.
The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable method to get around AI-powered defenses. Many companies may not be ready to deal with the escalating challenge. The global market for AI cybersecurity is already expected to triple to $35 billion by 2028. Security providers and their clients may have to patch together multiple strategies to keep threats at bay.
The very nature of machine learning, a subset of AI, is the target of data poisoning. Given reams of data, computers can be trained to categorize information correctly. A system may not have seen a picture of Lassie, but given enough examples of different animals that are correctly labeled by species (and even breed), it should be able to surmise she’s a dog. With even more samples, it would be able to correctly guess the breed of the famous TV canine: Rough Collie. The computer doesn’t really know. It’s merely making a statistically informed inference based on past training data.
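To make that inference, and its fragility, concrete, here is a minimal sketch in Python using scikit-learn’s k-nearest-neighbors classifier. The animals, the weight-and-height features, and the flipped labels are all invented for illustration and are not drawn from any real attack; the point is only that a model trained on mislabeled data will train without complaint and then quietly guess wrong.

```python
# A toy classifier trained on labeled examples, then retrained on data
# with a few deliberately flipped labels -- a crude illustration of how
# data poisoning corrupts a model's inferences. All values are invented.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features: [weight_kg, height_cm]
animals = [
    [25.0, 56.0],  # dog
    [30.0, 61.0],  # dog
    [4.0, 23.0],   # cat
    [5.0, 25.0],   # cat
    [28.0, 58.0],  # dog
    [3.5, 24.0],   # cat
]
labels = ["dog", "dog", "cat", "cat", "dog", "cat"]

clean_model = KNeighborsClassifier(n_neighbors=3)
clean_model.fit(animals, labels)

# An unseen, collie-sized animal: the model infers "dog" purely from
# statistical similarity to its training data -- it doesn't "know".
lassie = [[27.0, 60.0]]
print(clean_model.predict(lassie))  # -> ['dog']

# Data poisoning: flip just two training labels (the first and fifth
# dogs become "cats"). Training succeeds as usual, but the model's
# inference for the same unseen animal silently flips.
poisoned_labels = ["cat", "dog", "cat", "cat", "cat", "cat"]
poisoned_model = KNeighborsClassifier(n_neighbors=3)
poisoned_model.fit(animals, poisoned_labels)
print(poisoned_model.predict(lassie))  # -> ['cat']
```

Nothing in the second run signals that anything is wrong, which is what makes this class of attack so hard to trace: the model, the code and the features are untouched, and only the labels were tampered with.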