In his book A Tale of Two Cities, Charles Dickens, author, poet, and obvious time traveler, perfectly described the GenAI dilemma: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.”
In a sense, GenAI is new to nearly all of us. Of course, it is now nearly impossible to ignore (and I’ve tried) as it dominates the headlines as both the savior and destroyer of humankind. I absolutely see the game-changing potential of GenAI. However, I cannot ignore the risks. GenAI is the thing that goes bump in the night, a cyber Three Mile Island nightmare that will, in under sixty seconds, solve all of your data problems, monitor and respond to threats with something approaching precognition, compose a symphony, write a poem (Ode to a Cyber Apocalypse?), and draw a picture of dancing unicorns. Then, ten seconds after these amazing “successes,” the GenAI system devolves into a drunk-uncle version of HAL 9000, going from mission hero to perpetrator of a cyber incident of epic proportions.
GenAI is the future of IT. However, before we march into that future, we should figure out how to actually secure it.
To this end, I have created an AISec primer to highlight, from a federal perspective, gaps in the NIST controls as they relate to GenAI. In truth, it is little more than an introduction to federal AISec needs, but it is a start. There is a lot missing. For example:
—AI Categorization. Not all AI technologies are created equal. We need a way to categorize AI solutions according to the risks an AI system poses to the enterprise. In other words, we need a FIPS 199-like process for AI. I have some ideas on how this might work, but I didn’t include them here (see the sketch after this list).
—Insider Threat. GenAI opens up an entirely new type of insider threat that is both awesome and terrifying. We need to make sure this threat is part of the AISec conversation. Oddly, hardly anyone is talking about it.
—Risk Deconstruction. We need to add additional parts and sub-parts to the risks I’ve included and ensure I haven’t missed something (I’m pretty sure I have). We also need to include mitigation strategies where possible.
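To make the FIPS 199 idea a bit more concrete, here is a minimal sketch, in Python, of what a high-water-mark categorization might look like for AI systems. Keep in mind that FIPS 199 itself only defines impact levels for confidentiality, integrity, and availability; the AI-specific dimensions below (data sensitivity, model autonomy, output reach) are hypothetical placeholders of my own, not anything from NIST.

```python
from enum import IntEnum

class Impact(IntEnum):
    # Impact levels borrowed from FIPS 199 (low / moderate / high).
    LOW = 1
    MODERATE = 2
    HIGH = 3

def categorize_ai_system(data_sensitivity: Impact,
                         model_autonomy: Impact,
                         output_reach: Impact) -> Impact:
    # The dimension names are hypothetical placeholders; a real AI
    # categorization standard would define its own.
    # FIPS 199 uses a "high-water mark": the overall category is the
    # highest impact level found in any single dimension.
    return max(data_sensitivity, model_autonomy, output_reach)

# Example: a chatbot trained on sensitive data but with little autonomy
# still categorizes as HIGH under the high-water mark.
overall = categorize_ai_system(Impact.HIGH, Impact.LOW, Impact.MODERATE)
print(overall.name)  # HIGH
```

The design choice worth debating is whether AI should keep the high-water mark at all, or whether some dimensions (autonomy, say) should weigh more heavily than others.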
The current state of AISec is lagging far behind the leaps AI technology is making. In my view, when it comes to AISec there are known knowns, unknown unknowns, and everything in between. We need a data-driven, fluid control overlay that would give federal agencies the ability to assess and authorize AI technologies, particularly on the GenAI side of things.
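To illustrate what “data-driven” and “fluid” might mean in practice, here is a rough sketch of a control overlay expressed as data rather than as a static document, so entries can be added or revised as the technology moves. The control IDs (RA-3, AC-6) are real NIST SP 800-53 identifiers, but the supplemental guidance and applicability conditions are purely illustrative assumptions of mine, not an actual NIST overlay.

```python
from dataclasses import dataclass, field

@dataclass
class OverlayControl:
    # One entry in a hypothetical GenAI control overlay: a base
    # NIST SP 800-53 control plus AI-specific tailoring.
    control_id: str                # e.g., "RA-3" (Risk Assessment)
    supplemental_guidance: str     # AI-specific guidance (illustrative)
    applies_when: list[str] = field(default_factory=list)  # conditions

# Illustrative entries only; a real overlay would come from NIST or an
# agency authorizing official, not from this sketch.
genai_overlay = [
    OverlayControl(
        control_id="RA-3",
        supplemental_guidance="Assess model-specific risks such as "
                              "prompt injection and training-data poisoning.",
        applies_when=["system_type == 'generative'"],
    ),
    OverlayControl(
        control_id="AC-6",
        supplemental_guidance="Apply least privilege to model access, "
                              "including fine-tuning and tool execution.",
        applies_when=["model_autonomy >= 'moderate'"],
    ),
]

for entry in genai_overlay:
    print(entry.control_id, "->", entry.supplemental_guidance)
```

An overlay kept in this form could be versioned, queried, and updated as new GenAI risks surface, which is roughly what I mean by “fluid.”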
Welcome to the AI Jungle.