A graduate research project is raising questions about how longstanding transparency laws may be creating unintended risks for homeland security in the digital age.
In her Center for Homeland Defense and Security (CHDS) thesis, "Leaving the FOIA Window Open: Implications for U.S. Homeland Security in the Age of Artificial Intelligence," Melanie Simmons, a statistician at U.S. Immigration and Customs Enforcement (ICE), examines how the Freedom of Information Act (FOIA) interacts with modern data and AI capabilities.
Originally designed for a paper-based environment, FOIA now operates in a landscape where large volumes of digital data can be released, aggregated, and analyzed at scale. Simmons’ research highlights how artificial intelligence can combine seemingly unrelated data points from multiple disclosures to reveal sensitive information—a concept known as the “Mosaic Theory.”
The study outlines how actors can leverage FOIA's "blind requester" principle, which does not require individuals to disclose their intent, to submit large or strategically targeted requests. Combined over time, the resulting datasets could be used to reconstruct law enforcement-sensitive information or to identify operational patterns.
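The aggregation risk described above can be illustrated with a short, entirely hypothetical sketch: all field names and records below are invented for illustration and do not come from Simmons' thesis or any real FOIA release. The point is only that two individually innocuous disclosures, once joined on shared fields, can suggest a pattern neither discloses on its own.

```python
# Hypothetical illustration of mosaic-style aggregation.
# All data is invented; no real FOIA release is represented.

# Disclosure 1: redacted travel vouchers (month and city only).
vouchers = [
    {"month": "2024-03", "city": "El Paso"},
    {"month": "2024-03", "city": "Tucson"},
    {"month": "2024-05", "city": "El Paso"},
]

# Disclosure 2: redacted equipment shipments (month, city, item).
shipments = [
    {"month": "2024-03", "city": "El Paso", "item": "surveillance kit"},
    {"month": "2024-04", "city": "Laredo", "item": "radios"},
]

# Aggregation step: correlate the two releases on (month, city).
voucher_keys = {(v["month"], v["city"]) for v in vouchers}
correlated = [s for s in shipments
              if (s["month"], s["city"]) in voucher_keys]

# The join suggests where personnel travel and equipment shipments
# converged -- a pattern neither release states explicitly.
for record in correlated:
    print(record["month"], record["city"], record["item"])
```

In practice an adversary would automate this kind of cross-release correlation over many requests, which is precisely the scale at which AI tools change the calculus.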
Simmons’ work also points to broader implications for how federal agencies manage data disclosure in an era of advanced analytics. The research suggests that existing frameworks may need to evolve to address risks tied to data aggregation and inference.
As part of her recommendations, Simmons proposes modernizing FOIA processes to better account for these challenges, incorporating current federal data protection practices, and using AI tools to assess disclosure risks before information is released.


