PERSPECTIVE: AI Is Not a Witness

The Emerging Risks of Letting GenAI Write Police Reports, Legal Narratives, and Use-of-Force Statements

A federal judge recently called out Immigration and Customs Enforcement officials for something many people in government, law enforcement, and technology have quietly worried about since artificial intelligence (AI) tools became widely available: the use of Generative AI (GenAI) to write legal documents, police reports, and other critical, sensitive documentation. In this case, ChatGPT was used to write use-of-force reports that didn’t match body-camera footage.

In the judge’s 223-page ruling, a footnote described an agent who provided ChatGPT a sentence and several images, then requested a “professional narrative” documenting the encounter. The resulting report contained factual inaccuracies and described elements of the event that did not occur. The judge noted that the use of AI “undermines the credibility” of these reports and “may explain the inaccuracy.”

That footnote may become one of the most critical early markers in the national conversation about AI in the justice system, because it exposes a truth few want to admit:

We are adopting powerful technology faster than we are building the policy, oversight, and training necessary to use it safely.

And that is dangerous – not because AI is inherently bad – but because we are beginning to treat it like a substitute for judgment rather than a tool that supports it.

The Temptation: Faster, Cleaner, Professional-Looking Reports

If you’ve written case narratives, affidavits, suppression responses, or after-action reports, you know they are time-consuming and mentally draining, particularly after high-stress or complex events.

AI promises efficiency. It promises clarity. It promises time. But it also promises something it cannot deliver: lived experience.

A report is not simply a summary of events. It is a reflection of:

  • Perception;
  • Decision-making under pressure;
  • Threat evaluation;
  • Reasonable belief;
  • And ultimately, accountability.

AI does not perceive a threat, feel fear, evaluate risk to life, or weigh imperfect information in real time.

And AI will never take the stand to defend the actions recorded in a report bearing your name.

The Danger: Accuracy Without Truth

GenAI is essentially predictive language at scale; it generates the most statistically probable next words based on patterns in its training data. That means it can produce statements that sound correct but have no relationship to reality.
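To see why, consider a deliberately toy sketch of next-word prediction. The word table and probabilities below are invented for illustration; real models operate over billions of parameters, but the principle is the same: each word is chosen for likelihood, not checked against what happened.

```python
# A toy illustration of how generative language models work: each next word
# is chosen by statistical likelihood, not by any check against reality.
# The word table and probabilities here are invented for demonstration.
import random

# Hypothetical next-word probabilities "learned" from report-style language.
# Nothing in this table encodes what actually occurred at any scene.
NEXT_WORD = {
    "subject":  [("resisted", 0.6), ("complied", 0.3), ("fled", 0.1)],
    "resisted": [("arrest", 0.8), ("commands", 0.2)],
    "officer":  [("observed", 0.5), ("deployed", 0.3), ("ordered", 0.2)],
}

def next_word(word: str) -> str:
    """Pick the next word by probability alone: fluent, not factual."""
    candidates = NEXT_WORD.get(word)
    if not candidates:
        return "."
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

# Starting from "subject", this model will most often produce
# "subject resisted arrest" simply because that phrasing dominates its
# (invented) statistics, regardless of whether any resistance occurred.
sentence = ["subject"]
while sentence[-1] != ".":
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

Run repeatedly, the sketch keeps writing "subject resisted arrest" because the phrase is statistically common, not because anyone resisted. Scale that behavior up and you get fluent, confident narratives untethered from events.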

AI’s confidence is not evidence, its eloquence is not credibility, and its output – no matter how polished – is not the truth of what occurred.

When AI generates a use-of-force report:

  • Details can be added that no witness saw
  • The sequence can be rearranged
  • Motivations may be assigned
  • Gaps may be filled using conjecture rather than fact

In an adversarial justice system, where the credibility of law enforcement, prosecutors, and expert testimony is already under scrutiny, that is gasoline on an open flame. Because the danger isn’t just a wrong fact. It’s the appearance that the system is willing to let technology reshape reality.

Privacy and Evidence Risks: What Was Uploaded, and Who Owns It Now?

Another critical concern in these circumstances is the possibility that sensitive images were uploaded to a public version of ChatGPT.

If that occurred, the implications are vast:

  • Chain of custody concerns;
  • Unauthorized disclosure;
  • Victim image exposure;
  • Violations of privacy statutes; and
  • Compliance failures (CJIS, HIPAA, state privacy laws).

The moment sensitive evidence, especially images, information on minors, sexual exploitation material, or medical information, is voluntarily provided to a third-party system without a legal framework or contractual protections, the organization may have lost control.

We have spent decades building legal process frameworks for phone records, GPS, cloud servers, vehicle data, online platforms, routers, and messaging apps. GenAI, however, creates a new layer—a derivative data layer of processed, inferred, synthesized information that may reveal more about you than the raw data itself.

We are no longer concerned only with the uploaded data. The risk now extends to:

  • How the data is transformed
  • Who sees it
  • How long it persists
  • Whether it is used to train future models
  • And whether it is discoverable

AI introduces a new breed of evidence risk: not only what exists, but what the machine creates, and whether that generated narrative is mistakenly treated as factual.

AI Cannot Be Allowed to Create Legal Documents That Replace Thought

The more existential risk is cultural.

Technology becomes dangerous when people stop thinking because the tool appears to think for them.

For decades, the justice system has confronted this:

  • “The computer says this license is expired.”
  • “The cell site shows he was near the tower.”
  • “The algorithm flagged this person as a risk.”

We have seen wrongful arrests, false accusations, and miscarriages of justice because a tool is perceived as neutral when it is often flawed, incomplete, or misunderstood.

Generative AI now raises the stakes.

If officers begin to rely on AI to articulate perception, justification, or rationale – elements core to legal standards such as “objective reasonableness” – then we are allowing an algorithm to reconstruct memory, intent, and threat assessment.

The courts have never allowed a tool to testify about someone’s perception or decision-making, and AI is no different.

The Answer: Use AI Like a Tool, Not a Crutch

The solution is not to reject AI. That would be as misguided as adopting it without restraint.

The answer is disciplined integration:

  • AI can assist, but it cannot author experience.
  • AI can provide structure, but it cannot provide judgment.
  • AI can summarize content, but it must not invent context.

Just like night-vision goggles, forensic software, license plate readers, or data fusion platforms, AI is powerful when used by trained humans, and dangerous when used instead of them.

For agencies, governments, and legal professionals, this means implementing:

  • Human-in-the-loop review: Mandatory, logged, accountable human verification.
  • Internal policy that matches risk: Not all reports are equal, and not all tasks should be automated.
  • Restricted data handling: Enterprise tools must govern sensitive uploads with contractual safeguards.
  • Training on both usage and limitations: Officers must understand what AI cannot do as clearly as what it can.
  • Transparency: If AI generates portions of a document, internal notation of that fact matters because discovery will eventually demand it (a sketch of such a record follows below).
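For illustration only, here is a minimal sketch of what a logged, human-in-the-loop AI-use record might contain. Every field name and value is hypothetical; an actual schema would be dictated by agency policy, records-retention rules, and counsel.

```python
# A minimal, hypothetical sketch of an internal AI-use record. These field
# names are not from any real system; they illustrate the kind of facts a
# transparency policy would need to capture.
import json
from datetime import datetime, timezone

ai_use_record = {
    "document_id": "UOF-2025-00000",          # placeholder case identifier
    "tool": "enterprise-llm",                  # which approved tool was used
    "ai_generated_sections": ["summary"],      # what the tool actually wrote
    "source_material_uploaded": False,         # were images/evidence sent out?
    "reviewed_by": "badge-0000",               # the accountable human reviewer
    "reviewer_attestation": (
        "I have compared this narrative against my own recollection and the "
        "body-camera footage, and I adopt it as my own statement."
    ),
    "reviewed_at": datetime.now(timezone.utc).isoformat(),
}

# Storing the record alongside the report makes AI involvement discoverable
# by design rather than by accident.
print(json.dumps(ai_use_record, indent=2))
```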

If AI Is the Future, Accountability Must Be the Guardrail

AI will transform every sector: law enforcement, intelligence, litigation, emergency response, case management, and public policy. But the measure of progress is not how quickly we adopt technology. It is whether that technology strengthens trust, reinforces credibility, and enhances rather than replaces professional judgment.

No matter how advanced AI becomes, we cannot outsource duty, responsibility, integrity, or the consequences of being wrong.

AI will evolve. Courts will evolve. Policy will evolve. But one reality will not change:

AI is not a witness. AI is not accountable. AI is not a substitute for thought.
We may use AI, but we must never allow AI to use us.

Kevin Metcalf is Vice President of the Law Enforcement Division at Whooster/OWL Intel, where he works at the intersection of investigations, intelligence, and technology. A former federal agent and prosecutor, he has spent decades helping law enforcement agencies navigate complex cases involving child exploitation, human trafficking, and digital evidence.

He is a founding board member of Raven and the founder and former CEO of the National Child Protection Task Force, bringing together multidisciplinary teams to support agencies working to protect vulnerable children and disrupt trafficking networks. Metcalf is widely known for advancing investigative approaches that combine legal strategy with open-source intelligence, geospatial analysis, cryptocurrency tracing, and other emerging technologies.

His career has spanned military, state, local, and federal law enforcement, as well as prosecution and public service, including leadership of Oklahoma’s Human Trafficking Response Unit. He writes frequently on human trafficking, child protection, and the evolving role of technology in modern investigations.
