Washington D.C.
Tuesday, July 8, 2025

NIST Cybersecurity Center Seeks Public Comment on Internal AI Chatbot

Key Takeaways:

  • NIST’s NCCoE built a secure, internal-use chatbot using retrieval-augmented generation (RAG) technology to help staff more efficiently search and summarize cybersecurity guidance.
  • The chatbot is not public-facing and is intended for internal use only. It is deployed locally and includes multiple privacy and security safeguards to address known risks.
  • The system uses open-source tools and models to ensure full control over deployment and data handling within the NCCoE AI Lab.
  • Public comments are open through August 4, 2025, at 11:59 p.m. EDT.

The National Institute of Standards and Technology (NIST) has released a new internal draft report documenting its development of a secure AI chatbot to support staff at the National Cybersecurity Center of Excellence (NCCoE). The prototype was designed to improve how NCCoE employees discover, access, and synthesize cybersecurity guidance within the center’s library of documents, using advanced artificial intelligence without compromising security.

Built on a retrieval-augmented generation (RAG) architecture, the chatbot integrates information retrieval with natural language generation, allowing it to answer nuanced questions based on the actual content of NIST publications. Unlike traditional keyword-based search tools, this LLM-powered system understands the context of questions and produces answers with page-level citations to source materials, keeping internal cybersecurity work fast, traceable, and relevant.
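The RAG pattern described above can be illustrated with a minimal sketch. Everything here is a stand-in, not NIST's implementation: the corpus, the token-overlap retrieval, and the answer-assembly step (which replaces the actual LLM call) are assumptions for illustration only.

```python
# Minimal RAG sketch: retrieve relevant chunks, then compose an answer
# with page-level citations. The scoring and "generation" steps are
# illustrative stand-ins, not the NCCoE's actual system.

def tokenize(text):
    return set(text.lower().split())

def retrieve(question, chunks, k=2):
    """Rank stored chunks by simple token overlap with the question."""
    q = tokenize(question)
    return sorted(chunks,
                  key=lambda c: len(q & tokenize(c["text"])),
                  reverse=True)[:k]

def answer(question, chunks):
    """Compose a grounded answer with citations; a real system would
    pass the retrieved context to an LLM instead of concatenating."""
    hits = retrieve(question, chunks)
    cites = ", ".join(f'{c["doc"]} p.{c["page"]}' for c in hits)
    context = " ".join(c["text"] for c in hits)
    return f"{context} [Sources: {cites}]"

# Hypothetical two-chunk corpus for demonstration.
corpus = [
    {"doc": "SP 800-53", "page": 17,
     "text": "Access control policies restrict system access."},
    {"doc": "SP 800-171", "page": 4,
     "text": "Protect CUI in nonfederal systems."},
]
print(answer("What restricts system access?", corpus))
```

Production systems typically use vector embeddings rather than token overlap for retrieval, but the retrieve-then-generate shape is the same.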

While chatbot interfaces are becoming more common across both government and private industry, NIST’s version stands out in key ways. It’s entirely local and private, running on secured internal servers using open-source components. The foundation model is Meta’s Llama 3.1 with 70 billion parameters, chosen for its balance of performance and compatibility with the NCCoE’s GPU infrastructure.

To support trust and accuracy, the chatbot includes multiple technical safeguards. It filters out hallucinated responses, cross-verifies its answers against known data chunks, and is accessible only by trusted users over a VPN. In short: the tool is not a replacement for human analysts, but a productivity-enhancer, with security and accuracy front of mind throughout its lifecycle.
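The cross-verification safeguard mentioned above can be sketched as a simple grounding check: each sentence of a draft answer is kept only if it is supported by the retrieved chunks. The overlap scoring, threshold, and sample data below are assumptions for illustration, not the draft report's actual method.

```python
# Illustrative grounding check: drop answer sentences with no support
# in the retrieved context. The 0.5 word-overlap threshold is an
# assumption, not a documented NIST parameter.

def words(text):
    return set(text.lower().strip(".").split())

def is_grounded(sentence, chunks, min_overlap=0.5):
    """A sentence passes if at least half its words appear in some chunk."""
    w = words(sentence)
    if not w:
        return True
    return any(len(w & words(c)) / len(w) >= min_overlap for c in chunks)

def filter_hallucinations(sentences, chunks):
    """Keep only sentences supported by the retrieved context."""
    return [s for s in sentences if is_grounded(s, chunks)]

# Hypothetical retrieved context and draft answer.
chunks = ["Multi-factor authentication reduces credential theft risk."]
draft = [
    "Multi-factor authentication reduces credential theft risk",
    "NIST mandates quantum encryption for all email",
]
print(filter_hallucinations(draft, chunks))
```

Real deployments use stronger checks (semantic similarity, entailment models), but the principle is the same: answers are verified against known source chunks before reaching the user.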

While the chatbot remains an internal tool, the draft report provides valuable insight into how federal agencies are cautiously and creatively experimenting with LLMs. NIST is now requesting public input to shape its ongoing work.

The full draft is available here, and details on how public comments can be submitted are here.

(AI was used in part to facilitate this article.)

Matt Seldon
Matt Seldon, BSc., is an Editorial Associate with HSToday. He has over 20 years of experience in writing, social media, and analytics. Matt has a degree in Computer Studies from the University of South Wales in the UK. His diverse work experience includes positions at the Department for Work and Pensions and various responsibilities for a wide variety of companies in the private sector. He has been writing and editing various blogs and online content for promotional and educational purposes in his job roles since first entering the workplace. Matt has run various social media campaigns over his career on platforms including Google, Microsoft, Facebook and LinkedIn on topics surrounding promotion and education. His educational campaigns have been on topics including charity volunteering in the public sector and personal finance goals.
