Army Overhauls AI Policy and Tech to Secure GenAI Use Amid Growing Risks

The U.S. Army is tightening its grip on generative artificial intelligence (GenAI) implementation across its systems, with a strategic push to ensure responsible and secure adoption. In an exclusive interview with Breaking Defense, Army Chief Information Officer Leonel Garciga revealed sweeping changes to policy and infrastructure, centered on the Army Enterprise Large Language Model (LLM) Workspace, to align AI innovation with military-grade data protection standards.

Garciga emphasized that while GenAI tools can deliver tremendous efficiency, saving time and money and reducing operational complexity, their use must not compromise sensitive military data, personally identifiable information (PII), or the integrity of records required under the Freedom of Information Act (FOIA). To that end, the Army recently updated its FOIA and records management policies to reflect AI-specific guidelines.

“If you’re using an AI tool, it doesn’t absolve you from meeting those requirements,” Garciga noted. “I still have a responsibility to the American public to provide FOIA, and to ensure adherence to both federal and DoD record-management policies.”

A key moment in this shift was the Army’s decision in April to block access to earlier GenAI models like NIPRGPT, developed by the Air Force Research Lab. These early models, while valuable for training and exploration, lacked sufficient cybersecurity safeguards. Garciga stressed that the move was not a condemnation of those efforts, but rather a sign that the Army was ready to transition to more secure systems, especially when dealing with contractor data and PII.

The Army Enterprise LLM Workspace, launched in May, was developed as a secure, compliant GenAI environment. It has since been endorsed by the Pentagon’s Chief Digital and AI Office (CDAO) for use across joint headquarters. Unlike the “first wave” AI tools, the Workspace includes robust data protection mechanisms and records management features, helping ensure transparency and accountability in alignment with military obligations.

Garciga also pointed to the need for broader government action. “We haven’t updated our legislation or policies to reflect the advent and the deployment of LLMs in our environment,” he said. Commercial sectors like banking and healthcare have already moved to enforce strict AI compliance, and he argued that the government should follow suit rather than reinvent existing best practices.

The Army is also reviewing all enterprise software providers, such as SAP, Salesforce, and Palantir, that are independently integrating GenAI into their platforms. Garciga's office is now working closely with vendors and contracting officials to revise service-level agreements, ensuring all AI features meet Army data security standards.

“I’m sure we’re going to miss some things,” Garciga admitted. “The Army’s big. We’ve got a lot of programs. So our big push is to make sure we get the word out. We really do spend a lot of time protecting the Army’s data.”

As GenAI becomes deeply integrated into military workflows, the Army’s approach may serve as a blueprint for balancing technological innovation with national security and transparency. The message is clear: artificial intelligence is here to stay, but only if it’s built on a foundation of trust, safety, and regulatory compliance.
