Google is bringing AI agents into the Pentagon for a workforce that numbers about 3 million people, giving civilian and military staff new tools to handle routine work on unclassified networks.
The rollout centers on Gemini agents, which can carry out jobs on a user’s behalf after being told what to do. That means people inside the Pentagon will be able to set tasks in plain language and let the software take care of parts of the job without writing code.
The first stage will stay on unclassified systems for a simple reason: that is where most Defense Department users already work.
Emil Michael, the under secretary of defense for research and engineering, said the department plans to go further after that. He said, “We’re starting with unclassified because that’s where most of the users are, and then we’ll get to classified and top secret.”
He also said talks with Google about using the agents on the classified cloud are already under way. Michael added, “I have high confidence they’re going to be a great partner on all networks.”
The new setup will let people across the Pentagon build their own AI agents by typing normal instructions instead of using technical commands.
Jim Kelly, a vice president at Google, said in a Tuesday blog post that both civilian employees and military personnel at the Defense Department will be able to create those agents using natural language. The idea is to make the system usable by regular workers, not just specialists.
Even so, Michael made clear that those discussions are already active on the government side.
The wider Pentagon push into Google’s tools did not start this week. The Defense Department has already been using a Google chatbot through the GenAI.mil portal for unclassified work since December.
A Pentagon spokesperson said 1.2 million employees have used that system so far. Those users have entered 40 million unique prompts and uploaded more than 4 million documents.
Starting Tuesday, the portal will also offer Gemini agents, adding a new layer of automation to work that is already being done through the platform.
Michael said the department needs more AI, not less, but that people still need to check what the software produces. He said, “It saves you a lot of time in the middle, but you have to review at the end to make sure there’s no hallucinations.”
He also said the Pentagon can reduce those risks with training, guidance, and policies, especially when agents might hide mistakes or make errors harder to spot. Michael added that he was surprised by how far behind the department was when he arrived: “When I got here and took over the AI portfolio in August, I was somewhat shocked that we didn’t have the basic AI capabilities that most people, consumers around the world have now.”
The Pentagon’s expanding work with Google is happening at the same time as a bitter fight with Anthropic.
Court filings show that more than 30 employees from OpenAI and Google DeepMind filed a statement on Monday backing Anthropic’s lawsuit against the U.S. Defense Department. Their filing came after the federal government labeled Anthropic a supply-chain risk.
That label is usually tied to foreign adversaries. In this case, the Pentagon used it against a major American AI company after Anthropic refused to allow its technology to be used for mass surveillance of Americans or for autonomously firing weapons.
The Defense Department had argued that it should be able to use AI for any “lawful” purpose and should not be limited by a private contractor.
The court brief from the OpenAI and Google employees said the government’s action went too far. It stated, “The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.” One of the signatories was Jeff Dean, the chief scientist at Google DeepMind.
The filing hit the docket a few hours after Anthropic, the company behind Claude, filed two lawsuits against the Defense Department and other federal agencies.
In the brief, the employees argued that if the Pentagon did not like the contract terms it had with Anthropic, it had another option.
They wrote that if the department was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” it could have “simply canceled the contract and purchased the services of another leading AI company.”
The brief warned, “If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond. And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”
Michael, who led negotiations with Anthropic, said the dispute would not be settled in court and that the Pentagon was now “moving on.” That stance comes with history behind it.
In 2018, thousands of Google employees protested the company’s role in Project Maven, a Pentagon program that used AI to analyze video from America’s overseas drone wars. The backlash was strong enough that Google chose not to renew that contract.
Later, the company dropped some restrictions on working with the military.