Australia released its artificial intelligence strategy on Tuesday. The surprising part? The government is stepping back from the tougher rules it had talked about earlier for high-risk AI uses.
Australia currently has no AI-specific laws. Last year, the Labor government said it might introduce voluntary guidelines in response to public concerns about privacy, safety, and transparency.
Tuesday’s National AI Plan focuses on three things: getting investment for advanced data centers, building up AI skills to protect jobs, and keeping the public safe as AI becomes part of everyday life.
The government plans to use existing laws to manage AI risks instead of creating new ones. “The government’s regulatory approach to AI will continue to build on Australia’s robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks,” the plan states.
Individual government agencies will handle AI risks in their own areas.
Regulators across the globe have been raising red flags about misinformation from AI tools that generate content. Microsoft-backed OpenAI’s ChatGPT and Google’s Gemini are becoming widely used, which has heightened these concerns.
Last month, the government announced it would create an AI Safety Institute in 2026. The institute will help monitor emerging risks and respond to threats.
Federal Industry Minister Tim Ayres said the roadmap aims to help Australians benefit from new technology. It’s trying to balance innovation with managing risks.
“As the technology continues to evolve, we will continue to refine and strengthen this plan to seize new opportunities and act decisively to keep Australians safe,” Ayres said.
Not everyone is convinced by the government’s approach, though. Niusha Shafiabady, an Associate Professor at Australian Catholic University, said the updated roadmap has critical gaps.
“The plan is ambitious in unlocking data and boosting productivity, but it leaves critical gaps in accountability, sovereignty, sustainability, and democratic oversight,” Shafiabady said.
She added: “Without addressing these unexplored areas, Australia risks building an AI economy that is efficient but not equitable or trusted.”