5 Strategic conversations every government leader should have about AI

The rapid advancement of AI is outpacing governance frameworks, widening the knowledge gap, and intensifying concerns around risk and transparency.

Public Sector Network recently hosted roundtable discussions across Australia and New Zealand on the topic of ‘Responsible AI’. Our CTO and VP of Strategy engaged with leaders to tackle a critical question: what are the best practices for establishing ethical and trustworthy AI in government?

1. Ethics and governance: establishing transparent AI frameworks

A central theme across the different roundtables was the urgent need to embed robust ethical AI frameworks within public sector applications. These frameworks ensure AI systems are designed with transparency, fairness and accountability at their core, especially in high-stakes areas such as healthcare, child protection, and justice. Embedding privacy, security, and fairness in AI design upholds ethics, builds public trust, and ensures responsible deployment.

Critical considerations:

  • Can you explain or justify the outputs of your AI systems?
  • Moving into a production context, how would you pass an audit or present a full record of your process for an investigation or Royal Commission?
  • How do we operationalise AI within business processes, rather than using it merely to summarise data passively?
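To make the audit question above concrete, here is a minimal sketch of what a per-decision audit record might capture. All names (`DecisionRecord`, `record_decision`, the field names) are illustrative assumptions, not any particular agency's system:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per AI-assisted decision: enough context
    to explain and justify the output later in an audit or inquiry."""
    model_version: str   # which model (and version) produced the output
    prompt: str          # the exact input given to the system
    output: str          # what the system returned
    sources: list[str]   # the documents that grounded the answer
    human_reviewer: str  # who signed off (human oversight)
    timestamp: str       # when the decision was made (UTC, ISO 8601)

def record_decision(model_version: str, prompt: str, output: str,
                    sources: list[str], reviewer: str) -> DecisionRecord:
    """Capture an AI-assisted decision in an exportable, auditable form."""
    return DecisionRecord(model_version, prompt, output, sources, reviewer,
                          datetime.now(timezone.utc).isoformat())
```

Because each record is a plain dataclass, `json.dumps(asdict(record))` exports the full trail in a form that can be handed to an auditor or investigation.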

Expert perspective:
Frameworks and guardrails go hand in hand when designing responsible systems. A framework sets the direction, ensuring clarity and alignment with strategic goals. Guardrails provide the necessary boundaries to keep the system operating within ethical, legal, and operational limits, including human oversight. Many current proofs of concept overlook the practical considerations needed to address these requirements. However, by strategically planning for these needs, concepts can successfully transition to production and achieve lasting impact.

Practical application:
In South Australia, AI is playing a pivotal role in healthcare, particularly in predictive modelling, where transparent and explainable systems ensure AI serves as a valuable support tool rather than an opaque decision-maker. Leading initiatives are advancing AI-driven algorithms to analyse mammograms, enhancing early breast cancer detection.

2. Data as a foundational asset

There was strong agreement across the roundtable discussions on the foundational importance of data. The next step for agencies is to elevate this by curating high-quality data, paving the way for AI success. This process directly tackles two significant challenges: improving data quality and reducing the risk of AI hallucinations, where models generate inaccurate or misleading outputs.

To achieve this, agencies can:

  • Curate relevant datasets that are consistent, screened for sensitive and private information, and fit for purpose. This enhances AI reliability, mitigates risks, and prevents wasted processing of irrelevant or low-value content.
  • Leverage RAG Ops to adjust AI settings, reducing unnecessary creativity and minimising hallucinations. Pre-processing with optimised chunking yields far better results than generic generative AI.
  • Implement inbuilt governance with visibility into source content, audit trails for prompts and results, and clear oversight of model settings.
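As an illustrative sketch only, the chunking and audit-trail ideas above might look like the following. The keyword-overlap scoring is a toy stand-in for real embedding-based retrieval, and every name here (`chunk`, `AuditedRetriever`) is a hypothetical choice:

```python
from dataclasses import dataclass, field

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split curated source text into overlapping chunks for retrieval."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

@dataclass
class AuditedRetriever:
    """Toy retriever with the inbuilt governance features discussed above:
    visible source content and an audit trail of prompts and results."""
    chunks: list[tuple[str, str]]               # (source_id, chunk_text)
    audit_log: list[dict] = field(default_factory=list)

    def retrieve(self, prompt: str, top_k: int = 2) -> list[tuple[str, str]]:
        # Toy relevance score: word overlap between prompt and chunk.
        words = set(prompt.lower().split())
        scored = sorted(
            self.chunks,
            key=lambda c: len(words & set(c[1].lower().split())),
            reverse=True,
        )
        results = scored[:top_k]
        # Every prompt and its grounding sources are recorded for audit.
        self.audit_log.append({"prompt": prompt,
                               "sources": [src for src, _ in results]})
        return results
```

Grounding generation on only the retrieved chunks (rather than the model's open-ended knowledge) is what curbs hallucination, and the log gives the oversight of prompts and results the bullet above calls for.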

Critical considerations:

  • How does AI integrate with existing policies and procedures relating to privacy, security, and recordkeeping standards?
  • How do we ensure that AI systems only access the most relevant and high-quality data?
  • How do you remove outdated information and data and keep models fresh?

Expert perspective:
Curation will be critical for any intelligent service in use, both to assure quality and to contain costs. Additionally, information preparation processes such as chunking, coupled with grounded models, deliver markedly better quality and performance. These are critical aspects of moving a project from concept to production.

Practical application:
In New Zealand, shared data platforms are being developed to improve accessibility and foster collaboration. Sovereign AI models and centralised data systems enable multiple agencies to leverage high-quality data, ensuring a unified approach to decision-making.

The DTA Policy for the responsible use of AI in government highlights preparation, transparency, and business agility. Government should be able to set its own roadmap and remain flexible enough to choose what is required and appropriate for different needs. Teams need to be aware that there isn't just one AI: it is quite likely there will be legitimate needs to run several models and smaller experiences across different business applications.

3. Workforce transformation: upskilling and AI literacy

Upskilling is no longer just a technical requirement; it is a fundamental shift in how public sector employees operate. Employees must go beyond simply learning the tools: they must understand the guidelines, their obligations, and how these connect to the broader technology landscape. Leaders at the roundtables recognised that this shift requires not only individual training but also a rethinking of processes to ensure AI is effectively embedded in government operations. The message is clear: the time to start upskilling is now.

Critical considerations:

  • How do we close the skills gap between existing capabilities and those needed for AI implementation?
  • How do we integrate AI into everyday business operations?
  • How can we balance technology with transparency, governance, compliance, and risk management?

Expert perspective:
We have a responsibility to respond to pressing needs in the community and to prioritise resources, reducing inundation and the risk of missing signals in the noise. It is critical that we stop overburdening the workforce with tedious activities that drain time and energy. These improvements help new personnel become effective sooner and reduce burnout among longer-serving personnel.

Practical application:
Australian universities are leading the way in responsible AI education, equipping future and current employees with the critical skills needed to effectively manage AI in public sector operations. At the same time, government agencies are proactively establishing working groups and committees to identify AI use cases, assess capabilities, and share key learnings. These collaborative efforts are strengthening the public sector’s AI readiness, fostering innovation, and reducing the risk of project failures.

4. Collaboration and resource sharing across agencies

Government leaders in each state reinforced a critical truth: cross-agency collaboration and resource sharing are essential for unlocking AI’s full potential in the public sector. While many recognise this, making it a reality remains a persistent challenge. However, some agencies are already taking the first steps, considering the long-term value of collaboration and designing their AI systems with shared resources in mind. By pooling data, AI models, and resources, they can eliminate silos, reduce duplication, and amplify the impact of their AI initiatives.

Critical considerations:

  • How can we effectively share data and resources across agencies while ensuring privacy and security across multiple platforms?
  • How can we foster a culture of collaboration across agencies to ensure seamless data sharing and resource utilisation?
  • How can we adopt best practices and leverage working solutions proven by other departments?

Expert perspective:
AI is a team sport. By creating shared AI models and frameworks, agencies can pool resources, knowledge, and data, reducing duplication and fostering innovation across the public sector.

Practical application:
New Zealand has established centralised AI libraries and sovereign large language models (LLMs), empowering multiple agencies to access AI capabilities without duplicating effort. Across the region, there is growing consensus on improving information sharing to track AI initiatives across agencies. In the US, a centralised AI project register publicly lists AI-driven initiatives across sectors including transport and engineering, farming, biomedical, and medical, providing transparency and enabling cross-sector collaboration.

5. Building trust in AI through transparency

Every roundtable made one thing clear: building trust and transparency in AI is not optional. When AI models are explainable, dynamic, and underpinned by high-quality, curated data, they provide the clarity necessary to assure the public that these systems operate in their best interests. The roundtable discussions emphasised how this balance of explainable AI and human oversight forms the cornerstone of the transparency needed to earn and maintain public trust.

Critical considerations:

  • How do we ensure AI supports, rather than replaces, human decision-making?
  • How do we assure that findings are accurate and up to date?
  • How do we integrate AI’s insights into operational decision-making?

Expert perspective:
AI should be seen as a progression, not a revolution. AI must be transparent and accountable, with clear mechanisms ensuring decisions are understandable and support—rather than replace—human oversight.

New Zealand sets a strong example with its Responsible AI Guidance for the Public Service: GenAI, which prioritises transparency, accountability, and public trust. By following similar principles, governments can integrate AI responsibly, improving efficiency while maintaining confidence in AI-supported decisions.

Practical application:

Queensland showcases AI’s benefits through low-risk applications like recruitment pre-screening, demonstrating how AI can be tracked, understood, and explained while driving government innovation. The Federal Attorney-General’s proposed automated decision-making reform, stemming from the Robodebt Royal Commission, underscores the need for fairness and transparency in AI-driven government services. Meanwhile, the ruling that held Air Canada liable for incorrect bereavement fare advice its chatbot gave in 2022 reinforces that AI automation does not absolve organisations of responsibility and accountability.

A final word from our experts

As AI continues to transform the public sector, its true value lies in empowering human decision-making, not replacing it. The path to successful AI adoption is built on transparency, collaboration, and a strong foundation of data readiness.

The roundtables offered great insight into the different approaches to AI, with a valuable exchange on the governance foundations required for successful long-term AI adoption. The key takeaway: embracing AI in government is not just a possibility, and the time to act is now. By setting up solid scaffolding, government can contain risk and cost, and build on high-quality experiences that truly deliver and elevate public service.