In this commentary, VMIA Chief Information Security Officer, Ian Pham, shares his top 10 recommendations drawn from his own experience to guide responsible adoption, ensuring risk management and governance safeguards evolve as rapidly as emerging AI capabilities.
Artificial Intelligence (AI) is rapidly weaving itself into the fabric of every public sector agency, offering unprecedented pathways to efficiency and data-driven insights. Yet before we embrace any new tool, we must ensure that our safeguards, particularly those relating to risk management and governance, evolve just as quickly.
In the Victorian Government, we have the advantage of pursuing public value rather than shareholder returns. This means we can afford to, and are expected to, adopt AI responsibly, not recklessly.
Ian Pham, one of the speakers at CyberCon Melbourne 2025
Embracing new technology
Each new AI capability is a double-edged sword, creating valuable opportunities while introducing fresh exposures. We cannot shy away from these benefits. History shows that truly transformative technologies become part of our daily lives regardless of whether we feel ready.
I still remember driving through Melbourne with a worn-out Melway tucked behind the passenger seat, surrounded by old tissues and crumbs. When the street signs disappeared and the roads were unfamiliar, I would pull over, fumble through the pages and try to work out where on earth I was. The first time a GPS unit appeared on my dashboard, I did not trust it.
The voice instructions felt robotic and, to be honest, occasionally wrong. Yet within a few short years, I stopped questioning it at all. It has become indispensable. That experience reminds me that while AI follows a similar path, its growth is much faster and far less predictable. Controls we finalise this month may well be outdated by next quarter.
We all have a responsibility to reconsider how we design, review and refresh our policies. The real challenge is no longer “how do we control AI?”, but rather how we can build adaptive safeguards that can evolve as quickly as the technology itself.
Below are some recommendations, in the order I would implement them, that have guided my own AI journey.
Top 10 AI governance recommendations
Speak to your organisation’s AI specialists to better understand the recommendations and risks below.
Set a clear vision, guiding principles and risk appetite to ensure your AI programme aligns with your organisation’s objectives.
Create a cross-functional working group to steer the programme and resolve issues. Consider including teams from:
cyber security
risk
privacy
legal
procurement
innovation
data
business owners.
Translate the framework into a single, mandatory policy that sets out roles, responsibilities and approval gates.
Publish concise, easy-to-follow guidelines and decision trees so staff understand when and how they may use AI tools.
Embed the policy in induction, supplier contracts and project governance to ensure consistent adoption.
Offer tailored training for developers, risk owners and end users to improve adoption and maturity, covering:
safe prompting
data handling
escalation routes.
Create prompt libraries to help guide users in reaching outcomes that are safe and successful.
Foster a culture where staff feel confident reporting AI concerns.
Catalogue every AI model, tool or embedded feature you use, along with the information required to manage it. Treat it like an asset or third-party register: identify and classify each entry by risk, and update it regularly.
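As a minimal sketch of what such a register could look like in practice, the snippet below models entries with a risk rating and a last-reviewed date, so that high-risk tools can be pulled out for deeper scrutiny and stale entries flagged for review. The field names, risk labels and 90-day review cycle are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical register entry; field names are illustrative, not a standard.
@dataclass
class AIAsset:
    name: str                 # model, tool or embedded feature
    owner: str                # accountable business owner
    supplier: str             # vendor name, or "internal"
    data_classification: str  # e.g. "public", "internal", "sensitive"
    risk_rating: str          # e.g. "low", "medium", "high"
    last_reviewed: date

class AIRegister:
    def __init__(self) -> None:
        self._assets: list[AIAsset] = []

    def add(self, asset: AIAsset) -> None:
        self._assets.append(asset)

    def by_risk(self, rating: str) -> list[AIAsset]:
        """Identify assets at a given risk rating for deeper scrutiny."""
        return [a for a in self._assets if a.risk_rating == rating]

    def stale(self, today: date, max_age_days: int = 90) -> list[AIAsset]:
        """Flag entries overdue for their regular review."""
        return [a for a in self._assets
                if (today - a.last_reviewed).days > max_age_days]

register = AIRegister()
register.add(AIAsset("Chat assistant", "Comms team", "VendorX",
                     "internal", "high", date(2025, 1, 10)))
print([a.name for a in register.by_risk("high")])  # → ['Chat assistant']
```

In a real organisation this would live in an existing asset or third-party management system rather than code, but the structure is the same: one row per AI capability, classified by risk, with a review date that is actively tracked.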
Screen each AI capability before deployment for ethical, privacy, safety and cyber risks. Keep your test scenarios consistent and focused on your highest risks.
Apply deeper scrutiny to higher-risk applications such as public-facing generative tools.
Confirm that training and input data are accurate, representative and legally sourced.
Use Privacy Impact Assessments and de-identification techniques for personal information.
Ensure humans can review, override or shut down automated outputs.
Assign clear accountability for performance, bias monitoring and incident response. Expect that things can go wrong, and have people and processes in place to fix them.
Add contractual clauses on AI use that protect your data, including audit rights for transparent assurance.
Continually monitor your suppliers, ensuring any AI applied to your data or systems sits within your risk appetite.
Actively monitor and scan open-source components for known vulnerabilities.
Track metrics for drift, bias, false positives and cyber anomalies.
Have clear processes to respond and remediate efficiently; otherwise the speed and volume of issues that arise may become unmanageable over time.
Set automated alerts to trigger AI retraining or roll-back when thresholds are exceeded.
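The metric-tracking and alerting steps above can be sketched as a simple threshold check: each tracked metric has a limit, and any breach triggers an alert that prompts retraining or roll-back. The metric names and threshold values here are illustrative assumptions; in practice they would come from your own risk appetite and model baselines.

```python
# Illustrative thresholds; real values depend on your risk appetite.
THRESHOLDS = {
    "drift_score": 0.3,          # distribution shift vs training data
    "bias_disparity": 0.1,       # outcome gap between groups
    "false_positive_rate": 0.05,
}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that breach their threshold,
    i.e. the conditions that would trigger retraining or roll-back."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

breaches = check_metrics({"drift_score": 0.42, "false_positive_rate": 0.02})
if breaches:
    print(f"ALERT: thresholds exceeded for {breaches} - consider roll-back")
```

The point of automating this is consistency: a human still decides whether to retrain or roll back, but the alert fires the same way every time, rather than depending on someone noticing a dashboard.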
Review security architecture and controls early in the design phase and throughout change lifecycles where risk has increased.
Regularly review AI-related guidelines and user training material, given the constant change in the AI landscape. Consider leveraging the NIST AI Risk Management Framework to support this.
Conduct tabletop exercises to test your response to prompt injection, model inversion or hallucination events.
AI can be a powerful ally, but only if we all manage it with intention and vigilance. The real challenge isn’t whether AI is friend or foe. It is whether we are ready to take responsibility for how we use it.
Be it Melway or GPS, the tools may change but the goal remains the same – to ensure we arrive safely.