
Microsoft Just Published a Governance Framework for AI Agents. Here Is What It Actually Means
Something significant happened in the Microsoft Power Platform blog this week that did not get the attention it deserved.
On April 1, Microsoft's Partner Director of Product Management for Power Apps published a piece titled "Building trustworthy AI: A practical framework for adaptive governance." It is not a product announcement. It is not a feature update. It is Microsoft acknowledging, publicly and plainly, that most organisations deploying AI agents are doing so without the governance models to match.
That is worth pausing on.
The Problem Microsoft Is Describing
The blog post makes an observation that will be familiar to anyone who has been watching Copilot deployments closely. Most organisations are not struggling to adopt agents because the technology is unsafe. They are struggling because their governance models were built for a world that no longer exists.
Traditional security and compliance frameworks were designed around a clear perimeter: inside and outside. What is authorised and what is not. Who has access and who does not. Those boundaries still matter, but they were never designed for systems that move fluidly across apps, data sources, and workflows in minutes rather than weeks.
When an agent can be built and deployed in a Copilot Studio session, the governance review process that was built around a formal IT change request simply cannot keep up. The pace has changed. The frameworks have not.
Microsoft puts it plainly: when governance strategies come down to either locking everything down or figuring it out later, neither outcome is good. The first produces shadow IT. The second produces unmanaged risk.

The Questions Every Agent Should Be Able to Answer
The Microsoft blog frames governance around a set of questions that every deployed AI agent should be able to answer. Reading between the lines, these are the questions most organisations currently cannot answer about their agents:
What data can this agent access, and does it need all of it? Agents built quickly in Copilot Studio often inherit broad permissions because scoping them down takes deliberate effort that gets deprioritised when teams are moving fast.
Who is accountable if the agent makes a wrong decision or produces a harmful output? Accountability is not a technical configuration; it is a governance decision that has to be made before deployment, not after an incident.
Is the agent's behaviour observable and auditable? An agent that takes actions on behalf of users, whether sending emails, updating records, or retrieving sensitive data, needs to leave a trail that can be reviewed. Not all do by default.
Does the deployment meet the organisation's legal and ethical obligations? In Australia, that means Australia's 8 AI Ethics Principles and the Privacy Act. For organisations with UK operations, it includes the UK AI Regulation White Paper. These are not optional frameworks; they represent the standard against which a regulator or a board will measure a deployment that goes wrong.
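
To make those questions operational, one option is to keep a structured record per agent and check it for gaps before deployment. The sketch below is a minimal illustration in Python; the class and field names (AgentGovernanceRecord, data_sources, accountable_owner, and so on) are my own shorthand for the four questions, not a Microsoft or Copilot Studio schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentGovernanceRecord:
    """One record per deployed agent, mirroring the four questions above."""
    agent_name: str
    data_sources: dict[str, bool]      # data source -> is access actually required?
    accountable_owner: str | None      # a named person, not a team alias
    audit_logging_enabled: bool        # does the agent leave a reviewable trail?
    frameworks_assessed: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Return the governance questions this agent cannot yet answer."""
        issues = []
        over_scoped = [s for s, needed in self.data_sources.items() if not needed]
        if over_scoped:
            issues.append(f"Over-scoped data access: {', '.join(over_scoped)}")
        if not self.accountable_owner:
            issues.append("No named owner accountable for wrong or harmful outputs")
        if not self.audit_logging_enabled:
            issues.append("Behaviour is not observable or auditable")
        if not self.frameworks_assessed:
            issues.append("No legal or ethical framework assessment on record")
        return issues

# Example: an agent that inherited a connection it does not need,
# with no named owner and no framework assessment yet.
record = AgentGovernanceRecord(
    agent_name="invoice-triage",
    data_sources={"SharePoint: Finance": True, "Dataverse: Contacts": False},
    accountable_owner=None,
    audit_logging_enabled=True,
)
print(record.gaps())
```

Even a lightweight register like this turns the four questions from aspirations into a checklist that can block a deployment until the gaps are closed.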

Why Microsoft Publishing This Matters
Microsoft is not writing this blog post because the problem is theoretical. They are writing it because their customers are deploying agents at scale, through Copilot Studio, Power Automate, and Azure AI, and the governance gap is real and growing.
Agent 365, which launches May 1, is Microsoft's operational response to this. It gives IT teams visibility into what agents are doing across a tenant. But as I have noted in previous posts, Agent 365 is an IT monitoring tool. It tells you what is happening. It does not tell you whether what is happening is compliant, ethical, or defensible.
The governance layer (the frameworks, the risk scoring, the documentation, the accountability structures) sits above what Agent 365 provides. That is where the real work happens.
The fact that Microsoft is now publishing guidance on adaptive governance frameworks for AI agents tells you something important: this is no longer a niche concern for compliance teams. It is a mainstream deployment question that every organisation building on Copilot needs to answer.

What Good Governance Actually Looks Like in Practice
The Microsoft framework makes a point that I think is particularly useful for organisations trying to work out where to start: the goal is not to stop agents, it is to classify risk clearly and apply the right controls at the right time.
That is a practical frame. Not every agent carries the same risk. An agent that helps a user draft a document has a different risk profile from an agent that accesses Dataverse tables containing personal information, sends emails on behalf of users, and creates records in a CRM system. Treating them the same, whether by locking everything down or by applying no controls at all, misses the point.
Good governance for AI agents starts with understanding what each agent actually does: what data it touches, what actions it can take, what permissions it has, and how its outputs are used. From that baseline, you can assess the risk, identify the gaps, and build the controls that match the exposure.
That assessment process should happen before deployment, not after an incident forces the question.
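
As a rough sketch of what "classify risk clearly" can look like in practice, the function below maps what an agent can do to a review tier. The inputs and thresholds are assumptions chosen for the example, not a published rubric; a real version would be calibrated to the organisation's own risk appetite and obligations.

```python
def risk_tier(touches_personal_data: bool,
              acts_on_behalf_of_users: bool,
              writes_to_systems_of_record: bool) -> str:
    """Map an agent's capabilities to an illustrative review tier."""
    score = (2 * touches_personal_data
             + 2 * acts_on_behalf_of_users
             + 1 * writes_to_systems_of_record)
    if score >= 3:
        return "high: formal pre-deployment review, named owner, full audit trail"
    if score >= 1:
        return "medium: scoped permissions plus periodic review"
    return "low: standard monitoring"

# The two examples from the paragraph above:
print(risk_tier(False, False, False))  # drafting assistant -> low
print(risk_tier(True, True, True))     # Dataverse + email + CRM agent -> high
```

The point of the tiering is not the numbers; it is that the control effort scales with the exposure instead of being all-or-nothing.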

Where This Leaves Australian and UK Organisations
For Australian organisations, the governance question has a specific legal dimension that is arriving faster than many teams realise. The Privacy Act amendments introducing transparency requirements around automated decision-making are effective December 2026. Organisations using AI agents that inform or make decisions affecting individuals will need to be able to explain those decisions and demonstrate that appropriate oversight is in place.
For UK organisations, the AI Regulation White Paper is already shaping expectations among regulators and large enterprise customers. Having a documented, evidenced governance position is increasingly a procurement and due diligence requirement, not just a best practice.
The window to get this right proactively, before a regulator asks the question or a client raises it in a tender, is narrowing.

A Closing Thought
Microsoft publishing a governance framework for AI agents is a signal, not just a resource. It tells you that the industry has moved past the question of whether agents will be deployed at scale. They already are. The question now is whether the organisations deploying them have the governance structures to match.
If your organisation is building or running Copilot Studio agents and you have not yet done a formal governance review, that is the conversation worth having now, before May 1, before the Privacy Act amendments, and before something surfaces that you were not prepared for.
At Aureus Solutions, we assess Copilot Studio agents against Australia's 8 AI Ethics Principles, ISO/IEC 42001, Microsoft's Responsible AI Standard, and the UK AI Regulation White Paper. The output is a scored risk report, a prioritised gap analysis, and a full governance document suite: everything you need to deploy with confidence.
If this resonates, I am happy to have that conversation.
Jan Davids
Principal Consultant, Aureus Solutions
Microsoft AI Cloud Partner | Adelaide, SA
Source: Microsoft Power Platform Blog, April 1 2026 — https://www.microsoft.com/en-us/power-platform/blog/2026/04/01/building-trustworthy-ai-a-practical-framework-for-adaptive-governance/