
When Algorithms Decide Who Gets In: Education's December 2026 Privacy Act Deadline
If you work in Australian education, the 10 December 2026 commencement of new automated decision-making (ADM) transparency rules under the Privacy Act should be on your radar. The new requirements under APP 1.7, 1.8 and 1.9 don’t get much education-sector airtime, but their impact is significant.
Most universities, TAFEs, and many schools are APP entities under the Privacy Act. That alone puts them squarely in the scope of the new transparency obligations. What people often miss is just how many of education’s everyday operational decisions are now made or substantially shaped by computer programs that use personal information.
Where ADM is hiding in education
The obvious place is admissions. Most large institutions have moved at least partially to algorithmic ranking and shortlisting for entry decisions. ATAR is itself a computational process, but the more interesting use cases sit one layer deeper. International applicant scoring, scholarship allocation, equity program eligibility, and conditional offer logic all involve programs using personal information to make or contribute to decisions that significantly affect rights or interests. An offer of admission, an offer of financial aid, a place in a competitive program: these are all decisions that materially affect a person’s life trajectory.
The less obvious places are where the December deadline gets uncomfortable. Student wellbeing flagging is one. A growing number of institutions use automated systems to identify students at risk of disengagement, mental health concerns, or academic failure. These systems take attendance, LMS engagement, assessment patterns, and sometimes financial signals, and they output a prioritised list of students for intervention. That’s a computer program using personal information to make a decision that significantly affects a student’s interests, particularly when intervention happens without the student’s awareness.
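To make that concrete, here is a deliberately simplified Python sketch of the kind of scoring logic such systems run. The signal names, weights, and threshold below are my own illustrative assumptions, not drawn from any real product; the point is that even this toy version is a computer program using personal information to produce a prioritised intervention list.

```python
from dataclasses import dataclass

@dataclass
class StudentSignals:
    # Hypothetical input signals; real systems vary by institution
    attendance_rate: float     # 0.0-1.0, proportion of sessions attended
    lms_logins_last_14d: int   # engagement proxy from the LMS
    assessments_missed: int    # count in the current term
    payment_overdue: bool      # optional financial signal

def risk_score(s: StudentSignals) -> float:
    """Weighted score in [0, 1]; higher means higher flagged risk.
    Weights are illustrative only, not taken from any real product."""
    score = 0.35 * (1.0 - s.attendance_rate)
    score += 0.25 * (1.0 if s.lms_logins_last_14d < 3 else 0.0)
    score += 0.25 * min(s.assessments_missed, 4) / 4
    score += 0.15 * (1.0 if s.payment_overdue else 0.0)
    return score

def prioritised_list(cohort: dict[str, StudentSignals],
                     threshold: float = 0.5) -> list[str]:
    """Return student IDs at or above the threshold, highest risk first."""
    scores = {sid: risk_score(s) for sid, s in cohort.items()}
    return sorted((sid for sid, v in scores.items() if v >= threshold),
                  key=lambda sid: scores[sid], reverse=True)
```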
Adaptive learning platforms are another. Increasingly, courseware adjusts in real time based on individual learner data: what gets shown next, which scaffolds get presented, and sometimes which pathway a student is steered toward. The personalisation is the point. But once that personalisation crosses into directing a student’s educational pathway in ways that materially shape their outcomes, the disclosure obligation kicks in.
Plagiarism and academic integrity tools are now AI-driven and increasingly opaque. When a system flags a student for academic misconduct, the consequences can include suspension or expulsion. The new transparency rules require that the use of such systems be clearly disclosed, including the kinds of personal information used and the kinds of decisions made.
Then there’s the Microsoft 365 layer. Across Australian education, Copilot Studio agents are being built at pace for student support, enquiry handling, application processing, and case management. Many of these agents touch personal information. Many of them contribute to decisions. Few are inventoried in a way that supports a compliant privacy policy disclosure.

The specific risk for education
Education has two particular exposures here that other sectors don't share to the same extent.
The first is the consent and capacity question. Many of the data subjects in education systems are minors or young adults. The transparency obligation under APP 1 sits alongside, not on top of, an institution’s broader duty of care. A privacy policy disclosure that says “we use AI to flag students at risk of failing” is technically compliant. Whether it’s appropriate, ethical, and aligned with community expectations is a separate question. The OAIC’s enforcement posture makes clear that compliance with the letter is necessary but not sufficient.
The second is the regulatory layering. Universities are subject to the Privacy Act, TEQSA standards, the ESOS Act for international students, and increasingly state-level requirements around child safety and digital wellbeing. The privacy policy is one disclosure surface; institutions are likely to need to align disclosures across multiple regulatory regimes simultaneously.

The Robodebt parallel that nobody talks about
Robodebt is treated as a government story. But its underlying lesson applies just as forcefully to education: an automated decision-making system, deployed at scale with insufficient transparency and inadequate human review, can cause harm to thousands of people simultaneously.
A flawed admissions algorithm, a poorly calibrated retention model, a wellbeing flagging system trained on biased data: any of these could constitute the same kind of systemic harm at scale. The December 2026 transparency rules are the start of Australia’s regulatory response to that risk, not the end of it.

What education leaders should be doing now
The first step is the same as for any sector: inventory the AI estate. For most institutions, this is harder than it sounds. ADM happens in core systems, in faculty-procured tools, in cloud platforms, and increasingly in citizen-developer Copilot agents that nobody catalogued.
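In practice, an inventory is a structured register with one record per system. A minimal sketch of what such a record might capture follows; the field names are my assumptions rather than any prescribed schema, so align them with your institution’s own privacy register.

```python
from dataclasses import dataclass, field

@dataclass
class AdmSystemRecord:
    """One row in an ADM inventory. Illustrative fields only."""
    system_name: str               # e.g. "Retention risk model"
    owner: str                     # accountable business unit
    personal_info_kinds: list[str] = field(default_factory=list)
    decision_kinds: list[str] = field(default_factory=list)
    fully_automated: bool = False  # or substantially automated
    human_review: bool = True      # is a human in the loop?

# Example entry, covering a faculty-procured tool
inventory: list[AdmSystemRecord] = [
    AdmSystemRecord(
        system_name="Scholarship ranking tool",
        owner="Admissions",
        personal_info_kinds=["academic results", "equity indicators"],
        decision_kinds=["scholarship allocation"],
        fully_automated=False,
        human_review=True,
    ),
]
```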
The second step is to assess each instance against the “significantly affect rights or interests” threshold. In education, that threshold is met more often than people initially expect. Anything affecting admission, progression, financial support, academic standing, or wellbeing intervention is in scope.
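As a rough first-pass triage only, and emphatically not a legal test, that assessment could be encoded as simply as this, using the decision categories named above:

```python
# Decision categories that, in this article's framing, meet the
# "significantly affect rights or interests" threshold. Treat this
# as a starting point for triage; legal advice should set the real test.
SIGNIFICANT_DECISIONS = {
    "admission", "progression", "financial support",
    "academic standing", "wellbeing intervention",
}

def in_scope(decision_kinds: list[str]) -> bool:
    """Flag a system for closer review if any of its decisions
    fall into a significant category."""
    return any(d in SIGNIFICANT_DECISIONS for d in decision_kinds)

# Example: a retention model that triggers wellbeing interventions
print(in_scope(["wellbeing intervention"]))  # True
```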
The third step, and the one I’d push hardest on, is to bring privacy, IT, academic, and student affairs leadership into the same room early. The privacy policy disclosure is going to need to describe practices that span all of these areas. If they’re being managed in silos in May, they won’t be coherent in December.
There’s nothing in the new rules that should slow education’s appropriate use of AI. But the December deadline does demand a level of self-awareness about the AI estate that most institutions don’t currently have. Better to build that awareness now than to build it under regulator scrutiny.
Jan Davids
Principal Consultant, Aureus Solutions
Microsoft AI Cloud Partner | Adelaide, SA
Sources
1. Privacy and Other Legislation Amendment Act 2024 (Cth). Federal Register of Legislation. https://www.legislation.gov.au/C2024A00128/asmade
2. Office of the Australian Information Commissioner, Chapter 1: APP 1 — Open and transparent management of personal information (APP Guidelines). https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-guidelines/chapter-1-app-1-open-and-transparent-management-of-personal-information
3. Lander & Rogers, Australian Privacy Law Update: What APP entities need to know in 2026. https://www.landers.com.au/legal-insights-news/australian-privacy-law-update-what-app-entities-need-to-know-in-2026
4. Johnson Winter Slattery, Practical implications of the new transparency requirements for automated decision making. https://jws.com.au/what-we-think/practical-implications-of-new-transparency-requirements-for-automated-decision-making/
5. Corrs Chambers Westgarth, Australia’s ongoing privacy reforms: bolstering Australia’s privacy regulatory framework. https://www.corrs.com.au/insights/australias-ongoing-privacy-reforms-bolstering-australias-privacy-regulatory-framework
6. MinterEllison, OAIC ramps up privacy enforcement: are you ready? (February 2026). https://www.minterellison.com/articles/oaic-ramps-up-privacy-enforcement-are-you-ready