AI Governance · Higher Education · 25 Universities · 9 Countries · 2026

The Three Yeses

Every one of twenty-five universities says students may use AI. None of them mean the same thing — and what none of them say is the story that matters most.

25 Universities
21 Public policies found
0 Blanket bans on AI
3 Governance philosophies
1 Proctored AI exam (LSE)
10 Dimensions no one covers

Part One

Three Acts

In the autumn of 2023, the London School of Economics did something almost no major university had the institutional nerve to do: it issued a school-wide ban on generative AI in all assessed work. Not a framework. Not a set of conditions. A ban.

By 2024, LSE had removed it.

By 2025, LSE had introduced something more inventive than either: the Observed Assessment. Students write essays with AI tools available — but in a supervised room, under time pressure that makes it impossible to outsource the thinking. The tool is permitted. The question of whether the work reflects genuine understanding is answered not by trust but by presence.

Ban. Lift. Watch. This three-act journey — prohibition, permission, then something that isn't quite either — is the most compressed version of a conversation that twenty-five of the world's top universities are all having simultaneously. We read their public governance documents. What we found is not the consensus story.

Part Two

What Everyone Agrees On

Before the differences, the agreement. Of the twenty-five universities in this analysis, twenty-one have published AI policies. Not one has banned generative AI. That alone is worth sitting with. Institutions that built their reputations on original thought from students — on the assumption that intellectual struggle is what makes learning stick — have decided, with remarkable speed, that AI use should be permitted.

The agreement runs deeper than "yes." Those twenty-one institutions converge on five structural principles with a consistency that suggests either deliberate coordination or convergent evolution under identical pressure. Disclosure is required. The human user bears accountability for accuracy. Existing academic integrity rules extend to AI. Sensitive data must stay out of AI systems. And local instructors, departments, and funders may add stricter rules.

The Big Five — present in all 21 policy-issuing universities

📢
Disclose
AI use must be declared — 21/21
🧍
You own it
Human accountable for output — 21/21
⚖️
Rules extend
Integrity policies cover AI — 21/21
🔒
No private data
Confidential data stays out — 21/21
📋
Local rules
Courses can go stricter — 21/21

These 25 are the outliers, not the norm

EDUCAUSE's 2025 AI Landscape Study found that only 39% of higher education institutions have any AI-related acceptable use policy at all — up from 23% in 2024. The 25 universities here represent the frontier. What looks like a cautious, incomplete consensus in this dataset is still well ahead of most of the sector. The convergence on five guardrails is real progress. It is also, by any comprehensive standard of AI governance, the beginning of the work.

The table below shows what the consensus conceals: how comprehensively each university has actually answered the eleven policy dimensions that matter. Sorted by coverage. Click any cell for the evidence.

Policy coverage score — average across 11 dimensions (disclosure, accountability, integrity, privacy, tools, assessment, research, override, authorship, stance, type)

Score = mean of the 1–5 ratings across the 11 policy dimensions, divided by 5; a score of 1.0 means comprehensive coverage on every dimension. Gray = no public policy found.
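Stated as code, the scoring rule is a one-liner. A minimal sketch in Python, assuming per-dimension ratings on the 1–5 scale described above; the example values are hypothetical, not any university's actual scores.

```python
# Minimal sketch of the coverage score: mean of 1-5 ratings, scaled to 0-1.
DIMENSIONS = ["disclosure", "accountability", "integrity", "privacy",
              "tools", "assessment", "research", "override",
              "authorship", "stance", "type"]

def coverage_score(ratings: dict[str, int]) -> float:
    """Mean of the 1-5 ratings across all 11 dimensions, divided by 5."""
    values = [ratings[d] for d in DIMENSIONS]  # KeyError if a dimension is unrated
    return (sum(values) / len(values)) / 5

# Hypothetical university: strong on the Big Five, weaker everywhere else.
example = {"disclosure": 5, "accountability": 5, "integrity": 5,
           "privacy": 5, "override": 5, "tools": 3, "assessment": 3,
           "research": 2, "authorship": 1, "stance": 4, "type": 4}
print(f"{coverage_score(example):.2f}")  # -> 0.76
```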

The Full Policy Matrix

Click any column header to sort. Click any colored cell for evidence, quotes, and source. Green = comprehensive · Yellow = partial · Orange = absent · Gray = no public policy found.

The matrix's columns:

  • University
  • Policy Type: formal / guidance / none
  • Default Stance: permit / restrict / delegate
  • Disclosure: must AI be declared?
  • Accountability: who owns the output?
  • Integrity: misconduct rules apply?
  • Privacy: data input limits
  • Inst. Tools: approved tools provided?
  • Assessment: exams & coursework
  • Research: research publications
  • Local Override: dept / course rules
  • AI Authorship: can AI be an author?

Part Three

Three Very Different Yeses

The phrase "permitted with conditions" covers an enormous amount of ground. At one end of the spectrum, it means essentially: here is a list of approved tools, please note which ones you used. At the other end, it means: you must prove to us, in a supervised room, that the work is actually yours.

The dataset resolves into three broad governance philosophies:

Open Default (16 institutions)

AI is permitted for most purposes unless a specific context restricts it. The default state of the student is AI-capable.

MIT · Yale · Princeton · NUS · NTU · SMU · ETH Zurich · EPFL · KU Leuven · Helsinki · HKU · Melbourne · UNSW · IISc · Cambridge · SUTD*

Closed Default (5 institutions)

AI is restricted unless explicitly authorised. The burden of proof falls on permission, not restriction.

Oxford · Imperial · Stanford · University of Toronto · University of Sydney

Structured / Abstain (4 institutions)

Either departments must choose their own position (LSE), or the centre refuses to make any uniform call (Tokyo). Two of India's institutions, IIT Madras and Ashoka, have no public policy at all.

LSE · Tokyo · IIT Madras · Ashoka

Oxford and Imperial College London are the clearest default-restrictors. Oxford's policy states AI is "not permitted unless the assessment brief explicitly states otherwise." Imperial's guidance warns that absent explicit authorisation, using AI to create assessed work "may be treated as an offence such as contract cheating." Both institutions start from restriction and require explicit permission to move.

Stanford's approach is subtler but similarly structured: "absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person." The analogy to another person doing your work is deliberate and elegant — it activates existing academic integrity intuitions without requiring new rules.

LSE found a fourth path. After lifting its school-wide ban, it issued a mandatory choice: every academic department must publicly commit to one of three positions — no AI, limited AI, or full AI. The centre doesn't decide. It forces decisions.

Then there is the University of Tokyo, which occupies a category of its own. Its policy statement reads: "The University of Tokyo will not uniformly prohibit the use of natural language-generation AI tools, such as ChatGPT, in educational settings." This is not permission. It is a deliberate refusal to issue either permission or prohibition — a principled delegation of the question to individual instructors. Tokyo has decided that the centre is not the right place to make this call. That is itself a governance philosophy, and perhaps the most intellectually honest one in the dataset.

Who was in the room when these policies were written?

The AAUP's 2025 report on AI and academic professions found that at US institutions, 71% of faculty say administrators "overwhelmingly" lead AI policy conversations with "little meaningful input" from faculty, staff, or students. The three governance philosophies above were not, in most cases, chosen by the people who teach and learn under them. When Tokyo delegates to individual instructors, it is often framed as abdication. But it may also be the only institution in this dataset with a governance structure where the people closest to the problem actually make the call.

Part Four

The Disclosure Mountain

Every policy in this dataset requires disclosure. The word appears in all twenty-one. What it demands in practice spans orders of magnitude.

At one end, the University of Melbourne requires students to have records of "prompts and outputs used in the AI tools or technologies available on request." The University of Sydney requires "the name and version, the publisher, the URL, and a brief description of how you have used" each tool. NUS requires not just which tools but "which tasks" and "which prompts" — at the time of submission. These policies produce an audit trail.
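To make the gap concrete, here is what the strict end of the spectrum implies as a record. This is a hypothetical sketch, not any university's actual form: the field names combine Sydney's tool metadata with the NUS and Melbourne task-and-prompt requirements.

```python
# Hypothetical disclosure record implied by the strictest policies above.
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    tool_name: str          # Sydney: the name of the tool
    version: str            # Sydney: and its version
    publisher: str          # Sydney: the publisher
    url: str                # Sydney: the URL
    usage_description: str  # Sydney: brief description of how it was used
    tasks: list[str] = field(default_factory=list)    # NUS: which tasks
    prompts: list[str] = field(default_factory=list)  # NUS/Melbourne: which prompts

record = AIDisclosure(
    tool_name="ChatGPT", version="GPT-4", publisher="OpenAI",
    url="https://chat.openai.com",
    usage_description="Brainstormed candidate essay structures",
    tasks=["outlining"], prompts=["Suggest three structures for an essay on X"],
)
```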

At the other end, the University of Tokyo recommends that instructors "have students specify which generative AI they used when submitting reports." A recommendation. Which tool. Nothing more.

IISc's tiered rule — unique in this dataset

The Indian Institute of Science makes a distinction no other university here makes explicit: minor AI use (grammar, word choice) requires no disclosure. Substantive cognitive assistance — analysis, argumentation, structure — requires a methods-section declaration including what tools were used, for which tasks, and how outputs were verified. It acknowledges that "using spell-check" and "using AI to draft an argument" are not the same kind of thing.
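Read as logic, the tier boundary is simple to state, even though the classification itself is a human judgment. A hypothetical sketch; only the two tiers come from IISc's rule, and the category labels are illustrative.

```python
# Hypothetical sketch of an IISc-style tiered disclosure rule.
MINOR = {"grammar", "spelling", "word choice"}            # no disclosure needed
SUBSTANTIVE = {"analysis", "argumentation", "structure"}  # methods-section declaration

def disclosure_required(uses: set[str]) -> str:
    # Any substantive use trips the stricter tier, even alongside minor uses.
    if uses & SUBSTANTIVE:
        return ("declare in methods section: tools used, tasks performed, "
                "how outputs were verified")
    if uses and uses <= MINOR:
        return "no disclosure required"
    return "unclassified use: ask your instructor"

print(disclosure_required({"grammar"}))              # minor tier
print(disclosure_required({"grammar", "analysis"}))  # substantive tier
```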

The disclosure spectrum matters because it is the operational centre of these policies. Principle without procedure is aspiration. A student who receives a "disclose your AI use" instruction without knowing what that means in practice — which tool name? which version? the prompts? a methods section? a footnote? — is not protected by the policy. They are confused by it.

There is a structural flaw in disclosure as a compliance mechanism that no policy here acknowledges. If disclosure is how institutions verify AI use, then AI detectors are the enforcement layer behind it — and that layer is quietly discriminatory. A Stanford HAI study found that detectors misclassify over 61% of essays by non-native English speakers as AI-generated, compared to near-perfect accuracy for native speakers. The scoring method uses "perplexity" — a proxy for linguistic sophistication — and non-native writers naturally score lower. Ninety-seven percent of TOEFL essays were flagged by at least one detector. Not one of the twenty-five institutions in this dataset acknowledges whether it uses AI detectors, their known bias, or any safeguard against wrongful accusation. A policy built on disclosure, enforced through detection, carries a built-in justice problem that the policies themselves are silent about.
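To see why a perplexity score penalises plainer language, consider a toy model. Real detectors use a large language model's token probabilities; this stand-in uses a unigram model and a made-up threshold, so only the mechanic, not the numbers, is meaningful.

```python
import math
from collections import Counter

# Toy unigram model standing in for the LLM scorer a real detector uses.
corpus = ("the quick brown fox jumps over the lazy dog " * 50 +
          "perplexity measures how surprising a text is to a model").split()
counts = Counter(corpus)
total = sum(counts.values())

def unigram_prob(word: str) -> float:
    # Add-one smoothing so unseen words get a small nonzero probability.
    return (counts[word] + 1) / (total + len(counts) + 1)

def perplexity(text: str) -> float:
    words = text.lower().split()
    avg_log_prob = sum(math.log(unigram_prob(w)) for w in words) / len(words)
    return math.exp(-avg_log_prob)

plain = "the dog jumps over the fox"                    # common, predictable wording
ornate = "quixotic vulpines vault across somnolent canines"  # idiosyncratic wording

THRESHOLD = 100.0  # hypothetical cut-off: below it, text is "AI-like"
for text in (plain, ornate):
    ppl = perplexity(text)
    verdict = "flagged as AI" if ppl < THRESHOLD else "passes"
    print(f"{ppl:10.1f}  {verdict}  | {text}")
```

Both sentences are human-written, but the plainer one scores the lower perplexity and is the one flagged. That is the bias in miniature: writers who use simpler, more predictable language look "more like a model" to a perplexity-based detector.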

10 Things No Policy Addresses

Every dimension below is absent from all 25 institutions in this dataset. These are not edge cases — they are active pressure points where students, faculty, and institutions are making consequential decisions right now, without institutional guidance.

01
AI in Admissions
No university addresses whether applicants may use AI in essays or personal statements — the highest-stakes academic writing most students ever do.
Why it matters: The undetectable use with the most consequential outcome.
02
AI Detection Tools
No policy discusses whether or how the institution uses AI detectors, their known false positive rates, or safeguards against wrongful accusation.
03
Equity and Access
No policy addresses unequal access to AI tools — paid subscriptions, hardware, bandwidth, or language barriers that disadvantage some students structurally.
Why it matters: Policies that encourage AI use favour students who can afford it.
04
Environmental Impact
No institution mentions the energy cost of generative AI queries, despite active institutional sustainability commitments.
Why it matters: A GPT-4 query uses ~10× the energy of a Google search.
05
Student Data in Training
While all policies restrict uploading sensitive data, none address in policy whether licensed institutional tools train on student submissions, or what opt-out rights exist.
Why it matters: Students may be training the tools they're later tested on. Only UNSW addresses this, and only in its vendor contract, not its policy.
06
Multilingual Students
No policy addresses AI use specifically for students whose first language is not the language of instruction — a structurally different use case from standard disclosure.
Why it matters: Polishing language is not the same as generating arguments. Policies treat them identically.
07
Mental Health & Dependency
No policy addresses psychological dimensions — dependency, deskilling anxiety, or the cognitive load of constant AI-rule navigation.
Why it matters: The skill gap AI creates may only become visible at graduation.
08
Professional Practice Training
No policy addresses AI in clinical, legal, or engineering professional education — contexts with entirely different ethical stakes from essay-writing.
Why it matters: A medical student reasoning about a case with AI faces different professional ethics than one writing a history essay.
09
Administrative AI Use
Policies focus on academic work. None substantively address AI in HR, student services, academic analytics, or grading support — where students are subjects, not users.
Why it matters: Students are also governed by AI they didn't choose.
10
Shared Governance
No policy describes who was consulted in its drafting, whether faculty or students had input, or how the policy will be reviewed. The 2025 AAUP report found 71% of faculty had "little meaningful input" in AI decisions at their institutions.
Why it matters: Policies written without the people who live under them are unlikely to be trusted or followed.

Part Five

The Research Gap

Most AI policies were written by education offices, for students, about coursework. Research — where AI's implications are more profound, more complex, and more consequential — is consistently the afterthought.

Fifteen of twenty-five institutions have some research-specific guidance. But quality varies enormously. The University of Hong Kong's research guidelines are in a different league from the rest. They require that "any use of GenAI, including how the software is used, should be well documented." Graduate students must declare AI use "at the time of thesis submission for examination." And most meaningfully: "any responsible use of GenAI in research must be done in a process in which human researchers can readily verify and validate output produced by GenAI." This is not just disclosure. It is a reproducibility standard.

UNSW Sydney has published a formal research position paper. KU Leuven integrates GDPR compliance into its research guidance. These represent serious attempts to address the research context specifically.

NUS — which has the most operationally specific teaching policy in this dataset — has no public research policy. Stanford, whose faculty have co-created the AI systems everyone else is writing policies about, has "partial related guidance but no dedicated public research policy." Tokyo has no research policy at all.

There is something quietly strange about this: the institutions most invested in producing AI research are among the least invested in governing how their own researchers use AI tools. The policy conversation has been driven by the education side of the house. The research community is still working out what to say.

The pattern follows the governance structure. UNESCO's 2023 guidance for generative AI in education and research explicitly called for institutions to address AI in research outputs, scientific integrity, reproducibility, and intellectual property — requirements that the teaching-office documents dominating this dataset are structurally not built to fulfil. The research gap is not an oversight. It is a symptom of who was in the room when policy was being made, and what problem they had been asked to solve.

Part Six

The Infrastructure Race

Most universities licensed Microsoft Copilot in 2023 and called it a policy. A few went further. The distance between these approaches reveals a deeper disagreement about what a university's relationship to AI infrastructure should actually be.

The University of Helsinki built CurreChat — a homegrown AI assistant, the only university-developed tool in this dataset. Imperial College London has been more ambitious: dAIsy, described as "a secure AI platform designed to give staff and students safe, easy access to multiple generative AI models (such as GPT, Claude, Deepseek, and others) through a single interface." One platform. Multiple models. Institutional control over access, audit, and data flows. This is not procurement. It is infrastructure.
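The "one interface, many models" pattern is worth seeing in miniature. Nothing below is Imperial's code: the router, provider names, and audit log are illustrative stand-ins showing why a single institutional gateway gives the university, not the vendor, control over access and records.

```python
# Hypothetical sketch of an institutional AI gateway, not dAIsy's implementation.
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # the institution, not the vendor, holds the record

def _stub_provider(name: str):
    # Stand-in for a real vendor API call made with institutional credentials.
    def call(prompt: str) -> str:
        return f"[{name} response to: {prompt!r}]"
    return call

PROVIDERS = {m: _stub_provider(m) for m in ("gpt", "claude", "deepseek")}

def ask(user_id: str, model: str, prompt: str) -> str:
    """Single entry point: one place to enforce licensing, auditing, data rules."""
    if model not in PROVIDERS:
        raise ValueError(f"model not licensed by the institution: {model}")
    AUDIT_LOG.append({"user": user_id, "model": model,
                      "at": datetime.now(timezone.utc).isoformat()})
    # The prompt itself is deliberately not logged: a data-minimisation choice.
    return PROVIDERS[model](prompt)

print(ask("student42", "claude", "Summarise GDPR in one sentence."))
```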

The London School of Economics provides both Copilot with commercial data protection and Claude for Education — making it the only institution in this dataset with official institutional access to Anthropic's Claude for students alongside Microsoft's offering. That choice of two different AI providers reflects a deliberate hedging strategy against vendor dependence.

UNSW Sydney has taken a contractual approach rather than building its own platform. It negotiated Copilot access with a specific assurance that "user prompts and responses are not retained, saved, or used as part of any training set for the underlying large language model." This is the only institution in this dataset to explicitly address whether student AI interactions feed back into model training — a question every institution using commercial tools should be asking but most are not.

These three approaches — build it, diversify your vendors, or lock down the contract — are really three versions of the same question: who controls the data? A university's AI infrastructure strategy is, at bottom, a data sovereignty question. The vendor that holds your students' conversations holds behavioural data on how your community learns, struggles, and thinks. Most universities are still choosing a fourth option: accept commercial defaults and hope the vendor's privacy terms hold.

The stakes are higher than most administrators appreciate. Data privacy experts have converged on a single priority institutions are systematically skipping: ensuring AI vendors cannot use student interactions to train their models. "The most overlooked issue," according to a 2026 EdTech analysis of higher education AI risk, "is ensuring that student data is not used to train external AI models." UNSW is the only institution in this dataset to have made this a contractual requirement. For every other institution, the question of whether student AI sessions feed back into the models that will later evaluate them remains unanswered — and unasked.

What This Means

Four practical conclusions from twenty-five policies, nine countries, and ten governance blind spots.

🎓
For students

The real policy is probably not the central one

  • The universal local override clause means your course syllabus or instructor's guidance is the rule that governs you — not the university's headline statement.
  • If your institution has no public AI policy (IIT Madras, SUTD, Ashoka), your instructor sets the rules. Ask — in writing — before you submit.
  • "Disclosure required" is a statement of principle until it specifies what to write and where. Ask for the format.
  • The specificity gap between institutions is real: NUS tells you which prompts to declare; MIT just says disclose. The word "disclosure" is not a sufficient guide.
👩‍🏫
For instructors

The override clause makes you the policymaker

  • Every institution in this dataset has delegated the real decisions to you. That means your students' AI experience depends on your explicit guidance — not the central policy.
  • Consider IISc's tiered approach: grammar/word choice is different from argument generation. Not all AI use is the same; a tiered rule respects that.
  • The best course AI policies in this dataset answer: which tools, for which tasks, in what format, with what evidence. Yours should too.
  • The specificity gap between your colleagues is invisible to students. They experience it as arbitrary inconsistency.
🏛️
For institutions

The missing agenda is the urgent one

  • Start with equity: who has access to the tools you're encouraging or requiring students to use? This is solvable and unaddressed.
  • Address AI detection tools before deploying them: Stanford found 61% false positive rates for non-native English speakers. That is a justice issue dressed as a technical one.
  • Research governance is the missing chapter. Teaching policies were written by education offices. The research community needs its own framework.
  • Follow UNSW: ask your vendor, in writing, whether student prompts are used to train models. This is the most important data privacy question no policy currently asks.
  • Include faculty, students, and staff as co-authors of policy — not as reviewers of a finished document. The AAUP found that 71% of faculty had "little meaningful input" in AI policy decisions. Policies written without the people who live under them are not governance. They are announcements.
🔭
Watch this space

Three developments to track

  • Observed Assessment. LSE's proctored AI exam is a live experiment. If it holds up under scrutiny — if it actually tests thinking rather than tool use — it could become a template. Watch whether others adopt it.
  • Institutional AI platforms. Helsinki's CurreChat and Imperial's dAIsy represent a bet on sovereignty over convenience. As commercial AI terms evolve, this bet will either look prescient or unnecessary. The outcome matters for every university currently accepting commercial defaults.
  • The India gap. Among India's leading institutions, IIT Madras and Ashoka have no meaningful public AI governance, and IISc offers little beyond its disclosure rule. As AI becomes embedded in coursework and research, this absence will become increasingly costly. The question is whether the change comes from within or through external pressure.