Part One
Three Acts
In the autumn of 2023, the London School of Economics did something almost no major university had the institutional nerve to do: it issued a school-wide ban on generative AI in all assessed work. Not a framework. Not a set of conditions. A ban.
By 2024, LSE had lifted the ban.
By 2025, LSE had introduced something more inventive than either: the Observed Assessment. Students write essays with AI tools available — but in a supervised room, under time pressure that makes it impossible to outsource the thinking. The tool is permitted. The question of whether the work reflects genuine understanding is answered not by trust but by presence.
Ban. Lift. Watch. This three-act journey — prohibition, permission, then something that isn't quite either — is the most compressed version of a conversation that twenty-five of the world's top universities are all having simultaneously. We read their public governance documents. What we found is not the consensus story.
Part Two
What Everyone Agrees On
Before the differences, the agreement. Of the twenty-five universities in this analysis, twenty-one have published AI policies. Not one has banned generative AI. That alone is worth sitting with. Institutions that built their reputations on original thought from students — on the assumption that intellectual struggle is what makes learning stick — have decided, with remarkable speed, that AI use should be permitted.
The agreement runs deeper than "yes." Those twenty-one institutions converge on five structural principles with a consistency that suggests either deliberate coordination or convergent evolution under identical pressure. Disclosure is required. The human user bears accountability for accuracy. Existing academic integrity rules extend to AI. Sensitive data must stay out of AI systems. And local instructors, departments, and funders may add stricter rules.
The Big Five — present in all 21 policy-issuing universities
These 25 are the outliers, not the norm
EDUCAUSE's 2025 AI Landscape Study found that only 39% of higher education institutions have any AI-related acceptable use policy at all — up from 23% in 2024. The 25 universities here represent the frontier. What looks like a cautious, incomplete consensus in this dataset is still well ahead of most of the sector. The convergence on five guardrails is real progress. It is also, by any comprehensive standard of AI governance, the beginning of the work.
The table below shows what the consensus conceals: how comprehensively each university has actually answered the eleven policy dimensions that matter. Sorted by coverage. Click any cell for the evidence.
Policy coverage score — average across 11 dimensions (disclosure, accountability, integrity, privacy, tools, assessment, research, override, authorship, stance, type)
Score = mean of the eleven dimension ratings (each 1–5), divided by 5. Gray = no public policy found.
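The scoring arithmetic is simple enough to state precisely. The Python sketch below shows one plausible reading of it: each university receives eleven ratings from 1 to 5, the mean is divided by the maximum rating of 5, and universities are ranked by the result. The function name, the university names, and every rating here are illustrative placeholders, not the article's actual data.

```python
def coverage_score(ratings):
    """Coverage score for one university: the mean of its eleven
    1-5 dimension ratings, normalised by the maximum rating (5),
    so results fall between 0.2 and 1.0."""
    assert len(ratings) == 11, "expected one rating per policy dimension"
    return sum(ratings) / len(ratings) / 5

# Hypothetical ratings for illustration only (not the real dataset).
universities = {
    "Example U": [5, 5, 4, 3, 5, 4, 2, 3, 1, 5, 4],
    "Sample Tech": [3, 2, 2, 1, 3, 2, 1, 1, 1, 2, 2],
}

# Sort by coverage, highest first, as the table in the article does.
ranked = sorted(universities,
                key=lambda u: coverage_score(universities[u]),
                reverse=True)
```

A university with a 5 on every dimension scores 1.0; one with a 1 on every dimension scores 0.2, which is why even "no meaningful policy" never reads as zero on this scale.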
The Full Policy Matrix
Click any column header to sort. Click any colored cell for evidence, quotes, and source. Green = comprehensive · Yellow = partial · Orange = absent · Gray = no public policy found.
| University | Policy Type (formal / guidance / none) | Default Stance (permit / restrict / delegate) | Disclosure (must AI be declared?) | Accountability (who owns the output?) | Integrity (misconduct rules apply?) | Privacy (data input limits) | Inst. Tools (approved tools provided?) | Assessment (exams & coursework) | Research (research publications) | Local Override (dept / course rules) | AI Authorship (can AI be an author?) |
|---|---|---|---|---|---|---|---|---|---|---|---|
Part Three
Three Very Different Yeses
The phrase "permitted with conditions" covers an enormous amount of ground. At one end of the spectrum, it means essentially: here is a list of approved tools, please note which ones you used. At the other end, it means: you must prove to us, in a supervised room, that the work is actually yours.
The dataset resolves into three broad governance philosophies:
Open Default (16 institutions)
AI is permitted for most purposes unless a specific context restricts it. The default state of the student is AI-capable.
MIT · Yale · Princeton · NUS · NTU · SMU · ETH Zurich · EPFL · KU Leuven · Helsinki · HKU · Melbourne · UNSW · IISc · Cambridge · SUTD*
Closed Default (5 institutions)
AI is restricted unless explicitly authorised. The burden of proof falls on permission, not restriction.
Oxford · Imperial · Stanford · University of Toronto · University of Sydney
Structured / Abstain (4 institutions)
Either departments must choose their own position (LSE), or the centre refuses to make any uniform call (Tokyo). Two Indian institutions, IIT Madras and Ashoka, have no public policy at all.
LSE · Tokyo · IIT Madras · Ashoka
Oxford and Imperial College London are the clearest default-restrictors. Oxford's policy states AI is "not permitted unless the assessment brief explicitly states otherwise." Imperial's guidance warns that absent explicit authorisation, using AI to create assessed work "may be treated as an offence such as contract cheating." Both institutions start from restriction and require explicit permission to move.
Stanford's approach is subtler but similarly structured: "absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person." The analogy to another person doing your work is deliberate and elegant — it activates existing academic integrity intuitions without requiring new rules.
Then there is the University of Tokyo, which occupies a category of its own. Its policy statement reads: "The University of Tokyo will not uniformly prohibit the use of natural language-generation AI tools, such as ChatGPT, in educational settings." This is not permission. It is a deliberate refusal to issue either permission or prohibition — a principled delegation of the question to individual instructors. Tokyo has decided that the centre is not the right place to make this call. That is itself a governance philosophy, and perhaps the most intellectually honest one in the dataset.
Who was in the room when these policies were written?
The AAUP's 2025 report on AI and academic professions found that at US institutions, 71% of faculty say administrators "overwhelmingly" lead AI policy conversations with "little meaningful input" from faculty, staff, or students. The three governance philosophies above were not, in most cases, chosen by the people who teach and learn under them. Tokyo's delegation to individual instructors is often framed as abdication. But Tokyo may also be the only institution in this dataset whose governance structure lets the people closest to the problem actually make the call.
Part Four
The Disclosure Mountain
Every policy in this dataset requires disclosure. The word appears in all twenty-one. Functionally, what disclosure demands spans orders of magnitude.
At one end, the University of Melbourne requires students to keep records of "prompts and outputs used in the AI tools or technologies" available on request. The University of Sydney requires "the name and version, the publisher, the URL, and a brief description of how you have used" each tool. NUS requires not just which tools but "which tasks" and "which prompts", at the time of submission. These policies produce an audit trail.
At the other end, the University of Tokyo recommends that instructors "have students specify which generative AI they used when submitting reports." A recommendation. Which tool. Nothing more.
IISc's tiered rule — unique in this dataset
The Indian Institute of Science makes a distinction no other university here makes explicit: minor AI use (grammar, word choice) requires no disclosure. Substantive cognitive assistance — analysis, argumentation, structure — requires a methods-section declaration including what tools were used, for which tasks, and how outputs were verified. It acknowledges that "using spell-check" and "using AI to draft an argument" are not the same kind of thing.
The disclosure spectrum matters because it is the operational centre of these policies. Principle without procedure is aspiration. A student who receives a "disclose your AI use" instruction without knowing what that means in practice — which tool name? which version? the prompts? a methods section? a footnote? — is not protected by the policy. They are confused by it.
There is a structural flaw in disclosure as a compliance mechanism that no policy here acknowledges. If disclosure is how institutions verify AI use, then AI detectors are the enforcement layer behind it — and that layer is quietly discriminatory. A Stanford HAI study found that detectors misclassify over 61% of essays by non-native English speakers as AI-generated, compared to near-perfect accuracy for native speakers. The scoring method uses "perplexity" — a proxy for linguistic sophistication — and non-native writers naturally score lower. Ninety-seven percent of TOEFL essays were flagged by at least one detector. Not one of the twenty-five institutions in this dataset acknowledges whether it uses AI detectors, their known bias, or any safeguard against wrongful accusation. A policy built on disclosure, enforced through detection, carries a built-in justice problem that the policies themselves are silent about.
10 Things No Policy Addresses
Every dimension below is absent from all 25 institutions in this dataset. These are not edge cases — they are active pressure points where students, faculty, and institutions are making consequential decisions right now, without institutional guidance.
Part Five
The Research Gap
Most AI policies were written by education offices, for students, about coursework. Research — where AI's implications are more profound, more complex, and more consequential — is consistently the afterthought.
Fifteen of twenty-five institutions have some research-specific guidance. But quality varies enormously. The University of Hong Kong's research guidelines are in a different league from the rest. They require that "any use of GenAI, including how the software is used, should be well documented." Graduate students must declare AI use "at the time of thesis submission for examination." And most meaningfully: "any responsible use of GenAI in research must be done in a process in which human researchers can readily verify and validate output produced by GenAI." This is not just disclosure — it is a reproducibility standard.
UNSW Sydney has published a formal research position paper. KU Leuven integrates GDPR compliance into its research guidance. These represent serious attempts to address the research context specifically.
NUS — which has the most operationally specific teaching policy in this dataset — has no public research policy. Stanford, whose faculty have co-created the AI systems everyone else is writing policies about, has "partial related guidance but no dedicated public research policy." Tokyo has no research policy at all.
There is something quietly strange about this: the institutions most invested in producing AI research are among the least invested in governing how their own researchers use AI tools. The policy conversation has been driven by the education side of the house. The research community is still working out what to say.
The pattern follows the governance structure. UNESCO's 2023 guidance for generative AI in education and research explicitly called for institutions to address AI in research outputs, scientific integrity, reproducibility, and intellectual property — requirements that the teaching-office documents dominating this dataset are structurally not built to fulfil. The research gap is not an oversight. It is a symptom of who was in the room when policy was being made, and what problem they had been asked to solve.
Part Six
The Infrastructure Race
Most universities licensed Microsoft Copilot in 2023 and called it a policy. A few went further. The distance between these approaches reveals a deeper disagreement about what a university's relationship to AI infrastructure should actually be.
The University of Helsinki built CurreChat — a homegrown AI assistant, the only university-developed tool in this dataset. Imperial College London has been more ambitious: dAIsy, described as "a secure AI platform designed to give staff and students safe, easy access to multiple generative AI models (such as GPT, Claude, Deepseek, and others) through a single interface." One platform. Multiple models. Institutional control over access, audit, and data flows. This is not procurement. It is infrastructure.
The London School of Economics provides both Copilot with commercial data protection and Claude for Education — making it the only institution in this dataset with official institutional access to Anthropic's Claude for students alongside Microsoft's offering. That choice of two different AI providers reflects a deliberate hedging strategy against vendor dependence.
UNSW Sydney has taken a contractual approach rather than building its own platform. It negotiated Copilot access with a specific assurance that "user prompts and responses are not retained, saved, or used as part of any training set for the underlying large language model." This is the only institution in this dataset to explicitly address whether student AI interactions feed back into model training — a question every institution using commercial tools should be asking but most are not.
These three approaches — build it, diversify your vendors, or lock down the contract — are really three answers to the same question: who controls the data? A university's AI infrastructure strategy is, at bottom, a data sovereignty question. The vendor that holds your students' conversations holds behavioural data on how your community learns, struggles, and thinks. Most universities are still choosing a fourth option: accept commercial defaults and hope the vendor's privacy terms hold.
The stakes are higher than most administrators appreciate. Data privacy experts have converged on a single priority that institutions are systematically skipping: ensuring AI vendors cannot use student interactions to train their models. "The most overlooked issue," according to a 2026 EdTech analysis of higher education AI risk, "is ensuring that student data is not used to train external AI models." UNSW is the only institution in this dataset to have made this a contractual requirement. For every other institution, the question of whether student AI sessions feed back into the models that will later evaluate them remains unanswered — and unasked.