On February 11, 2026, thirty-one students solved a puzzle about secret agents. By the end of that day, sixty of the hundred agent IDs needed to solve the puzzle were already mapped and shared — even though none of the thirty-one had to share anything to solve their own version.
A Puzzle That Looked Like a Solo Quest
The Share Secret question asked each student to identify 3 of 100 possible agent IDs. On its surface, it looked like a puzzle you solved alone — you received a unique variant, you matched your agents, you moved on. The answer you needed was specific to you. The process was yours.
But the solution space was finite. One hundred agent IDs, no more. Every student who solved the puzzle and shared what they found was contributing a tile to a mosaic that, once complete, made the puzzle disappear. Not harder. Not easier. Gone.
"The puzzle stopped being a puzzle on day one. It became a coordination problem."
By solver #154 on March 3, every one of the 100 agent ID mappings was publicly known. The puzzle had a finite solution space, and a community of 765 students filled it completely.
February 11: The Day Everything Changed
The data shows a pattern common to information cascades: uneven distribution. One day — February 11 — accounted for 57 of the first 60 mapped IDs. The mechanism was simple: the first thirty-one solvers did not just solve the puzzle. They posted what they found. And once they posted, the ratio flipped.
Before February 11, every solver was a super-spreader candidate — producing new information, expanding the map. After February 11, most solvers were consumers. They arrived at a party that was already well underway, the map already more than half drawn and posted for anyone to read.
On Feb 12, more than half of all solvers simply looked up the answer that the previous day's contributors had revealed.
Each circle represents one of 100 agent IDs. Watch them light up as solvers contribute mappings — with a burst on February 11.
80% of Solvers Never Needed to Solve Anything
Here is the number that clarifies everything: 80.46% of all first successes in the Share Secret question matched the public decoder exactly. That means 4 in 5 students who solved this question did not solve it — they looked it up. The puzzle had an answer key, and the answer key was free, and almost everyone used it.
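The classification behind that figure can be sketched in a few lines. Everything below is illustrative — the field names, the tiny dataset, and the exact-match heuristic (a first success that matches the public decoder on every submitted ID counts as a lookup) are assumptions, not the course's actual pipeline.

```python
# Sketch (hypothetical data): classify each first success as independent work
# or a lookup by comparing the submitted mapping to the public decoder.
from typing import Dict

def lookup_only_share(first_answers: Dict[str, Dict[str, str]],
                      decoder: Dict[str, str]) -> float:
    """Fraction of students whose first successful answer matches the
    public decoder exactly on every agent ID they submitted."""
    if not first_answers:
        return 0.0
    matches = sum(
        all(decoder.get(agent) == secret for agent, secret in answer.items())
        for answer in first_answers.values()
    )
    return matches / len(first_answers)

# Illustrative records, not real course data.
decoder = {"027": "s27", "070": "s70", "004": "s04"}
answers = {
    "student_a": {"027": "s27", "070": "s70"},  # matches decoder -> lookup
    "student_b": {"004": "zzz"},                # differs -> independent work
}
print(lookup_only_share(answers, decoder))  # 0.5
```

The heuristic is deliberately strict: one ID that differs from the key and the student is counted as having done their own work.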
This is not a criticism. It is a finding. The answer key emerged organically, as a byproduct of honest work. Students shared what they discovered. Others benefited. In any other context, this would be called knowledge sharing. In an exam context, it is something more complicated.
"Only 63 students ever contributed a new mapping. The other 617 just borrowed theirs."
The public decoder tool did not arrive on day one. On March 6 — three days after the map was complete — a student posted in the course's GitHub discussion thread: "If anyone wants agent password and email. You can use the below website. It works correctly." Another student replied simply: "I don't know how but it worked for me." No documentation. No fanfare. Just peer-to-peer credentialing dropped mid-thread. The tool worked because someone said it worked, and that was enough. This is how student-built course infrastructure spreads — not through announcements, but through social proof woven into a conversation that was already happening.
GitHub Discussion #277: A Coordination Market in 173 Comments
The data tells one story. The 173-comment GitHub Discussion thread tells another. Discussion #277 opened on February 10, 2026 — day one of solving — and what followed reads less like a help forum and more like an open-outcry trading floor. Within 24 hours, a student had shared a Google Spreadsheet for centralized coordination. Others were posting their credentials openly, in plain sight: "I am Agent 027 willing to help anyone needing my password. I am looking for 021, 092, 070."
"I am Agent 027 willing to help anyone needing my password. I am looking for 021, 092, 070."
February 11 — the same super-spreader day visible in the charts — saw students posting credentials simultaneously, a coordination cascade that mirrors the data almost perfectly. But the thread also revealed something the numbers couldn't: confusion. "I'm with ID 016, but I see same ID's provided to different people. Is this a bug?" It wasn't a bug. Multiple students could be assigned the same agent. The puzzle's design was already more layered than anyone had assumed, and the students were mapping their way through the ambiguity in real time.
By late February and into March, the thread had transformed. Students were still posting — "looking for agent 087, 095, 051" on March 7 — apparently unaware that the decoder tool shared the day before had made their search obsolete. The irony is exact: the students who found the decoder were playing a completely different game from those still hunting IDs in a 173-comment thread. Same course. Same assignment. Radically different experiences. The discussion didn't just fill the map — for those who never found the shortcut, it was the map.
And then there is Agent 070. From February 20 onward, students kept asking for them — a recurring name in a thread that was supposedly resolving. Agent 070 appeared in post after post: someone looking, someone waiting, someone forwarding the Google Sheet link in case it helped. For six weeks, Agent 070 was an open variable, a missing tile that somehow no one had. Then, on March 29 — nine days before the deadline — three posts appeared in under a minute: "I'm agent 70. Hello." Then: "Agent 70 here." Then: "Heya, i'm agent 070, looking for 004, 079 and 027." Three different students, all Agent 070, all arriving simultaneously, all immediately looking for someone else. The thread had been waiting for them for six weeks. They arrived and immediately needed something too.
"I'm agent 70. Hello." · "Agent 70 here." · "Heya, i'm agent 070" — three posts, under one minute, March 29.
The decoder link, meanwhile, was being rediscovered continuously. It was shared on March 6. Then March 10. Then March 20. Then March 22, when one student posted it five times in a row, as if sheer repetition would reach the readers that the previous four posts hadn't. Then March 29. Each sharer thought they were breaking news. None of them knew they were the fifth or sixth person to announce the same thing in the same thread. This is not carelessness. It is a structural feature of large asynchronous discussions: the cost of reading the full thread is higher than the cost of posting, so people post first and discover the redundancy later — if at all.
On March 29, Agent 004 finally appeared — only to find that the Google Sheet they'd been pointed toward for weeks was no longer accessible: "the sheet isn't available." The infrastructure that had been the backbone of cohort coordination had quietly gone offline. Students arriving at the end of the thread found a broken link where the collective map used to be. The decoder, meanwhile, still worked — but only for those who knew to look for it buried somewhere in 173 comments. The thread had become a palimpsest: layers of working solutions written over obsolete ones, with no index, no version history, no way for a newcomer to know which layer they were reading.
The arc closes on deadline day, March 30, with students still posting their own agent credentials and searching for others manually — the same behavior as February 11, seven weeks later, in a thread where the answer had been available in 30 seconds for weeks. The thread did not fail. It worked exactly as threads work: it preserved every piece of information ever posted, and it surfaced none of it reliably. The question was never whether the information existed. It was whether anyone could find it in time.
After the Mar 26 HTML workaround appeared, median effort fell by 67%. First-try success jumped from 1.6% to 19.6%. The question didn't get easier — the answer got public.
The Exam Had Disclosure Moments, Not a Difficulty Curve
What happened here is not cheating in the conventional sense. No rules were broken. The course allowed public discussion. The students who contributed agent mappings were doing something generous — they shared what they found, and others benefited. The question is what the exam was measuring once the map was complete.
The conventional picture of an exam is a difficulty curve: harder at the top, easier at the bottom, the shape of a ski slope. What the Share Secret data shows instead is a step function — flat for weeks, then suddenly trivial. The shape is set not by difficulty but by disclosure moments.
"The exam did not have a single difficulty curve. It had disclosure moments."
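That step-function shape is easy to detect mechanically: a disclosure moment is any day whose success rate jumps far above the previous day's. A minimal sketch — the rate series is illustrative (its 1.6% to 19.6% jump mirrors the numbers reported above), and the 10-point threshold is an arbitrary choice:

```python
# Flag "disclosure moments": days where the first-try success rate jumps
# by more than a threshold over the previous day.
def disclosure_moments(rates, threshold=0.10):
    """Return indices of days whose rate jumps past the threshold."""
    return [i for i in range(1, len(rates))
            if rates[i] - rates[i - 1] > threshold]

# Flat for weeks, then a step: the shape described above (illustrative data).
daily_rates = [0.016, 0.020, 0.018, 0.020, 0.196, 0.210]
print(disclosure_moments(daily_rates))  # [4]
```

A smooth difficulty curve produces an empty list; a disclosure produces a single sharp index.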
| Day | IDs Before | IDs After | IDs Added | Solvers | Lookup-only % |
|---|---|---|---|---|---|
| Feb 10 | 0 | 3 | 3 | 1 | 0.0% |
| Feb 11 ★ | 3 | 60 | 57 | 31 | 9.7% |
| Feb 12 | 60 | 70 | 10 | 15 | 53.3% |
| Feb 13 | 70 | 77 | 7 | 9 | 33.3% |
| Feb 14 | 77 | 81 | 4 | 9 | 66.7% |
| Feb 15 | 81 | 85 | 4 | 9 | 55.6% |
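A table like the one above can be rebuilt from a raw event log. This sketch assumes a hypothetical log format — one (date, solver, newly revealed agent IDs) tuple per first success — and counts a solver as lookup-only when they revealed nothing new:

```python
# Rebuild per-day coverage stats from a (hypothetical) solve-event log.
from collections import defaultdict

def daily_coverage(events):
    """Per day: (date, IDs before, IDs after, solvers, lookup-only count)."""
    by_day = defaultdict(list)
    for date, _solver, ids in events:
        by_day[date].append(set(ids))
    known, rows = set(), []
    for date, contributions in by_day.items():  # insertion order = log order
        before = len(known)
        for ids in contributions:
            known |= ids
        lookup_only = sum(1 for ids in contributions if not ids)
        rows.append((date, before, len(known), len(contributions), lookup_only))
    return rows

# Illustrative events, not real course data.
events = [
    ("Feb 10", "s1", {"003", "017", "042"}),  # pioneer: three new IDs
    ("Feb 11", "s2", {"005", "061"}),         # contributor: two new IDs
    ("Feb 11", "s3", set()),                  # lookup-only: nothing new
]
for date, before, after, solvers, lookups in daily_coverage(events):
    print(f"{date}: {before} -> {after} IDs, {solvers} solvers, "
          f"{lookups} lookup-only")
```

Running it prints one row per day in the same shape as the table: IDs before, IDs after, solver count, and how many solvers added nothing to the map.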
What the Textbooks Got Right — and What They Missed
Vygotsky described the Zone of Proximal Development as the space between what a learner can do alone and what they can do with guidance. Discussion thread #277 was a ZPD at population scale. Hundreds of students, each working just beyond their individual capacity, finding peers to scaffold them — not through tutoring, but through collective intelligence. The scaffold wasn't a teacher. It was a Google Sheet and a 173-comment thread. Vygotsky imagined a student and a mentor. What actually happened was a student and six hundred strangers working in parallel.
Cialdini would recognize what happened next. Once the spreadsheet had enough credible entries, it became the default path. Late-joining students never stopped to ask whether they should build their own map — the existing one was right there, and other students had already vouched for it. Social proof doesn't require advertising. It just requires that someone went first, and that the evidence of their going is visible to everyone who follows.
The Curse of Knowledge explains the March 6 gap. The student who shared the decoder tool couldn't understand why anyone was still asking for individual agent IDs — but those students didn't know the tool existed. Information asymmetry in a public forum. The announcement had been made; most of the audience had missed it. On March 7, students were still trading credentials one by one, three scrolls away from a tool that made the whole exercise unnecessary. Diffusion is never complete. There is always a tail of late adopters that the innovation never quite reaches.
Rogers called this the diffusion curve, and the decoder's journey follows it faithfully. One pioneer shares it. A handful of early adopters test and validate it. And then — nothing. A long tail of students who never see it, never benefit from it, who keep doing things the hard way not because they prefer it but because the information simply didn't travel far enough. The course discussion thread was a selectively permeable membrane: some things moved through it instantly, others barely moved at all.
For educators: If a puzzle can be solved by collective intelligence, assume students will use collective intelligence — and design accordingly. Disclosure moments don't just reduce effort; they change what the exam is testing. Track when workarounds appear and update rubrics in real time. The decoder tool was built by a student, for students. Course infrastructure that students build is often more useful than what instructors provide — and it spreads faster, because it has social proof attached to it from the moment it appears.
For students: The 63 solvers who contributed new mappings did real cognitive work. The 600-odd who followed did social work. Both are valuable — but they are not the same skill. Knowing when to stop rebuilding what already exists, when to plug into the cohort's information network rather than reconstruct it alone — that too is a form of competence. It just doesn't appear on any rubric. The students who found the decoder on March 6 didn't cheat the puzzle. They solved a harder one.
What the Exam Was Actually Teaching
On April 1, 2026 — the day after the exam closed — a student sent a WhatsApp message to the course instructor. It didn't mention agent IDs or decryption algorithms. It said this:
"I learnt more in a single course than my entire diploma."
"Before this I was not interacting with anyone. Because of the online nature of the degree. But TDS helped me to connect with everyone. We are having meet regularly. We divide questions with each other. And then we solve and discuss."
"Before this TDS course I was hating this degree. But now I fell in love with what I'm doing."
This student did not write to describe their score. They wrote because they had met people. Because a question that required finding strangers online had, somehow, produced a study group, a meeting schedule, and something that resembled belonging in a degree program they had been ready to abandon.
The 173-comment GitHub discussion thread, the Google Sheet, the decoder tool built by a student for their peers — none of it was the course's intended infrastructure. All of it was students building what they needed, together, in public, with no instruction to do so. The question just made it necessary. The rest happened on its own.
In the AI era, this is the skill. Not any particular tool. Not prompt engineering or graph traversal. The ability to learn in public, to divide a problem with someone you just met online and trust them with their half, to recognize that the people around you are a resource before you've had a reason to think so. That skill doesn't appear on any rubric. It never did. But you can design for it — and this course, accidentally or deliberately, did.