This project studies anonymized Python exam activity from 2025. It helps non-technical readers understand where students get stuck, which mistakes recur, and which exam improvements may help real learners.
Think of this as a research notebook plus public report site: it starts from raw logs, rebuilds metrics with scripts, and publishes easy-to-read findings for educators and program teams.
1) Collects evidence: anonymized event data, such as test runs and submissions, from coding exams.
2) Rebuilds metrics: all major numbers and tables come from scripts in this repo, so they can be rerun and audited.
3) Publishes findings: stories and practical recommendations covering teaching support, exam-design fixes, and intervention targeting.
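To make the "rerun and audited" idea concrete, here is a minimal, hypothetical sketch of how a metric might be recomputed from anonymized event records. The event field names (`kind`, `problem`, `passed`) are illustrative assumptions, not the repo's actual schema:

```python
from collections import defaultdict

def pass_rate_by_problem(events):
    """Recompute per-problem pass rates from raw submission events.

    `events` is an iterable of dicts; the keys used here ('kind',
    'problem', 'passed') are assumed for illustration only.
    """
    totals = defaultdict(int)
    passes = defaultdict(int)
    for e in events:
        # Only graded submissions count toward the metric; test runs are skipped.
        if e.get("kind") != "submission":
            continue
        totals[e["problem"]] += 1
        passes[e["problem"]] += bool(e["passed"])
    return {p: passes[p] / totals[p] for p in totals}

events = [
    {"kind": "submission", "problem": "A", "passed": True},
    {"kind": "submission", "problem": "A", "passed": False},
    {"kind": "test_run", "problem": "A", "passed": True},  # ignored
    {"kind": "submission", "problem": "B", "passed": True},
]
print(pass_rate_by_problem(events))  # {'A': 0.5, 'B': 1.0}
```

Because the metric is a pure function of the raw log, anyone can rerun it and check the published numbers.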
Start with the core stories. Then browse alternate drafts and other resources.
This home page is data-driven: to add new cards or new sections (analysis or non-analysis), update a single list, HOME_SECTIONS, in the page script.
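As a rough illustration of the data-driven pattern (the actual field names and rendering code in the page script may differ), each entry in the list could be a small dict, and adding a destination would mean appending one entry:

```python
# Hypothetical sketch of a HOME_SECTIONS-style list; the real structure
# lives in the page script and its fields may differ.
HOME_SECTIONS = [
    {"title": "Core stories", "kind": "analysis", "href": "stories.html"},
    {"title": "Alternate drafts", "kind": "non-analysis", "href": "drafts.html"},
]

def render_home(sections):
    # Each entry becomes one card link; no other code changes are needed
    # when a new section is appended to the list.
    return "\n".join(
        f'<a class="card" href="{s["href"]}">{s["title"]}</a>' for s in sections
    )

print(render_home(HOME_SECTIONS))
```

The benefit of this shape is that contributors edit data, not layout logic, so new sections cannot break the page's rendering code.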