OPPE 2025 · Student Support Intelligence · Process-First Segmentation

The Teachable 10%

How to find the students who are struggling for the right reasons — and can improve fastest with the right help

[Summary cards: profiles classified · teachable-now count · teachable share · largest teachable track]

Imagine two students after the first OPPE. Both have low scores. One keeps running tests, edits code thoughtfully, and fails in a repeatable pattern. The other submits once, or keeps making random edits with no clear direction. They look the same on marks. They are not the same learner.

This story is about that difference. We are not asking, “Who scored poorly?” We are asking, “Who is most likely to improve quickly if we intervene now?”

To answer that, we used coding process traces: test-run cadence, edit behavior, error signatures, concept struggle patterns, and persistence markers. The result is a decision tree that isolates a practical intervention set: about 1 in 10 profiles.
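The process features behind these traces reduce to a simple fold over event logs. The sketch below uses a hypothetical event shape (student id plus event type) — the real trace format, with timestamps, question ids, and error signatures, is not reproduced here:

```python
from collections import defaultdict

# Toy event trace: (student, event_type). Real traces carry timestamps,
# question ids, and error signatures; this schema is an assumption.
events = [
    ("s1", "run"), ("s1", "edit"), ("s1", "edit"), ("s1", "run"),
    ("s2", "run"),
]

# Per-student process features: test-run count and edit count.
features = defaultdict(lambda: {"runs": 0, "edits": 0})
for student, kind in events:
    features[student][kind + "s"] += 1

print(dict(features))
# {'s1': {'runs': 2, 'edits': 2}, 's2': {'runs': 1, 'edits': 0}}
```

Cadence and persistence markers extend this fold with timestamps, but the principle is the same: features come from how students work, not what they scored.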


Act I

The Sorting Hat Is a Decision Tree

The tree has seven leaves. Four are not intervention-priority today (D0, D1, D2, D3). Three are the teachable tracks: T1 syntax foundations, T2 runtime debugging, and T3 logic/edge cases.
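As a rough illustration of how such a tree routes a profile, here is a hand-written sketch. Every threshold and leaf condition below is invented for illustration — the real cut points, and the precise semantics of D0–D3, live in analysis/teachable.py:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    avg_runs: float           # average public test runs per attempt
    avg_edits: float          # average edits per attempt
    dominant_error: str       # e.g. "SyntaxError", "RuntimeError", "WrongAnswer"
    repeatable_failure: bool  # does the same failure pattern recur?

def classify(p: Profile) -> str:
    """Route a profile to one of seven leaves (D0-D3, T1-T3).

    All thresholds and the D2/D3 split are illustrative assumptions."""
    if p.avg_runs < 2 and p.avg_edits < 3:
        return "D0"  # too little signal to coach
    if p.avg_runs > 15 and p.avg_edits > 40 and not p.repeatable_failure:
        return "D1"  # high-churn chaos: many edits, no direction
    if not p.repeatable_failure:
        return "D2" if p.avg_edits < 10 else "D3"  # not priority today
    # Teachable tracks: persistent AND failing in a repeatable pattern
    if p.dominant_error == "SyntaxError":
        return "T1"  # syntax foundations
    if p.dominant_error == "RuntimeError":
        return "T2"  # runtime debugging
    return "T3"      # logic / edge cases
```

The point of the sketch is the branching logic: scores never appear, only behavior and error fingerprints.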

Click any node below to open a full breakdown with real student examples, dominant error signatures, and intervention guidance.

What Makes This Different: No score thresholds. No historical “who improved” dependency. The next OPPE can have completely different questions, and this classification still works because it is based on behavior and error fingerprints.
Decision-path distribution (click any bar for full context)
Bars animate when you switch term filters. Teachable tracks are highlighted in color.

The median story: most profiles are stable (D2, 57.2%). But the operational opportunity is in the colored slice. The teachable set is not giant. It is manageable. And it is rich in actionable structure.

The real intervention question is not “Who failed?” It is “Who failed in a way that can be fixed quickly?”

Act II

Not One Teachable Group. Three.

Half of the teachable set belongs to one track: runtime debugging. The rest split between logic/edge-case reasoning and syntax structure repair. That matters because each requires different coaching scripts.

Teachable rate by term
25t2 stands out with the highest teachable share. Click any bar for term-level path mix and examples.
Process fingerprint map
X: average public runs per attempt. Y: average edits per attempt. Bubble size: profile count. Color: decision path.

Notice the separation in behavior space. D0 is low-run, low-edit. D1 is high-run, high-edit chaos. Teachable tracks occupy the middle-right: enough persistence to work with, but not so much regression that coaching becomes rescue.
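Each bubble in a map like this is a per-path aggregation of the profile table: mean runs on X, mean edits on Y, profile count as size. A minimal sketch with made-up numbers (the real values come from analysis/teachable.csv):

```python
from collections import defaultdict

# Toy profile rows: (path, avg_runs, avg_edits). Values are illustrative.
profiles = [
    ("D0", 1.0, 1.0), ("D0", 2.0, 3.0),
    ("T2", 6.0, 9.0), ("T2", 7.0, 11.0), ("T2", 8.0, 10.0),
    ("D1", 18.0, 45.0),
]

# One bubble per decision path: x/y = mean behavior, size = profile count.
acc = defaultdict(lambda: [0.0, 0.0, 0])
for path, runs, edits in profiles:
    acc[path][0] += runs
    acc[path][1] += edits
    acc[path][2] += 1

bubbles = {p: {"x": r / n, "y": e / n, "size": n}
           for p, (r, e, n) in acc.items()}
print(bubbles["T2"])  # {'x': 7.0, 'y': 10.0, 'size': 3}
```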


Act III

How to Teach Them, Not Just Find Them

Segmentation without pedagogy is just labeling. Each teachable path has a concrete 25-minute intervention recipe in the source report. Click the cards for ready-to-use coaching playbooks.

The recurring concept struggles inside the teachable group are equally actionable. The chart below counts concept mentions in teachable profiles’ top-struggle lists (max 3 per profile). Click any bar to see which path it clusters in and sample student hashes.

Top concept struggles inside teachable profiles
Counts are mention-frequency across top concept struggles. This is where targeted content can move outcomes fastest.
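The mention-frequency counting (max 3 concepts per profile) is a few lines; the concept names below are illustrative stand-ins, not the report's actual top struggles:

```python
from collections import Counter

# Each teachable profile lists up to 3 top concept struggles (assumed shape).
top_struggles = [
    ["loops", "string slicing", "recursion"],
    ["loops", "dict methods"],
    ["string slicing", "loops", "edge cases"],
]

# Mention-frequency across teachable profiles, capped at 3 per profile.
counts = Counter(c for concepts in top_struggles for c in concepts[:3])
print(counts.most_common(2))  # [('loops', 3), ('string slicing', 2)]
```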
Operational translation: If mentor capacity is limited, start with T2. It is the largest segment and has clear, teachable debugging mechanics. Keep T3 next for test-design and boundary reasoning. Treat T1 as fast wins through structured syntax scaffolds.

Act IV

From Insight to Action List

The table below is the operational surface: filter by term/path, search student hashes, sort by runs/edits/priority, and click any row for full profile context and recommended intervention track.
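Building such a shortlist from the profile data is a filter-and-sort; the field names and student hashes below are hypothetical, standing in for the real columns in analysis/teachable.csv:

```python
# Illustrative rows; the real schema lives in analysis/teachable.csv.
rows = [
    {"student": "a1f3", "term": "25t1", "path": "T2", "priority_index": 0.81},
    {"student": "9bc0", "term": "25t2", "path": "T3", "priority_index": 0.64},
    {"student": "77de", "term": "25t2", "path": "T2", "priority_index": 0.92},
]

# Filter to one term's teachable tracks, then sort by priority (highest first).
shortlist = sorted(
    (r for r in rows if r["term"] == "25t2" and r["path"].startswith("T")),
    key=lambda r: r["priority_index"], reverse=True)

print([r["student"] for r in shortlist])  # ['77de', '9bc0']
```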

[Interactive profile table · columns: Student, Term, Path, Dominant Error, Avg Runs, Avg Edits, Avg Active Min, Priority Index, Top Concept Struggles]

How to read this story: This is a process-first segmentation from OPPE Wave 1 profiles. It is designed for targeted support, not for grading, ranking, or punitive decisions.

Data provenance: computed from analysis/teachable.csv, generated by analysis/teachable.py, documented in analysis/teachable.md.

Related context: REPORT.md, quick-fixes.md, and 2026-02-25-next-steps.md.