Fast where it is visible
75.8% of eligible sessions used gpt-5.3-codex after the February 5, 2026
rollout window.
On one side, you adopt new Codex models almost immediately. On the other, the workflow features that cut the most friction stay unused. This story explains what happened, why it matters, and exactly how to close that gap.
Rapid adoption of gpt-5.3-codex, but almost no pickup of new
orchestration and coordination features.
We treated each session as evidence, not anecdote. Every tool call, command, timestamp, and prompt was parsed, then matched against release windows so we only count a “missed feature” after it was actually available.
This matters because it prevents false alarms. A feature is only counted as “unused” in sessions that happened after that feature shipped.
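The release-aware counting rule can be sketched in a few lines. The feature names come from this analysis, but the release dates and session shapes below are placeholders, not the real data:

```python
from datetime import date

# Placeholder release dates for illustration; the real analysis parses
# tool calls, commands, and timestamps from every session.
RELEASES = {
    "parallel": date(2026, 1, 20),
    "spawn_agents_on_csv": date(2026, 1, 20),
    "request_user_input": date(2026, 1, 20),
}

def missed_features(session_date, features_used):
    """A feature counts as 'missed' only if the session ran after it shipped."""
    return [
        name for name, released in RELEASES.items()
        if session_date >= released and name not in features_used
    ]

# A session from before a release contributes nothing for that feature.
print(missed_features(date(2026, 1, 10), set()))        # no false alarms
print(missed_features(date(2026, 2, 6), {"parallel"}))  # only post-release gaps
```

This is the guard that keeps "unused" from meaning "not yet available."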
You clearly change behavior when the upside is obvious. But the upside is currently concentrated in model choice, while collaboration and orchestration features stay mostly dormant.
You already use manual parallel shell patterns in 68.4% of eligible sessions, but
parallel tool usage is still 0%.
spawn_agents_on_csv and request_user_input both remain at
0% in post-release windows.
This timeline is the key forensic view. Dots near the top indicate high post-release adoption; dots on the floor mark features that were available but went untouched.
Release-aware data from recent_feature_coverage.csv
This is not a “you avoid change” story. You changed quickly for models. The stall is specifically around interaction patterns (coordination, approvals, and thread controls).
The top three opportunity types account for 726 of 847 missed moments (85.7%). You do not need ten fixes. You need three defaults.
Opportunity counts reveal where time leaks most often. The tallest bars are where behavior changes will pay back fastest.
457 sessions showed sequential, independent reads that could have been run with
parallel.
138 sessions had repeated permission friction where request_user_input
could have unblocked faster.
ls -la | cat .../code/SKILL.md | cat .../llm/SKILL.md
This is exactly the pattern where parallel would cut waiting time.
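As a rough illustration of why this pattern matters: independent reads like the three above can overlap instead of running one after another. This sketch uses Python's concurrent.futures as a stand-in for the parallel tool, with placeholder files instead of the actual SKILL.md paths:

```python
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Three placeholder files standing in for the sequential reads above.
tmp = tempfile.mkdtemp()
paths = []
for name in ("code.md", "llm.md", "notes.md"):
    p = Path(tmp, name)
    p.write_text(f"contents of {name}")
    paths.append(p)

# The reads are independent -- no output feeds the next command --
# so they can run concurrently rather than serially.
with ThreadPoolExecutor() as pool:
    contents = list(pool.map(Path.read_text, paths))

print(len(contents))
```

The payoff is the same shape whether the runner is a thread pool or the parallel tool: total wait drops from the sum of the reads to roughly the slowest one.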
7 permission-related errors before completion.
A concise request_user_input choice flow would likely have resolved this earlier.
103 tool calls without spawn_agents_on_csv,
indicating heavy manual orchestration.
These excerpts are taken from structured evidence in opportunities.csv.
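The Pareto claim above is easy to verify. The sketch below uses an in-memory stand-in for opportunities.csv; the category names are hypothetical, and the third count (131) is inferred from 726 − 457 − 138:

```python
from collections import Counter

# Stand-in for opportunities.csv rows. Only the totals (726 of 847) and the
# 457 / 138 counts come from the analysis; the rest is inferred for illustration.
counts = Counter({
    "sequential_reads": 457,
    "permission_friction": 138,
    "manual_orchestration": 131,  # inferred: 726 - 457 - 138
    "other": 121,                 # inferred: 847 - 726
})

top3 = sum(c for _, c in counts.most_common(3))
total = sum(counts.values())
print(f"{top3}/{total} = {top3 / total:.1%}")
```

Three categories carrying ~86% of the missed moments is why three defaults beat ten fixes.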
This simulator uses one explicit assumption: each resolved missed-feature moment saves about two minutes of execution or coordination time. Adjust the slider to see rough upside across the analyzed period.
Closing 30% of missed moments
The assumption is intentionally conservative and transparent. Change it in code if your own benchmark differs.
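The simulator's arithmetic is simple enough to sketch directly. This minimal version uses the 847 missed moments and the two-minute-per-moment assumption stated above; `estimated_savings` is a hypothetical helper name, not part of any tool:

```python
MISSED_MOMENTS = 847        # total missed-feature moments in the analyzed period
MINUTES_SAVED_EACH = 2.0    # the simulator's stated assumption; adjust if your benchmark differs

def estimated_savings(close_rate):
    """Rough upside, in hours, from closing a fraction of missed moments."""
    return MISSED_MOMENTS * close_rate * MINUTES_SAVED_EACH / 60

print(f"{estimated_savings(0.30):.1f} hours")  # closing 30% of moments
```

At a 30% close rate that works out to roughly 8.5 hours over the analyzed period.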
You do not need a new workflow religion. You need a small default prompt layer that matches how you already work.
Use update_plan throughout.
Run /permissions and /debug-config early.