Each chart solves an analytical problem for which no adequate visualization previously existed—combining novel encodings with rich, realistic synthetic data.
An 8×8 matrix where each cell encodes both how strongly two indicators correlate and which one leads the other—and by how many months.
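The core computation behind such a matrix can be sketched as follows: for every pair of indicators, scan a window of lags, keep the lag that maximizes the absolute correlation, and record both numbers. This is a minimal hypothetical sketch (function and parameter names are assumptions, not the original implementation):

```python
import numpy as np

def lead_lag_matrix(series, max_lag=12):
    """For each pair of named series, find the lag (in periods) that
    maximizes absolute Pearson correlation, plus that correlation.
    Positive lag in cell (i, j) means series i leads series j."""
    names = list(series)
    n = len(names)
    corr = np.zeros((n, n))
    lag = np.zeros((n, n), dtype=int)
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            best_r, best_k = 0.0, 0
            for k in range(-max_lag, max_lag + 1):
                x, y = series[a], series[b]
                if k > 0:        # a leads b by k periods
                    x, y = x[:-k], y[k:]
                elif k < 0:      # b leads a by |k| periods
                    x, y = x[-k:], y[:k]
                r = np.corrcoef(x, y)[0, 1]
                if abs(r) > abs(best_r):
                    best_r, best_k = r, k
            corr[i, j], lag[i, j] = best_r, best_k
    return corr, lag
```

Each cell of the chart then encodes `corr[i, j]` (e.g. as color) and `lag[i, j]` (e.g. as a glyph or annotation) simultaneously.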
The phase-portrait concept from physics applied to competitive standings: season win% on the X-axis, recent-game momentum on the Y-axis, with animated trajectory tails.
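Mapping a team's record into that phase space takes only a few lines; the trajectory tail is the same mapping applied to growing prefixes of the season. A hypothetical sketch (the momentum definition here, recent win% minus season win%, is one plausible choice, not necessarily the original's):

```python
def phase_point(results, window=10):
    """Map a win/loss sequence (1 = win, 0 = loss) to a phase-space point:
    X = cumulative season win%, Y = recent-form momentum, defined as
    win% over the last `window` games minus the season win%."""
    season = sum(results) / len(results)
    recent = sum(results[-window:]) / min(window, len(results))
    return season, recent - season

def trajectory(results, window=10):
    """Phase-space tail: one point per game played so far."""
    return [phase_point(results[:t], window) for t in range(1, len(results) + 1)]
```

A team winning its first ten and losing its next ten sits at X = 0.5 with strongly negative momentum, visually separating it from a team that arrived at 0.5 on an upswing.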
A dual-ring radial "rhythm fingerprint" that simultaneously shows the hourly (inner petals) and day-of-week (outer ring) activity patterns.
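The data preparation behind such a glyph reduces to two normalized histograms over the same event timestamps. A minimal sketch of that aggregation (the rendering itself, polar petals and arcs, is omitted; names are illustrative):

```python
from collections import Counter
from datetime import datetime

def rhythm_fingerprint(timestamps):
    """Aggregate event timestamps into the glyph's two profiles:
    24 hourly petal lengths (inner ring) and 7 day-of-week arc
    heights (outer ring), each normalized to [0, 1] by its peak."""
    hours = Counter(t.hour for t in timestamps)
    days = Counter(t.weekday() for t in timestamps)  # 0 = Monday

    def norm(counts, n):
        peak = max(counts.values()) if counts else 1
        return [counts.get(i, 0) / peak for i in range(n)]

    return norm(hours, 24), norm(days, 7)
```

Normalizing each ring by its own peak keeps the two scales visually comparable even when hourly and daily totals differ by an order of magnitude.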
The chart design space feels infinite. In practice, the intersection of "visually novel" × "analytically superior" × "actually readable" is vanishingly small. Most combinations have already been explored.
Before committing to any design, we searched to confirm we weren't reinventing something that already exists. Novelty claims require evidence.
| Query | What Existed | What Was Missing | Verdict |
|---|---|---|---|
| "lead lag cross-correlation matrix visualization" | CCF correlograms (single pair), static correlation matrices | No chart showing all N×N pairs with timing simultaneously | Novel |
| "temporal fingerprint multidimensional radial chart" | ArcGIS temporal profile charts (GIS context), forensic fingerprints | No compact multi-scale radial glyph for business rhythms | Novel |
| "phase portrait sports standings momentum visualization" | Momentum bars, bump charts (rank over time), standings tracers | No phase-space scatter with trajectory tails for competitive data | Novel |
| "harmonic frequency portrait time series multiple entities" | Spectrograms, wavelet transforms, VizStruct Fourier projections | All single time-series; no readable glyph for non-technical audiences | Novel |
| "radial multi-ring business rhythm hourly daily fingerprint" | Forensic fingerprint papers (wrong domain entirely) | Confirms the visualization type doesn't exist | Novel |
For every chart that made it into the deck, several more were considered and discarded. Rejection is part of the design process, not a failure of it.
Synthetic data must be designed to demonstrate the visualization's power, not merely fill the space. Each dataset was engineered with specific narrative and analytical intent—grounded in real-world evidence.
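One way to engineer such a dataset is to plant a known ground truth for the chart to reveal, for instance a follower series that echoes a driver series a fixed number of months later. A hypothetical generator sketch (names and parameters are illustrative, not the original data pipeline):

```python
import numpy as np

def planted_lead_lag(n=120, lead=4, noise=0.3, seed=1):
    """Generate a driver series and a follower that echoes it `lead`
    periods later, so a lead-lag analysis has a known answer to find.
    The driver is a smooth random walk; noise blurs but does not
    destroy the planted relationship."""
    rng = np.random.default_rng(seed)
    full = np.cumsum(rng.normal(size=n + lead))
    follower = full[:-lead] + noise * rng.normal(size=n)
    driver = full[lead:]  # driver at time t equals follower at t + lead
    return driver, follower
```

Because the lag is planted rather than hoped for, a reviewer can verify that the visualization recovers it, which is the "narrative and analytical intent" the section describes.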
Every complex visualization involves debugging. Here is an honest record of the failures encountered during development, categorized by type.
After the first working version shipped, feedback was: "Some of the content is barely visible—mostly because of poor contrast, partly because of small font sizes." This was a systematic, not isolated, failure.
A structured retrospective on the full process: what worked, what didn't, root causes, and the process changes that would prevent recurrence.
Audit every `rgba(..., .X)` alpha value and ask: "readability-reducing or intentional fade?"
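That audit can be partially automated by scanning stylesheets for fractional alpha values below a chosen cutoff, leaving only the "intentional fade?" judgment to a human. A hypothetical helper sketch (the 0.5 threshold is an assumption, not a rule from the retrospective):

```python
import re

# Matches rgba(...) colors with a fractional alpha like .15 or 0.15.
RGBA = re.compile(r"rgba\([^)]*,\s*(0?\.\d+)\s*\)")

def low_alpha_hits(css, threshold=0.5):
    """Return (color literal, alpha) pairs whose alpha falls below the
    threshold, so each can be reviewed for readability impact."""
    return [(m.group(0), float(m.group(1)))
            for m in RGBA.finditer(css)
            if float(m.group(1)) < threshold]
```

Running it over the project's CSS turns a subjective contrast review into a finite checklist of specific color literals.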