Technology · Open methodology

The DSP, unredacted.

TuneLab publishes its methodology because real DSP holds up to scrutiny. Execution is 99% of the battle — revealing our techniques doesn’t give competitors our data, our training corpus, or our production infrastructure. It gives developers what they actually need: confidence the numbers aren’t scraped from a black box.

Deep Dives

Five ways we turn audio into data.

Every feature below is computed from raw audio by a published DSP method. No Spotify scraping. No cached metadata. No hidden heuristics.
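To make "computed from raw audio" concrete, here is a minimal tempo-estimation sketch using autocorrelation of an onset-strength envelope. This is an illustrative toy, not TuneLab's published method (the production BPM tracker is the BiLSTM ensemble benchmarked below); the function name, frame rate, and synthetic envelope are ours.

```python
# Toy tempo estimator (illustration only, not TuneLab's pipeline):
# pick the beat period whose lag maximizes the onset envelope's
# autocorrelation, then convert that period to BPM.

def autocorr_tempo(envelope, frame_rate, bpm_min=60.0, bpm_max=200.0):
    """Return the BPM whose period best matches the envelope's self-similarity."""
    n = len(envelope)
    mean = sum(envelope) / n
    centered = [x - mean for x in envelope]
    lag_min = int(frame_rate * 60.0 / bpm_max)  # shortest period to test
    lag_max = int(frame_rate * 60.0 / bpm_min)  # longest period to test
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        score = sum(centered[i] * centered[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return 60.0 * frame_rate / best_lag

# Synthetic onset envelope: one impulse every 0.5 s at 100 frames/s = 120 BPM.
frame_rate = 100
envelope = [1.0 if i % 50 == 0 else 0.0 for i in range(1000)]
print(round(autocorr_tempo(envelope, frame_rate)))  # → 120
```

Real trackers work on messier envelopes (smoothed spectral flux, tempo priors, octave-error handling), which is where the learned models in the table earn their keep.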

Accuracy first

Benchmarks we’re proud of.

| Task | Model | Score | Dataset |
| --- | --- | --- | --- |
| BPM (±2% tolerance) | BiLSTM ensemble | 94.8% | GTZAN + Ballroom |
| Key (exact match) | KeyNet CNN | 82.1% | GiantSteps-MTG |
| Mode (major/minor) | KeyNet CNN | 91.3% | GiantSteps-MTG |
| Beat tracking (F1, 50 ms) | BiLSTM + Viterbi | 0.89 | SMC + Ballroom |
| Energy regression (Pearson r) | MAEST + MLP | 0.81 | Internal 10K holdout |
| Danceability regression (Pearson r) | MAEST + MLP | 0.78 | Internal 10K holdout |
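For readers who want to reproduce comparisons, the two scoring rules used above are simple to state in code. This is a sketch of the standard definitions, assuming per-track BPM predictions and references; the function names and example values are ours, not TuneLab data.

```python
# Metric definitions matching the table (example numbers are made up):
# a BPM prediction counts as correct within ±2% of the reference tempo;
# energy/danceability report the Pearson correlation coefficient r.
import math

def bpm_accuracy(pred, ref, tol=0.02):
    """Fraction of tracks whose predicted BPM is within tol of the reference."""
    hits = sum(abs(p - r) <= tol * r for p, r in zip(pred, ref))
    return hits / len(ref)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

preds = [120.0, 87.5, 174.0, 95.0]
refs  = [120.5, 88.0, 170.0, 95.5]
print(bpm_accuracy(preds, refs))  # → 0.75 (174 vs 170 misses the ±2% window)
```

Note that the ±2% window is relative to the reference tempo, so faster tracks get a proportionally wider tolerance.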

Benchmarks are reported on standard academic datasets so you can compare us directly to published papers. Numbers degrade gracefully on edge genres (classical, ambient, atonal) — see each deep-dive’s "Known limitations" section for specifics.