TuneLab Team

Spotify API Changes: What's Deprecated, What Still Works, and What To Do

On November 27, 2024, Spotify restricted a large slice of its Web API — including the endpoints that every music-tech side project in the world depended on. This is the concrete list: what's gone, what still works, and what to replace each endpoint with. Dated and updatable.

TL;DR
Audio Features, Audio Analysis, Recommendations, Related Artists, 30-second previews, Featured Playlists, and Category Playlists are effectively dead for new apps. Search, track/album/artist lookups, playback control, and user playlist management still work. The only drop-in replacement for Audio Features + Audio Analysis is running real DSP on real audio — which is what TuneLab's /v1/compat/spotify/audio-features and /v1/analyze endpoints do.

What's deprecated

Per Spotify's own announcement at developer.spotify.com/blog/2024-11-27-changes-to-the-web-api, the following endpoints were restricted to legacy apps only. For any new client, they behave as deprecated:

- Audio Features (/v1/audio-features)
- Audio Analysis (/v1/audio-analysis)
- Recommendations (/v1/recommendations)
- Related Artists (/v1/artists/{id}/related-artists)
- 30-second preview URLs in track objects
- Featured Playlists (/v1/browse/featured-playlists)
- Category Playlists (/v1/browse/categories/{category_id}/playlists)

Note: the "restricted for new apps" framing is misleading in practice. If your app wasn't already in extended quota mode on November 27, 2024, these endpoints are gone. Spotify does not grant extended quota mode to new apps below a 250K-MAU threshold. Treat this as a removal, not a restriction.

What still works

The rest of the Web API is unchanged. You can still build with:

- Search (/v1/search)
- Track, album, and artist lookups
- Playback control (the Connect endpoints)
- User playlist management (create, add, reorder, remove)

What you can't do with what's left: get a track's tempo, key, mood, beat grid, structural segmentation, or any feature you'd need to build a DJ tool, harmonic mixing app, playlist recommender, or music analysis pipeline. The metadata still exists. The audio intelligence does not.

What to do

Each deprecated endpoint has a migration path. Some are drop-in. Some require real work. Here's the honest version.

Audio Features → drop-in shim

TuneLab's compatibility shim returns the exact Spotify schema — same field names, same types, same ranges. The values are computed from real DSP on the actual audio (not cached Echo Nest metadata), so they'll match Spotify's qualitatively but not byte-for-byte.

migration · audio features
// Before — deprecated
fetch("https://api.spotify.com/v1/audio-features/" + trackId, {
  headers: { Authorization: "Bearer " + spotifyToken }
});

// After — drop-in replacement
fetch("https://api.tunelab.dev/v1/compat/spotify/audio-features/" + trackId, {
  headers: { Authorization: "Bearer tl_live_xxx" }
});
// Returns: { id, tempo, key, mode, loudness, energy, danceability,
//            valence, acousticness, instrumentalness, speechiness, time_signature }

Full field mapping and a side-by-side diff live in the migration guide.

Audio Analysis → use /v1/analyze + /v1/beatgrid

Spotify's audio-analysis was a single monolithic blob containing beats, bars, sections, segments, tatums, and pitch/timbre vectors. TuneLab splits this into purpose-built endpoints:

- /v1/analyze: structural segmentation (sections and segment boundaries)
- /v1/beatgrid: the beat grid (beats and bars)

There is no 1:1 equivalent for Spotify's per-segment timbre/pitch vectors — those were a specific Echo Nest representation. If you relied on them for a research pipeline, you'll need to compute your own MFCC/chroma features from audio. TuneLab's /v1/analyze/upload endpoint accepts raw audio and returns structural segmentation, which covers most of what segments were used for in practice.
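For code that consumed the old monolithic blob, a small adapter can stitch the split responses back into the legacy shape. A minimal sketch, assuming /v1/analyze returns a sections array and /v1/beatgrid returns beats and bars arrays (these field names are assumptions, not the documented schema):

```javascript
// Adapter: rebuild the shape of Spotify's old audio-analysis object from
// TuneLab's split endpoints. Input field names are assumptions.
function toLegacyAnalysis(analyzeRes, beatgridRes) {
  return {
    sections: analyzeRes.sections,  // structural segmentation
    beats: beatgridRes.beats,       // e.g. [{ start, duration }, ...]
    bars: beatgridRes.bars,
    segments: [],                   // no equivalent for per-segment timbre/pitch
    tatums: []                      // no equivalent
  };
}

// Fetch both endpoints in parallel and merge ("tl_live_xxx" is a placeholder key).
async function legacyAnalysis(trackId, apiKey) {
  const headers = { Authorization: "Bearer " + apiKey };
  const [analyze, beatgrid] = await Promise.all([
    fetch("https://api.tunelab.dev/v1/analyze/" + trackId, { headers }).then(r => r.json()),
    fetch("https://api.tunelab.dev/v1/beatgrid/" + trackId, { headers }).then(r => r.json())
  ]);
  return toLegacyAnalysis(analyze, beatgrid);
}
```

Downstream code that only read sections, beats, and bars keeps working; anything that read segments needs the MFCC/chroma route above.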

Recommendations → rebuild with /v1/similar

This one doesn't have a drop-in. Spotify's /v1/recommendations took seed tracks + tunable attribute targets (target_energy, target_tempo, etc.) and returned a ranked list. TuneLab exposes similarity via embeddings:

- /v1/similar: nearest-neighbor tracks for a seed, ranked by embedding distance

You'll need to write the scoring logic yourself — weighting similarity against energy/mood targets, enforcing artist diversity, and so on. The advantage is that you now control the recommender instead of being a customer of Spotify's black box. The disadvantage is that you now have to write a recommender.
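The scoring itself is only a few lines. A sketch, assuming each candidate from /v1/similar carries a similarity score plus per-track energy and tempo (the field names, weights, and the rerank helper are illustrative, not TuneLab's schema):

```javascript
// Re-rank similarity candidates against target attributes, then enforce
// artist diversity. The weights are arbitrary starting points, not tuned values.
function rerank(candidates, { targetEnergy, targetTempo, perArtistCap = 2 }) {
  const scored = candidates.map(c => ({
    ...c,
    score:
      0.6 * c.similarity                               // embedding similarity, 0..1
      - 0.3 * Math.abs(c.energy - targetEnergy)        // penalize energy distance
      - 0.1 * (Math.abs(c.tempo - targetTempo) / 100)  // mild tempo penalty (BPM scale)
  }));
  scored.sort((a, b) => b.score - a.score);

  // At most perArtistCap tracks per artist.
  const seen = new Map();
  return scored.filter(c => {
    const n = seen.get(c.artist) ?? 0;
    if (n >= perArtistCap) return false;
    seen.set(c.artist, n + 1);
    return true;
  });
}
```

Swap the weights and penalties for whatever your product actually needs; that flexibility is the point of owning the recommender.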

Related Artists → no direct replacement (yet)

TuneLab doesn't expose an artist-similarity graph today. The closest workaround is to resolve an artist's top tracks, fetch embeddings, average them, and run a similarity search against other artists' averaged embeddings. It works, but it takes a few API calls and some client-side math. A /v1/artist/related endpoint is on the roadmap — see the changelog.
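The client-side math is straightforward. A sketch, assuming you have already fetched per-track embedding vectors (the relatedArtists helper and the input shapes are hypothetical):

```javascript
// Average several track embeddings into one artist-level vector.
function meanVector(vectors) {
  const dim = vectors[0].length;
  const mean = new Array(dim).fill(0);
  for (const v of vectors)
    for (let i = 0; i < dim; i++) mean[i] += v[i] / vectors.length;
  return mean;
}

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank candidate artists ({ name, embeddings: [vector, ...] }) against a seed.
function relatedArtists(seedEmbeddings, candidates) {
  const seed = meanVector(seedEmbeddings);
  return candidates
    .map(c => ({ name: c.name, similarity: cosine(seed, meanVector(c.embeddings)) }))
    .sort((a, b) => b.similarity - a.similarity);
}
```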

30-second previews → iTunes Search API

For preview URLs, the pragmatic workaround is Apple's iTunes Search API, which is free, unauthenticated, and returns 30-second preview URLs for almost every commercial track. Match by ISRC via /v1/resolve/{id} first to get a reliable cross-catalog link.
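The lookup is two small steps. A sketch: previewUrl is the field Apple's Search API actually returns, while previewForIsrc and the minimal error handling here are illustrative:

```javascript
// Build an iTunes lookup URL for an ISRC. No API key or auth is required.
function lookupUrl(isrc) {
  return "https://itunes.apple.com/lookup?isrc=" + encodeURIComponent(isrc);
}

// Pick the first result that actually carries a 30-second preview; a lookup
// can return multiple releases of the same recording, not all with previews.
function pickPreview(lookupResponse) {
  const hit = (lookupResponse.results || []).find(r => r.previewUrl);
  return hit ? hit.previewUrl : null;
}

async function previewForIsrc(isrc) {
  const res = await fetch(lookupUrl(isrc));
  return pickPreview(await res.json());
}
```

Cache the results aggressively: Apple documents a rate limit of roughly 20 Search API calls per minute.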

Featured / Category Playlists → no replacement

There is no public API for Spotify's editorial curation anymore, and no third-party service legitimately replicates it. If your app surfaced editorial playlists, the honest answer is: curate your own, or surface user-created playlists via search. TuneLab does not try to solve this — it's a content licensing problem, not a DSP problem.

The broader point

The Echo Nest pipeline was frozen in 2014. Any track added to Spotify after that point had features computed by a decade-old DSP stack that was never updated. The deprecation is a loss, but it's also the right moment to stop treating a single vendor's cached metadata as ground truth. Run DSP on real audio. Version your pipeline. Publish your changelog. That's the entire TuneLab thesis.

If you want the drop-in path, the migration guide has copy-pasteable code. If you want to understand how the replacement is built, the technology page documents every DSP model in production.


Last updated 2026-04-09