Hacker News | vlaaad's comments

Unrelated, but I just can't stop myself from saying that I absolutely hate Spotify even though I'm a paying customer. Fuck you, Spotify. You were supposed to be a convenient way to discover and listen to music. Now you are only convenient for listening to music, and absolutely terrible for any recommendations. This is sad, really. Spotify had good recommendations. It's absolutely in a position to provide good recommendations — it has both a vast music library and a vast amount of data on user preferences. And it chooses to push procedural/AI-generated slop instead to earn more money. I thought that maybe buying $SPOT stock would make me more at peace with its greed, but it didn't work. Spotify fucking deserves to crash and burn because it sees paying customers as idiots who might not notice they're being fed garbage. Fuck you, Spotify, fuck you.


I always find these takes curious because they could not be further from my experience. I'm still discovering tons of good music. Perhaps it's specific to genres, but I haven't encountered any generated junk tracks.


Since relatively recently I've been getting AI music in my automatic radio. The tracks look/sound like soulless facsimiles of the real thing.


It depends on the algorithm, which often prefers "similarity" (for whatever definition of similarity it uses).

This year I got into some pretty generic blues/rock while driving and really liked one of the songs in some playlist/radio [1]. Little did I know that the song was AI. So when I started a radio based on that song, the resulting radio was 99% AI, though I didn't even realise it until the second or third listen-through.

So you can really fall down the rabbit hole.

[1] He Talked A Big Game, Played A Small Tune by Dumpster Grooves. A better song than most human slop that sells stadiums. https://youtu.be/L3Uyfnp-jag?si=mPBgJ_qO2AF80FGP


Wow, that channel is misrepresenting its songs as lost records. Pure cancer.


Yup.

There's obvious care and human creation there as well.

They have several "artists". Bertha Mae Lightning gets the better lyrics and arrangements. Virgil Dillard gets simpler tunes and the occasional weird grammar/lyrics. And so on.

I even saved that radio as a playlist to show people: https://open.spotify.com/playlist/072Wp3cFsziKBQlnglF5XM?si=... It has both obvious AI ("artist" by the name of promptgenix) and not-so-obvious (Enlly Blue, Dumpster Grooves).

The weird/sad/funny/ironic part? Many of those songs are still better than whatever Taylor Swift and a lot of other artists produce.


Really? How about asking Google to "play Bloomberg news on Spotify" next time. Then see if you can remove the resulting chaos from your history so it won't start feeding you slop.


When they launched the Discover Weekly thing, I used to add at least one track from it to my library - it was insanely good. Now it's all junk - not even close to what I listen to.

They also removed a lot of discovery features - Playlist Radio, for example. They still have some version of it on the backend, but you have to go through some weird mechanism to trigger it - play the last song in a playlist, wait till it ends (or rewind), and you get the playlist radio. But it's a crippled version - it prefers playing the exact same popular songs for some reason.

Then they released this DJ thing, which is laughably bad. No, Spotify, I don't want someone talking to me with useless information between songs. Who thought that was a good idea? Who actually uses that?

There hasn't been a change in Spotify in the last 7 years or so that wasn't negative.


YouTube Music works pretty well for me. One great feature is that it includes not just a commercial music streaming catalog, but all user uploads of music on YouTube.


I had to chuck YouTube Music away when it was polluting my YouTube playlists with stuff I was liking on YouTube Music. Me as a video viewer and me as a music listener are two completely different people.


And you can upload 100,000 of your own tracks to the service for your private use as well. It is a great service considering I get it as a side effect of YouTube Premium. It's hands down the last subscription I would cancel.


This is more common than you would assume. I've subscribed to neither Apple Music nor Spotify for this exact reason: I'm a millennial who would like to discover music.

Another extremely annoying effect is that, being 40+, they only suggest music for my age. In "New" and "Trending", I see Muse and Coldplay! I should make myself a fake ID just to discover new music, but that gets creepy very fast.


Why do you want a megacorp to tell you what to listen to?! There are a million ways to do discovery where some enshittified corp isn't incentivized to push something at you.


I think perhaps the assumption of the OP (mine certainly was, in the early days) was that "discovery" on Spotify would involve human tastemakers and some kind of dynamic aggregation of peer tastes that could lead to organic discovery of new music, no matter how niche or obscure.

As opposed to what it has now devolved into: the most basic of similarity matching always showing you the same few hundred songs, combined with increasingly numerous paid placements.


Why haven't you unsubscribed then?


Saying "other single points of failure" makes no sense whatsoever.


Cloudflare and us-east-1 are single points of failure for millions of their customers, many of which overlap.

"Single" means "you only need one," not that there is only one.


Does this person also identify performance issues by reading the code? This is completely impractical.


You totally can identify performance issues by reading code: e.g. spotting accidentally-quadratic loops, failing to reserve vectors, or accidental copies in C++. Or, in more amateur code (not mine!), using strings to do things that can be done without them (e.g. rounding numbers; yes, people do that).

It's a lot easier and better to use profiling in general, but that doesn't mean I never read code and think "hmm, that's going to be slow".
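To illustrate the "accidentally-quadratic" and "failing to reserve" points, here's a minimal C++ sketch (hypothetical helper names, not from any real codebase) where the problem is visible purely by inspection:

```cpp
#include <string>
#include <vector>

// Quadratic by inspection: `out = out + p` copies the whole accumulated
// string on every iteration, so joining n parts does O(n^2) work even
// though the result is only n characters long.
std::string join_slow(const std::vector<std::string>& parts) {
    std::string out;
    for (const auto& p : parts)
        out = out + p;  // full copy of `out` each pass
    return out;
}

// Linear: `+=` appends in place, and reserve() avoids repeated
// reallocation as the string grows.
std::string join_fast(const std::vector<std::string>& parts) {
    std::size_t total = 0;
    for (const auto& p : parts) total += p.size();
    std::string out;
    out.reserve(total);
    for (const auto& p : parts)
        out += p;
    return out;
}
```

Both functions produce identical output; the difference only shows up as the input grows, which is exactly the kind of thing a read-through can flag before a profiler confirms it.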


Ok, I'll bite. How do you identify that a performance uplift in part of the code will kill the performance of the overall app? Or won't have any observable effect?

I'm not saying you can't spot naive performance pitfalls. But how do you spot cache misses by reading the code?


For example, if someone uses a linked list where a vector would have worked. Vectors are much faster, partly due to better spatial locality.
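A rough sketch of that point, using nothing beyond the standard library: both traversals are O(n) and give identical results, but the vector walks contiguous memory while the list chases pointers to separately allocated nodes, and the latter is what shows up as cache misses.

```cpp
#include <list>
#include <numeric>
#include <vector>

// Identical logic, identical result; the difference is purely memory
// layout. The vector's elements are contiguous, so the hardware
// prefetcher can stream them; each list node is a separate heap
// allocation, so every step is a potential cache miss.
long sum_vector(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0L);
}

long sum_list(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0L);
}
```

So you can't literally see a cache miss in the source, but you can see a data-structure choice that reliably produces them.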


Ok (that's a naive performance problem), and you speed that up, but now a shared resource is used mutably more often, leading to frequent locking and more overall pauses. How would you read that from your code?


Practitioners of this approach to performance optimization often waste huge swaths of their colleagues' time and attention with pointless arguments about theoretical performance optimizations. It's much better to have a measurement first policy. "Hmm that might be slow" is a good signal that you should measure how fast it is, and nothing more.


Once your code is optimized so that manual mental/notepad execution is fast enough, it will crush it on any modern processor.


> Does this person also identify performance issues by reading the code? This is completely impractical.

This sounds like every technical job interview.


Have you compared performance of your solution to rive?


No. I'm hoping someone else will do that ;-)


I love this idea; I hope it gains traction. One thing that is not clear to me is file search vs unsaved files. It's common for agents to use, e.g., ripgrep to search the file system. But if the communication protocol includes read/write access to unsaved files, there is a desync in terms of accuracy: rg can't search unsaved files.


Yes, though I also liked the overall point of the post. Spotify recommendations are shit, and any playlist made by Spotify is full of garbage. I guess Ek knows this and, perhaps unconsciously, sees himself as evil, doing what's legal no matter how bad it is, with his weapon investments...


Not true: structured outputs enforce output formats with 100% reliability. E.g., https://platform.openai.com/docs/guides/structured-outputs says "Structured Outputs is a feature that ensures the model will always generate responses that adhere to your supplied JSON Schema, so you don't need to worry about the model omitting a required key, or hallucinating an invalid enum value"


Sure, but the need for accuracy will only increase; there is a difference between suggesting that an LLM put a schema in its context before calling the tool versus forcing the LLM to use a structured output returned from a tool dynamically.

We already have 100% reliable structured outputs if we are making chatbots with LLM integrations directly; I don't want to lose this.


And LLMs will get more accurate. What happens when the LLM uses the wrong parameters? If it's an immediate error, then it will just try again; no need for protocol changes, just better LLMs.


The difference between 99% reliability and 100% reliability is huge in this case.


I misunderstood the problem then, I thought it would take only a few seconds for the LLM to issue the call, see the error, fix the call.


Last time I used Gemini CLI it still couldn’t consistently edit a file. That was just a few weeks ago. In fact, it would go into a loop attempting the same edit, burning through many thousands of tokens and calls in the process, re-reading the file, attempting the same edit, rinse, repeat until I stopped it.

I didn’t find it entertaining.


Big waste of context


I was considering making an MCP SEP (specification enhancement proposal) — https://modelcontextprotocol.io/community/sep-guidelines, though I'm curious if other MCP tinkerers feel the issue exists, should be solved like that, etc. What do you think?


Not sure; the schema of the Props argument depends on the value — not type — of another argument, so it's not just generics.

