Hacker News | hysan's comments

Oh, so I’m not imagining this. Recently, I’ve been trying to increase my LLM usage to learn the tooling better. However, I’ve seen this happen frequently enough that I’m just utterly frustrated with LLMs. Guess I should use Claude less and others more.

Does it? It only takes like 2 min for my electric kettle to boil. If I was a more avid coffee/tea drinker, I’d get one of the always heated hot water dispensers that are common in Japanese households (def one of the appliances I miss since moving back to the US). Then you never have to wait.

Dunno how long this is normally supposed to go, but it took me 10+ minutes of seriously considering the fonts at each choice, and the final suggestion was a font that I actively dislike. I’m curious how it’s narrowing things down, because I noticed that it started giving me only serif fonts, which I don’t like. The sans serif fonts it was using to narrow things down had distinct characteristics I didn’t like, such as very narrow stems or very narrow or wide characters, but the serif fonts didn’t have anything like that. I’m guessing it began to think I preferred serifs because of that, but in reality, most of the time I was just picking the lesser of two evils.

It's just a tournament; the winner goes on to the next round. Play it fast, all the way to the end, and you'll see how it works. It's some 16 or 32 rounds in total.

Yes, at least in my experience on flights in the USA. It’s very rare but it does happen. I was lucky one time that the person doing it sat next to me and I politely asked them to use headphones and no fuss was had.


This is quite wrong? There are features that are blocked from being implemented because Wayland has refused to define a protocol for everyone to implement. Window positioning is a recent example of how progress can be blocked for many years by Wayland.


Fi’s customer service has long since turned to shit, but the things keeping me on it are the data sims, simple international roaming, and international calling. That trifecta is pretty hard to find a match for. Especially the data sims. But if you don’t need that, I probably wouldn’t recommend Fi. My wife had endless trouble with multiple bad sim cards and the customer service experience was just as dreadful as every other carrier.


(Don’t take this as advice. Just writing my own experience with this.)

This is the reason why I take the time to summarize all “why” decisions and implementation tradeoffs being made in my (too lengthy) PR descriptions with links, etc. I’ve gotten into the habit of using `<details>` to collapse everything because I’ve gotten feedback multiple times that no one reads my walls of text. However, I still write it (with short `<summary>`s now) because I’ve lost track of the number of times I’ve been able to search my PRs and quickly answer my own or someone else’s “why” question. I do it mostly for me because I find it invaluable, as I prefer writing shit down instead of relying on my flaky memory. People are forgetful and people come and go. What doesn’t disappear is documentation tied to code commits (well… unless you nuke your repo).
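For anyone unfamiliar with the trick: GitHub renders the standard HTML `<details>` and `<summary>` elements inside markdown, so a long PR description can be collapsed behind a one-line summary. A minimal sketch (the summary text and body here are hypothetical):

```markdown
<details>
<summary>Why we picked approach A over B (tradeoffs)</summary>

Longer explanation of the decision, with links to the
relevant issues, benchmarks, and design discussion.
Searchable later, but collapsed by default for reviewers.

</details>
```

Note that GitHub requires a blank line after `<summary>…</summary>` for the markdown inside the body to render.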


IMO, the spirit of the idea is to put higher information density fields first, and let that smooth out the UX for the remaining fields as you go downwards. Yes, there will be exceptions, but that only matters if you’re trying to absolve the user of all work in 100% of situations. Trying to do that is a fool’s errand. Invert the order and use the information gathered to make inputting the rest simpler for 80% of the users. Then make it easy for the other 20% to course correct (ex: don’t disable autofilled fields, highlight all text when tabbing to the next field, etc.). I think this pattern is a good one to keep in mind, but not blindly follow, when designing the UX of a UI.


While I commend Ars and the author for taking responsibility, I am a bit put off by the wording used for the retraction on the original article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje...

> Following additional review, Ars has determined that the story “After a routine code rejection, an AI agent published a hit piece on someone by name,” did not meet our standards. Ars Technica has retracted this article. Originally published on Feb 13, 2026 at 2:40PM EST and removed on Feb 13, 2026 at 4:22PM EST.

Rather than saying “did not meet our standards,” I’d much prefer that they stated what was false: that they published false, AI-generated quotes. Anyone who previously read the article (realistically, the only people who would return to it) and might want to go back to it as a reference won’t have the falsehoods they read corrected.


Another fascinating thing that the Reddit thread discussing the original PR pointed out is that whoever owns that AI account opened another PR (same commits) and later posted this comment: https://github.com/matplotlib/matplotlib/pull/31138#issuecom...

> Original PR from #31132 but now with 100% more meat. Do you need me to upload a birth certificate to prove that I'm human?

It’s a bit wild to me that people are siding with the AI agent / whoever is commanding it. Combined with the LLM-hallucinated reporting and all the discussion this has spawned, I think this is shaping up to be a great case study on the social impact of LLM tooling.

