Hacker News | ewoodrich's comments

Every link in the "Legal" tree is a dead end redirecting back to the home page... a strange thing to put together without any acknowledgement, unless they're spamming it on LLM-adjacent subreddits for clout/karma?

I use OpenCode as my main CLI tool at this point, falling back to Claude Code and Codex as needed. It's really solid these days, highly recommend.

I use Claude Sonnet 4.5, Gemini 3 Pro Preview, and GPT 5/5-mini with great results on OC. I initially tried it so I could decouple from VS Code extensions while still using my GitHub Copilot plan like I had been with Roo Code/Kilo Code, but have since branched out to also using it with the Claude Code backend and their free models as they come and go.

Definitely worth trying if you haven't picked it up recently.


They're being sued at this very moment over an Apple Intelligence gimmick in an ad campaign that turned out to be vaporware!

https://www.axios.com/2025/03/20/apple-suit-false-advertisin...


Apple could have avoided that by releasing it half-arsed like all the other AI stuff, claiming it does all those things, and writing somewhere "AI may make mistakes".

I wouldn't describe most of Claude's predictions as better, they seem to skew towards wildly over optimistic futurism/science-fantasy:

  "SpaceX announces Mars City Alpha is now self-sustaining (spacex.com)"

  "Show HN: I built an IDE for direct neural programming (thoughtexchange.io)"

Gemini's SpaceX post is at least in the ballpark of plausibility 10 yrs from now:

  First successful telemetry from Starship HLS-9 on the Sea of Tranquility (spacex.com)

OK, they're more realistic, then. It seems to have made an actual attempt to be accurate, whereas Gemini chose satire and was surprisingly good at it.

At least in my case, the "pinpoint accuracy" of that roast made for a pretty uninspired result; it seemed to be based on four or five specific comments seemingly chosen at random.

Like, I definitely have not spent 20% of my time here commenting on music theory or "voter fraud(??)" (that one seems to be based on a single thread I responded to a decade ago). ChromeOS was really the only topic out of five it got right; if the roasting had revolved around that it would have been a lot more apt/funny. Maybe it works better with an account that isn't as old as mine?

I find the front page parody much better done. Gemini 2.5 roasts were a fad on r/homeassistant for a while and they just never really appealed to me personally; they felt more like hyper-specificity as a substitute for well-executed comedy. Plus, after the first few examples you pick up on the repetition/go-to joke structures it cycles through, and it quickly starts to get old.


Exactly, it fixates on a handful of comments apparently chosen at random.

Fully agreed; this aligns perfectly with my experience hitting the 95%-done wall on a solo contract project recently. I still do the majority of the work using agentic tools, but the multiplier-effect feeling evaporated at a certain point as the accumulated tech debt, complexity, and scope creep enabled by how "easy" features felt with Claude Code/Codex early on finally caught up with me.

With the benefit of hindsight I (probably) still would have used CC heavily, but treating every seemingly "trivial" feature CC adds in the greenfield stage as radioactive tech debt as the pile grows over time, until CC becomes unable to comprehend its own work and I have to plan out tedious large-scale refactors to get the codebase into a state approaching long-term maintainability.


It’s always tempting to start writing code before you really know what you’re going to build because it’s so satisfying and exciting to see an idea take shape. I know I’ve had more than one or two projects where I started writing before I understood the shape of the problem I was solving and ended up a few hours into the project with a useless pile of stupid. It seems like LLMs can lead you much further down that road because it just seems so magically productive.

Android on an e-reader unlocks so much potential. I've owned four or five Kindles over the years but recently switched to a 7" Onyx Boox Page as my main e-reader. It's expensive (relative to Kindles) but runs full Android 11 and has physical page-turn buttons. I use an app called BookFusion to sync my library, including reading position, across all platforms. Battery life isn't Kindle-grade, but I can get by charging once a week, which is a good enough tradeoff for the convenience of being able to run Android apps.

Anthropic's incessant cuts to CC rate limits/quota on the Claude Pro plan have nearly pushed me to cancel.

If anything, they're far ahead of Google on the enshittification schedule (Google still gives out API keys for free Gemini usage and a free tier on Gemini CLI, although the CLI is still pretty shaky; that's a different issue, though).

It also doesn't help that CC will stop working literally in the middle of a task with zero heads-up, or at best I get the 90% warning and then, about two messages and thirty seconds later, it stops working claiming I hit 100% during the same task. I'm truly baffled by how they've managed to make the warnings as useless and aggravating as possible; CC routinely shuts down while the repo is in a broken state, so I have to ask Codex to read the logs and piece things back together to continue working.


You make it sound like Google is giving out free usage out of the goodness of their hearts.

Am I? I'm just comparing the relative degree of enshittification; implicit in that is that nothing will last forever, Gemini freebies included, once they get their fill of training data. But I was surprised to see Anthropic used as an example of something that hasn't enshittified, considering how in less than six months my Claude plan went from fantastic value to constant rate limiting.

As with everything Google: if it's free, you're the product.

I went from:

  1. using Claude Code exclusively (back when it really was on another level from the competition) to

  2. switching back and forth with CC using the Z.ai GLM 4.6 backend (very close to a drop-in replacement these days) after Anthropic massively cut the quota on the Claude Pro plan, to

  3. now primarily using OpenCode with the Claude Code backend, or the Sonnet 4.5 GitHub Copilot backend, or the Z.ai GLM 4.6 backend (in that order of priority)

OpenCode is so much faster than CC even when using Claude Sonnet as the model (at least on the cheap Claude Pro plan; can't speak for Max). But it can't be entirely due to the Claude plan rate limiting, because it's way faster than CC even when using Claude Code itself as the backend in OC.

I became so ridiculously sick of waiting around for CC just to, like, move a text field or something; it was like watching paint dry. OpenCode isn't perfect, but it's very close these days and, as previously stated, crazy fast in comparison to CC.

Now that I'm no longer afraid of losing the unique value proposition of CC, my brand loyalty to Anthropic is incredibly tenuous; if they cut rate limits or degrade my experience in the slightest way again, it will be an insta-cancel.

So the market situation is much different than in the early days of CC as a cutting-edge novel tool, and relying on that first-mover status forever is increasingly untenable in my opinion. The competition has had a long time to catch up, and both proprietary options like Codex and model-agnostic FOSS tools are in a very strong position now (except Gemini CLI, which is still frustrating to use as much as I wish it weren't; hopefully Google will fix the weird looping and other bugs eventually, because I really do like Gemini 3 and already pay for it via the AI Pro plan).


You've convinced me to give OpenCode a try!

I largely agree with this advice, but in practice, using Claude Code / Codex 4+ hours a day, it's not always that simple. I have a .NET/React/Vite webapp that, despite the typical stack, has a lot of very specific business logic for a real-world niche (plus some poor early architectural decisions that are being gradually refactored with well-documented rules).

I frequently see both agents make wrong assumptions that inevitably take multiple turns of failure before they recognize the correct solution.

There can be something like a magnetic pull where, no matter how you craft the initial instructions, they will both independently have a (wrong) epiphany and ignore half of the requirements during implementation. It takes messing up once or twice for them to accept that their deep intuition from training data is wrong and pivot. In those cases I find it takes less time to let that process play out than to recraft the perfect one-shot prompt over and over. Of course, once we've moved on to a different problem, I dump that context ASAP.

(However, what is cool about working with LLMs, to counterbalance the petty frustrations that sometimes make it feel like a slog, is that they have extremely high familiarity with the jargon/conventions of that niche. I was expecting to have to explain a lot of the weird, too-clever-by-half abbreviations in the legacy VBA code from 2004 it has to integrate with, but it pretty much picks up on every little detail without explanation. It's always a fun reminder that they were created to be super translators, even within the same language: from jargon -> business logic -> code that kinda works.)

