novoreorx's comments | Hacker News

https://django-ninja.dev/ should be mentioned in this discussion; it's the best thing to happen in my 12-year Django career.
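
For anyone who hasn't tried it, here's a minimal sketch of what an endpoint looks like (the schema and field names are just illustrative, not from a real project):

    # api.py: a minimal django-ninja endpoint with typed path params and a response schema.
    # The ItemOut schema is hypothetical; wire it up via path("api/", api.urls) in urls.py.
    from ninja import NinjaAPI, Schema

    api = NinjaAPI()

    class ItemOut(Schema):
        id: int
        name: str

    @api.get("/items/{item_id}", response=ItemOut)
    def get_item(request, item_id: int):
        # In a real app this would query a Django model; returning a dict for brevity
        return {"id": item_id, "name": "example"}

You get request parsing, validation, and OpenAPI docs from the type hints alone, which is most of the appeal.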

This article reminds me of the days before LLMs ruled the world, when the word "agent" was most commonly used in DevOps, referring to the program that ran on a remote machine to execute dispatched jobs or send metrics. Now I wonder how many developers would look at "agent" and think of that meaning.

If I were the person at Apple in charge of such matters, I would ignore this case, just as I would for any other regular person. Everyone should be equally uncared-for by Apple. That's how Apple can suck in a way I can accept while still using their products.

Agreed.

If the only way to get your digital property back is a public plea to your Lord, that's called feudalism. Everyone should be treated fairly, not only those who can get their public pleas heard.


You just made it clear to me why I didn't resonate with that article and felt a bit uncomfortable reading it, despite thinking I should. What I want to see is something straight like "fuck you Apple", not begging and an emphasis on how much the author has contributed to the megacorp.

"fuck you Apple" is not a correct response either. Bad Apple, good Apple, is just more of the same. Asking Lords to be benevolent is not what we should want.

Just like a landlord can't simply lock you out of your house, with all your property inside, but has to go through a legal process, we need legislation and regulation for the same with digital property.


Feudalism never left. The only change is that the majority of the serfs don't work the land anymore, and we have the freedom to switch lords easily.

Seems to be another great way to build local-first applications, which makes me think of CRDTs and raises this silly question: what's the relationship between Durable Streams and CRDTs? Are they replacements for one another, or can they work well together?

They primarily serve different purposes, but they could complement each other.

Durable Streams are a lightweight network protocol on top of standard HTTP. When you are building a synchronisation layer for, let's say, a local-first app, you not only need to exchange data over some lower-level protocol (e.g. HTTP / SSE / WS), but you also have to define a higher-level protocol for how the client & server are going to communicate, i.e. how to resume data fetching once the client reconnects, based on the last data the client received (~offset). Since the reconnect & offset should be handled automatically by the Durable Stream, you can just build your domain logic on top of it.
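
To make the "reconnect & offset" point concrete, here's a rough client-side sketch of that pattern against a hypothetical HTTP endpoint (the parameter names and event format are made up, not the actual Durable Streams spec):

    # Hypothetical resume-from-offset loop; a Durable Stream client would handle this for you.
    import time
    import requests

    def follow_stream(url, last_offset=0):
        while True:
            try:
                # Only ask for events after the last offset we have already processed
                resp = requests.get(url, params={"offset": last_offset},
                                    stream=True, timeout=30)
                for line in resp.iter_lines():
                    if not line:
                        continue
                    handle_event(line.decode())
                    last_offset += 1  # or parse an offset field out of the event itself
            except requests.RequestException:
                time.sleep(1)  # reconnect and resume from last_offset, not from zero

    def handle_event(event):
        print("got event:", event)  # your domain logic goes here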

CRDTs are primarily meant to resolve data conflicts, usually client-side, based on a defined conflict resolution strategy (e.g. last-writer-wins). Some CRDT libraries, like automerge, loro, or yjs, also implement a networking layer to exchange data between nodes (it could even be P2P), meaning they already have a built-in mechanism for reconnection and offsets (~"send me data since X"). However, nobody forces you to use their networking layer, so with Durable Streams you would have a good starting point to build your own.
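
To illustrate the "CRDTs are about conflict resolution, not transport" point, here's a toy last-writer-wins register (a textbook construction, not how automerge/loro/yjs work internally):

    # Toy state-based LWW register: the state is (timestamp, writer_id, value),
    # and merging keeps whichever write is newer. Transport is deliberately out of scope.
    import time

    class LWWRegister:
        def __init__(self, replica_id):
            self.replica_id = replica_id     # identity of this replica
            self.state = (0.0, "", None)     # (timestamp, writer_id, value)

        def set(self, value):
            self.state = (time.time(), self.replica_id, value)

        def merge(self, other_state):
            # Ties on timestamp are broken by writer_id, so all replicas converge
            if other_state[:2] > self.state[:2]:
                self.state = other_state

        @property
        def value(self):
            return self.state[2]

Two replicas that set different values and then exchange their states over any channel (a Durable Stream, a WebSocket, sneakernet) end up with the same value, which is exactly why the transport can be swapped out.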


Great answer! I was always confused about how CRDTs were transferred. Like you said, existing implementations often come with their own in-house networking solutions. Now it's totally clear: since CRDTs are only about data, it's no wonder their transfer methods differ. That makes Durable Streams a very good companion for CRDTs; the boundaries are clear, and they complement each other perfectly.

I also feel that I could give Durable Streams' protocol spec to a coding agent, and it could produce an implementation best suited to my current project (say, a Go repo). The simple yet sophisticated spec is more valuable than a bunch of SDKs.


Considering the release of GPT-5.2, this article is worth discussing alongside it, as it managed to achieve the same high score as GPT-5.2 using Gemini 3 Pro.

Am I crazy to think these models have actually surpassed human performance on ARC 2? https://www.lesswrong.com/posts/DX3EmhmwZjTYp9PBf/ai-perform...

This is not surprising; rather, it's the 100% figure that makes me skeptical. In fact, the intelligence level of ordinary people isn't that high, and AI can indeed surpass it. Otherwise, why would we use it?

A perfect simulation of a ring that would appear in an RPG: when the duration goes to 0, you permanently lose it.

I do feel that GitHub's product development has been less exciting in recent years, but that's natural for any maturing platform. While I can't judge whether there are fewer talented people involved, I haven't noticed them making more mistakes, and the platform continues to grow. It would be unfair to overlook the hard work that goes into maintaining GitHub and shipping new features (even if some of those features aren't to everyone's taste). I'm grateful for GitHub and hope it continues to thrive. Peace.


Seeing systems used in the most advanced areas of human civilization never fails to amaze me. They were created half a century ago yet are still functioning flawlessly in the autonomous, harsh environment of space. Meanwhile, I consider it a win if my Python API server survives a month without breaking. I always wonder how those engineers created something so robust, while I, despite standing on the shoulders of decades of software engineering progress, seem unable to avoid introducing bugs with every commit.


Back then, management cared that their one chance would work. Today management just wants it to mostly work.

Incentives and goals are very different between the two. We could very much build even more incredible things today, and I would argue that we actually do; just only in the places that seem to matter enough to justify that type of special effort.


I really wish there were a de facto state-of-the-art coding agent that is LLM-agnostic, so that LLM providers wouldn't bother reinventing their own wheels like Codex and Gemini-CLI. They should be pluggable providers, not independent programs. In this way, the CLI would focus on refining the agentic logic and would grow faster than ever before.

Currently Claude Code is the best, but I don't think Anthropic would pivot it into what I described. Maybe we still need to wait for the next groundbreaking open-source coding agent to come out.


There is Aider (aider.chat), and it has been around for a couple of years now. Great tool.

Alas, you don't install Claude Code or Gemini CLI for the actual CLI tool. You install it because the only way agentic coding makes sense is through subscription billing at the vendor: SOTA models burn through tokens too fast for pay-per-use API billing to make sense here; we're talking literally a day of basic use costing more than a monthly subscription to the Max plan at $200 or so.


Aider is in a sad state. The maintainer hasn't "maintained" it for quite some time now (look at the open PRs and issues). It's definitely not state of the art anymore, though it was one of the first and best. A fork, Aider CE, was created by some members of the Discord community: https://github.com/dwash96/aider-ce The fork looks promising and works well, but there is (sadly) so much more development going into the other AI CLI tools nowadays.


Agreed. The alternatives seem too "agentic" for me, whereas Aider strikes the right balance of AI pair programming.


I came to the same conclusion. It's crickets over there: https://github.com/Aider-AI/aider/graphs/contributors


With increasingly aggressive usage limits (Claude has weekly usage caps now), the "agentic" style of token burning seems much less practical to me. Coming from Aider and trying tools like OpenCode, the "use models to discover the relevant files" etc. pattern seems very token-heavy and even wasteful, whereas with Aider you include the relevant files up front and use your tokens for the real work.


Opencode (from SST; not the thing that got rebranded as Crush) seems to be just that. I've had a very good experience with it for the last couple of days, having previously used gemini-cli quite a bit. Opencode also has/hosts a couple of "free" model options right now, which are quite decent IMO.

https://github.com/sst/opencode

There are many many similar alternatives, so here's a random sampling: Crush, Aider, Amp Code, Emacs with gptel/acp-shell, Editor Code Assistant (which aims for an editor-agnostic backend that plugs into different editors)

Finally... there is quite a lot of scope for co-designing the affordances / primitives supported by the coding agent and the LLM backing it (especially in LLM post-training). So factorizing these two into completely independent pieces currently seems unlikely to give the most powerful capabilities.


> I really wish there were a de facto state-of-the-art coding agent that is LLM-agnostic

Cursor?

It’s really quite good.

Ironically it has its own LLM now, https://cursor.com/blog/composer, so it’s sort of going the other way.


I think Claude Code is the best because it is not agnostic.


Vertical integration, pretty much: if they need a feature in the CLI that the model doesn't support, they can just re-train the next model version.

exactly!


> Currently Claude Code is the best, but I don't think Anthropic would pivot it into what I described.

They don't have to pivot, since it already exists: Claude Code Router [1].

[1]: https://github.com/musistudio/claude-code-router


It’s not gonna happen any time soon. The model is fine-tuned on traces generated by the scaffolding (eg dependent on what tool calls are available), and the scaffolding is co-developed with the strengths/weaknesses of the specific model.


There are actually a lot of those! One of the best things about using them is that you can swap models around at will.

I love to switch models and ask them what they thought of the previous model's answer.


Model agnostic tools I would say:

Roo Code or maybe Kilo (which is a fork of Roo)


Goose?


Cherry Studio is my daily go-to. I hope Onyx desktop can be a great alternative for personal users who just want a dedicated app to access any LLM with the full power of MCP and various tools.

