Understandable, I'd probably do the same in his position. Still sucks, though; we've seen this pattern a thousand times before, and what happens next is pretty obvious.
I was prototyping something with pi under the hood for a personal project, going to switch off it now.
As he reiterates multiple times in the blog post: it's still MIT licensed, so you can fork it to your heart's content, or keep using the mainline and merge new features into your own fork.
For me the reason to add dependencies to my projects is exactly because they are maintained upstream and I don't need to worry about maintaining them myself. If I need to fork and maintain it myself I'd rather write my own version of it that perfectly fits my use case, or use another dependency that is maintained.
> I was prototyping something with pi under the hood for a personal project, going to switch off it now.
For what it's worth, I found it's pretty straightforward to recreate, at least its core idea. Readline with nice output is a bit of a pain, but still doable, and if you don't care about that part, then the overall agent loop you'd build on top? You could build it, I promise.
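To make "you could build it" concrete, here's a minimal sketch of such an agent loop in Python. Everything here is illustrative, not pi's actual API: `call_model` is a hypothetical callable you'd wire to your provider of choice, and `read_file` stands in for a real tool registry.

```python
import json

# Hypothetical tool registry -- names and shapes are illustrative.
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

TOOLS = {"read_file": read_file}

def agent_loop(call_model, user_message: str, max_turns: int = 10) -> str:
    """Core agent loop: send the conversation to the model, execute any
    tool call it requests, feed the result back, and stop once the model
    replies with plain text instead of a tool call."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = call_model(messages)  # assumed to return a dict like below
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        if reply.get("tool") is None:       # plain answer -> we're done
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "max turns exceeded"
```

The readline/TTY niceties are the genuinely fiddly part; the loop itself is just "call model, run tool, append result, repeat."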
What you are suggesting might sound difficult to some people, but it is possible: in the last week I co-wrote (with Antigravity, with Claude as the backend) an Emacs package for agentic coding that just uses `rg` to find relevant code for the context, calls out to a model, and handles creating a diff with merge tools. I love using my own code for vibe coding inside Emacs, and I would suggest others try building the same thing.
I suggest you make yourself a private fork of Pi so that you don't have to be beholden to Mario and his not-so-new clique.
Create a private repo in GitHub first, then do a bare Git clone of https://github.com/badlogic/pi-mono.git (ideally do it before the original repo gets moved to Earendil's GitHub org).
How long you want to continue pulling from "upstream" depends on your comfort level. At the very least, aim for v0.65.2, which is the last tagged release before today's announcement (commit hash 573eb91). Personally, I would continue to pull right up until the next tagged release.
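Concretely, the steps above might look like this. This is a sketch: `YOURNAME/pi-mirror` is a placeholder for your private repo, and `--mirror` (a bare clone that also carries every ref and tag) is one way to do the bare clone described above.

```shell
# Assumed names: UPSTREAM is the public repo, PRIVATE is your private remote.
UPSTREAM=https://github.com/badlogic/pi-mono.git
PRIVATE=git@github.com:YOURNAME/pi-mirror.git

git clone --mirror "$UPSTREAM" pi-mono.git   # bare mirror: all branches + tags
cd pi-mono.git
git push --mirror "$PRIVATE"                 # populate the private repo

# Keep syncing until you hit the release you want to freeze at:
git fetch origin --prune --tags
git push --mirror "$PRIVATE"
```

Once you decide to stop tracking upstream, just stop running the fetch/push pair; the mirror keeps whatever state it last saw.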
With that little how-to guide out of the way, here's what I think:
Mario is free to do whatever and not give a shit about what the internet at large thinks of him. By that metric, he's doing a hell of a job with that rambling blog post. Likewise, I'm also free to mostly concur with the internet at large (https://news.ycombinator.com/item?id=47688794) and prepare simple mitigations like above that can blunt this to a certain degree. Let's just hope that Mario and Armin don't take the "flicker company" approach (his derogatory term for Anthropic) and DMCA the shit out of any private repos.
I have a private Gitea instance exactly for stuff like that. Gitea can mirror GitHub repos out of the box and keep them in sync (and it's Git, so you can always revert).
Are you sure they are not just refusing to solve your UI bug due to safety concerns? They may be worried you'll take over the world once your UX becomes too good.
You could manage your subscriptions in an RSS reader, that's what I used to do. Each channel has multiple RSS feeds associated with it for different types of videos (live, vod, etc).
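For what it's worth, YouTube exposes a per-channel Atom feed at a well-known URL, so you don't even need a third-party feed library. A minimal stdlib-only sketch for pulling a channel's latest videos might look like:

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_url(channel_id: str) -> str:
    # YouTube serves a per-channel Atom feed at this well-known URL.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def parse_feed(xml_text: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for each entry in an Atom feed."""
    root = ET.fromstring(xml_text)
    entries = []
    for entry in root.findall(f"{ATOM_NS}entry"):
        title = entry.findtext(f"{ATOM_NS}title")
        link = entry.find(f"{ATOM_NS}link").get("href")
        entries.append((title, link))
    return entries

def fetch_channel(channel_id: str) -> list[tuple[str, str]]:
    # Network call; any RSS reader does the equivalent of this on a schedule.
    with urllib.request.urlopen(feed_url(channel_id)) as resp:
        return parse_feed(resp.read().decode())
```

Point any RSS reader at `feed_url(...)` for each channel and you get a chronological, recommendation-free subscription list.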
The whole YouTube experience has gotten so bad over the years. I love the content, but I wish I didn't have to deal with the UI/UX and recommendations the YT app forces on me.
Shorts are annoying too. I'm trying to keep my watch history clean to "steer" recommendations, but YT keeps adding things to it that I didn't actually watch, just because I happened to hover my mouse over a video, etc.
They see those hovers as attention. And they likely calculate how long you linger. The lingering tells them a lot when you are infinite scrolling on other platforms.
They would love to have full-on eye tracking. So the next best thing is a cursor. (Even though I'd agree with anyone who says it's a poor signal.)
Codex is the best out-of-box experience, especially due to its built-in sandboxing. The only drawback is that its edit tool requires the LLM to output a diff, which only GPT models are trained to do correctly.
Interesting, I don't like Codex exactly because of its built-in sandboxing. If I need a sandbox, I'd rather do a simple bwrap around the agent process myself; I prefer that over the agent CLI doing a bunch of sandboxing magic that gets in my way.
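For reference, a bubblewrap invocation along those lines might look like this. It's a sketch: `my-agent-cli` is a placeholder, and which binds you actually need depends on what the agent touches.

```shell
# A minimal bubblewrap jail around an agent CLI:
# read-only root, a writable bind of just the project dir, no network.
bwrap \
  --ro-bind / / \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --unshare-net \
  --die-with-parent \
  -- my-agent-cli "$@"
```

The nice thing about this approach is that the policy lives in one place you control, instead of inside whichever agent you happen to be running that week.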
I see these analogies a lot, but I don't like them. Assembly has a clear contract. You don't need to know how it works because it works the same way each time. You don't get different outputs when you compile the same C code twice.
LLMs are nothing like that. They are probabilistic systems at their very core. Sometimes you get garbage. Sometimes you win. Change a single character and you may get a completely different response. You can't easily build abstractions when the underlying system has so much randomness because you need to verify the output. And you can't verify the output if you have no idea what you are doing or what the output should look like.
I think these analogies are largely correct, but TFA is about something subtly different:
LLMs don't make it impossible to do anything yourself, but they make it economically impractical to do so. In other words, you'll have to largely provide both your own funding and your own motivation for your education, unless we can somehow restructure society quickly enough to substitute both.
With assembly, we arguably got lucky: It turns out that high-level programming languages still require all the rigorous thinking necessary to structure a programmer's mind in ways that transfer to many adjacent tasks.
It's of course possible that the same is true for using LLMs, but at least personally, something feels substantially different about them. They exercise my "people management" muscle much more than my "puzzle solving" one, and wherever we're going, we'll probably still need some puzzle solvers too.
I think it just really depends. There is no fixed rule for how PhD programs are supposed to work. Sometimes your advisor will suggest projects he finds interesting and wants to see done but doesn't have time to do himself. That's pretty common. Sometimes advisors don't have that, and/or want students to come up with their own project proposals, etc.
That has been my experience as well. Most of the value of writing docs or a wiki is not in the final artifacts, it's that the process of writing docs updates your own mental models and knowledge so that you can make better decisions down the road.
Even if you can get an LLM to output good artifacts that don't eventually evolve into slop, which is questionable, it's really not that useful, especially not for a personal wiki.
And what happens when the bucket of knowledge gets too big and starts to overflow? I feel as if, by delegating that process of building knowledge too much, I end up accruing knowledge gaps of my own. Funnily enough it mirrors the LLM/agent's performance.
Maybe my recent prompts reflect how far behind I am at a given time? I don't know. On a slightly related note: I recently heard the term "AI de-skilling", and this treads close to it, imo.
The worst part to me, by far, is having nothing more than a bunch of "smart" markdown files to show as my deliverables for the day. Sometimes this stacks for many days on end. Usually the bigger the knowledge gaps are, the more I procrastinate on real work.