
I don't believe headlines like this. I cannot un-taste remote work. I have come to the firm conclusion that I have been faking, role-playing, acting out a whole character at work for years. The morning costume, the morning routine, the persona, the blank stare at the screen ... I'm just tired of pretending.

I'm waking up at 9:30am with bed hair and letting the team know "no updates here" and going back to sleep. You can never ever convince me otherwise that there is a better way to live. Everything will get done, don't call me, I'll call you, see you next week (online). You shall never see me in the real world ever again.


As a like-minded stubborn individual... is your employer hiring?


Nice. Hyprland + multiple monitors + TUIs like this make my workflow quite futuristic.


Thanks a lot!


Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.


There is a concept of going into the wilderness for some time (as we go through Lent). It's ancient. I wonder if we'll ever find out it's just as useful as intermittent fasting.


Of course there are. That's why all religious systems produce monastic orders of one kind or another. Secular institutions are much younger, so they have much less mature versions of it. And out of these spaces you get some of the best human studying and thinking ever done.

The emergence of monastic life, and how religions/cultures create space for it in a sustainable way (because why even bother?), is fascinating for its implications about group behavior.

Think of monk-like personality traits within any group as being out of phase with the majority. Rather than being filtered out, they survive. Not just in one culture, but in all.

One great example of what comes out of monasteries (trying to self-sustain) is the emergence of pastoral care as a core feature of the church (some say it contributed more than any other feature to how the Church survived the downfall of the empires, nations, and kings that supported it). Irish monks would walk into the local village and listen to people's problems in exchange for food and drink. That turned out to be so popular (probably one of the earliest forms of therapy) that it became institutionalized.


Into Great Silence is an excellent and beautiful documentary on the everyday lives of Carthusian monks of the Grande Chartreuse, a monastery high in the French Alps - https://en.wikipedia.org/wiki/Into_Great_Silence

Video on Youtube - https://www.youtube.com/watch?v=ODVLyqv5bE8


I remember reading about some desert monks who do that, and they had a set routine for every day, including a large amount of manual physical work, cooking, and cleaning. They were not just isolated in a cave doing nothing; they were living and working and praying. I seem to recall they were also advised to ignore any visions they had (even good ones), which seems counterintuitive if one is thinking of stories about spirit quests. But I guess it's very wise.

I imagine the translation to lighthouses would be to ensure that your time there is spent in a good routine of keeping yourself active, and to have some training on how to maintain psychological health. Overwintering Antarctic research scientists and astronauts probably also have rigorous routines, although they would be in a small community, which can regulate mental health.

I'm also reminded of the requirements for people being asked to move to small communities on isolated, harsh islands. One would imagine such places would be attractive to people who do well alone: introverts who work alone and are happy quietly. But they actually want and favour people who like others, need to work with others, work well in a community, and are socially outgoing.


There are a bunch of good documentaries about "Desert Fathers/Christians/Monks/etc." on Youtube about monks from the Coptic Christian Church in the Egyptian Desert.

Beautiful short documentary - https://www.youtube.com/watch?v=CMyyUVXKOrI

The Coptic Monasteries and Monks of the Egyptian Desert - https://www.youtube.com/watch?v=3K4urm3GD2I


I had the privilege of living in a remote village in the Catskills. I've never missed a place so much! I can attest to the Science of Awe. We need that connection in our lives.


Having done a bit of ayahuasca, as well as plant dietas (Shipibo tradition) that involve some pretty serious restrictions and isolation for long periods of time in the jungle, I can say that it (even without the ayahuasca) is a really conducive container for healing and deep work.


Take the following crude entities:

- Stones

- Sticks

- Some rope

It takes a while, but humans eventually make a murder weapon out of those and build armies.

Now take the benign elements of a CRUD stack:

- Database

- Server

- User system

It takes a while, but eventually humans will make something (something not good) out of that.

Sticks and stones may hurt my bones, but databases will never hurt me

Right?


Bleach and ammonia are perfectly shelf-stable on their own. Mix them and they produce toxic chloramine gas.

What you've described are just benign ingredients. The poison is turning them into an "analytics" or "adtech" system.


Pay attention to the outflow of tech investment in the stock market. That money is going to move into OpenAI and Anthropic IPOs. The valuations will be as big as you are thinking, because the market believes these companies will represent an entire basket of startups.


It is more likely that people are cashing out very liquid assets (tech stocks) to pay back their loans in yen as interest rates rise over there.

Tech stocks, with all the hype, are second only to crypto in terms of how easy and fast they are to sell (hence BTC dropped first, and now tech stocks, IMHO).

Btw, I was too young to fully remember, but wasn't the year before the dot com crash also full of IPOs?


Apparently the last two times Super Bowl ads were dominated by tech companies were 2000, for dot-coms, and 2022, for crypto.


FWIW, BTC is currently still triple what it was at that time. Crypto as a whole of course isn't. So really this seems like a "time to stick to the big boys".


In dollars, yes. If you use the Swiss franc as the baseline, no.


That's a really interesting tidbit. Thanks for sharing.

And thinking about it, it makes sense, since the decision to pay the outrageous rates for a Super Bowl ad must be driven by strong emotions (confidence or desperation). In this case, considering there's no clear moat for any of the big players, I believe it's the latter.


> Btw, I was too young to fully remember, but wasn't the year before the dot com crash also full of IPOs?

yes.


And why would anyone participate in their IPOs? They would be crazily overvalued, like Figma or worse.


To be fair, Facebook was at the time viewed by many as crazily overvalued.


Yes, because as a company whose main features were FarmVille and posting pics of your food, it was ridiculously overvalued.

But we all underestimated just how ruthless Zuck would be in turning Facebook into a machine for disseminating propaganda and invading our privacy. It has become more akin to Palantir than MySpace because that's where the money is.


I really wonder whether there's even enough dumb money out there for them to sell the stock they hold, not to mention raising any new capital. Are there really enough bag holders who will run after these stocks with large enough piles of money?


If your thesis was correct, why wouldn't some of those "outflows" go to GOOG or NVDA?


They would. You can see how resilient GOOG has been during this recent draw-down, and how much growth it has had even as AI sells off.


AI sells off… if this is a selloff, then I see what everyone is talking about when they say we're in a bubble :)


[dead]


That's harsh! But the alternative is to discuss economic theories on Reddit.


Might as well long NVDA?


There are many bitter lessons ...


Could you be more specific? Because NVDA has a consistent 20 year growth of something like 400x and +30%/yr, so I don't think the bitter lessons are there.


To decouple this the person would have to broadcast nearly every event and rebuild the observer layers elsewhere.


And IMO that's what should be done.

Don't get me wrong, I like the idea and all that, but this is another pgsql "solution" that is tied to the database layer, when it should be in the application layer.

I like to be database agnostic, and while I prefer PostgreSQL in production, I prefer SQLite in the dev layer. You should never HAVE TO use a specific database to make your APPLICATION work.


You could replicate and separate your llm-postgres from the system-of-record-postgres.


I was just looking at the SWE-bench docs and it seems like they use an almost arbitrary form of context engineering (loading in some arbitrary number of files to saturate context). So in a way, the bench suites test how good a model is with little to no context engineering (I know ... it doesn't need to be said). We may not actually know which models are sensitive to good context engineering; we're simply assuming all models are. I absolutely agree with you on one thing: there is definitely a ton of low-hanging fruit.


Quite frankly, most seasoned developers should be able to write their own Claude Code. You know your own algorithm for how you deal with lines of code, so it's just a matter of converting your own logic. Becoming dependent on Claude Code is a mistake (edit: I might be too heavy handed with this statement). If your coding agent isn't doing what you want, you need to be able to redesign it.


It's not that simple. Claude Code allows you to use the Anthropic monthly subscription instead of API tokens, which for power users is massively less expensive.


This is the real reason why people are switching to Claude Code.


Drug dealer business model. The first bag is free. Don't act surprised when you get addicted and they 10x the price.


Yes and no. There are many not-trivial things you have to solve when using an LLM to help (or fully handle writing) code.

For example, applying diffs to files. Since the LLM uses tokenization for all its text input/output, the diffs it creates to modify a file sometimes aren't quite right: it may slightly mess up the text before/after the change, or introduce a slight typo in the text being removed, so the edit may or may not apply cleanly. There's a variety of ways to deal with this, but most of the agentic coding tools have it mostly solved now (I guess you could just copy their implementation?).

Also, sometimes the models will send you JSON or XML back from tool calls which isn't valid, so your tool will need to handle that.

These fun implementation details don't happen that often in a coding session, but they happen often enough that you'd probably get driven mad trying to use a tool which didn't handle them seamlessly if you're doing real work.
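For the diff-application problem above, here's a minimal sketch of one common fallback strategy. This is an illustration only, not how any particular tool implements it: try an exact string match first, then retry with whitespace-normalized lines to tolerate small indentation or spacing drift in the model's output.

```python
def apply_edit(source: str, old: str, new: str) -> str:
    """Apply a search/replace edit from an LLM, with a fuzzy fallback."""
    # Fast path: the model reproduced the file text exactly.
    if old in source:
        return source.replace(old, new, 1)

    # Fallback: compare line by line with leading/trailing whitespace
    # stripped, so minor indentation drift still matches.
    src_lines = source.splitlines()
    old_lines = [l.strip() for l in old.splitlines()]
    for i in range(len(src_lines) - len(old_lines) + 1):
        window = [l.strip() for l in src_lines[i:i + len(old_lines)]]
        if window == old_lines:
            return "\n".join(
                src_lines[:i] + new.splitlines() + src_lines[i + len(old_lines):]
            )

    # Neither matched: surface the failure so the caller can ask the
    # model to regenerate the edit instead of silently corrupting the file.
    raise ValueError("edit did not apply: no exact or fuzzy match found")
```

Raising instead of guessing matters: feeding the failure back to the model usually produces a corrected edit on the next turn.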


I'm part of the subset of developers that was not trained in Machine Learning, so I can't actually code up an LLM from scratch (yet). Some of us are already behind with AI. I think not getting involved in the foundational work of building coding agents will only leave more developers left in the dust. We have to know how these things work in and out. I'm only willing to deal with one black box at the moment, and that is the model itself.


You don't need to understand how the model works internally to make an agentic coding tool. You just need to understand how the APIs work to interface with the model, and then learn how the model behaves given different prompts so you can use it effectively to get things done. No previous machine learning experience necessary.

Start small, hit issues, fix them, add features, iterate, just like any other software.

There's also a handful of smaller open source agentic tools out there which you can start from, or just join their community, rather than writing your own.


It's hardly a subset. Most devs that use it have no idea how it works under the hood. If a large portion of them did, maybe they'd cut out the "It REALLY IS THINKING!!!" posting.


What you are doing is largely a free-text => structured API call and back, more than anything else.

The ML-related stuff isn't going to matter a ton, since in most cases an LLM inference is just you making an API call.

Web scraping is probably the most similar thing.


It's quite tricky as they optimize the agent loop, similar to codex.

It's probably not enough to have answer-prompt -> tool call -> result critic -> apply or refine; there might be something specific they're doing when they fine-tune the loop to the model, or they might even train the model to improve the existing loop.

You would have to first look at their agent loop and then code it up from scratch.


I bet you could derive a lot by using a packet sniffer while using CC and just watching the calls go back and forth to the LLM API. In every API request you'll see the full prompt (system prompt aside), and they can't offload all the magic to the server side because tool calls have to be executed locally. Also, LLMs can probably de-minify the minified JavaScript in the CC client, so you can inspect the source too.

edit: There's a tool (I haven't used it in forever; I think it was netsaint?) that lets you sniff HTTPS in clear text with some kind of proxy. The enabling requirement is sniffing traffic on localhost, IIRC, which would be the case with CC.


Claude Code has thousands of human hours behind it, spent fine-tuning a comprehensive harness to maximize the effectiveness of the model.

You think a single person can do better? I don't think that's possible. Opencode is better than Claude Code, and they have perhaps even more hours invested.

It's a collaboration thing, ever improving.


Challenge accepted.


The model is being trained to use Claude Code, i.e. the agentic patterns are reinforced using reinforcement learning. That's why it works so well. You cannot build this on your own; it will perform far worse.


Are you certain of this? I know they use a lot of grep to find variables in files (I recall reading that on HN) and load the lines into context. There's a lot of common-sense context management going on.


Of course, agentic tooling is the future of AI.


I'm guessing it is for situations like should the Waymo stay in a particular lane or switch lanes, try to overtake another car, etc. That's probably the type of "guidance", which seems a lot like optimization.


These videos from Waymo show what kind of guidance they provide:

https://youtube.com/watch?v=T0WtBFEfAyo

https://youtube.com/watch?v=elpQPbJXpfY

Notice how the system itself reasons about the scene and asks for help with possible options.

This whole story is a nothingburger. The only “news” here is that the operators are in the Philippines.

