
This. For that check, they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"

Coming as a shock to only the most gullible people on Earth.

It's baffling to me that the DNC decided it was more important to support Israel than win the election and do good things at home.

3 days ago this was in the news:

> "Epstein files: DOJ withheld documents about claim Trump sexually abused minor"

https://www.cnbc.com/2026/02/24/epstein-trump-doj-garcia.htm...

Will it even make a single newspaper or talk show this weekend?


Regardless of how it ends, and it could go either way, we're witnessing history here. This feels like a much bigger development than Russia-Ukraine. Iran is a major partner for Russia and China, mostly for military technology and oil. Hope it's not the start of WW3.

You're a good person and I feel similarly. We live under the Fourth Reich.

I do not think ceasing work is the right move, but definitely get involved politically and don't equivocate when you condemn our elected "representatives".

It might also soothe your soul to be in the company of like-minded individuals. A Quaker prayer meeting is a sure place to find many.


Ever since the ICE stuff I've been desperate to find a way to not pay my taxes - even if it means donating 2x, 3x, hell 4x my tax bill to somewhere else. Obviously it's basically impossible to do this (especially if your income is all self-employment income) outside of just spending every penny you earn on something that could be viably considered a business expense. So I'm wondering if I should just straight up stop working until I can relinquish my US citizenship.

Spend down my savings and assets till I have almost nothing to exit tax, exit, and then start working again.

I don't want to fund the bombing of strangers I have no quarrel with.


I like to make a .local folder at the top of the project, which contains a .gitignore that ignores everything. Then I can effortlessly stash my development notes there without affecting the project .gitignore or messing around within the .git directory.
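For anyone who wants to try it, a minimal sketch of the setup (the notes file name is just an example, not from the comment):

```shell
# From the top of a git project: create a scratch folder git never sees.
mkdir -p .local
echo '*' > .local/.gitignore    # ignore everything inside, the .gitignore itself included
echo 'todo: refactor auth' > .local/notes.md
git status --short .local       # prints nothing: the whole folder is invisible to git
```

Because the `*` pattern also covers the `.gitignore` itself, nothing in `.local` ever shows up in `git status`, and the project's own `.gitignore` stays untouched.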

Americans fail to appreciate a few things about our economy:

1. We have a large homogeneous market where you can build a product and expect it to succeed with hundreds of millions of Americans.

2. The EU is the easiest second market, and another step change of hundreds of millions of customers in a somewhat unified market.

3. There's no easy third economy that replaces the EU's wealth, population, and comfort with English + technology.

When we piss everyone off in the EU, tech company growth gets kneecapped and limited to the US and Canada. There's not an easy market to expand into without a much deeper focus on that specific market and its needs, for much smaller returns.


I'm definitely looking at the example set by Hare with interest[0]. Also unironically love Shrek. I once hosted a viewing party of Shrek Retold[1] in my tiny NYC apartment :D

[0] https://harelang.org/blog/2025-06-11-hare-update/

[1] https://www.youtube.com/watch?v=pM70TROZQsI


To quote an excellent article from last week:

> The AI has suggested a solution, but the added code is arguably useless or wrong. There is a huge decision space to consider, but the AI tool has picked one set of decisions, without any rationale for this decision.

> [...]

> Programming is about lots of decisions, large and small. Architecture decisions. Data validation decisions. Button color decisions.

> Some decisions are inconsequential and can be safely outsourced. There is indeed a ton of boilerplate involved in software development, and writing boilerplate-heavy code involves near zero decisions.

> But other decisions do matter.

(from https://lukasatkinson.de/2025/net-negative-cursor/)

Proponents of AI coding often talk about boilerplate as if that's what we spend most of our time on, but boilerplate is a cinch. You copy/paste, change a few fields, and maybe run a macro on it. Or you abstract it away entirely. As for the "agent" thing, typing git fetch, git commit, git rebase takes up even less of my time than boilerplate.

Most of what we write is not highly creative, but it is load-bearing, and it's full of choices. Most of our time is spent making those choices, not typing out the words. The problem isn't hallucination, it's the plain bad code that I'm going to have to rewrite. Why not just write it right myself the first time? People say "it's like a junior developer," but do they have any idea how much time I've spent trying to coax junior developers into doing things the right way rather than just doing them myself? I don't want to waste time mentoring my tools.


I'm mostly skeptical about AI capabilities but I also think it will never be a profitable business. Let's not forget AI companies need to recoup a trillion dollars (so far) just to break even [1].

VCs are already doubting if the billions invested into data centers are going to generate a profit [1 and 2].

AI companies will need to generate profits at some point. Would people still be optimistic about Claude etc if they had to pay say $500 per month to use it given its current capabilities? Probably not.

So far the only company generating real profits out of AI is Nvidia.

[1] https://www.goldmansachs.com/insights/articles/will-the-1-tr...

[2] https://www.nytimes.com/2025/06/02/business/ai-data-centers-...


The reaction to this article is interesting. I have found AI to be useful in software contexts that most people never exercise or expect based on their intuitions of what an LLM can do.

For me, a highly productive but boring use of LLMs for code is that they excel at providing midwit “best practice” solutions to common problems. They are better documentation than the documentation and can do a lot of leg work e.g. Linux syscall implementation details. My application domains tend to require more sophisticated solutions than an LLM can provide but they still save a lot of rote effort. A lot of software development exists almost entirely in the midwit zone.

Much more interesting, they are decent at reducing concepts in literature to code practice for which there are no code examples. Google and StackOverflow turn up nothing. For example, I’ve found them useful for generating specialized implementations of non-Euclidean computational geometry algorithms that don’t really exist in the wild that I’ve ever seen. This is a big win, it literally turns months of effort into hours of effort.

On the other hand, I do a lot of work with algorithms that don’t exist in the literature, never mind public code, with extremely performance-engineered implementations. There is an important takeaway from this too: LLMs are hilariously bad at helping with this, but so are human software developers if required to do the same thing with no context.

Knowledge for which there is little or no training data is currently a formidable moat, both for LLMs and humans.


Who are these friends? Are they in the room with us right now? Look, maybe my experience is atypical but I’m an AI skeptic and I know plenty of others. I’ve never heard people claim that LLMs are a fad or going to go away.

I’ve seen lots of people:

* think that conflating LLMs and “AI” produces a lot of poorly reasoned arguments

* doubt the economic narratives being built around LLM technology

* think the current rate of progress in the technology is basically flat

* think most “AI companies” resemble most crypto companies

An addendum to the last point: very few crypto skeptics deny that Bitcoin is a thing or think it's going away, either. It's just strawmanning.


I don't know if any of this applies to the arguments in my article, but most of the point of it is that progress in code production from LLMs is not a consequence of better models (or fine-tuning or whatever), but rather of a shift in how LLMs are used: in agent loops with access to ground truth about whether things compile and pass automated acceptance tests. And I'm not claiming that closed-loop agents reliably produce mergeable code, only that they've broken through a threshold where they produce enough mergeable code that they significantly accelerate development.

My philosophy is to let the LLM either write the logic or write the tests - but not both. If you write the tests and it writes the logic and it passes all of your tests, then the LLM did its job. If there are bugs, there were bugs in your tests.

The argument that I've heard against LLMs for code is that they create bugs that, by design, are very difficult to spot.

The LLM has one job: to make code that looks plausible. That's it. No logic has gone into writing that bit of code. So the bugs often won't be like those a programmer makes. Instead, they can introduce a whole new class of bug that's way harder to debug.


This article does not touch on the thing which worries me the most with respect to LLMs: the dependence.

Unless you can run the LLM locally, on a computer you own, you are now completely dependent on a remote centralized system to do your work. Whoever controls that system can arbitrarily raise the prices, subtly manipulate the outputs, store and do anything they want with the inputs, or even suddenly cease to operate. And since, according to this article, only the latest and greatest LLM is acceptable (and I've seen that exact same argument six months ago), running locally is not viable (I've seen, in a recent discussion, someone mention a home server with something like 384G of RAM just to run one LLM locally).

To those of us who like Free Software because of the freedom it gives us, this is a severe regression.


Yes, that’s true and very cool, but you’re an expert. Where do the next-generation yous come from? The ones who did not do the weeks of dead-end research that built the resilience, skill, and experience to tell whether Claude is actually saving them time? You cannot skip that admittedly tedious part of life for free.

I think pro-AI people sometimes forget/ignore the second order effects on society. I worry about that.


Couldn't agree more. The first time I used Claude Code was for something very much like this. We had a PDF rendering issue with Unicode characters in one of our libraries. We ultimately needed to implement a sort of bespoke font fallback system.

With the help of the agent, I was able to iterate through several potential approaches and find the gaps and limitations within the space of an afternoon. By the time we got to the end of that process the LLM wrote up a nice doc of notes on the experiments, and *I* knew what I wanted to do next. Knowing that, I was able to give a more detailed and specific prompt to Claude which then scaffolded out a solution. I spent probably another day tweaking, testing, and cleaning up.

Overall I think it's completely fair to say that Claude saved me a week of dev time on this particular task. The amount of reading and learning and iterating I'd have had to do to get the same result would have just taken 3-4 days of work. (not to mention the number of hours I might have wasted when I got stuck and scrolled HN for an hour or whatever).

So it still needed my discernment and guidance - but there's no question that I moved through the process much quicker than I would have unassisted.

That's worth the $8 in API credit ten times over and no amount of parroting the "stochastic parrot" phrase (see what I did there?) would change my mind.


I share the author's sentiment completely. At my day job, I manage multiple Kubernetes clusters running dozens of microservices with relative ease. However, for my hobby projects—which generate no revenue and thus have minimal budgets—I find myself in a frustrating position: desperately wanting to use Kubernetes but unable to due to its resource requirements. Kubernetes is simply too resource-intensive to run on a $10/month VPS with just 1 shared vCPU and 2GB of RAM.

This limitation creates numerous headaches. Instead of Deployments, I'm stuck with manual docker compose up/down commands over SSH. Rather than using Ingress, I have to rely on Traefik's container discovery functionality. Recently, I even wrote a small script to manage crontab idempotently because I can't use CronJobs. I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
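For what it's worth, the core of such an idempotent crontab script can be a small pure function. This is only a sketch of one way to do it; the marker comments and the example entries are my own invention, not from the comment:

```python
def merge_cron_block(current, desired_lines,
                     begin="# BEGIN managed", end="# END managed"):
    """Return `current` (a `crontab -l` dump) with the managed block
    replaced by `desired_lines`. Running it again on its own output
    is a no-op, which is what makes the update idempotent."""
    out, skipping = [], False
    for line in current.splitlines():
        if line == begin:
            skipping = True          # drop the old managed block
        elif line == end:
            skipping = False
        elif not skipping:
            out.append(line)         # keep everything unmanaged as-is
    out += [begin, *desired_lines, end]
    return "\n".join(out) + "\n"


# Typical wiring: read the live crontab, merge, write it back, e.g.
# subprocess.run(["crontab", "-"], input=merge_cron_block(...), text=True)
```

Only the block between the markers is ever rewritten, so hand-added entries elsewhere in the crontab survive every run.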

What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.


I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.

Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that under 47 days in the future.

Shorter lifetimes have several advantages:

1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.

2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.

3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.

Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.

Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really prove a push towards automation.


In general it is a good practice to assume other parties are making a good faith effort, but after so many obvious cockups, that grace has been exhausted.

My assumption is that there is a lot of blow-back, so they are restoring a few high-profile names so they can say, "see, we aren't stopping science!". It is good this handful of people are back on the job, but I assume the NIH is like most other organizations -- the top name isn't doing the work themselves. They lead a team of people, and their expertise is used to provide them direction and as a resource to help analyze surprising results. If the top experts lose their staff, I doubt they'll get nearly as much done. Having the 65-year-old braintrust spending hours pipetting and staining samples is wildly inefficient. DOGI.


Scala has been very enjoyable and productive for me and the teams I've worked on. The first couple years of v2 -> v3 transition were a bit rough as the tooling and ecosystem was catching up, but now we happily use Scala 3 with no looking back. The language and ecosystem are evolving in a good direction, and I'm happy to play a small part in that with my open source libraries for Scala.js (which is the entire reason I got into Scala in the first place – so much simpler and safer than Typescript).

Scala is perhaps not going to replace Java in every old enterprise, but in my personal experience working for startups, it's been an awesome force multiplier for small teams who need to productively pump out safe and ergonomic code. Finance, health systems, etc. And yet it's also ergonomic and pleasant enough for me to eagerly use it in my non-mission-critical personal projects as well.


This is really more a C pattern than a C++ pattern, isn't it?

Frustrated by C's limitations relative to Golang, last year I sketched out an approach to using X-macros to define VNC protocol message types as structs with automatically-generated serialization and deserialization code. So for example you would define the VNC KeyEvent message as follows:

    #define KeyEvent_fields(field, padding)               \
      field(u8, down_flag)                                \
      padding(u8)                                         \
      padding(u8)                                         \
      field(u32, keysym)

    MESSAGE_TYPE(KeyEvent)
And that would generate a KeyEvent typedef (to an anonymous struct type) and two functions named read_KeyEvent_big_endian and write_KeyEvent_big_endian. (It wouldn't be difficult to add debug_print_KeyEvent.) Since KeyEvent is a typedef, you can use it as a field type in other, larger structs just like u8 and u32.

Note that here there are two Xes, and they are passed as parameters to the KeyEvent_fields macro rather than being globally defined and undefined over time. To me this feels cleaner than the traditional way.

The usage above is in http://canonical.org/~kragen/sw/dev3/binmsg_cpp.c, MESSAGE_TYPE and its ilk are in http://canonical.org/~kragen/sw/dev3/binmsg_cpp.h, and an alternative approach using Python instead of the C preprocessor to generate the required C is in http://canonical.org/~kragen/sw/dev3/binmsg.py.


I'm almost finished with a large, complex app written with Svelte 5, web sockets and Threlte (Three JS) [0]. Previously, I'd written React for about a decade, mostly on the UI side of things.

I vastly prefer Svelte, because of how clean the code feels. There's only one component per file, and the syntax looks and writes deceptively like vanilla JS and HTML. There's a bit of mind-warp when you realize Svelte doesn't want you passing components with props as props into another component. Svelte gives you "Snippets" instead, which work for some reusability, but are limited. It sort of forces simplicity on you by design, which I like. Most of React's deep nesting and state management doesn't exist in Svelte and is replaced with simple, better primitives.

The bigger gain though for me was Svelte(kit) vs. Next JS. It's very clear what is on the server and what is on the client, and there's none of that "use client" garbage with silly magic exports for Next JS things. The docs are great.

Svelte's biggest disadvantage is that the UI library ecosystem isn't as large. For me that wasn't as big of an issue because it was my expertise, but everyone else looking for a drop in UI library will find the Svelte versions a little worse than their React counterparts.

Because Svelte is compiled, it also is by default very snappy. I think choosing Svelte would likely give most devs a speed boost vs. the spinner soup that I've seen most React projects become. A lot of that is going to be in the skill of the programmer, but I love how fast my app is.

[0]: https://bsky.app/profile/davesnider.com/post/3lkvum6xtjs2e


I've been using it for a while now. It is an excellent tool in the tool belt, making me a much better developer and more productive, a lot less messing around.

Congrats! It's amazing to see what CE has inspired! Thanks for the shout out :)

This interview with DeepSeek founder and CEO Liang Wenfeng, also co-founder of the hedge fund backing DeepSeek, might shed some light on the question: https://www.chinatalk.media/p/deepseek-ceo-interview-with-ch...

Some relevant excerpts:

“Because we believe the most important thing now is to participate in the global innovation wave. For many years, Chinese companies are used to others doing technological innovation, while we focused on application monetization — but this isn’t inevitable. In this wave, our starting point is not to take advantage of the opportunity to make a quick profit, but rather to reach the technical frontier and drive the development of the entire ecosystem.”

“We believe that as the economy develops, China should gradually become a contributor instead of freeriding. In the past 30+ years of the IT wave, we basically didn’t participate in real technological innovation. We’re used to Moore’s Law falling out of the sky, lying at home waiting 18 months for better hardware and software to emerge. That’s how the Scaling Law is being treated.

“But in fact, this is something that has been created through the tireless efforts of generations of Western-led tech communities. It’s just because we weren’t previously involved in this process that we’ve ignored its existence.”

“We do not have financing plans in the short term. Money has never been the problem for us; bans on shipments of advanced chips are the problem.”

“In the face of disruptive technologies, moats created by closed source are temporary. Even OpenAI’s closed source approach can’t prevent others from catching up. So we anchor our value in our team — our colleagues grow through this process, accumulate know-how, and form an organization and culture capable of innovation. That’s our moat.

“Open source, publishing papers, in fact, do not cost us anything. For technical talent, having others follow your innovation gives a great sense of accomplishment. In fact, open source is more of a cultural behavior than a commercial one, and contributing to it earns us respect. There is also a cultural attraction for a company to do this.”


The database is often the thing that enforces the most critical application invariants, and is the primary source of errors when those invariants are violated. For example, "tenant IDs are unique" or "updates to the foobars are strictly serializable". The only thing enforcing these invariants in production is the interplay between your database schema and the queries you execute against it. So unless you exercise these invariants and the error cases against the actual database (or a lightweight containerized version thereof) in your test suite, it's your users who are actually testing the critical invariants.

I'm pretty sure "don't repeat yourself" thinking has led to the vast majority of the bad ideas I've seen so far in my career. It's a truly crippling brainworm, and I wish computer schools wouldn't teach it.

