
I think it's probably worth mentioning that the principal concern for tests should be proving out the application's logic, and unless you're really leaning on your database to be, e.g., a source of type and invariant enforcement for your data, any sort of database-specific testing can be deferred to integration and UAT.

I use both the mocked and real database approaches illustrated here because they ultimately focus on different things: the mocked approach validates that the model is internally consistent, and the real database approach validates that the same model is externally consistent with the real world.

It may seem like a duplication of effort to do that, but tests are where you really should Write Everything Twice in a world where it's expected that you Don't Repeat Yourself.
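A minimal sketch of that two-flavour approach (hypothetical names, with Python's sqlite3 standing in for the real database): the same invariant test runs once against an in-memory fake and once against an actual SQL engine.

```python
import sqlite3

class FakeUserRepo:
    """Mocked flavour: checks the model against an in-memory stand-in."""
    def __init__(self):
        self.emails = set()

    def add(self, email):
        if email in self.emails:
            raise ValueError("duplicate email")
        self.emails.add(email)

    def exists(self, email):
        return email in self.emails

class SqlUserRepo:
    """Real-database flavour: the same contract, enforced by an actual engine."""
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")

    def add(self, email):
        try:
            self.conn.execute("INSERT INTO users VALUES (?)", (email,))
        except sqlite3.IntegrityError:
            raise ValueError("duplicate email")

    def exists(self, email):
        row = self.conn.execute(
            "SELECT 1 FROM users WHERE email = ?", (email,)).fetchone()
        return row is not None

def check_invariants(repo):
    """One test body, written twice in spirit: run against both implementations."""
    repo.add("a@example.com")
    assert repo.exists("a@example.com")
    try:
        repo.add("a@example.com")
        assert False, "duplicate should have been rejected"
    except ValueError:
        pass

for repo in (FakeUserRepo(), SqlUserRepo()):
    check_invariants(repo)
```

The fake run proves the model's logic; the sqlite run proves the schema actually enforces the same invariant.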


So the team I lead does a lot of research around all the “plumbing” around LLMs, both from technical and product-market perspectives.

What I’ve learned is that, for the most part, the AI revolution is not going to come from PhD-level LLMs. It will come from people being better equipped to use the high-schooler-level LLMs to do their work more efficiently.

We have some knowledge graph experiments where LLMs continuously monitor user actions on Slack, GitHub etc and build up an expertise store. It learns about your work, your workflows and then you can RAG them.

In user testing, people most closely compared this experience to having someone who could just read their minds and essentially auto-suggest their work outputs. Basically, it’s like another team member.

Since these are just nodes in a knowledge graph, you can mix and match expertise bases that span several skills too. E.g., a PM who understands the nuances of technical feasibility.

And it didn’t require user training or prompting LLMs.

So while GPT-5 may be delayed, I don’t think that’s stopping or slowing down a revolution in knowledge-worker productivity.


As TwoNineA posted in another thread:

"I've never wished a man dead, but I have read some obituaries with great pleasure." - Clarence Darrow, often misattributed to Mark Twain


I'm not really all that clued up on all of this, but overall it seems like a Shinzo Abe-type situation: no, we shouldn't be assassinating people. But also: you can't just wreck people's lives for a living and expect no consequences. This seems just as much a failure of society in dealing with this kind of nihilistic parasite sucking our society dry as anything else.

You can explain these problems with simple business metrics that technologists like to ignore. Right before the recent Twitter acquisition, the various bits of info that came to light included the "minor detail" that they had more than doubled their headcount and associated expenses, but had not doubled either their revenue or profits. Technology complexity went up, the business went backwards. Thousands of programmers don't always translate to more value!

Netflix regularly puts out blog articles proudly proclaiming that they process exabytes of logs per microsecond or whatever it is that their microservices Rube Goldberg machine spits out these days, patting themselves on the back for a heroic job well done.

Meanwhile, I've been able to go on the same rant year after year that they're still unable to publish more than five subtitle languages per region. These are 40 KB files! They had an employee argue with me about this in another forum, saying that the distribution of these files is "harder than I thought".

It's not hard!

They're solving the wrong problems. The problems they're solving are fun for engineers, but pointless for the business or their customers.

From a customer perspective Netflix is either treading water or noticeably getting worse. Their catalog is smaller than it was. They've lost licensing deals for movies and series that I want to watch. The series they're producing themselves are not things I want to watch any more. They removed content ratings, so I can't even pick something that is good without using my phone to look up each title manually!

Microservices solve none of these issues (or make them worse), yet this is all we hear about when Netflix comes up in technology discussions. I've only ever read one article that is actually relevant to their core business of streaming video, which was a blog about using kTLS in BSD to stream directly from the SSD to the NIC, bypassing the CPU. Even that is questionable! They do this to enable HTTPS... which they don't need! They could have just used a cryptographic signature on their static content, which the clients can verify with the same level of assurance as HTTPS. Many other large content distribution networks do this.
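As a sketch of that signed-static-content idea (an assumed design, not Netflix's actual protocol): the client pins a digest per asset and verifies each download, so integrity doesn't depend on the transport. A real deployment would additionally sign the manifest itself with a public-key scheme such as Ed25519 so the manifest, too, could travel over plain HTTP.

```python
import hashlib

# Hypothetical trusted manifest shipped with (or signed for) the client:
# maps each static asset path to its expected SHA-256 digest.
GOOD_SUBTITLE = b"WEBVTT\n\n1\n00:00:01.000 --> 00:00:02.000\nHello\n"
TRUSTED_MANIFEST = {
    "subtitles/en.vtt": hashlib.sha256(GOOD_SUBTITLE).hexdigest(),
}

def verify_asset(path, payload):
    """Accept a downloaded payload only if it matches the pinned digest."""
    expected = TRUSTED_MANIFEST.get(path)
    if expected is None:
        return False  # unknown asset: reject
    return hashlib.sha256(payload).hexdigest() == expected

assert verify_asset("subtitles/en.vtt", GOOD_SUBTITLE)
assert not verify_asset("subtitles/en.vtt", GOOD_SUBTITLE + b"tampered")
```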

It's 100% certain that someone could pretend to be Elon, fire 200-500 staff from the various Netflix microservices teams and then hire just one junior tech to figure out how to distribute subtitles... and that would materially improve customer retention while cutting costs with no downside whatsoever.


In every single system I have worked on, tests were not just tests - they were their own parallel application, one that required careful architecture and constant refactoring to keep from getting out of hand.

"More tests" is not the goal - you need to write high impact tests, you need to think about how to test the most of your app surface with least amount of test code. Sometimes I spend more time on the test code than the actual code (probably normal).

Also, I feel like people would be inclined to go with whatever the LLM gives them, as opposed to really sitting down and thinking about all the unhappy paths and edge cases of UX. Using an autocomplete to "bang it out" seems foolish.


This is why you should _never_ make your UUID column the primary key.

For one, it's enormous. Now you have to copy that 128-bit int to every side of the relation.

Two, in most cases it's completely random. Unless you had the forethought to use something other than UUIDv4 (gen_random_uuid). So now you just have a bunch of humongous random numbers clogging up your indexes and duplicated everywhere for no good reason.

Use regular bigserial (64-bit) PKs for internal table relations and UUIDs (128-bit) for application-level identifiers and natural keys. Your database will be very happy!
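A quick, purely illustrative look at the size and ordering argument: a UUID is twice the width of a 64-bit key, and v4 values arrive in random order, so they scatter across a B-tree instead of appending to its right-hand edge the way serial keys do.

```python
import struct
import uuid

# Width: every foreign key and index entry carries the full key.
uuid_key = uuid.uuid4()
print(len(uuid_key.bytes))          # 16 bytes per UUID
print(len(struct.pack(">q", 1)))    # 8 bytes for a bigserial-style 64-bit int

# Ordering: serial keys are monotonic, so new rows append to the right-hand
# side of the index; random UUIDv4 keys land all over the key space.
serial_keys = list(range(50))
uuid_keys = [uuid.uuid4() for _ in range(50)]
print(serial_keys == sorted(serial_keys))  # True: append-only insert pattern
print(uuid_keys == sorted(uuid_keys))      # virtually always False
```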


Yep sqlc is more akin to Kotlin's SQLDelight https://github.com/cashapp/sqldelight

Hello everyone, it's the author here. I initially created 13ft as a proof of concept, simply to test whether the idea would work. I never anticipated it would gain this much traction or become as popular as it has. I'm thrilled that so many of you have found it useful, and I'm truly grateful for all the support.

Regarding the limitations of this approach, I'm fully aware that it isn't perfect, and it was never intended to be. It was just a quick experiment to see if the concept was feasible—and it seems that, at least sometimes, it is. Thank you all for the continued support.


Author here.

I knew that some people would react negatively to the term, but I can assure you the intention is for you to have a better understanding of exactly how and when you should use Spice and Rayon. I would recommend reading the benchmark document: https://github.com/judofyr/spice/blob/main/bench/README.md.

What people typically do when comparing parallel code is to only compare the sequential/baseline with a parallel version running at all threads (16). Let's use the numbers for Rayon that I got for the 100M case:

- Sequential version: 7.48 ns.

- Rayon: 1.64 ns.

Then they go "For this problem Rayon showed a 4.5x speed-up, but uses 16 threads. Oh no, this is a bad fit." That's very true, but you don't learn anything from that. How can I apply this knowledge to other types of problems?

However, if you run the same benchmark on a varying number of threads you learn something more interesting: the scheduler in Rayon is actually pretty good at giving work to separate threads, but the overall work execution mechanism has a ~15 ns overhead. Despite this being an utterly useless program, we've learnt something that we can apply later on: our smallest unit of work should probably be a bit bigger than ~7 ns before we reach for Rayon. (Unless it's more important for us to reduce overall latency at the cost of the throughput of the whole system.)
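The arithmetic behind that kind of estimate can be made explicit. This is just a back-of-the-envelope reading of the quoted per-item numbers, not the full benchmark methodology:

```python
# Quoted numbers from the 100M case: 7.48 ns/item sequential,
# 1.64 ns/item wall time on 16 Rayon threads.
seq_ns = 7.48
par_ns = 1.64
threads = 16

speedup = seq_ns / par_ns              # the headline "4.5x" figure
cpu_ns_per_item = par_ns * threads     # total CPU time actually spent per item
overhead_ns = cpu_ns_per_item - seq_ns # extra CPU time paid for parallelism
                                       # (rough; the ~15 ns figure in the text
                                       # comes from the fuller benchmark)

print(f"speedup: {speedup:.1f}x")                    # ~4.6x
print(f"per-item overhead: {overhead_ns:.1f} ns")    # ~18.8 ns upper bound
```

The point stands either way: once per-item work comfortably exceeds the scheduler's per-item overhead, the parallel version starts to pay for itself.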

In comparison, if you read the Rayon documentation they will not attempt to give you any number. They just say "Conceptually, calling join() is similar to spawning two threads, one executing each of the two closures. However, the implementation is quite different and incurs very low overhead": https://docs.rs/rayon/latest/rayon/fn.join.html.

(Also: If I wanted to be misleading I would say "Spice is twice as fast as Rayon since it gets 10x speed-up compared to 4.5x speed-up")


You really think you could achieve an 80% success rate with just syntactic transformations, while the article says they only reached a 45% success rate with fine-grained AST transformations?

I am no vim hater, but allow me to cast a large, fat doubt on your comment!


Paul is fundamentally wrong about one thing here: Larry knew Google was going to be huge from the beginning. His brother Carl was already a multimillionaire in tech and taught him everything required to set up a future successful company. Larry made it clear to me that he always planned Google to be a large cash-generating cow to invest in long-term AGI, and there were only a few times at the beginning when he truly feared that wouldn't happen.

The best advice to give children is that they should be born to rich, influential and well-connected parents. That will give them a huge advantage in all walks of life including if they want to start a startup. Larry and Sergei had the benefit of rich, influential and well-connected parents, as did Peter Thiel, Elon Musk, and Paul Graham himself.

I mean, like most advice from my generation and prior ones, the notion that people in their 20s can't fuck up too badly because they have little to lose applies considerably less well to all but the fairly affluent people in their 20s today. For many, taking risks means being gated out of the financial system by bad credit, falling behind in an unforgiving job market, and possibly even destitution if they don't have a safety net of some kind.

To be clear, this was always true to some degree, but inequality is higher, industries have more power and thus workers have less, and safety nets that don't come from your parents being well-off are weaker (and fewer people's parents are well-off) than when I was a kid. This has actually been true for a few subsequent generations of kids.


At least from my perspective, it's just the relentless pursuit of driving stock prices up.

As a data point, I'd like to chime in here. I have been a 15 year user of tmux (and screen before that) and never thought I'd change my development habits. Over the holidays I decided I would do one of those once-every-five-years upgrades to my vim setup as I had accrued dozens of vendored plugins in normal vim and wanted to see what the big deal with neovim was.

I bit the bullet and evaluated some of the "distributions" (AstroNvim and kickstart.nvim) and played around with all the new lua plugins that I had never thought I needed (why use telescope when fzf.vim worked so well?).

Anyways, after a month of tweaking and absorbing, I found myself running Neovide only, and doing something I never thought I'd see, running tmux from within neovim/neovide. I think this only works (for me) because of session management (there are half a dozen plugins for handling quickly changing 'workspaces') and because the built-in terminal (with a very useful plugin called toggleterm: https://github.com/akinsho/toggleterm.nvim) works so well.

I have not stopped using tmux and layouts, and it sits in another fullscreen iterm2 workspace, but I find that I now spend 90% of my time using a fullscreen neovide and summoning/toggling tmux momentarily for running commands.

Of course, the caveat here is that my preferred mode of operation is being fullscreen as often as possible. I think if your preferred mode of operation is to always see splits then running neovim from the terminal within tmux is still the way to go.

As for why I like neovide? I find the animations, when tweaked to be less 'cool' are extremely useful to see where the cursor jumps to. I am also a huge fan of the fact that I can finally use 'linespace' to put some space between my lines of code -- it is an aesthetic I didn't realize I wanted.


Netflix produced some interesting original shows. Most of them got canceled after a single season, on a cliff-hanger. That got frustrating, so we stopped watching Netflix original shows. Then we stopped watching Netflix. Then we canceled Netflix.

I feel like there are some very precise ways to think about these things.

For example, a backlog is a priority queue. A priority queue can only be long if work is added more frequently than it is removed.

Work can be removed if it is either completed or abandoned.

Work can be added when users request features, users find bugs, product owners predict features will be useful, or the dev team adds technical improvements.

Talking to users will increase the bugs identified and the user requested features.

So by these relationships, talking to customers will directly increase the size of the backlog.

And the overall backlog length may be large due to many factors unrelated to talking to customers: slow development, never deleting out of date work, adding too many technical tasks, adding too much unvalidated vision work, etc.
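The relationships above can be sketched as a toy simulation (hypothetical numbers, just to make the queue dynamics concrete):

```python
import heapq

def simulate(weeks, added_per_week, removed_per_week):
    """Model a backlog as a priority queue: items arrive (user requests,
    bugs, tech tasks) and leave only when completed or abandoned."""
    backlog = []          # min-heap of (priority, description)
    sizes = []
    counter = 0
    for _ in range(weeks):
        for _ in range(added_per_week):
            counter += 1
            heapq.heappush(backlog, (counter % 5, f"task-{counter}"))
        for _ in range(min(removed_per_week, len(backlog))):
            heapq.heappop(backlog)   # completed or abandoned
        sizes.append(len(backlog))
    return sizes

# Same removal rate; talking to more users only raises the arrival rate.
quiet = simulate(weeks=10, added_per_week=3, removed_per_week=2)
chatty = simulate(weeks=10, added_per_week=6, removed_per_week=2)
print(quiet[-1], chatty[-1])  # 10 40
```

Whenever arrivals exceed departures, length grows without bound, which is exactly why a long backlog by itself tells you little about team health.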

Does anyone know of any books, blogs or youtubers that bring this kind of logical system level thinking to software work management?


Direct your passion for getting musicians paid to Spotify and the distribution system, not to this. If everyone who uses this software were to use Spotify direct, ads and all, in the long run it would make pennies for the artists at best. You're better off listening to music however you please and buying albums on Bandcamp to support the artists; a lifetime of spotify listening will make less money for an artist you like than buying a single album from them on Bandcamp.

Even if you only listen to one artist, 8 hours per day, 365 days a year, they will earn a whopping... 100 bucks from Spotify.
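A rough sanity check of that figure, with assumed numbers (Spotify does not publish a fixed per-stream rate, and the artist's cut varies by deal):

```python
# Assumptions, not published figures: ~3.5 minutes per track, ~$0.003 blended
# payout per stream, with ~70% of that reaching the artist after cuts.
hours_per_day = 8
days = 365
minutes_per_track = 3.5
payout_per_stream = 0.003   # USD, assumed blended rate
artist_share = 0.7          # assumed fraction reaching the artist

streams = hours_per_day * 60 / minutes_per_track * days
artist_income = streams * payout_per_stream * artist_share
print(f"{streams:,.0f} streams -> ${artist_income:,.0f}/year")  # ~$105
```

Even a dedicated all-day listener generates on the order of 50,000 streams a year, which lands in the low hundreds of dollars at best, consistent with the "~100 bucks" claim.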


I have to say I'm really humbled to suddenly see this on the front page. Today was a particularly hard day; I won't go into details but taking care of a permanently disabled invalid involves a lot of ups and downs and some fairly messy manual labor to keep them comfortable and in good shape.

I love you all. Hug your kids if you have em.

EDIT: The above blog post here was one of three things I wrote in the immediate aftermath of the tragedy to try to process my feelings and exorcise my dark thoughts. I have two more which you can find below:

The Ballad of St. Halvor (a poem): https://www.fortressofdoors.com/st-halvor/

Four Magic Words (short story, somewhat dark): https://www.fortressofdoors.com/four-magic-words/



As an individual contributor software engineer, I've worked with offshore developers before and always found the experience unproductive. The problem is always management tells me to help them so they can learn. I don't mind being in a mentor role, but I'm not interested in mentoring a temp contract worker.

Google is a global company. They make money from developing countries, they should have employees in those countries as well. Or else the money flows only back to California.

But they don't want employees in Bangalore. They want indentured servants and poverty wages. 25 years ago, I was told there would be no software developers in America, and yet the number has gone up every year.


The article talks about range concerns as if they're simply incorrect or ill-founded:

> Niedermeyer said that while an electric car can meet most people's driving needs, it struggles with edge cases like road trips because of the need to recharge. Since Americans have been promised a one-to-one substitute for their gas cars, this seems like a failure; an EV should be able to do everything a gas car can. This idea persists even though in 2023 the average US driver traveled only about 40 miles a day, and in 2022 about 93% of US trips were less than 30 miles. Still, in a survey conducted by Ipsos last fall, 73% of respondents indicated they had concerns about EV range.

93% of trips are less than 30 miles, but the vast majority of drivers take occasional trips that are beyond the range of an electric car. It's no wonder that 73% of drivers have range concerns: no one is concerned with the EV getting through their commute, they're concerned with the EV getting them to their distant family / weekend trip / vacation home / etc. The argument is a clear strawman; it's playing down what people have genuine concerns with and focusing on the range aspect that's obviously unimportant.


The edge cases are the important cases though. Those parties I throw are the highlight of my year. My parents staying here with me is important to me, I wouldn't have it any other way. Yeah I have a desk job but that landscaping work I do in my weekend is one of my favourite hobbies.

I refuse to live my life as if these things aren't important to me. I refuse to average my entire life style down to my median day. My median day is boring.


Good news for those 700: because we don't have tech worker unions, they are able to individually bargain the terms of their layoff!

...right?


This is one of the other three secret blends that I think could bring memory safety to C++!

I wrote a bit about using type-after-type as the basis for an entire language (Arrrlang, with a parrot mascot) in my last article [0] and a little bit in a post about memory safety for unsafe languages [1] which we eventually talked about at Handmade Seattle.

The downside is the extra memory usage, but I think we can combine it with temporary regions [2] to reduce it pretty drastically.

TIL the phrase type-after-type! I've also heard it referred to as type stability. [3] [4] If you squint, this is what we often do manually with Rust when the borrow checker influences us to use indices/IDs into central collections.

[0] https://verdagon.dev/blog/myth-zero-overhead-memory-safety

[1] https://verdagon.dev/blog/when-to-use-memory-safe-part-1#the...

[2] https://verdagon.dev/blog/zero-cost-borrowing-regions-overvi...

[3] https://www.usenix.org/legacy/publications/library/proceedin...

[4] https://engineering.backtrace.io/2021-08-04-slitter-a-slab-a...


Don't eat crow simply because he followed through on one thing. He's still a blatant liar and fascist, spewing bullshit about things like being a "free speech absolutist." Giving him credit on this one thing is playing into his hand by scrubbing his reputation in way that allows him to get away with his more insidious plans (e.g. making Twitter more right-wing).

And we're shocked... why?

These aren't real contractors with complex tasks and negotiations for 3-5x the salary of a normal employee. I've done that: no benefits, but paid EXTRAORDINARILY well, like in the range of $300/hr.

But "gig work companies" are about bypassing and subverting normal employment by calling it "contract work", and having NONE of the meeting of the minds of a proper contractor.

And it's blatantly obvious why - it lowers cost and transfers liability to people who don't have a clue what those externalized liabilities really are.


HN could be a little less pessimistic. People aren't choosing microservices merely because of the hype.

Here's why I'd choose microservices for a large project:

1. People don't produce uniform code quality. With microservices, the damage is often contained.

2. Every monolith is riddled with exceptional cases within a few years. Only a few people know about those corner cases, and the company becomes dependent on those developers.

3. It's easier for junior developers to start contributing. With a monolith you'd need to be rigid with code reviews, whereas you could be a little lax with microservices. Again, ties into (1) above. This also allows a company to hire faster.

4. Different modules have different performance and non-functional requirements. For example, consider reading a large file. You don't want such an expensive process to compete for resources with say a product search flow. Even with a monolith, you wouldn't do this - you'd make an exception. In a few years, the monolith is full of special cases which only a few people know about. When those employees leave, the project sometimes stalls and code quality drops. Related to (2).

5. Microservices have become a lot easier thanks to k8s and docker. If you think about it, microservices were becoming popular even before k8s became mainstream. If it was viable then, it's a lot easier today.

6. It helps with organizing teams and assigning responsibility.

7. You don't need super small microservices. A microservice could very well handle all of a module - say all of payments (payment processing, refunds, coupon codes etc), or all of authentication (oauth, mfa etc).

8. Broken Windows Theory more often applies to monoliths, and much less to microservices. Delivery pressure is unavoidable in product development at various points. Which means that you'll often make compromises. Once you start making these compromises, people will keep making them more often.

9. It allows you the agility to choose a more efficient tech/process when available. Monoliths are rigid in tech choices, and don't easily allow you to adopt a different programming language or a framework. With Microservices, you could choose the stack that best solves the problem at hand. In addition, this allows a company to scale up the team faster.

Add:

10. It's difficult to fix schemas, contracts and data structures once they're in production. Refactoring is easier with microservices, given that the implications are local compared to monoliths.


Author of the article here. Thanks for all the thoughtful comments and personal experiences! A couple of clarifications:

1. I'm not saying your product needs to look like Craigslist all the way up until the end, just that it's a key test in early-stage concept validation. Later on, once VALUE is proven, by all means consider design polish to make the experience even better!

2. This idea is not unique or brand new (I'm an old Balsamiq fan, Basecamp fan, etc.). The post is just a direct response to the increase in AI-generated interface mockups, trying to get people to think a bit more deeply about those.
