
I think you're missing some of the perspective. It is actually much harder to make a working version of what this company is providing than you describe.

Just the act of running a single VM, to do it right, requires technical expertise (just because you find it easy doesn't mean it is, or that most people would do it right), plus maintenance tasks, operational overhead, and so on. You have to absorb the extra infra cost in your corp cloud budget, write the extra software to handle advanced caching on auto-scaling instances across multiple platforms, and understand how Docker works under the hood (far fewer people know that than you assume)... It's extra work someone has to do that has no bearing on what anyone actually wants to be doing, which is just running a Docker build faster.
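
To give a sense of the "extra software" part: even the basic version of this, sharing a build cache between ephemeral CI machines through a registry, already looks something like the following (a sketch only; the registry and image names are made up, and it assumes a buildx builder is already configured):

    # export the layer cache to a registry so the next ephemeral runner can reuse it
    docker buildx build \
      --cache-to type=registry,ref=registry.example.com/myapp/buildcache,mode=max \
      --cache-from type=registry,ref=registry.example.com/myapp/buildcache \
      -t registry.example.com/myapp:latest \
      --push .

And that's before you think about cache size limits, garbage collection, registry auth, and per-branch cache keys.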

I would have to assign two engineers to build and maintain this for a medium-sized company, at $120K per employee, plus infra cost, plus maintenance, plus the lead time to build it, etc. And they'd probably do a crap job.

So, pay $50 a month for a working solution right now? To increase the velocity of software development, with no other changes? Sign my ass up. It's a tragedy that people don't understand the value here.



> I think you're missing some of the perspective.

Yeah, that's why I'm asking. It's genuine curiosity, so thanks for your answers.

Yes, if you wanted to make the same product as these guys you'd have to spend the same amount or more, so sure, that'd be a poor use of money. No disagreement there. Productization is a lot of work.

But you wouldn't need a full product to solve this for your own use case! I guess what I'm struggling with is the apparent reluctance to fix this problem by just running ordinary computers. We're hackers, we're software developers, this is our bread and butter, right? How can we as an industry apparently be forgetting how to set up and run computers? That's the message that seems to be coming through here: it's too hard, too much work, the people who can do it are too expensive. That'd be like chemists forgetting how to use a Bunsen burner and needing to outsource it!? Computers are cheap, they're fast, they can basically maintain themselves if told to! To make your Docker builds faster you can just run them on a plain vanilla Linux machine that sits around running builds in the normal way, the same way a laptop would run them, with a permanently attached disk and cache directory and so on.
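
Concretely, the kind of setup I have in mind is nothing fancier than this (a sketch; the hostname and user are made up, and it assumes Docker is already installed on the remote box):

    # a plain Linux machine with Docker on it, reached over SSH
    docker context create buildbox --docker "host=ssh://build@buildbox.internal"

    # run ordinary builds on it; the layer cache lives on its permanently attached disk
    docker --context buildbox build -t myapp .

    # or register it as a buildx builder if that's what your CI already uses
    docker buildx create --name buildbox-builder buildbox
    docker buildx build --builder buildbox-builder -t myapp .

The machine keeps its cache between builds precisely because it isn't ephemeral, which is the whole trick.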

I totally get it that maybe a new generation has learned programming with Node.js on a Mac and AWS, and maybe they haven't ever installed Linux, in the way we all seemed to learn how to run servers a couple of decades ago. Times change, sure, I get that. Still, the results are kind of mind-boggling.


Well, not really. I've literally spent the past few weeks banging my head against these things.

Especially if you want or need multi-arch. That basically requires buildx, which doesn't cache locally by default, and there's a half-dozen types of caching to figure out. Then buildx is very buggy and needs QEMU set up even when building natively, otherwise you run into decade-old bugs doing things like running sudo in a Dockerfile.
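
For anyone hitting the same wall, the baseline multi-arch incantation looks roughly like this (a sketch; the builder name and image ref are placeholders):

    # register QEMU binfmt handlers so cross-arch (and some "native") builds behave
    docker run --privileged --rm tonistiigi/binfmt --install all

    # buildx needs a docker-container builder to produce multi-platform images
    docker buildx create --name multi --driver docker-container --use

    docker buildx build --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest --push .

Getting that far is the quick part; the caching behaviour is where it gets fiddly.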

It took a couple of weeks of on-and-off tinkering to get a stable ARM builder running on an M1 Mac. Getting the GitHub Actions runner to run stably and not time out was a PITA; it required IT tuning CPU limits and page caching. Not fun.
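
For context, registering the self-hosted runner is the short part (a sketch; the runner version, repo URL, and token below are all placeholders), it's the tuning around it that ate the time:

    # download and register a self-hosted GitHub Actions runner on the M1 box
    curl -L -o actions-runner.tar.gz \
      https://github.com/actions/runner/releases/download/vX.Y.Z/actions-runner-osx-arm64-X.Y.Z.tar.gz
    tar xzf actions-runner.tar.gz
    ./config.sh --url https://github.com/your-org/your-repo --token <RUNNER_TOKEN>

    # run it as a service so it survives reboots
    ./svc.sh install && ./svc.sh start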

We run native machines, but I would've much preferred a cloud solution so I could do my actual job.


I wonder how much of this is specifically Docker-related pain? (I try to avoid it.) It's super fascinating to see how much ARM support is coming up in this thread. I guess ARM servers are finally happening, huh. Our CI cluster has an M1 Mac in it, and it took about an hour to set up. But that's doing JVM stuff with no Docker or QEMU involved, so multi-arch just isn't something we need to think about, and it's no different from any other machine beyond being much faster.

For servers I'd have thought you'd make a decision up front about whether to use classical x64 machines or ARM, based on load tests or a cost/benefit analysis, and then build one or the other. It sounds like a lot of people are putting a lot of effort into the optionality of having both, and then using languages and tools that can't cross-compile or JIT-compile. Are you using Rust or Go or something? Hmmm.
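
Just to make the cross-compiling point concrete, this is the kind of thing I mean (a sketch, assuming a pure-Go service with cgo disabled; Rust with the right target toolchain is similar):

    # build both architectures on whatever host you have, no emulation involved
    CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o bin/myapp-arm64 .
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/myapp-amd64 .

    # each binary then gets copied into a minimal per-platform image

That sidesteps QEMU entirely, which is why the choice of toolchain matters so much here.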



