It failed because the marketing is outright bunk and, frankly, dishonest.
If I compile with Jenkins, Actions or Earthly, the compile time is going to be the same under each build system, assuming the same build node. Claiming you're 20x faster when CI is firing within seconds is kind of meaningless. Caching and parallel execution are age-old concepts in CI, and every modern build system can do both.
CI is all about feedback. I didn't see much here in terms of collaboration and bubbling data up (admittedly, I didn't look very hard), but that should be front and center. Lastly, I'm not interested in adopting a DSL for my builds ever again, sorry.
That's your opinion. Regardless, I have to set those systems up anyway to use Earthly. Jenkins caches by default and has parallel(); Actions has actions/cache, and Actions jobs run asynchronously, so it's parallel by default. Both can use Docker and take advantage of Docker's cache management, and make is a great tool, but again, any build system can use make.
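To illustrate, the actions/cache setup mentioned above is only a few lines of YAML (the cache path and key here are made-up examples; point them at whatever your toolchain actually writes):

```yaml
# Hypothetical GitHub Actions job restoring a build cache between runs
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          # Directory your build tool uses as its cache (illustrative path)
          path: ~/.cache/build
          # Cache is invalidated whenever the lockfile changes
          key: build-${{ runner.os }}-${{ hashFiles('**/lockfile') }}
      - run: make
```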
I don't think it's harder at all. In fact, in my mind Earthly is more money, more steps, more time and energy, and more vendor lock-in for the same output you can get from just about any modern build system.
How do you usually do caching for complex, multi-language projects in this case? How do you guarantee _only_ changed targets are rebuilt when dependencies, like a language version or external library, change?
With Earthly you just... write your Earthfile. You get everything for free. Caching and parallelization _also_ work locally, so I see that speedup in development.
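For anyone who hasn't seen one: an Earthfile reads like a Dockerfile with named targets that can reference each other. A minimal sketch (target names, base image, and paths are invented for illustration):

```
VERSION 0.7
FROM golang:1.21-alpine
WORKDIR /app

deps:
    # This layer is cached until go.mod/go.sum change
    COPY go.mod go.sum ./
    RUN go mod download

build:
    FROM +deps
    COPY . .
    RUN go build -o out/app .
    SAVE ARTIFACT out/app
```

Running `earthly +build` executes the target in a container, reusing cached layers locally the same way it would in CI.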
The dependency graph is a feature of the build system, which has a better understanding of the language's import rules than the CI system can (or whatever terminology you want to use). Good ones can hermetically, concurrently, and deterministically rebuild only the files that changed.
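The core idea behind that kind of incremental rebuild can be sketched in a few lines: hash each input, and mark a target dirty only when some input's hash differs from the cached one. This is a toy illustration of the principle, not how any real build system is implemented:

```python
import hashlib

def digest(content: bytes) -> str:
    """Content hash of a build input."""
    return hashlib.sha256(content).hexdigest()

def needs_rebuild(inputs: dict[str, bytes], cache: dict[str, str]) -> bool:
    """A target is dirty if any input's hash differs from the cached hash."""
    return any(cache.get(name) != digest(data) for name, data in inputs.items())

cache = {"main.c": digest(b"int main(){}")}
print(needs_rebuild({"main.c": b"int main(){}"}, cache))           # unchanged input
print(needs_rebuild({"main.c": b"int main(){return 1;}"}, cache))  # changed input
```

Real build systems layer language-aware dependency discovery on top of this, so a changed header invalidates exactly the translation units that include it.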
Earthly is just a thin wrapper on top of Docker multi-stage builds[1]. Such Docker layers are one-dimensional, meaning that if one of the steps calls “make”, that step will have to rebuild the entire layer, not just the one C++ file that changed. There are solutions to this with --mount=type=cache, but they take away the reproducibility and are equally possible with plain Dockerfiles.
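For reference, the cache-mount workaround mentioned above looks like this in a plain Dockerfile (BuildKit required; the compiler, paths, and ccache usage are just one example of the pattern):

```dockerfile
# syntax=docker/dockerfile:1
FROM gcc:13 AS build
WORKDIR /src
COPY . .
# The ccache directory persists across builds, so only changed
# translation units are recompiled -- but the cache contents are
# not part of the image, which is the reproducibility trade-off.
RUN --mount=type=cache,target=/root/.ccache \
    make CC="ccache gcc"
```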
[1] tbh I don’t know if this is the case, but that’s how others explain it. This is the core problem with Earthly: they don’t say how it works, just that you will magically get 20x faster. Without understanding how, I don’t know how to take that seriously, and I can only assume it’s snake oil.
I mean, there is a need for a better Dockerfile syntax. But then market it as such! Not as magic 20x faster builds.
> How do you usually do caching for complex, multi-language projects in this case?
As a software engineer, I do my best to keep my systems as simple as possible. Each build stage either caches artifacts or delivers deployable packages, and they consume dependencies that are either built in each pipeline run or already delivered through other means.
> How do you guarantee _only_ changed targets are rebuilt when dependencies,
Why do you want that? Other than saving a few pennies here and there in pipeline runs, you have nothing to gain. Your builds should consume dependencies already packaged and delivered, whether from an OS image, your own container image, or a package deployed by you or a third party, and your build must be reproducible and tied to each and every pipeline run.
> like a language version or external library, change?
If you're mindlessly bumping language versions and external libraries, you have more pressing problems with your setup than what's supported by the CI/CD system you're using.
I think there is certainly some value to that (the other posts don't appear to fully grasp the complexity of the problem), but Earthly feels closer to an open source consultancy project than a venture-capital-funded startup.
It's one thing to know a build failed; it's another to know why and how it failed, where it failed, and what the cause was. If I have to swim through, or context-shift to, a separate build system's output to get that, what is this actually doing for me?
That looks really cool, thanks for the link. Something like this well integrated into Jenkins would be super useful.
Our DevOps team built something custom: they check all test logs of all builds of all branches and aggregate them into a Grafana dashboard. We use it to monitor failing tests and get a better grip on flakiness. Works okay, but could be better.
That doesn't help with non-deterministic failures and I've also yet to see a true "write once, run anywhere" system ever. It may be 99% "write once, run anywhere" but there's always that 1% edge case.
Exactly. This just reminds me of "works on my computer". Building locally is more advantageous for developing a CI workflow, but I want to be as close to prod as possible, and doing that on snowflake developer workstations is an exercise in futility.
Earthly makes use of BuildKit, which essentially executes the build steps in containers. It provides more isolation from the CI runner / dev workstation. Instead of having developers manage their own build tools, Earthly makes it easy to have the build definition manage them.
All flakiness I’ve observed in code I’ve written has been dominated by things like “this job isn’t getting enough cpu cycles and the test assertion is too aggressive in such a scenario” or “lack of CPU cycles is triggering a race condition” or “statistical test X isn’t written robustly”. Not sure how containers solve these problems. My point being that this tool solves some problems for some teams and maybe for a lot of teams that struggle with this problem, but hard problems remain and this isn’t a silver bullet for that. You can’t outsource stability of the project-specific test infrastructure which is where most of the cost lies these days I think.
Nix solves all these problems for good.
It's a monster of a system, but if you're migrating anyways then might as well migrate to the real deal, not to some fly-by-night half-baked thing that gets you halfway there.