I started work in December at an awesome company as a build/release engineer. Our current release workflow for our open-source components:
1. Run the CI build/test pipeline
2. If it passes, run the 'Publish' promotion (manually, on any pipeline you want)
3. The pipeline runs a make target
4. The make target starts up a VM (from the VM it's running on)
5. The make target SSHes to the new VM to run another make target
6. That make target builds a binary
7. The binary runs more make targets
8. The make targets run various scripts in docker containers on the VM the other VM started
9. The scripts in the docker containers build the components
10. The binary from step 6 runs more make targets to publish the releases
11. We test everything with another promotion, which runs make targets that build docker containers to run python scripts (pytest)
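Schematically, the make-starts-VM, SSH-runs-make, make-builds-binary part of that chain is something like this (a rough sketch; all target and script names are invented, not our real ones):

```make
# Rough sketch of the nesting (names invented)
release:                     # runs on the CI VM
	./scripts/start-vm.sh release-vm
	ssh release-vm 'make -C /build release-inner'

release-inner:               # runs on the freshly started VM
	go build -o releasetool ./cmd/releasetool
	./releasetool publish    # which in turn runs more make targets in docker
```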
All of this is built on a complicated web of wildcarded makefile targets, which need to be interoperable and support a few if/else cases for specific components.
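To give a flavor of that style (a made-up sketch, not our actual makefiles — component and script names are invented), it's roughly:

```make
# Hypothetical sketch of the wildcarded-target style (names invented)
COMPONENTS := api agent cli

# One stemmed rule fans out to every component...
build-%:
	./scripts/build.sh $*

# ...with if/else special cases bolted on for specific components.
publish-%:
	if [ "$*" = "agent" ]; then \
		./scripts/publish-agent.sh; \
	else \
		./scripts/publish.sh "$*"; \
	fi
```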
My plan is to migrate all of this to something simpler and more straightforward, or at least more maintainable, which will honestly probably turn into taskfile[0] instead of makefiles, plus simple python scripts for the glue that ties everything together or handles the more complex logic.
My hope is that it can be more straightforward and easier to maintain, with more component-ized logic, but realistically every step in that labyrinthine build process (and that's just the open-source version!) came from a decision made by a very talented team of engineers who know far more about the process and the product than I do. At this point I'm wondering if it would make 'more sense' to replace it with a giant python script of some kind and get access to all the logic we need all at once (it would not).
Taskfile looks worse than make to me in every possible dimension (except maybe windows compatibility, but I don't believe that it meaningfully supports that).
You should at least read and understand the paper "Recursive Make Considered Harmful" before attempting to replace make with something "better".
Most people use make incorrectly (and it sounds like the system you describe makes most of the classic mistakes). The paper I linked explains how to use Make's templating language to help it scale to complicated setups.
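The core trick from the paper — one top-level makefile that expands a template per module, so a single make invocation sees the whole dependency graph instead of recursing — looks roughly like this (module names invented):

```make
# Non-recursive make: per-module variables stamped out from one template
MODULES := libfoo app

define module_template
$(1)_SRCS := $$(wildcard $(1)/*.c)
$(1)_OBJS := $$($(1)_SRCS:.c=.o)
OBJS      += $$($(1)_OBJS)
endef

$(foreach m,$(MODULES),$(eval $(call module_template,$(m))))

all: $(OBJS)

%.o: %.c
	$(CC) -c $< -o $@
```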
Here are a few critiques of taskfile (from skimming their documentation):
- The syntax is incredibly verbose and weird. For instance, you have to write a tree of yaml nodes to loop over a set of files. Make's syntax is weird, but at least it is not verbose.
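For example (based on my reading of their docs, so verify the exact syntax), looping over a list of files is a tree of YAML nodes where make would use a one-line pattern rule:

```yaml
# Taskfile loop over a fixed list of files, as I read their docs
version: '3'

tasks:
  compress:
    cmds:
      - for: ['a.txt', 'b.txt']
        cmd: gzip -k {{.ITEM}}
```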
- The documentation makes no mention that I could find of deriving a set of tasks from a list of files (Make's $(wildcard) macro, or stemming rules like "%.o : %.c"). So, I guess the taskfile will end up being huge (like the size of autogenerated makefiles from autoconf or cmake), or you'll end up using additional ad-hoc build systems that are invoked by your taskfile.
- From the documentation, they haven't thought through having more than one project per unix account: "When running your global Taskfile with -g, tasks will run on $HOME by default, and not on your working directory!"
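Their documented workaround, as I read it, is to opt in per task via a special variable — which you have to remember to do everywhere:

```yaml
# ~/Taskfile.yml — making a global task run where you invoked it (per their docs)
version: '3'

tasks:
  build:
    dir: '{{.USER_WORKING_DIR}}'
    cmds:
      - make build
```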
- I suggest taking an idiomatic but non-trivial bash script (which is 99% of what make invokes, and which taskfiles also support), and then trying to port it to python directly. There's usually a 10-100x line-of-code blowup from doing that, and the python version usually bitrots every year or so (vs. every few decades for shell).
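As a small made-up illustration of the blowup: the shell idiom `grep -l ERROR *.log` (one line) versus a direct Python equivalent (function and parameter names are mine, invented for the example):

```python
# Python port of the one-line shell idiom: grep -l ERROR *.log
# (hypothetical example; names invented)
from pathlib import Path


def files_containing(pattern: str, glob: str = "*.log") -> list[str]:
    """Return names of files in the current directory whose contents
    include `pattern`, like grep -l."""
    matches = []
    for path in sorted(Path(".").glob(glob)):
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue  # unreadable file: grep would warn; we just skip
        if pattern in text:
            matches.append(path.name)
    return matches
```

And that's before argument parsing, logging, and the other things a "real" python script grows.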
[0] https://taskfile.dev/