Ya know what's great about debating tools? Because we have a separation of concerns, different tools can innovate at different rates and different people can make different choices.
You can move from deploying to AWS to Kubernetes all while your CI pipeline and config management stay the same. Maybe at another time you change your CI system.
In all of this we can even have tool fights. Jenkins vs CircleCI. vim vs emacs. And new players can come along like vscode.
Different departments in the same companies, sometimes billion dollar companies, can even do this.
This is one of the reasons I personally like a separation of concerns with projects.
There is one thing about Helm...
> But to keep everyone using the upstream Helm chart, wouldn't all Helm charts eventually have to look the same, and just be a templated version of the entire Kubernetes API with all CRDs included?
I don't think so. Without being long winded, this is about user experience and encapsulation. Kubernetes is hard to learn. On top of that, you have to learn the business logic for installing all the things you use. To use the old wordpress example, should someone installing wordpress into a cluster need to know the operational needs of MySQL to create the k8s config for it? Most app devs don't want to learn all the k8s object configs... it's a lot to learn.
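To make the encapsulation point concrete, here's roughly what the consumer-facing surface of a wordpress chart looks like. The names are illustrative (they vary by chart and chart version), but the idea is that the installer touches a handful of values while the chart's maintainers encode all the MySQL wiring (Secrets, Services, volume claims) behind them:

```yaml
# values.yaml -- the only file a wordpress installer typically touches.
# (Illustrative keys, not tied to a specific chart version.)
wordpressUsername: admin
wordpressPassword: changeme
mariadb:
  auth:
    rootPassword: changeme  # the chart expands this into a Secret plus the
                            # StatefulSet/Service config the DB actually needs
```

Something like `helm install my-blog <repo>/wordpress -f values.yaml` is the whole user experience; the dozens of underlying k8s objects never surface.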
As for CRDs, that's about dependency management.
But if we extend Kubernetes with lots of CRDs, we have to start tracking which CRDs are installed in which clusters. And what experience does that provide for end users? How do we keep the UX barrier to entry from being too high for the people who need to use it?
Are users interested in Kubernetes or are they interested in their apps and core business logic?
That's one of my favorite things about the composability of the Kubernetes tools. I don't believe there's a reason for a "tool fight" in the Kubernetes ecosystem. There might be disagreements about best practices, for example how to manage code in a cluster (gitops vs kubectl apply vs helm install vs ...), but that's not a tool fight as much as a methodology difference. And there's room for more than one pattern to emerge.
I think that any tool that is able to deploy an open source project to my cluster, while minimizing the amount of operational overhead I need to assume, is the tool that I want. I don't care if that is Kustomize, Helm, Ksonnet, or anything else, as long as it meets the requirements of a) it has to work with my environment and b) it shouldn't introduce unnecessary operational overhead.
You also mention that Kubernetes is hard to learn, which is absolutely a problem. Adoption is growing, but it's getting harder to learn as more features get merged in. And you are right, nobody should have to learn Kubernetes YAML to deploy a standard, off-the-shelf Wordpress installation. But what about more complicated software that needs "last mile" customization to work in a specific environment? This blog post (https://testingclouds.wordpress.com/2018/07/20/844/) shows a great way to combine the power of the Helm community and chart format with the last-mile tooling that Kustomize can provide to help keep charts simple but still flexible. That feels much better than forking the chart and maintaining a separate copy of it just to make a few changes that are specific to a single use case.
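The pattern sketches out something like this (file names are illustrative): render the upstream chart to plain YAML, then layer environment-specific patches on top with Kustomize instead of forking the chart:

```yaml
# kustomization.yaml -- last-mile tweaks on top of the rendered upstream chart.
# wordpress-rendered.yaml is the output of e.g.:
#   helm template my-blog <repo>/wordpress > wordpress-rendered.yaml
resources:
  - wordpress-rendered.yaml
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-blog-wordpress
      spec:
        replicas: 3  # environment-specific change, no chart fork required
```

`kubectl apply -k .` (or `kustomize build . | kubectl apply -f -`) then applies the upstream chart plus only your local deltas, so pulling in a new chart release doesn't mean re-merging a fork.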
I have mixed feelings about Kustomize getting merged into kubectl. I don't like the idea of Google "crowning" a winner, and I hope the sig-architecture group and Google teams remain diligent to prevent that from happening. Kustomize is not a replacement for Helm, it's a very good tool to handle specific use cases that often involve Helm charts at the source.