Hacker News

Kubernetes is engineered to solve the five hundred different problems that different people have in a way that works for them. If you don't do that then everyone will complain about their feature or some edge case being missing and not use the system (see comments on mesos in this thread). That's not over engineering, that's the required amount of engineering for this sort of system.


Yeah, k8s people keep telling me this.

But also: k8s "secrets" are not, in fact, secret; you can't actually accept traffic from the web without some extra magic cloud load balancer (cf. MetalLB, https://v0-3-0--metallb.netlify.app/ — maybe eventually); and you can't properly run anything stateful, like a database (maybe soon).
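To illustrate the secrets point: by default a Secret's `data` is only base64-encoded, and it sits unencrypted in etcd unless you separately configure encryption at rest. A minimal sketch (the name and value here are made up):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque
data:
  # "aHVudGVyMg==" is just base64("hunter2") -- encoding, not encryption
  password: aHVudGVyMg==
```

Anyone who can read the object (or the etcd backing store) can recover the value with a single `base64 -d`.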

Forget covering "everyone's" use cases: From where I'm sitting, k8s is an insanely complicated system that does a miserable job of covering the 95% use cases that Heroku solved in 2008.

It's great that k8s (maybe) solves hard problems that nobody except Google has, but it doesn't solve the easy problems that most people have.


> It's great that k8s (maybe) solves hard problems that nobody except Google has, but it doesn't solve the easy problems that most people have.

But it does though. Your objection is that you don’t understand what secrets are and think load balancers are “magic”?

Come on. Also see this[1]

1. https://www.nginx.com/products/nginx-ingress-controller/


Yes, yes, there's a million zillion Kubernetes ingress things, none of which are really good enough that anyone uses them without a cloud-provider LB in front of them. Also, they only deal with HTTP/S traffic. Got other types of traffic you want to run? Too bad, tunnel it over HTTPS.
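For what it's worth, the HTTP-only limitation is baked into the Ingress resource itself: its spec only models HTTP routing rules. A minimal sketch, using the current API version (hostname and Service name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                   # hypothetical
spec:
  rules:
    - host: example.com       # hypothetical hostname
      http:                   # the only rule type the Ingress API defines
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web     # hypothetical backing Service
                port:
                  number: 80
```

Anything that isn't HTTP(S) has to go through a Service instead, or through controller-specific extensions.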

If you want a picture of the future of computing, imagine everything-over-HTTPS-on-k8s-on-AWS stomping on a human face forever.


> Got other types of traffic you want to run? Too bad, tunnel it over HTTPS.

Then you’d expose a service, not an ingress. You can do this in a variety of ways depending on your environment.

I’m going to go out on a limb here and say you’ve never really used k8s and haven’t really grokked even the documentation.

It’s complicated, some parts more than others, but if you’re still at the “wow guys secrets are not really secret!1!” level I’m not sure how much you can really bring to the table in a discussion about k8s.


> Then you’d expose a service, not an ingress.

That only works to access the service from other pods inside k8s, it doesn't help you make that service accessible to the outside world. Tell me how you'd run Dovecot on k8s?

> if you’re still at the “wow guys secrets are not really secret!1!” level

I'm just pointing out one (of many) extremely obvious warts on k8s. You act like this is some misconception on my part, but it's not that silly to assume that a thing called a "secret" would actually be kept secret.

But to answer your smarm, yes, I've used Kubernetes in anger: my last company was all-in on k8s because it's so trendy, and it was (IMHO) an absolute nightmare. All kinds of Stockholm-syndrome engineers claiming that writing thousands of lines of YAML is a great experience, unable to use current features because even mighty Amazon can't safely upgrade a k8s cluster....


> That only works to access the service from other pods inside k8s, it doesn't help you make that service accessible to the outside world. Tell me how you'd run Dovecot on k8s?

Quite the opposite. I’d re-read the docs[1]. Specifically this page[2]. If you’re on AWS you’d probably wire this up with an NLB and a bit of Terraform if you’re allergic to YAML. Seems like a 5 minute job assuming you have Dovecot correctly containerized.
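Roughly, the wiring could look like this — a Service of type LoadBalancer exposing the standard IMAP ports (names and labels here are assumptions, not a tested config):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dovecot               # hypothetical
spec:
  type: LoadBalancer          # cloud provider provisions an external LB (e.g. an NLB on AWS)
  selector:
    app: dovecot              # hypothetical pod label
  ports:
    - name: imap
      port: 143
      targetPort: 143
    - name: imaps
      port: 993
      targetPort: 993
```

The external LB forwards raw TCP to the pods; no Ingress and no HTTPS tunneling involved.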

> I'm just pointing out one (of many) extremely obvious warts on k8s. You act like this is some misconception on my part

It’s hard not to point out misconceptions like the one above.

1. https://kubernetes.io/docs/concepts/services-networking/

2. https://kubernetes.io/docs/concepts/services-networking/serv...


From the docs you linked:

> ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.

Internal only.

> NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

Nobody uses NodePort to expose external services directly, and I think you know that.

> LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

As I mentioned above in this thread, requires cloud provider Load Balancer.

> ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.

> Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.

This one's a new one to me, and apparently relies on some special new widgets.
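For concreteness, the whole "widget" amounts to a DNS alias — a sketch using the hostname from the quoted docs (the Service name is made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-db                       # hypothetical
spec:
  type: ExternalName
  externalName: foo.bar.example.com     # cluster DNS answers with this as a CNAME
```

No proxying, no endpoints — just a CNAME record served by the cluster DNS.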

Anyway, if you love k8s, I'm sure you'll have a profitable few years writing YAML. Enjoy.


> This one's a new one to me, and apparently relies on some special new widgets.

ExternalName has been around since 2016.

> Nobody uses NodePort to expose external services directly, and I think you know that.

Sure they do. Anyone using a LoadBalancer does this implicitly. If you don’t want k8s to manage the allocated port or want to use something bespoke that k8s doesn’t offer out of the box then using a NodePort is perfectly fine. You can also create a custom resource type if you’ve got some bespoke setup that can be driven by an internal API.
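A sketch of the NodePort case (names and port numbers are illustrative): every node then accepts traffic on the chosen port and forwards it to the Service, so an external load balancer — cloud or bespoke — can simply target `<NodeIP>:<nodePort>`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service        # hypothetical
spec:
  type: NodePort
  selector:
    app: my-tcp-service       # hypothetical pod label
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080         # must fall in the node-port range (default 30000-32767)
```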

The happy path is using a cloud load balancer, because that’s what you’d use if you are using a cloud provider and you’re comfortable with k8s wiring it all up for you.

Has your criticism of k8s evolved from “I’m unclear about services” to “well yes it supports everything I want out of the box but uhh nobody does it that way and therefore it can’t do it”?


My criticism of k8s is it's an absolutely batshit level of complexity[0] that somehow still fails to provide extremely basic functionality out-of-the-box (unless paired with a whole cloud provider ecosystem, but then why not just skip k8s and use ECS???). I don't think k8s solves real problems most developers face, but it does keep all these folks[1] getting paid, so I can see why they'd advocate for it.

Nomad is vastly superior in every way except for mindshare; Elastic Beanstalk or Dokku is superior for most normal-people use cases.

[0] https://kubernetes.io/docs/concepts/overview/components/

[1] https://landscape.cncf.io/


> Nobody uses NodePort to expose external services directly, and I think you know that

I do. It provides a convenient way to integrate our non-k8s load balancers (TCP haproxy tier with a lot of customization) with services on kube. This is good for reusability and predictability while we slowly migrate services from our prior deployment targets to k8s.

I get the impression that this is not uncommon.


Your points mostly only matter if you're running on bare metal. If you're in the cloud, then you've got load balancers and databases covered by your cloud provider. I don't need Kubernetes to handle problems that I already have great solutions for; I want it to handle the problems that my cloud provider provides poor or very specialized (i.e. lock-in) solutions for. Which for me it does very well, and a lot more easily than doing so without Kubernetes.

edit: Kubernetes secrets are also either good enough (ie: on par with Heroku) or your cloud provider has an actual proper KMS.


ITT: K8s is great because it frees us from the tyranny of Big Cloud providers.

Also ITT: Oh, but of course k8s is totally unusable unless you buy it from a Big Cloud provider.


Please don't put words in my mouth. I said it frees you from lock in by a single cloud provider which it does.


The K8s docs offer ways of running on-prem: you can do it with an nginx ingress DaemonSet plus DNS pointing to the IP (or CNAME) of your workers.

It's all covered quite well here: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ and we use it in production.
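The core of that pattern is a DaemonSet running the controller on the host network, so every worker answers on ports 80/443 directly. A heavily trimmed sketch, assuming that guide's setup (namespace, labels, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx         # hypothetical
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true       # controller binds 80/443 on each node's own IP
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # illustrative tag
          ports:
            - containerPort: 80
            - containerPort: 443
```

Point DNS at the workers and you have external HTTP(S) ingress with no cloud load balancer at all.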



