I really wish managed Kubernetes offerings remained "free" for small use, and would only expose "empty" nodes ready for full utilization by end-user containers.
The reality, however, is that every managed node (on GKE, for example) consumes quite a lot of CPU and memory out of the box, which the user pays for. On top of that there are cluster fees just for having the cluster around. This makes it thoroughly unfriendly to hobbyist projects, unless one is ready to pay dozens of dollars a month just to have Kubernetes, before deploying any apps to it.
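To put a rough number on that, here's a back-of-the-envelope sketch, assuming a GKE-style management charge of $0.10 per cluster per hour (the rate GKE has advertised; your provider's fee may differ):

```python
# Back-of-the-envelope cost of an *idle* managed cluster.
# Assumed rate: $0.10 per cluster per hour (GKE-style management fee).
HOURLY_CLUSTER_FEE = 0.10

hours_per_month = 24 * 30
monthly_fee = HOURLY_CLUSTER_FEE * hours_per_month

print(f"${monthly_fee:.2f}/month before a single app is deployed")
```

And that's before the per-node overhead of system daemons eating into the CPU and memory you're paying for.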
(And sure, there are free tiers here and there, but they never solve this problem completely, at least not on any of the big cloud providers.)
Compare that to managed "serverless" offerings (even ones pseudo-compatible with the K8s API, like Cloud Run), which eliminate the management fees but impose a latency tax instead. Oh well.
One reason this is not feasible is that K8s is not designed for secure multitenancy, so for every tenant you need to spin up an entire K8s control plane, which includes a database (etcd) and several services - this is what drives the cluster fees. Keep in mind that customers also expect managed K8s to be highly available, so that cost also goes into things like replicating data, setting up load balancers, and so on.
Compare this to a serverless offering that is multitenant by design: the control plane is shared, so the overhead cost of an extra user is basically zero, which is why they don't charge you a fee like this.
IMO, if you're a hobbyist interested in K8s, your best bet is to install K3s, a lightweight, API-compatible K8s distribution that can run on a single node. It's pretty nice if you don't care about fault tolerance or high availability.
I'm not so sure about the economics of what you describe. It could very well be that small customers don't consume much "bandwidth", so their resource requirements could be subsumed entirely by those of larger users. It doesn't make much sense that both large and small customers pay the same cluster fee, for example - it would be much fairer to charge more the more you use, approaching near zero for the smallest users.
At the end of the day, all these resources run on VMs (KVM or similar) sharing the same physical machines anyway, so it's up to the provider how much to charge. The fact that both small and large customers pay for the same amount of resources allocated to them only means those resources are not allocated in the most efficient manner - and a cloud provider could fix that.
We should also not discount the net positive effect of attracting more hobbyists and startups to your platform. That's how AWS and GCP started, for example; now they mostly focus on enterprise business, so smaller customers mean less to them (although arguably less so for AWS). But while hobbyists don't contribute much revenue, they're essentially free advertising that keeps your platform "relevant" - and even more so for burgeoning startups that could grow to bring in real revenue later. The moment they leave, the platform just becomes another IBM that's bound to die, for better or worse.
On top of that, the anti-analogy with serverless breaks down for the control plane, because one could always run it on the same shared pool of resources in gVisor or Firecracker, just like serverless workloads.