https://operatorhub.io/ and the Operator Framework (can't find the link to a readme) are the only things that really make sense here. Don't write them from scratch.
Basic idea is that you write a process that monitors your deployment and reacts to config changes and events. You might end up thinking "Isn't that what Kubernetes should be doing already?" if you are not addicted to jumping on year-old hype trains. But if you want to do k8s, that's the only way to really go. Forget all the other deployment-management stuff if it doesn't integrate with your operator strategy.
Kubernetes is an engine with some built-in data types and controllers for some common container use cases (N replicas, 1 replica per node, etc). These are released along with the project on a quarterly cadence.
You can also add your own data types and controllers. The combination of these things is sometimes branded an "operator" if it controls a certain piece of software. You can release and manage these yourself on your own timeline, decoupled from Kubernetes releases.
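The heart of such a controller is a reconcile loop: compare the desired state (your custom resources) against the actual state of the cluster and compute the actions that converge one toward the other. Here's a minimal sketch of that pattern; the function names and dict-based state are purely illustrative, not any real client library's API.

```python
# Hypothetical sketch of the reconcile loop at the heart of an operator.
# Desired and actual state are modeled as {name: spec} dicts; a real
# operator would read these from the Kubernetes API server.

def reconcile(desired: dict, actual: dict) -> list:
    """Compute the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(("apply", name, spec))   # create or update
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))        # remove strays
    return actions

# A controller effectively runs this forever (real operators watch for
# events rather than polling):
#
#   while True:
#       for action in reconcile(fetch_desired_state(), fetch_actual_state()):
#           apply(action)
```

The key property is that reconciliation is level-triggered: it only looks at current state, so a missed event or a restart doesn't matter, because the next pass converges anyway.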
Sadly the author doesn't understand what iterating quickly means. It's not a means of achieving greater performance in a known area, but a means of finding something that works in an unknown space.
So the completely opposite argument would have worked: for the last 10+ years we iterated over many startups to find out how to use the new technologies of mobile and the internet, and now that we have found the most reasonable use cases we can slow down and optimize them.
On the other hand, there are always areas we don't know much about: the niches, the gaps in our shared knowledge. There, of course, quick iteration continues to be the way to go.
I will quibble though on the "now we can slow down and optimize them" part.
What's happening in practice is that the startups that "lucked" into success playing the risky-but-many game of mobile and internet technology now focus on defending their "moats" and optimising their revenue streams.
From the article: "Shift 2: From Rapid Iteration to Exploration"
But rapid iteration is the most efficient means of exploration for many areas. Exploration is the reason many people use the "move fast and break things" rapid iteration strategy.
Not on patients, but they probably do a lot of iterations and A/B tests on mice. Fail fast is great for medical research: the sooner you know that something doesn't work, the sooner you can try a new recipe.
The best way to avoid fraud is to become smarter oneself. It is not that hard to detect once you've learned some survival skills. Sorry, but calling someone disrespectful for helping people learn survival skills sounds damaging. A little like not telling your children about sex and condoms because you don't want them to have sex.
Shit will happen. Better be prepared, and better help others prepare. Stop criticizing people who try to help.
PS: He also didn't say that it isn't fraud. He said that besides fraud, there are a lot of ways a video you made could be used other than how you intended.
The less experience you have, the more you should pay for a SaaS. Then, as you gain experience, you start using frameworks like a self-managed ELK stack. If that is not enough, at some point you can roll your own.
All nice and true (thus upvoted), but it's not really the point he's making. The point he's making is that most modern applications require more than 1-3 developers, and therefore you need to consider the development cost and infrastructure as part of the whole picture. So he advocates working on decreasing these costs instead of finding yet another high-level distributed architecture that runs on centralized infrastructure (the internet) and is either forgotten in a few months or built by a centralized development organization.
The TLB is just one element of the process of resolving a virtual address into a physical one: it's a cache that holds the most recently resolved addresses.
When the virtual address you're looking to resolve is not present in that cache (i.e. when you have a TLB miss), the CPU falls back to walking the page-table hierarchy. At each level of the tree, the CPU reads the physical address of the next level and performs a memory fetch of that page-table entry (in my previous comment I erroneously said a "page fetch", but it's actually only a cache-line-sized fetch). This repeats until it reaches the leaves of the tree, which contain the Page Table Entry holding the physical address of the (4k) physical page associated with the virtual page you wanted to resolve.
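That walk can be modeled as a small radix tree. Below is a toy sketch assuming an x86-64-style 4-level layout (9 index bits per level, 12-bit page offset); the nested dicts stand in for the in-memory page tables, and each dict lookup corresponds to one of the memory fetches described above.

```python
# Toy model of a 4-level page-table walk (x86-64-style split assumed:
# four 9-bit indices plus a 12-bit page offset). Real hardware reads
# physical memory at each step; here each dict lookup plays that role.

PAGE_SHIFT = 12      # 4 KiB pages
BITS_PER_LEVEL = 9   # 512 entries per table
LEVELS = 4

def walk(root: dict, vaddr: int) -> int:
    """Resolve a virtual address to a physical one; raises KeyError on a fault."""
    node = root
    # Walk from the top level (bits 47..39) down to the leaf level (bits 20..12).
    for level in reversed(range(LEVELS)):
        index = (vaddr >> (PAGE_SHIFT + level * BITS_PER_LEVEL)) & 0x1FF
        node = node[index]          # one memory access per level
    # `node` is now the physical frame number; splice the page offset back in.
    return (node << PAGE_SHIFT) | (vaddr & 0xFFF)
```

A TLB is then just a `{virtual page: physical frame}` cache consulted before this function, which is why a hit skips all four memory accesses.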
I would very simply summarize it like this: if you know which data you are looking for and can say "give me this piece", then you want a hash table. If you don't know exactly what you are looking for and need to filter/search, then you use a b-tree. So in practice you regularly want both structures.

Not sure how well that general principle really holds, or if it's just a quirk my brain came up with, though. If you look at distributed hash tables, for instance, they are hash tables on a bus network in some sense, but they are used for search/filter tasks across multiple nodes.
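To make the point-lookup vs. range-search distinction concrete, here's a small sketch. Python has no b-tree in the standard library, so a sorted list with `bisect` stands in for one; the access pattern (ordered search over keys) is the same.

```python
# Point lookups vs range scans: a dict for "give me exactly this key",
# a sorted key list + bisect standing in for a b-tree's ordered search.
import bisect

users = {"alice": 30, "bob": 25, "carol": 41}

# Hash table: O(1) average when you know exactly which key you want.
age_of_bob = users["bob"]                       # 25

# "B-tree" side: keys kept in sorted order, so range/prefix queries
# become two binary searches plus a slice.
sorted_keys = sorted(users)                     # ['alice', 'bob', 'carol']
lo = bisect.bisect_left(sorted_keys, "b")       # first key >= 'b'
hi = bisect.bisect_left(sorted_keys, "c")       # first key >= 'c'
keys_starting_with_b = sorted_keys[lo:hi]       # ['bob']
```

The hash table can't answer the prefix query without scanning every key, which is exactly why databases index with b-trees for range predicates and hash indexes for equality.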