Hacker News | narnianal's comments

https://operatorhub.io/ and "operator framework" (can't find the link to a readme) are the only things that really make sense here. Don't write them from scratch.

The basic idea is that you write a process that monitors your deployment and reacts to config changes and events. You might end up thinking "isn't that what Kubernetes should be doing already?" if you are not addicted to jumping on year-old hype trains. But if you want to do k8s, that's really the only way to go. Forget all the other deployment-management tooling if it doesn't integrate with your operator strategy.
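The monitor-and-react loop described above is usually called reconciliation. Here is a minimal sketch in plain Go; the types and action names are made up for illustration, and a real operator would use client-go or controller-runtime against the Kubernetes API instead:

```go
package main

import "fmt"

// Hypothetical desired/actual state types; a real operator would read these
// from the Kubernetes API (e.g. a custom resource's spec and status).
type DesiredState struct{ Replicas int }
type ActualState struct{ Replicas int }

// reconcile computes the actions needed to converge actual state toward
// desired state. This observe-diff-act loop is the core of any operator.
func reconcile(desired DesiredState, actual ActualState) []string {
	var actions []string
	for actual.Replicas < desired.Replicas {
		actions = append(actions, "create pod")
		actual.Replicas++
	}
	for actual.Replicas > desired.Replicas {
		actions = append(actions, "delete pod")
		actual.Replicas--
	}
	return actions
}

func main() {
	// Config says 3 replicas; only 1 is running, so two creates are needed.
	fmt.Println(reconcile(DesiredState{Replicas: 3}, ActualState{Replicas: 1}))
}
```

A real controller runs this loop continuously, triggered by watch events on the objects it owns.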


Don't you feel operators are like duct tape? Something added on top that should've been part of the core architecture to begin with?


It might help to think of it like this.

Kubernetes is an engine with some built-in data types and controllers for some common container use cases (N replicas, 1 replica per node, etc). These are released along with the project on a quarterly cadence.

You can also add your own data types and controllers. The combination of these things is sometimes branded an "operator" if it controls a certain piece of software. You can release and manage these yourself on your own timeline, decoupled from Kubernetes releases.
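As a concrete, hypothetical illustration of "adding your own data types": a minimal CustomResourceDefinition manifest registering a made-up `Database` kind with the API server. This CRD plus a controller watching `Database` objects is what usually gets branded an operator.

```yaml
# Hypothetical CRD: teaches the API server about a new "Database" type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  names:
    kind: Database
    plural: databases
    singular: database
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```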


So it is actually the other way around. The core Kubernetes objects will become CRDs, and the Kubernetes code will move to operators.


Sadly, the author doesn't understand what iterating quickly means. It's not a means of achieving greater performance in a known area but a means of finding something that works in an unknown space.

So the completely opposite argument would have worked: for the last 10+ years we iterated over several startups to find out how to use the new technologies of mobile and the internet, and now that we have found the most reasonable use cases we can slow down and optimize them.

On the other hand, there are always areas we don't know much about, like the niches at the edges of shared knowledge. There, of course, quick iteration continues to be the way to go.


Well put. Concise.

I will quibble though on the "now we can slow down and optimize them" part.

What's happening in practice is that those startups who "lucked" into successes playing the risky-but-many game of mobile & internet technology... those startups now focus on defending their "moats" and optimising their revenue streams.


Hey quibbler, you defend a castle (not a moat).


In the spirit of quibbling:

Moat: from Medieval Latin mota: “a mound, hill, a hill on which a castle is built, castle, embankment, turf”

In the spirit of mixed metaphors: </checkmate>

business catchphrase for the 20s: stock your hill with crocodiles


I was wondering what the heck dictionary you found that definition in, but then I realized you're conflating the etymology with the definition.


Nowhere in the article has the author claimed that. :/


From the article: "Shift 2: From Rapid Iteration to Exploration"

But rapid iteration is the most efficient means of exploration for many areas. Exploration is the reason many people use the "move fast and break things" rapid iteration strategy.


Yes, exactly my point. Thank you.


totally agree. i don't know... maybe we are all getting linked to different articles.


I agree. If anything, tinkering and iterating is the best approach to deal with complexity, which the author says will emerge from the new technology.


oh yea... for next sprint, let's blue-green roll out the update for this new cancer treatment and a/b test it on some patients so we can iterate.


>> a/b test it on some patients

What do you think placebos in clinical trials are??


Not on patients, but they probably do a lot of iterations and a/b tests on mice. Fail fast is great for medical research: the sooner you know that something doesn't work, the sooner you can try a new recipe.


The best way to avoid fraud is to become smarter oneself. It is not that hard to detect once you've learned some survival skills. Sorry, but criticising someone as disrespectful for helping people learn survival skills sounds damaging. A little like not telling your children about sex and condoms because you don't want them to have sex.

Shit will happen. Better be prepared, and better help others prepare. Stop criticizing people who try to help.

PS: He also didn't say that it isn't fraud. He said that besides fraud there are a lot of other ways a video you made could be used beyond what you intended.


The less experience you have, the more you should pay for a SaaS. Then, as you gain more experience, you start using frameworks like a self-managed ELK stack. If that is not enough, at some point you can roll your own.


All nice and true (thus upvoted), but it's not really the point he's making. His point is that most modern applications require more than 1-3 developers, and therefore you need to consider development cost and infrastructure as part of the whole picture. So he advocates working on decreasing these costs instead of finding yet another high-level distributed architecture that runs on centralized infrastructure (the internet) and is either forgotten in a few months or built by a centralized development organization.


Try Firefox; it has a reader mode that renders all HTML pages into the same structure. I bet it's also configurable.


Yes it's configurable, but no, unfortunately it does not always work.


This is mainly because of poorly designed sites - including sites with TONS of js ads that obfuscate the text.


what's the difference? "page fetch" is not really something that can be googled on the web.


The TLB is just one element of the process that resolves a virtual address into a physical one: it's a cache that holds the most recently resolved translations.

When the virtual address you're looking to resolve is not present in that cache (i.e. when you have a TLB miss), the CPU falls back to walking the page table hierarchy. At each level of the tree, the CPU reads the physical address of the next level and performs a memory fetch of that page table entry (in my previous comment I erroneously said a "page fetch"; it's actually only a cache-line-sized fetch). It repeats this until it reaches the leaves of the tree, which contain the Page Table Entry holding the physical address of the (4k) physical page associated with the virtual page address you wanted to resolve.
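A toy sketch of that walk, assuming an x86-64-style layout (four levels, 9 index bits per level, 4 KiB pages). Each level here is a plain map lookup standing in for the memory fetch of a page table entry:

```go
package main

import "fmt"

const (
	levels     = 4
	indexBits  = 9  // 512 entries per table
	offsetBits = 12 // 4 KiB pages
)

// table maps a 9-bit index to the "physical address" of the next-level
// table, or, at the leaf level, of the 4 KiB page frame.
type table map[uint64]uint64

// walk resolves virt by fetching one table entry per level, exactly as the
// hardware page walker does on a TLB miss. A missing entry is a page fault.
func walk(tables map[uint64]table, rootAddr, virt uint64) (phys uint64, ok bool) {
	addr := rootAddr
	for level := levels - 1; level >= 0; level-- {
		shift := offsetBits + uint(level)*indexBits
		idx := (virt >> shift) & ((1 << indexBits) - 1)
		next, present := tables[addr][idx]
		if !present {
			return 0, false // page fault: no mapping at this level
		}
		addr = next // one memory fetch per level in real hardware
	}
	// Leaf gives the frame base; keep the low 12 offset bits of virt.
	return addr | (virt & ((1 << offsetBits) - 1)), true
}

func main() {
	// Map virtual address 0x1000 to physical frame 0x42000.
	tables := map[uint64]table{
		100: {0: 200},     // level 4 (PML4)
		200: {0: 300},     // level 3 (PDPT)
		300: {0: 400},     // level 2 (PD)
		400: {1: 0x42000}, // level 1 (PT): 0x1000 >> 12 == 1
	}
	phys, ok := walk(tables, 100, 0x1000)
	fmt.Printf("ok=%v phys=%#x\n", ok, phys)
}
```

A TLB would simply be a small map from virtual page to the result of `walk`, consulted before walking.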


> In a database system, the concern was traditionally to minimize page accesses

Is it really about memory access or disk loads?


Disk.

Though b-trees are friendly to CPU caches too, it's really about minimising disk accesses.


I would summarize it very simply like this: if you know which data you're looking for and say "give me this piece", you want a hash table. If you don't know exactly what you're looking for and need to filter/search, you use a b-tree. So in practice you regularly want both structures.

Not sure how well that general principle really applies, or if it's just a quirk my brain came up with, though. If you look at distributed hash tables, for instance, they are hash tables on a bus network in some sense, but they are used for search/filter tasks over multiple nodes.
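A small sketch of the distinction, with a sorted slice plus binary search standing in for a b-tree (the names are illustrative): the hash map answers "give me exactly this key", while only the ordered structure can answer "give me everything in [lo, hi)".

```go
package main

import (
	"fmt"
	"sort"
)

// pointLookup: exact-match query, the hash table's strength (O(1) expected).
func pointLookup(m map[int]string, key int) (string, bool) {
	v, ok := m[key]
	return v, ok
}

// rangeScan: ordered query, the b-tree's strength. Binary search finds the
// first key >= lo, then we scan forward until hi, as a b-tree leaf scan would.
func rangeScan(sortedKeys []int, lo, hi int) []int {
	start := sort.SearchInts(sortedKeys, lo)
	var out []int
	for i := start; i < len(sortedKeys) && sortedKeys[i] < hi; i++ {
		out = append(out, sortedKeys[i])
	}
	return out
}

func main() {
	m := map[int]string{3: "c", 7: "g", 9: "i", 12: "l"}
	keys := []int{3, 7, 9, 12} // kept sorted, as a b-tree would keep them

	v, _ := pointLookup(m, 7)
	fmt.Println(v)                      // "give me this piece": hash table
	fmt.Println(rangeScan(keys, 5, 10)) // "everything in [5, 10)": needs ordering
}
```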

