
Background… I’ve been on good and bad projects that used microservices, and good and bad monolithic projects.

The madness is going away but the microservices are staying. There are some rationales for microservices that are conspicuously missing.

1. Fault isolation. Transcoder stuck in a crash loop? Upload service using too much RAM? With microservices, you don't even really have to figure out what's going on, you can often just roll back the affected component.

2. Data isolation. Only certain, privileged components can access certain types of data. Using a separate service for handling authentication is the classic example.

3. Better scheduling. A service made of microservices is easier to schedule using bin packing. Low priority components can be deprioritized by the scheduler very easily. This is important for services with large resource footprints.

The criticisms remind me of the problems with object-oriented programming. In some sense, the transition is similar: objects are self-contained code and data with references to other objects. The 90s saw an explosion of bad OO design and cargo cult architectures. It wasn't a problem with OO design itself. Eventually people figured out how to do it well. You don't have to make everything an object any more than you have to make everything a microservice.



WRT #2. Data isolation argument.

It is not clear to me why data isolation, in your view, is exclusive to microservices.

I have built non-trivial RBAC+ABAC authorization platforms using a PDP and an embeddable PEP, and did not find them useful only with microservices. Nor did I feel they could only be called via a 'microservice' pipeline.

In a way the Authorization is a separate service, yes, but it should be offering an embeddable PEP (policy enforcement point) that one can embed (link or call out-of-process if needed), from pretty much anywhere (monolith, or any runtime component).

Authorization decisions require very low latency, as you are authorizing pretty much every data or function interaction.

In fact, for data interaction, authorization engines offer SQL-rewriting/filtering -- so that the actual 'enforcement' happens at the layer of database you are using, not even at the layer of the component that's accessing the data.
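To make the SQL-rewriting idea concrete, here is a minimal sketch (not any specific product's API; the function and predicate are hypothetical): the embedded PEP rewrites the query so that row filtering is enforced by the database itself, not by the calling component.

```python
# Hypothetical PEP helper: wrap the caller's query so only rows that
# satisfy the caller's policy predicate ever leave the database.
def rewrite_with_policy(sql: str, predicate: str) -> str:
    """Enforce row-level authorization by rewriting the SQL."""
    return f"SELECT * FROM ({sql}) AS q WHERE {predicate}"

# Hypothetical policy: this caller may only see rows for tenant 42.
base = "SELECT id, amount, tenant_id FROM invoices"
print(rewrite_with_policy(base, "q.tenant_id = 42"))
# → SELECT * FROM (SELECT id, amount, tenant_id FROM invoices) AS q WHERE q.tenant_id = 42
```

Real engines do this with a proper SQL parser rather than string wrapping, but the enforcement point is the same: the data layer, not the service layer.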


I think you may have misread my comment. I said "authentication" and you are talking about "authorization".

Authentication can be very easily centralized in a separate service, authorization is a completely different beast. Authentication often involves access to high-value data such as hashed passwords, authorization does not.


Authorization and authentication are two different discussions. Protecting the data necessary for authentication is a valid rationale. That same service could provide read-only data to another service in a single response, allowing all subsequent authorization logic to be done without any additional latency. Additionally, the data necessary for authorization may not be sensitive like the data used for authentication.
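One way to sketch the "single response, no further latency" idea (an assumption about the design, not a reference implementation): the authentication service returns a signed, read-only claims blob alongside the session, and other services verify it locally with a shared key.

```python
# Sketch using only the stdlib. SECRET is assumed to be distributed to
# verifying services out of band; names here are hypothetical.
import hashlib, hmac, json

SECRET = b"shared-verification-key"

def issue_claims(user_id: str, roles: list) -> dict:
    # The auth service signs the read-only authorization data once.
    payload = json.dumps({"sub": user_id, "roles": roles}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_claims(blob: dict):
    # Any other service can check authorization locally: no network hop.
    expected = hmac.new(SECRET, blob["payload"].encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, blob["sig"]):
        return json.loads(blob["payload"])
    return None

claims = issue_claims("alice", ["billing:read"])
print(verify_claims(claims)["roles"])  # → ['billing:read']
```

This is essentially what signed tokens (e.g. JWTs) do in practice; the point is that the sensitive authentication data (password hashes) never leaves the auth service, while the non-sensitive claims travel with the request.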


But "OO design itself" had(still has) a major flaw: it was not clearly defined. Everybody had his/her own unique vision of "OO design".


> Better scheduling. A service made of microservices is easier to schedule using bin packing. Low priority components can be deprioritized by the scheduler very easily. This is important for services with large resource footprints.

That's harder scheduling, not easier. With a monolith you just give it all the resources and threads will use resources as is necessary. After that it's a matter of load balancing appropriately.


> That's harder scheduling, not easier. With a monolith you just give it all the resources and threads will use resources as is necessary. After that it's a matter of load balancing appropriately.

The key was "with bin packing". If you "just give it all the resources" then you're not bin packing and you're barely scheduling. At that point, your scheduler is only capable of scheduling based on CPU and IO usage, and not (for example) based on RAM. That last one is tricky, because most runtime environments won't return memory to the operating system (e.g. free() won't munmap()), and we're currently in the middle of a RAM shortage. Your machines will almost always have a different shape from your processes, it's just something you have to live with.

A bin packing scheduler is not useful for all companies and all services. It depends on the size of your resource footprint, with very large services benefiting the most.

So, microservices give you better scheduling in the sense that you can use fewer machines to run the same set of services. However, this is not important to everyone.

This stuff is built into e.g. Kubernetes so it is actually quite easy. You just can't do it with monoliths.
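For instance, in Kubernetes the per-component footprint is declared as resource requests and limits, and the scheduler bin-packs pods onto nodes from those numbers. A rough sketch (names and values hypothetical):

```yaml
# Hypothetical deployment fragment: `requests` drive bin packing,
# `limits` cap a misbehaving component, and a low priority class lets
# the scheduler deprioritize this workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transcoder
spec:
  replicas: 3
  selector:
    matchLabels: {app: transcoder}
  template:
    metadata:
      labels: {app: transcoder}
    spec:
      priorityClassName: low-priority   # assumption: defined separately
      containers:
        - name: transcoder
          image: example/transcoder:latest
          resources:
            requests: {cpu: "500m", memory: "2Gi"}
            limits:   {cpu: "2",    memory: "4Gi"}
```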



