> I might have a different take. I think microservices should each be independent such that it really doesn't matter how they end up being connected.
The connections you allow or disallow are basically the main interesting thing about microservices. Arbitrarily connected services become mudpits, in my experience.
> Think more actors/processes in a distributed actor/csp concurrent setup.
A lot of actor systems are explicitly designed as trees, especially with regard to lifecycle management and who can call who. E.g. A1 is not considered started until its children A2 and A3 (which are independent of each other and have no knowledge of each other) are also started.
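A minimal Elixir sketch of that shape (the module names A1/A2/A3 are hypothetical, and A2/A3 are assumed to be ordinary GenServers):

```elixir
defmodule A1 do
  use Supervisor

  def start_link(opts) do
    Supervisor.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts) do
    # A1's start doesn't complete until A2 and A3 have started.
    # A2 and A3 are siblings with no knowledge of each other.
    children = [A2, A3]
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```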
> Also for many system designs, you would explicitly want a different topology, so you really shouldn't restrict yourself mentally with this advice.
Sometimes restrictions like these are useful, as they lead to shared common understanding.
I'd bet an architecture designed with a restricted topology like this has a better chance of composing with newly introduced functionality over time than an architecture that allows any service to call any other[1]. Especially so if this tree-shaped architecture has some notion of "interface" services that hide all of the subservices in that branch of the tree, only exposing the public interface through one service. Reusing my previous example, this would mean that some hypothetical B branch of the tree has no knowledge of A2 and A3, and would have to access their functionality through A1.
This allows you to swap out A2 and A3, or add A4 and A5, or A2-2, or whatever, and callers won't have to know or care as long as A1's interface is stable. These tree-shaped topologies can be very useful.
1 - https://www.youtube.com/watch?v=GqmsQeSzMdw
Perhaps any quantity below "a single company causes enough of a spike in global demand that it'll have a demonstrable impact on nearly every single industry."
And usually trade regulators would be the entity to start being concerned.
I assume you're on a quest to assert a "let a completely unregulated free market roar" position, but do recognize that global supply issues of critical components have negative market effects, especially when they'll have some impact on nearly every industry except perhaps lawn care.
> I assume you're on a quest to assert a "let a completely unregulated free market roar" position
No. I’m genuinely curious, because I agree with you about how critical these components are. I ask because it doesn’t seem to me like the answers are immediately straightforward and wanted to hear serious replies to those questions.
How much is too much? It’s like porn: you know it when you see it.
Basically one company (or a cabal of companies) shouldn’t be allowed to exert enough market-moving pressure on inventories as to disrupt other industries depending on this supply.
Sam Altman masterfully negotiated a guaranteed supply of chips for OpenAI, and there is nothing wrong with that, by itself. But there are now a dozen other industries getting wrecked as collateral damage, and that shouldn't be something one man or one company can do.
I believe it depends on which parties are responsible for the criminal antitrust violations. Is it the manufacturers abusing monopoly power, or is it OpenAI abusing monopsony power?
I’m not a lawyer or a forensic accountant, but given how remarkably stable the RAM market was until SCAMA disrupted it, I’m inclined to think the answer to your question is a resounding “no.”
Clarifying because I think the downvoters maybe misunderstood the nature of my question: I meant, in the opinion of the parent commenter should the principals of Samsung etc. be jailed? I wasn’t taking a position myself, just asking what they thought.
Agreed! I'd go so far as to say hiring is irrational in the aggregate.
The usual "rational" artifacts, if we can call them that (coding challenges, resumés, etc.) serve almost exclusively to eliminate candidates rather than boost good candidates. Firms are generally ok with false negatives from these artifacts as simply the cost of doing business.
> From there, it follows that meeting someone and letting them know you exist increases the chances (however small) that they can and will assist you on your career path.
I've seen this described as "people hire who they vibe with", and I've yet to see it play otherwise in my career. I'm not saying this is good, or fair, or desirable. It just is.
The folks who get offers are the ones who can meet people, tell stories (even true ones!), listen, and demonstrate that they can empathize with and contribute to messy, flawed organizations.
Humans have yet to invent a technology more powerful than social relationships, and I think technologists downplay this at their own peril.
> Erlang is, by my accounting, not even a functional language at all.
How do you figure?
The essence of FP is functions of the shape `data -> data` rather than `data -> void`, deemphasizing object-based identity, and treating functions as first-class tools for abstraction. There are enough dynamic FP languages at this point to establish that these traits are held in common with the static FP languages. Is Clojure not an FP language?
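As a tiny sketch of that `data -> data` shape in Elixir: nothing below mutates, each step returns a new value, and `&String.upcase/1` is a function passed around as data:

```elixir
"  hello world  "
|> String.trim()
|> String.split()
|> Enum.map(&String.upcase/1)
#=> ["HELLO", "WORLD"]
```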
> It takes more than just having immutable values to be functional, and forcing users to leave variables as immutable was a mistake, which Elixir fixes.
All data in Elixir is immutable. Bindings can be rebound but the data the bindings point to remains immutable, identical to Erlang.
Elixir just rewrites `x = 1; x = x + 1` to `x1 = 1; x2 = x1 + 1`. The immutable value semantics remain, and anything that sees `x` in between expressions never has its `x` mutated.
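A quick sketch of why rebinding isn't mutation: a closure that captured the old binding keeps seeing the old value:

```elixir
x = 1
get_x = fn -> x end  # captures the current binding of x
x = x + 1            # rebinds the name; the value 1 is never mutated
get_x.()             #=> 1, the closure's x was untouched
x                    #=> 2
```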
> Erlang code in practice is just imperative code written with immutable values, and like a lot of other modern languages, occasional callouts to things borrowed from functional programming like "map", but it is not a functional language in the modern sense.
I did a large amount of Scala prior to doing Erlang/Elixir, and while I had a lot of fun with Applicative and Monoid, I'm not sure they're the essence of FP. Certainly an important piece of the puzzle, but not the totality.
> And yet, as the industry grew and all sorts of people from all sorts of backgrounds converged in this space, the tolerance and appetite for funky/terse waned in favor of explicit/verbose/accessible. It's probably for the better in the end, but it did feel a little bit like the mom-and-pop store on the corner that had weird pickled things at the register and a meemaw in the back got replaced by a generic Circle K with a lesser soul.
This is an amazing point that I haven't seen anyone else make about languages in this way.
As someone who got into the industry right after Perl's heyday and never learned or used it, but who learned programming from some former Perl power users, I'd say Perl has a pre-corporate/anarchic/punk feel about it that is completely opposite to something like Golang, which feels like it was developed by a corporation, for a corporation. Perl is wacky, but it feels alive (the language itself, if not the community). By contrast, Golang feels dead, soulless.
I think this is changing. In recent years, alternative shells that break with bash syntax, like fish or elvish, have been gaining in popularity. Oils is of particular interest because it has an sh/bash-compatible mode and thus provides an upgrade path away from bash.
There’s nothing that can replace bash for what it does. People have been trying for decades. You’ll be happier if you accept that bash can and will happily coexist with anything and everything else, which is exactly why it will never go away.
Chet Ramey became the primary maintainer of Bash in the early 1990s and has been the sole author of every bash (and Readline) update since then. That would be an enormous task for a team of 100, let alone a team of one.
I've become quite a fan (after struggling mightily with its seemingly millions of quirks).
CLI usage revolves around text, and bash is a meta layer above that. Given curl, jq, and awk, you can create a quick MVP client for almost any API. Doing the same in Python or Go is much more involved.
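For example, a minimal sketch (the endpoint and JSON shape are hypothetical):

```bash
#!/usr/bin/env bash
# Fetch JSON, pull fields out with jq, and format a quick report with awk.
curl -s "https://api.example.com/v1/items" \
  | jq -r '.items[] | [.id, .name, .price] | @tsv' \
  | awk -F'\t' '{ total += $3; printf "%-10s %s\n", $1, $2 }
                END { printf "total: %.2f\n", total }'
```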
> Rust is a cool language and I hope it eventually settles down enough to be considered for "real" projects.
I keep seeing folks with this "when will Rust be ready" sentiment and it feels a bit dated to me at this point.
At my last job we built machine control software for farm equipment (embedded Linux, call it firmware if you like). The kind of thing that would have been in C or C++ not long ago. Worked great, Rust itself was never the issue. Code from the very first versions continued to work with no problems over years of feature additions, bugfixes, rewriting, edition upgrades, etc.
The job before that, my team wrote a large library of 3D geometry analysis algorithms that powered some fun and novel CAD manufacturing tools. In Rust. Worked great. It was fast enough, and crucially, we could run it against user-provided data without feeling like we were going to get owned by some buffer overrun. 10 years earlier it 100% would have been in C or C++ and I would have been terrified to throw externally generated user data at it. We were able to do what we needed to do and it served real paying users. What more do you need?
Rust is everywhere. It's in the browser. It's in databases. It's in container/VM runtimes. It's in networking code. It's in firmware. It's in the OS. It's in application code. It's in cryptography. It's in Android. Rust is all over the place.
The only other thing I can think of with a similar combined breadth and depth of deployment to Rust on "real" projects (other than C/C++) is Java.
If I needed something to both work and be maintainable by somebody in 2055, Rust is one of the few things I'd bother to put on the list, alongside C, C++, Java, Python, and JavaScript.
I've been writing Rust professionally and personally since at least 2018 and this has never happened to me.
There are plenty of real criticisms one can make about Rust (I've made and will continue to make plenty) but I think your argument would be more compelling if you updated your experience with the present-day Rust toolchain. The Rust project takes stability very seriously.
From your experience, would you call these behaviors bugs, or are they more known issues that result from SQLite's specific implementation quirks? What kinds of workloads were you throwing at it when these types of issues happened? Asking as someone who really enjoys and respects SQLite but hasn't encountered these specific behaviors before.
I was pushing SQLite quite hard. My DB was 25GB or so at peak. Occasional queries of O(1e6) rows while simultaneously inserting, etc. Many readers and a few writers too. I'd expect some degradation, sure, but I'd say it wasn't very graceful.
I think, however, I was well within what SQLite maximalists would describe as the envelope of heavy-but-fine usage. YMMV.
I found a very small number of people online with the exact same issues. Enough to know I'm not hallucinating, but not enough to find good support for it :/ TL;DR: forcing WAL truncation regularly fixed it all, but I had to do it from an external process on a heartbeat, etc.
You don't need to truncate the WAL; you can checkpoint PASSIVE and the WAL will be overwritten (so your queries won't slow). Generally, if you're using Litestream for backups, it will do checkpointing for you. If you aren't, checkpointing after each batch (always be batching!) works well too.
I'd say the hardest part of using SQLite is that its defaults are rough, and a lot of drivers don't handle batching for you.
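A minimal sketch of the non-default setup described above, using Python's sqlite3 (the "app.db" path is hypothetical):

```python
import sqlite3

con = sqlite3.connect("app.db")

# Saner-than-stock defaults: WAL mode, a busy timeout so writers wait
# instead of erroring, and NORMAL fsync (a common pairing with WAL).
con.execute("PRAGMA journal_mode = WAL;")
con.execute("PRAGMA busy_timeout = 5000;")
con.execute("PRAGMA synchronous = NORMAL;")

# ... batched writes would go here ...

# PASSIVE checkpoint: copies as much of the WAL back into the main DB
# as it can without blocking readers or writers, so the WAL gets
# overwritten instead of growing without bound.
con.execute("PRAGMA wal_checkpoint(PASSIVE);")
con.close()
```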