For some weird reason people like to simplify that war to "Napoleon vs. the Russian winter", completely overlooking Kutuzov. Kutuzov was dealt a bad hand, but he played his cards very well.
Not good ones, and Scala devs are keenly aware they have been going in the wrong direction compared to Go/Rust, in part because of articles like this.
RedMonk shows Scala is comparable to Go and Rust [0]. The chart plots the number of projects on GitHub against the number of tags on Stack Overflow (ha ha.)
The upper-right cluster has the most popular languages (C++, Java, Python, JS, PHP, TypeScript); the next cluster has Scala along with Rust, Go, Kotlin, R, Swift, etc. That cluster is clearly separate from the next, less popular one, which has Haskell, Lua, OCaml, Groovy, Erlang, Fortran, etc., and then the long tail is a big cluster covering the entire lower-left half of the chart, with a clear gap between it and the upper-right half.
I don't think it is a "very, very wrong" statement.
That is very different from my experience with git. I know that the kernel uses branches a lot, but that's probably because of git's history with the project. At every company I've worked at, git is used exactly the same way CVS or SVN was used many years ago: you make some local changes, you push those local changes to the central store, you forget about it. Branches make local switching between tasks easier, but apart from that nobody cares about branches, and they're definitely not treated as an important part of the repo. In fact, they're usually deleted immediately after the change is merged.
I think you have it backwards. This is exactly the kind of workflow that git provided better support for: lightweight branches that are not an integral part of master's history and are deleted after merge.
F. Again, I have minimal experience actually needing this in Go, but I'm guessing it's just the general exercise of managing a goroutine's lifecycle well in your code: proper handling so things don't get orphaned in a buffer, fire-and-forget whoopsies, etc.
Early on I do feel like Go kinda advertised batteries-included concurrency, but I wish they'd advertised the foot-shooting mechanisms and the gaps in the abstraction a little more. Overall I prefer to have enough control to choose how to manage the lifecycle; memory leaks bum me out and kill my steam, at least in my experience with C/C++.
Love2D uses LuaJIT and directly calls established game libraries. CPU usage should be far better for 2D games, since LuaJIT is faster than a browser's JavaScript JIT. You can also ship single-executable games that are a few megabytes rather than a few hundred megabytes.
Explain that to my WebGL TypeScript browser game running at 180+ FPS while rendering a large RPG tiled world with infinite, procedurally JIT-generated biomes, with heavy processing delegated to web workers.
As you aren't posting code or stats I can't say much, but I'd bet a native app would still be smaller and more efficient, since you have to wrap what you're doing in an entire Chromium instance and deal with a web stack designed for documents, which is definitionally less efficient than a native alternative. Tiles aren't exactly cutting-edge technology.
"Heavy processing delegated to web workers?" That just sounds like threads, but worse.
The first post in this subthread was literally a statement that "A web-based solution is usually better performing, despite all the bloatware necessary." And you literally joined in to support that assertion against "the Electron haters."
And it isn't trauma, it's literal fact. Electron isn't used because it's technically superior to native applications; it's used because web devs are a dime a dozen. It's popular for business reasons, not technical reasons. It works "well enough," but only because computers are really fast, and there's only so much slack an OS can take up when even parts of it are Electron apps, probably vibe-coded to boot.
A hammam is not as hot as a sauna, and not as dry. A sauna's air temperature can reach above 100 degrees Celsius, while humidity is usually relatively low (around 20%).
> Hammam's temperatures are around 40-50 degrees Celsius and humidity is close to 100%.
Which makes it absolutely unbearable. By the way, that combination of temperature + humidity will cause severe hyperthermia (which can be deadly) faster than people think.
What are you talking about? If anything, there is too much Unicode awareness in C++. Unicode is not the same thing as UTF-8. And, frankly, no language does it right; I'm not even sure "right" exists with Unicode.
C++20's u8 strings (char8_t) took a giant steaming dump on a number of existing projects, to the point that compiler flags (GCC/Clang's -fno-char8_t, MSVC's /Zc:char8_t-) had to be introduced to disable the feature just so C++20 would work with existing codebases. Granted, that's UTF-8 (not the same thing as Unicode, as mentioned), but it's there.
I once spent several days debugging that same mistake. Stuff worked perfectly in tests but broke mysteriously in production builds. I couldn't stop laughing for a few minutes when I finally figured it out.
Atomic operations, especially RMW operations, are very expensive, though. Not as expensive as a syscall, of course, but still a lot more expensive than non-atomic ones, exactly because they break things like caching.
Not only that, they write back to main memory. There's limited bandwidth between the CPU and main memory, and with multithreading you significantly increase the amount of data transferred between the CPU and memory.
This is such a problem that the JVM gives threads their own thread-local allocation buffers (TLABs) to allocate into before flushing back to the main heap, all to reduce contended atomic updates to the pointer that tracks the next free memory in the heap.