From what I understand you can have secure E2EE chats? I like that I can log in from multiple devices and continue the conversation. That was always annoying with WhatsApp and Signal. Worst case, mildly embarrassing stuff leaks.
whatsapp, facebook messenger, and imessage all support multi-device and it's pretty convenient. in fairness to telegram, they launched a bit before the double ratchet was invented, but still, they've had over a decade to switch to it...
That's still one device. If you turn the primary phone off, the secondary device stops working. WhatsApp just proxies everything through the primary device, it's like WhatsApp Web.
They used to, but that hasn't been true for a few years now.
Now it uses the Signal protocol's native multi-device capabilities, specifically in the "key per device" variant (unlike signal itself, which uses "key per account" if I'm not mistaken).
> You can now use the same WhatsApp account on multiple devices at the same time, using your primary phone to link up to four devices. You’ll need to log in to WhatsApp on your primary phone every 14 days to keep linked devices connected to your WhatsApp account.
Messenger seems to be properly multi-device, but you pay for this by some PIN code bullshit (maybe they removed that, I haven't seen a popup about this for over a year now?) and having to sync chat history in the background, through a process that is, of course, broken and unreliable.
I'm actually still jaded about this. Messenger worked fine before they broke it by introducing E2EE; it took years for them to fix the problems this caused (at least the ones that were immediately user-perceptible).
yeah messenger still has the pin code thingy. i'm curious why they do it that way at all, can't you just keep your keys on fb servers encrypted with another key derived from your password, which is much stronger than a 4-6 digit PIN?
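A minimal sketch of what that proposal would look like: derive a wrapping key from the password and use it to encrypt ("wrap") the device key before it ever touches the server. This is illustrative only, not Messenger's actual scheme; all function names and parameters here are made up.

```typescript
// Sketch of the password-derived-key idea (NOT Messenger's real scheme).
import { pbkdf2Sync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Derive a 256-bit wrapping key from the password; the salt can live server-side.
function deriveWrappingKey(password: string, salt: Buffer): Buffer {
  return pbkdf2Sync(password, salt, 600_000, 32, "sha256");
}

// Encrypt the device key so the server only ever stores ciphertext.
function wrapKey(deviceKey: Buffer, wrappingKey: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", wrappingKey, iv);
  const ct = Buffer.concat([cipher.update(deviceKey), cipher.final()]);
  return { iv, ct, tag: cipher.getAuthTag() };
}

// Recover the device key on a new login by re-deriving the same wrapping key.
function unwrapKey(wrapped: { iv: Buffer; ct: Buffer; tag: Buffer }, wrappingKey: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", wrappingKey, wrapped.iv);
  decipher.setAuthTag(wrapped.tag);
  return Buffer.concat([decipher.update(wrapped.ct), decipher.final()]);
}

const salt = randomBytes(16);
const deviceKey = randomBytes(32);
const wrapped = wrapKey(deviceKey, deriveWrappingKey("correct horse battery staple", salt));
const recovered = unwrapKey(wrapped, deriveWrappingKey("correct horse battery staple", salt));
```

One likely reason for the PIN anyway: a short PIN can be rate-limited inside server hardware, whereas a password-derived key is only as strong as the password the user actually picked.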
It's still broken if you're like me and you clear cookies
"Let's take people's years-long history between each other and just utterly break it. Why? 'privacy'" but they've never cared about it, they're opportunistic fucks. It's Zuckerberg's company to do with it "as he wishes" https://news.ycombinator.com/item?id=16770818
It's not true for Signal either. Why don't you try it for yourself instead of spreading outdated (at best) information? Signal supports native multi-device capabilities without relaying everything through the "primary" device.
It's called iMessage. It's possible, Telegram just doesn't care. All their differentiating features (large groups, channels, device sync) are directly enabled by the lack of encryption.
they do have encryption, just not e2ee, and in fairness to them, it doesn't make sense to have e2ee on a channel or a group with 100k ppl in it. also, device sync is possible with e2ee, it's just slower
What are you talking about? WhatsApp, iMessage, and Signal all have multi-device support and are E2E encrypted, just to name a few very popular options.
it's unironically just react lmao, virtually every popular react app has an insane number of accidental rerenders triggered by virtually everything, causing it to lag a lot
Vue uses signals for reactivity now and has for years. Alien signals was discovered by a Vue contributor. Vue 3.6 (now in alpha/beta?) will ship a version that is essentially a Vue flavored Svelte with extreme fine grained reactivity based on a custom compiler step.
One of the reasons Vue has such a loyal community is because the framework continues to improve performance without forcing you to adopt new syntax every 18 months because the framework authors got bored.
The React paradigm is just error prone. It's not necessarily about how much you spend. Well paid engineers can still make mistakes that cause unnecessary re-renders.
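A standalone sketch of the most common such mistake: React's memoization skips re-rendering only when a shallow compare of props passes, so any prop recreated inline on each render defeats it. (This reimplements the shallow compare for illustration; it is not React's actual source.)

```typescript
// Why inline object props defeat memoization: each render allocates a fresh
// object that is never === the previous one, so a shallow compare fails.
type Props = Record<string, unknown>;

function shallowEqual(a: Props, b: Props): boolean {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  if (ka.length !== kb.length) return false;
  return ka.every((k) => Object.is(a[k], b[k]));
}

// Parent "renders" twice with logically identical props...
const render1 = { user: "ada", style: { color: "red" } };
const render2 = { user: "ada", style: { color: "red" } };
const memoHit = shallowEqual(render1, render2); // false: style is a new object each time

// ...hoisting (or useMemo-ing) the object keeps the reference stable.
const stableStyle = { color: "red" };
const memoHitStable = shallowEqual(
  { user: "ada", style: stableStyle },
  { user: "ada", style: stableStyle },
); // true
```

Nothing flags this at compile time, which is why even careful teams accumulate these by accident.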
If you look at older desktop GUI frameworks designed in a performance-oriented era, none of them use the React paradigm, they use property binding. A good example of getting this right is JavaFX which lets you build up functional pipelines that map data to UI but in a way that ensures only what's genuinely changed gets recomputed. Dependencies between properties are tracked explicitly. It's very hard to put the UI into a loop.
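A toy version of that binding idea, sketched in TypeScript rather than JavaFX: dependencies are tracked when a computed value first runs, and it only recomputes when one of its inputs actually changes. (This is the general technique, not any framework's real API.)

```typescript
// Minimal observable/computed pair with explicit dependency tracking.
let activeComputation: (() => void) | null = null;

function observable<T>(value: T) {
  const subscribers = new Set<() => void>();
  return {
    get(): T {
      if (activeComputation) subscribers.add(activeComputation); // record the dependency
      return value;
    },
    set(next: T) {
      if (Object.is(next, value)) return; // unchanged value: nothing recomputes
      value = next;
      subscribers.forEach((fn) => fn());  // push the change to dependents only
    },
  };
}

function computed<T>(fn: () => T) {
  let cache!: T;
  const recompute = () => { cache = fn(); };
  activeComputation = recompute;
  recompute();                 // first run registers which observables it reads
  activeComputation = null;
  return { get: () => cache };
}

let recomputations = 0;
const firstName = observable("Ada");
const lastName = observable("Lovelace");
const fullName = computed(() => {
  recomputations++;
  return `${firstName.get()} ${lastName.get()}`;
});

firstName.set("Augusta");  // a real change: fullName recomputes
lastName.set("Lovelace");  // same value: no recompute at all
```

The contrast with React is that nothing here re-runs "just in case"; only the values whose inputs genuinely changed get touched.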
Property binding and proxies really didn't work well in JS at all until relatively recently, and even then there is actually a much worse history of state management bugs in apps that do utilize those patterns. I've yet to use an Angular 1.x app, or even most modern Angular apps, without bugs resulting from improper state changes.
While more difficult, I think the unidirectional workflows of Redux/Flux patterns, when well-managed, tend to function much better in that regard, but then you do suffer from the potential for redraws... this isn't the core of the DOM overhead though; that usually comes down to a lot of deeply nested node structures combined with complex CSS and more than modest use of oversized images.
Yes, they do. OGs remember that Facebook circa 2012 had navigation take like 5-10 seconds.
Ben Horowitz recalled asking Zuck what his engineer onboarding process was when the latter complained to him about how it took them very long to make changes to code. He basically didn't have any.
If it's true that nobody is getting promoted for improving web app performance, that seems like an opportunity. Build an org that rewards web app performance gains, and (in theory) enjoy more users and more money.
They have no real competitors, so anything that makes the user even stickier and more likely to spend money (LinkedIn Premium or whatever LinkedIn sells to businesses) takes priority over any improvements.
It's a bit unfortunate that several calls to .setHTML() can't be batched so that they get executed together to minimize page redraws.
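You can approximate that batching in userland by coalescing writes and flushing them in one pass (in a browser you'd flush from requestAnimationFrame). The wrapper below is a hypothetical sketch with pure logic so it runs anywhere; `flushes` stands in for layout/redraw passes, and nothing here is a real DOM API beyond the name.

```typescript
// Hypothetical write batcher: queue .setHTML()-style updates per target,
// coalesce repeated writes, and apply everything in a single flush.
type Write = { target: string; html: string };

class WriteBatcher {
  private queue = new Map<string, string>(); // last write per target wins
  public flushes = 0;                        // stand-in for "redraw passes"
  public applied: Write[] = [];

  setHTML(target: string, html: string) {
    this.queue.set(target, html);            // overwrite any pending write for this target
  }

  flush() {
    if (this.queue.size === 0) return;
    this.flushes++;                          // one "redraw" for the whole batch
    for (const [target, html] of this.queue) this.applied.push({ target, html });
    this.queue.clear();
  }
}

const batcher = new WriteBatcher();
batcher.setHTML("#header", "<h1>Hi</h1>");
batcher.setHTML("#list", "<li>a</li>");
batcher.setHTML("#list", "<li>a</li><li>b</li>"); // replaces the pending #list write
batcher.flush(); // three calls, one pass, two actual writes
```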
Well, their lowest tier devs they have started firing, and they churn a lot, combined with mass layoffs... and on the higher end, they're more interested in devs that memorized all the leetcode challenges than experienced devs/engineers with a history of delivering solid, well-performing applications.
Narcissism rises to the top, excess "enterprise" bloat seeps in at every level, along with too many sub-projects that are disconnected in ways that are hard to "own" as a whole, and perverse incentives to add features over improving the user experience.
I think linkedin is built with Ember.js, not React, last i checked…
The problem with performance in web apps is often not "omg too much render", but actually processing and memory use. Chromium loves to eat as much RAM as possible, and the state management world of web apps loves immutability. What happens when you create new state any time something changes, and v8 then needs to recompile an optimized structure for that state, coupled with thrashing the GC? You already know.
I hate the immutability trend in web apps. I get it, but the performance is dogshite. Most web apps i have worked on spend about 10% of their cpu time… garbage collecting, and the rest doing complicated deep state comparisons every time you hover on a button.
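The allocation pattern being complained about is easy to see in isolation: an immutable update allocates a brand-new array on every change (the old one becomes garbage), while a mutable update reuses one. A tiny sketch, with counts scaled way down from what a real app does per keystroke or hover:

```typescript
// Immutable vs mutable update styles and their allocation behavior.
const UPDATES = 1000;

// Immutable style: every update copies the whole array.
let immutable: number[] = [];
let freshAllocations = 0;
for (let i = 0; i < UPDATES; i++) {
  const next = [...immutable, i];          // new array; the old one is now garbage
  if (next !== immutable) freshAllocations++;
  immutable = next;
}

// Mutable style: one array, updated in place, zero extra allocations.
const mutable: number[] = [];
for (let i = 0; i < UPDATES; i++) mutable.push(i);

// Both end up with identical contents; only the GC pressure differs.
```

Libraries with structural sharing (persistent data structures) sit in between: they keep immutability's reference-equality checks without copying everything on each update.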
the vpn protocols we use here nowadays are way more advanced than this. they mimic a TLS handshake with a legitimate, non-blocked site (like google.com) and look essentially like regular https traffic to that site
it looks like they are basically impossible to detect, given the failure to block them, outside of timing attacks (seeing if a request crosses Russia's border and comes back quickly after). even that is fully mitigated by just having the vpn "disconnect" and route traffic to unblocked Russian sites directly, since those are the requests that would otherwise enable such timing-attack detection
pretty interesting stuff, there are several versions of this system, and even the ones that have existed for a while work pretty well
Super interesting stuff, but won't this require multiple (possibly untrustworthy / adversarial) parties to abide by your protocol? Like, if you don't control all the nodes in the VPN, then why can't the Kremlin just enforce a blacklist at said bad node?
you do/can control all the VPN nodes in this setup (most often just a single one) since your traffic doesn't actually go through the website you're masking under
and the nature of the protocol makes it extremely difficult to detect and thus get server IP banned, i got one server banned, but after that i implemented some practices (including directly connecting to websites that are inside Russia) and it's been working fine since then
- roskomnadzor just not being competent enough to implement the block fully
- they'll reverse the block, since it will likely completely cripple everything that relies on the internet (which is basically everything nowadays)
- they won't go through with the ban completely, since if they do, their job is sort of done, and they want to continue to exist to make money off of the digital infrastructure required to implement the block, and they'll just continue playing this game of cat and mouse
- outside internet connectivity will likely remain to some degree, it'll just be very slow and probably expensive, but i really struggle to see a country like Russia being completely cut off from the internet in the year of our lord 2026
i could be wrong, who knows, after all this whole situation is unprecedented, and human ingenuity sort of always finds a way
and on a somewhat positive note, mobile internet has come back today, and the blocks are bypassable with a regular vpn now, even ones that aren't hosted on whitelisted subnets
It has to be said that one of the reasons a single-header library is so useful in the C/C++ world is that it makes interfacing with Lua so much sweeter.
this is silly, we already have an algorithm for generating very efficient assembly/machine code from source code. this is like saying maybe one day llms will be able to replace sin() or an os kernel (i vaguely remember someone prominent claiming this absurdity). like yes, maybe it could, but it would be super slow and inefficient, and we already know a very (most?) efficient algorithm, so what are we doing?