Hacker News | taminka's comments

it's really unfortunate that telegram doesn't do e2ee, bc it's hands down the best messenger otherwise :(

From what I understand you can have secure chats with E2EE? I like that I can log in from multiple devices and continue the conversation. That was always annoying with WhatsApp and Signal. Worst case, some mildly embarrassing stuff leaks.

> From what I understand you can have secure chats with E2EE?

Not with bots, though.

> I like that I can login from multiple devices and continue the conversation

This is also not possible with Telegram E2E, while it is with Signal and WhatsApp.


It does, but only for chats between two specific devices. Multi-device support is one of its best features that you lose with E2E.

Key distribution is just too hard. I don't think we'll ever get a messenger for non-technical people that works well with both multi-device and E2EE.


whatsapp, facebook messenger, and imessage all support multi-device, and it's pretty convenient. in fairness to telegram, they launched a bit before the double ratchet was invented, but still, they've had over a decade to switch to it...

WhatsApp doesn't support multi-device. You can't have it installed on two phones at once.

you can (https://faq.whatsapp.com/1046791737425017/?cms_platform=andr...)

they even have it on fb messenger and instagram (though they recently removed e2ee completely from instagram lol)


That's still one device. If you turn the primary phone off, the secondary device stops working. WhatsApp just proxies everything through the primary device; it's like WhatsApp Web.

It used to be like that, but not anymore. As sibling comments suggest, you can now use it on up to 4 (I believe) additional devices.

They used to, but that hasn't been true for a few years now.

Now it uses the Signal protocol's native multi-device capabilities, specifically in the "key per device" variant (unlike Signal itself, which uses "key per account", if I'm not mistaken).


This is not true; even if the primary phone is offline, you can send messages via a secondary device, even via WhatsApp Web.

It’s not proxied via primary, otherwise it wouldn’t work if primary were offline


> It’s not proxied via primary, otherwise it wouldn’t work if primary were offline

That is correct, it doesn't work.


Please stop spreading misinformation that can trivially be disproved with five minutes of effort.

I just tried it. Did you?

> You can now use the same WhatsApp account on multiple devices at the same time, using your primary phone to link up to four devices. You’ll need to log in to WhatsApp on your primary phone every 14 days to keep linked devices connected to your WhatsApp account.

ref: https://faq.whatsapp.com/1317564962315842/?cms_platform=ipho...

> Use WhatsApp on your computer even when your phone is off.

ref: https://faq.whatsapp.com/378279804439436/?helpref=faq_conten...


Yes, and it works, as it has for the past few years.

So I don't need my primary device any more? I can just shut that phone down forever?

No, I think you need it to be online once every 30 days or so. That's a much weaker requirement than what you were disputing, though.

oh, i see, is it the same for facebook messenger and instagram, imessage, etc?

Messenger seems to be properly multi-device, but you pay for this by some PIN code bullshit (maybe they removed that, I haven't seen a popup about this for over a year now?) and having to sync chat history in the background, through a process that is, of course, broken and unreliable.

I'm actually still jaded about this. Messenger worked fine before they broke it by introducing E2EE; it took years for them to fix the problems this caused (at least the ones that were immediately user-perceptible).


yeah, messenger still has the pin code thingy. i'm curious why they do it that way at all: can't you just keep your keys on fb's servers, encrypted with another set of keys derived from your password, which is much stronger than a 4-6 digit pin?

It's still broken if you're like me and you clear cookies

"Let's take people's years-long history between each other and just utterly break it. Why? 'privacy'" but they've never cared about it, they're opportunistic fucks. It's Zuckerberg's company to do with it "as he wishes" https://news.ycombinator.com/item?id=16770818


I don't know, I don't use those. It is true for Signal; I don't think so for Instagram, since I don't think that's encrypted end to end.

It's not true for Signal either. Why don't you try it for yourself instead of spreading outdated (at best) information? Signal supports native multi-device capabilities without relaying everything through the "primary" device.

It's called iMessage. It's possible, Telegram just doesn't care. All their differentiating features (large groups, channels, device sync) are directly enabled by the lack of encryption.

they do have encryption, just not e2ee, and in fairness to them, it doesn't make sense to have e2ee on a channel or a group with 100k ppl in it. also, device sync is possible with e2ee, it's just slower

you can have large groups and device sync WITH e2ee, see Matrix.

Matrix

What are you talking about? WhatsApp, iMessage, and Signal all have multi-device support and are E2E encrypted, just to name a few very popular options.

it's unironically just react lmao. virtually every popular react app has an insane number of accidental rerenders, triggered by virtually everything, causing it to lag a lot


well that's any framework with a vdom, the GC of web frameworks, so I'd imagine it's also a problem with vue etc.

I don't understand, though, why performance (i.e. using it properly) is not a consideration at these companies that are valued above $100 billion

like, do these poor pitiful big tech companies only have the resources to do so when they hit the 2 trillion mark or something?


Vue uses signals for reactivity now, and has for years. Alien Signals was created by a Vue contributor. Vue 3.6 (now in alpha/beta?) will ship a version that is essentially a Vue-flavored Svelte, with extremely fine-grained reactivity based on a custom compiler step.

One of the reasons Vue has such a loyal community is because the framework continues to improve performance without forcing you to adopt new syntax every 18 months because the framework authors got bored.


It's not a problem with vue or svelte because they are, ironically, reactive. React greedily rerenders.

It's also not a problem with the react compiler.
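The difference described here can be sketched in a few lines: a fine-grained reactive system only re-runs the computations that actually read a changed value, instead of greedily re-rendering a subtree. This is a toy illustration of the signal pattern (the general idea behind Vue/Svelte reactivity), not any framework's real API; all the names are made up.

```javascript
// Toy signal-based reactivity: effects re-run only when a value they
// actually read changes. Illustrative only, not a real framework API.
let activeEffect = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      // Track whichever effect is currently running as a dependency.
      if (activeEffect) subscribers.add(activeEffect);
      return value;
    },
    set(next) {
      value = next;
      // Re-run only the effects that read this signal.
      for (const sub of subscribers) sub();
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // first run registers dependencies
  activeEffect = null;
}

// Only the effect reading `count` re-runs when `count` changes.
const count = signal(0);
const unrelated = signal("hello");
let renders = 0;

effect(() => { renders++; count.get(); });

count.set(1);       // triggers the effect: renders goes from 1 to 2
unrelated.set("x"); // no subscribers, nothing re-runs
```

Contrast with React's default behavior, where a state change re-renders the component and all its children unless you memoize by hand.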


The React paradigm is just error prone. It's not necessarily about how much you spend; well-paid engineers can still make mistakes that cause unnecessary re-renders.

If you look at older desktop GUI frameworks designed in a performance-oriented era, none of them use the React paradigm, they use property binding. A good example of getting this right is JavaFX which lets you build up functional pipelines that map data to UI but in a way that ensures only what's genuinely changed gets recomputed. Dependencies between properties are tracked explicitly. It's very hard to put the UI into a loop.


Property binding and proxies really didn't work well in JS at all until relatively recently, and even then there is actually a much worse history of state management bugs in apps that do utilize those patterns. I've yet to actively use any Angular 1.x app or even most modern Angular apps that don't have bugs as a result of improper state changes.

While more difficult, I think the unidirectional workflows of Redux/Flux patterns, when well-managed, tend to function much better in that regard, but then you do suffer from the potential for redraws... this isn't the core of the DOM overhead, though... that usually comes down to a lot of deeply nested node structures combined with complex CSS and more than modest use of oversized images.
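For reference, a minimal sketch of the unidirectional Redux/Flux-style loop mentioned above: every change funnels through one pure reducer, so transitions are explicit and change detection is a cheap reference comparison. The action names and state shape here are invented for illustration, not any real app's.

```javascript
// All state changes go through one pure reducer.
function reducer(state, action) {
  switch (action.type) {
    case "increment":
      // Return a new object rather than mutating, so consumers can
      // detect change with a reference comparison.
      return { ...state, count: state.count + 1 };
    case "reset":
      return { ...state, count: 0 };
    default:
      return state; // unknown actions leave state untouched
  }
}

// A tiny store: holds state, dispatches actions, notifies listeners.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((l) => l());
    },
    subscribe: (l) => listeners.push(l),
  };
}

const store = createStore(reducer, { count: 0 });
store.dispatch({ type: "increment" });
store.dispatch({ type: "increment" });
// store.getState().count is now 2
```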


Nobody gets promoted for improving web app performance.


Yes, they do. OGs remember that Facebook circa 2012 had navigation take like 5-10 seconds.

Ben Horowitz recalled asking Zuck what his engineer onboarding process was, after Zuck complained to him about how long it took them to make changes to code. He basically didn't have one.


From: https://hpbn.co/primer-on-latency-and-bandwidth/#speed-is-a-...

> Faster sites lead to better user engagement.

> Faster sites lead to better user retention.

> Faster sites lead to higher conversions.

If it's true that nobody is getting promoted for improving web app performance, that seems like an opportunity. Build an org that rewards web app performance gains, and (in theory) enjoy more users and more money.


yep. I think this is the root problem, not the frameworks themselves


If it's slow, people also stick around longer when they have something they must accomplish before leaving.


They have no real competitors, so anything that makes the user even stickier and more likely to spend money (LinkedIn Premium or whatever LinkedIn sells to businesses) takes priority over any improvements.


> well that's any framework with vdom

Is it time for vanilla.js to shine again with Element.setHTML()?

https://developer.mozilla.org/en-US/docs/Web/API/Element/set...

It's a bit unfortunate that several .setHTML() calls can't be batched to execute together and minimize page redraws.
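Batching can be layered on in userland, though. A hedged sketch: the `scheduleHTML` helper and the element stub below are inventions for illustration; only `Element.setHTML()` itself is the real (sanitizing) DOM API, and browsers already coalesce rendering between tasks, so the main win here is deduplicating writes to the same element.

```javascript
// Coalesce multiple HTML updates into a single flush per microtask.
const pending = new Map();
let flushQueued = false;

function scheduleHTML(element, html) {
  pending.set(element, html); // later writes to the same element win
  if (!flushQueued) {
    flushQueued = true;
    queueMicrotask(() => {
      // One pass applies all queued updates together.
      for (const [el, markup] of pending) el.setHTML(markup);
      pending.clear();
      flushQueued = false;
    });
  }
}

// In a browser, `el` would be a real Element; a stub keeps the sketch
// runnable anywhere.
const el = { calls: 0, html: "", setHTML(h) { this.calls++; this.html = h; } };
scheduleHTML(el, "<p>first</p>");
scheduleHTML(el, "<p>second</p>"); // coalesced: only "second" lands, in one call
```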


Well, they've started firing their lowest-tier devs, who churn a lot... combined with mass layoffs... and on the higher end, they're more interested in devs who memorized all the leetcode challenges than experienced devs/engineers with a history of delivering solid, well-performing applications.

Narcissism rises to the top, and excess "enterprise" bloat seeps in at every level, combined with too many sub-projects that are disconnected in ways that make them hard to "own" as a whole, and perverse incentives to add features over improving the user experience.


I think linkedin is built with emberjs not react last i checked…

The problem with performance in web apps is often not "omg, too much rendering" but actually processing and memory use. Chromium loves to eat as much RAM as possible, and the state management world of web apps loves immutability. What happens when you create new state any time something changes, and V8 then needs to recompile an optimized structure for that state, coupled with thrashing the GC? You already know.

I hate the immutability trend in web apps. I get it, but the performance is dogshite. Most web apps I have worked on spend about 10% of their CPU time... garbage collecting, and the rest doing complicated deep state comparisons every time you hover over a button.

Rant over.
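The GC-pressure point above is easy to see in a toy example: a full clone reallocates the entire tree on every update, while a structural-sharing update (the spread pattern most immutable-state libraries rely on) reallocates only the changed path and reuses everything else by reference. The state shape here is made up for illustration.

```javascript
// Toy state tree (invented shape, for illustration only).
const state = {
  user: { name: "ada", prefs: { theme: "dark" } },
  feed: { items: [1, 2, 3] },
};

// Wasteful: every branch is a fresh allocation, so each update hands
// the GC a whole tree of garbage.
const cloned = structuredClone(state);

// Structural sharing: only the changed path (root + user) is new;
// untouched branches are reused by reference.
const next = {
  ...state,
  user: { ...state.user, name: "grace" },
};

// next.feed === state.feed        → true  (shared, no new allocation)
// cloned.feed === state.feed      → false (fully duplicated)
// next.user.prefs === state.user.prefs → true (shared)
```

Shared references are also what makes the "cheap reference comparison" change detection work: if a branch is the same object, nothing under it changed.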


you need secure channels of communication (and preferably a connection to the outside world) to solve any problem


No. You need people not being sheep to fight a police state.

Russians are sheep. Russia has become a police state.


Have you yourself fought anyone?

Besides arguing on the internet with strangers


The front line is everywhere, as you know. You are fighting an information war and some of us are here to troll you back.


Only online trolls with my NAFO brethren.


It will happen to you soon, and you won't fight.


vpn protocols we use here nowadays are way more advanced than this: they mimic a TLS handshake with a legitimate, non-blocked site (like google.com), so the traffic looks essentially like regular https traffic to that site

it looks like they're basically impossible to detect, given the failure to block them, outside of timing attacks (seeing if a request crosses Russia's border and comes back quickly after), but even that is fully mitigated by having the vpn "disconnect" and route traffic to unblocked Russian sites directly, since those are the requests that would otherwise enable such timing detection

pretty interesting stuff, there are several versions of this system, and even the ones that have existed for a while work pretty well


Super interesting stuff, but won't this require multiple (possibly untrustworthy/adversarial) parties to abide by your protocol? Like, if you don't control all the nodes in the VPN, why can't the Kremlin just enforce a blacklist at said bad node?


you do/can control all the VPN nodes in this setup (most often just a single one) since your traffic doesn't actually go through the website you're masking under

and the nature of the protocol makes it extremely difficult to detect and thus get the server IP banned. i got one server banned, but after that i implemented some practices (including directly connecting to websites that are inside Russia) and it's been working fine since then


perhaps, there's still hope i think:

- roskomnadzor just not being competent enough to implement the block fully

- they'll reverse the block, since it will likely completely cripple everything that relies on the internet (which is basically everything nowadays)

- they won't go through with the ban completely, since if they did, their job would sort of be done; they want to continue to exist and make money off of the digital infrastructure required to implement the block, so they'll just keep playing this game of cat and mouse

- outside internet connectivity will likely remain to some degree, it'll just be very slow and probably expensive, but i really struggle to see a country like Russia being completely cut off from the internet in the year of our lord 2026

i could be wrong, who knows, after all this whole situation is unprecedented, and human ingenuity sort of always finds a way

and on a somewhat positive note, mobile internet came back today, and the blocks are bypassable with a regular vpn now, even ones that aren't hosted on whitelisted subnets


read the post please, the precise problem is that this may soon not work


nobody actually likes it, it's just macos is still the least terrible to use option


i swear if someone starts another single header vs other options debate in this comment section i'm gonna explode


Boom! C and C++ aren't scripting languages.


It has to be said that one of the reasons a single header library is so useful in the C/C++ world is that it makes interfacing with Lua so much sweeter.

Lua(C,C++) = nirvana

BRB, off to add Canvas_ity to LOAD81 ..


what do you mean by that?


most of the traffic is probably from open weights; just seed those, and host private ones as is


this is silly. we already have an algorithm for generating very efficient assembly/machine code from source code: a compiler. this is like saying maybe one day llms will be able to replace sin() or an os kernel (i vaguely remember someone prominent claiming this absurdity). like, yes, maybe they could, but it would be super slow and inefficient, and we already know a very (most?) efficient algorithm, so what are we doing?

