Hacker News | scq's comments

Just because a web application uses React and is slow, it does not follow that it is slow because of React.

It's perfectly possible to write fast or slow web applications in React, same as any other framework.

Linear is one of the snappiest web applications I've ever used, and it is written in React.


Sure, it's possible, but those are a handful of exceptions to the norm. The general approach so easily guides you toward bloat upon bloat that you have to be an expert to actively avoid going down that route.

Does not, in the seeming absence of other snappy examples and the overwhelming evidence of many, many slow React apps, the exception prove the rule?

There are plenty of snappy examples. Off the top of my head: Discord, Netflix, Signal Desktop, WhatsApp Web.

Those are all really poorly-performing.

Discord, maybe. But Netflix and WhatsApp Web? Those are bloated cows, just less broken than average.

No. What it can affect though is the bandwidth of the cable, meaning e.g. for HDMI cables, they might not support higher resolutions or framerates. If it's on the border you might see random disconnects or screen blanks.

Quality degradation is not something you will see, as it's a digital protocol.

"Audiophile grade" HDMI cables are likely to just be a Shenzhen bargain-bin special with some fancy looking sheathing and connectors. I would trust them less than an Amazon Basics cable.


Indeed. If I want super high quality cables, I get them from Blue Jeans Cables, who tell you exactly what Belsen or Can are cable stock and what connectors they use, as well as the assembly methodology.

Belden or Canare. Pesky autocorrect.

With EVs, most of your charging should be done at home, with fast charging mostly just existing for trips.

I know not everyone can charge at home (especially if you live in an apartment), but the solution to that is pretty straightforward and a lot more convenient compared with trying to scale up fast charging to match petrol stations.


From my understanding, the new CT machines are able to characterise material composition using dual-energy X-ray, and this is how they were able to relax the rules.


I am not up-to-date on the bleeding edge, but that explanation doesn't seem correct? The use of x-rays in analytical chemistry is for elemental analysis, not molecular analysis. (There are uses for x-rays in crystallography, but that is unrelated to this application.)

At an elemental level, the materials of a suitcase are more or less identical to an explosive. You won’t easily be able to tell them apart with an x-ray. This is analogous to why x-ray assays of mining ores can’t tell you what the mineral is, only the elements that are in the minerals.

FWIW, I once went through an airport in my travels that took an infrared spectra of everyone’s water! They never said that, I recognized the equipment. I forget where, I was just impressed that the process was scientifically rigorous. That would immediately identify anything weird that was passed off as water.


Here's an article that talks about Dual-energy CT [1]. And another one talking about material discrimination using DECT [2].

[1] https://en.wikipedia.org/wiki/Spectral_imaging_(radiography)

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC2719491/
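The core idea in [2] can be sketched in a few lines: measure attenuation at two tube energies and compare the low/high ratio, which depends on the effective atomic number of the material rather than on density alone. The reference ratios and material names below are illustrative placeholders, not calibrated data from either article.

```python
import math

# Toy dual-energy material discrimination. The ratio table is made up
# for illustration; a real scanner uses calibrated reference curves.
REFERENCE_RATIOS = {"water-like": 1.35, "metal-like": 1.95, "organic": 1.20}

def linear_attenuation(I0, I, path_cm):
    """Beer-Lambert: I = I0 * exp(-mu * x)  =>  mu = ln(I0/I) / x."""
    return math.log(I0 / I) / path_cm

def classify(I0, I_low, I_high, path_cm):
    """Match the low/high attenuation ratio to the nearest known material."""
    ratio = linear_attenuation(I0, I_low, path_cm) / linear_attenuation(I0, I_high, path_cm)
    return min(REFERENCE_RATIOS, key=lambda m: abs(REFERENCE_RATIOS[m] - ratio))

# A 10 cm path with mu = 0.27/cm at low energy and 0.20/cm at high energy
# gives a ratio of 1.35, matching the "water-like" entry.
print(classify(1.0, math.exp(-2.7), math.exp(-2.0), 10.0))  # water-like
```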


Neither of those articles seems to support the idea that you can do molecular analysis with x-rays. They are both about elemental analysis, which is not useful for the purpose of detecting explosives.


Not sure if they use dual-energy x-ray as in [0], but you don't need to if you take x-ray shots from different angles. With modern 3D reconstruction algorithms you can detect the shape and volume of an object and estimate the material density through its absorption rate. A 100ml liquid explosive in a container will be distinguishable from water (or Pepsi) by material density, which can be estimated from volume and absorption rate.

https://en.wikipedia.org/wiki/Dual-energy_X-ray_absorptiomet...
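The density estimate described above can be sketched as a one-liner once the reconstruction has given you a path length through the object. The mass attenuation coefficient is an assumed input here (the 0.206 cm²/g figure is roughly water's value around 60 keV); a real scanner would handle polychromatic beams and beam hardening.

```python
import math

def estimate_density(I0, I, path_cm, mass_atten_cm2_per_g):
    """Estimate density from x-ray attenuation via the Beer-Lambert law.

    I0: incident intensity, I: transmitted intensity,
    path_cm: path length through the object (from the 3D reconstruction),
    mass_atten_cm2_per_g: assumed mass attenuation coefficient.
    """
    # I = I0 * exp(-mu * x), with mu = (mu/rho) * rho
    mu = math.log(I0 / I) / path_cm   # linear attenuation, 1/cm
    return mu / mass_atten_cm2_per_g  # density, g/cm^3

# A 10 cm water path (mu/rho ~= 0.206 cm^2/g) attenuates to I/I0 = exp(-2.06),
# which recovers a density of 1.0 g/cm^3.
print(estimate_density(1.0, math.exp(-2.06), 10.0, 0.206))  # 1.0
```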


See also beepblap's comments further below, where they elaborate on this a bit (it's not just simple dual-energy x-ray, apparently).


Hm, isn't it enough to just detect water and flag everything else as suspicious?

If your liquid is 80%+ water (that covers all juices and soft drinks), it is not going to be an explosive, too much thermal ballast.


> FWIW, I once went through an airport in my travels that took an infrared spectra of everyone’s water! They never said that, I recognized the equipment. I forget where, I was just impressed that the process was scientifically rigorous. That would immediately identify anything weird that was passed off as water.

Something like 10 years ago, I had my water checked in a specialised "bottle of water checker" equipment in Japan. I had to put my bottle there, it took a second and that was it. I have been wondering why this isn't more common ever since :-).

No idea if it was an "infrared spectra machine" of course.


Cynically, it's so they can sell you another bottle on the secure side. If they spend money to give themselves a working mechanism to distinguish water from not-water, they lose the ability to create retail demand.


I understand the idea, but it's not completely true: I empty my bottle before the security check, and fill it after in a fountain.


Then you have successfully circumvented a problem that more forgetful people will run into head-first. It doesn't have to catch everyone for the shops that are tenants on the secure side to complain about lost sales.


There's still no evidence that peroxide-based explosives are stable enough to be practical. And nobody ever explained why the few liquid ones are so dangerous, but the solid ones get a pass when they are more stable.

It's a good thing that airport brought in some machinery to apply the rule in a sane way. But it's still an insane rule, and if it wasn't the US insisting on it, the entire world would just laugh it off.


Yes. The first step was upgrading to the new machines, now the size limits can be relaxed.


Mods didn't remove it, user flags did.


The issue isn't the flaggers per se. It's that moderators show no interest in seriously investigating flagging patterns.

It's very similar to ICE. Obviously they are guilty, but I place the real blame on lawmakers' hesitancy to take action to rein this in. They have the power to do so and won't even investigate the issue in ways the public cannot. That's complicity.


Mods didn't restore it either.


There are definitely use cases. Pis have lower power consumption than NUCs. This is the main reason I went for one to run Home Assistant rather than a NUC.

I have a NAS/home server that I could put it on but I don't want my home automation going down when I tinker with it.


I think I’ve got too used to VM snapshotting to turn away from it for HA.


That is not how PKI works. Your cert provider does not have a copy of your private key to give out in the first place.

Having the private key of the root cert does not allow you to decrypt traffic either.


If you don't want anything to do with EA, Konami ain't much better. They actively sabotage their former employees' job prospects.

> One employee from a staffing agency said that Konami "files complaints to gaming companies who take on its former employees," causing one game company to "warn its staff against hiring ex-Kon" - "ex-Kon" being a nickname for ex-Konami employees. "If you leave the company, you cannot rely on Konami's name to land a job," one former employee said.

https://www.gamesindustry.biz/konami-accused-of-blacklisting...


Rust is already making substantial inroads in browsers, especially for things like codecs. Chrome also recently replaced FreeType with Skrifa (Rust), and the JS Temporal API in V8 is implemented in Rust.


One aspect of Transmeta not mentioned by this article is their "Code Morphing" technique used by the Crusoe and Efficeon processors. This was a low level piece of software similar to a JIT compiler that translated x86 instructions to the processor's native VLIW instruction set.

Similar technology was developed later by Nvidia, which had licensed Transmeta's IP, for the Denver CPU cores used in the HTC Nexus 9 and the Carmel CPU cores in the Magic Leap One. Denver was originally intended to target both ARM and x86 but they had to abandon the x86 support due to patent issues.

https://en.wikipedia.org/wiki/Project_Denver
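The heart of a Code Morphing-style system is a translation cache: guest (x86) instruction blocks are translated once into host (VLIW) code, and the translation is reused every time the block runs again. Here's a toy sketch of that loop in Python, with closures standing in for generated host code; this is an illustration of the general technique, not Transmeta's actual design.

```python
# Toy translation cache: "guest" ops are (opcode, operand) pairs, and
# "translation" produces a host-callable function that is cached by block.
translation_cache = {}

def translate(block):
    """'Compile' a block of guest ops into a host-callable function."""
    ops = []
    for op, arg in block:
        if op == "add":
            ops.append(lambda acc, n=arg: acc + n)
        elif op == "mul":
            ops.append(lambda acc, n=arg: acc * n)
    def run(acc):
        for f in ops:
            acc = f(acc)
        return acc
    return run

def execute(block, acc):
    key = tuple(block)
    if key not in translation_cache:      # translate on first encounter
        translation_cache[key] = translate(block)
    return translation_cache[key](acc)    # reuse the translation afterwards

print(execute([("add", 2), ("mul", 3)], 1))  # 9
```

The interesting engineering in the real thing was everything this sketch omits: speculation, rollback on faults, and re-optimizing translations based on observed behavior.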


Code morphing was fascinating. I had no idea nVidia tried anything similar.

I always felt Transmeta could have carved out a small but sustained niche by offering even less-efficient "morphing" for other architectures, especially discontinued ones. 680x0, SPARC, MIPS, Alpha, PA-RISC... anything the vendors stopped developing hardware (or competitive hardware) for.


Here is an old doc on how it worked:

https://homepage.divms.uiowa.edu/~ghosh/4-18-06.pdf

I think it's correct to say Transmeta did partial software emulation, though lines get blurry here.


So glad someone else also knew about this connection :) Details about Denver are pretty minimal, but this talk at Stanford is one of the most detailed I’ve been able to find for those interested. It’s fascinating stuff with lots of similarities to how Transmeta operated: https://youtu.be/oEuXA0_9feM?si=WXuBDzCXMM4_5YhA


There was a Hot Chips presentation by them that also gave some good details. Unlike the original Transmeta design they first ran code natively and only recompiled the hot spots.
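That tiered strategy (run or interpret cold code, translate only blocks that prove hot) can be sketched with a simple execution counter. The threshold and names below are illustrative, not Nvidia's actual numbers or design.

```python
# Toy hot-spot recompilation: blocks are interpreted until an execution
# counter crosses a threshold, and only then "compiled" and cached.
HOT_THRESHOLD = 3
counters, compiled = {}, {}

def interpret(block, acc):
    for op, arg in block:
        acc = acc + arg if op == "add" else acc * arg
    return acc

def compile_block(block):
    # Stand-in for real code generation; same semantics, cached dispatch.
    return lambda acc: interpret(block, acc)

def run(block, acc):
    key = tuple(block)
    counters[key] = counters.get(key, 0) + 1
    if key in compiled:
        return compiled[key](acc)
    if counters[key] >= HOT_THRESHOLD:    # block became hot: compile it
        compiled[key] = compile_block(block)
        return compiled[key](acc)
    return interpret(block, acc)          # still cold: keep interpreting
```

The payoff is paying translation cost only where it amortizes, at the price of slower cold starts than a translate-everything design.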


A very similar approach is used in MCST Elbrus CPUs: https://en.wikipedia.org/wiki/Elbrus-8S#Supported_operating_...

