
Latest rumors are no Mac Studio until at least October.

I suspect Israel and Russia are looking at each other with "You first!" so that they can start nuking and blame the other for starting it.

Israel is not under existential threat even if there are occasional rockets hitting Ashkelon or downtown Tel Aviv.

The rest of the Arab world is mostly placated by alliances w/ the US and thus Israel, but if Israel drops a nuke all bets are off.

The Saudis will probably have a bomb before too long following a nuclear deal: https://www.reuters.com/world/us/us-removing-guardrails-prop...


Can anyone suggest why after covid I can't do Finnish sauna anymore? Prior to that I used to do, 1-2x a week, a sequence of 5x(10 minutes in sauna + 5 minutes cold water immersion + 10 minutes rest), which was absolutely great for both stress reduction and blood flow. Now if I do 5 minutes in the sauna I feel like my skin is burning and I am about to die, and I need to recover for an hour afterwards just to be able to walk away from the sauna.

Is the stove radiating too much heat? You want it to heat the air, not fry your skin! I once got a stove glowing hot because I had failed to get a high enough temperature going on the day before. It was the first time I tried a wood fired sauna myself. As long as the stove was glowing hot, I just couldn't enjoy the sauna.

No idea. How hot is and was your sauna? Is it possible that it's hotter than it used to be? Maybe try one that's slightly less hot?

I've got the opposite problem: saunas don't seem to be able to make me sweat anymore, so I'm looking for the hottest saunas I can find.


The usual 95°C, nothing extraordinary. Sweating after covid got impaired; I might have some thermoregulation issue.

95 is pretty hot. At commercial spas I see them start at 70, and rarely above 90.

95 is normal where I live for Finnish saunas. Then there are other types of saunas that start lower, but Finnish are always around 95.

95 is high for Finnish saunas in Finland at least. Public saunas are very rarely so hot here, and few like it that hot.

Edit: to put it into some numbers, per one study[1] Finnish sauna sessions were on average at 75.9°C with SD 9.9°C. If we assume normal distribution, that means that more than 97 % of sauna sessions are at < 95°C.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC6262976/
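
For anyone who wants to sanity-check that figure, here's a quick sketch in Python (the mean and SD come from the study above; the normal-distribution assumption is the same simplification as in my comment):

    from math import erf, sqrt

    mean, sd = 75.9, 9.9                  # session temperature stats from [1]
    z = (95 - mean) / sd                  # ~1.93 standard deviations above the mean
    p = 0.5 * (1 + erf(z / sqrt(2)))      # standard normal CDF
    print(f"P(T < 95 C) = {p:.3f}")       # ~0.973, i.e. more than 97 %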


I actually like them that hot. I look for 90+ saunas, and once was in one that claimed to be over 100. Although I have no idea how accurate that is. They're very bearable to me. But if they're not bearable, of course you should look for a sauna that's not quite as hot. Or at least stay low; the higher you sit, the hotter it is.

You can have over 100°C. The amount of steam is a crucial factor. Even over 100°C doesn't feel that hot if it's dry, but at higher temperatures the "löyly profile" often becomes quite harsh.

I'm a big fan of soaking in hot water and have noticed that cardiac function seems to have a massive effect on heat tolerance as measured vs body temperature.

For example, if I've been totally sedentary for the whole day (and my feet are chilly+blue), a body temperature as low as 101F is unbearable. But if I've been actively moving around all day (and my feet are warm and pink), I only start getting uncomfortable at a body temperature around 103.5F-104F.

This also seems to correlate over a longer timespan re: exercise habits, consumption habits, sickness, etc.


Shot in the dark, but has your actual stove changed? When did you last change the stones? Is the circulation of air worse?

If your skin feels hot my guess would be that the steaming effect might be disrupted by the water getting evaporated faster than before, and the circulation of air also affects the skin feel (that’s why a certain seating position can make sauna unbearable). You could also try to just turn it on at the lowest setting and see if it changes anything. Maybe the stones have gotten so old that old heat settings have sneakily turned unbearable.


Did it happen suddenly? Or did you go for a long time without using a sauna, and noticed the change only when you resumed? Did anything else about your body change, such as weight loss (perhaps from a GLP-1)?

It's possible that Covid had nothing to do with it, and your body is simply changing with age. It's depressing, but it happens!


Once in a while, when I get sick, I have to retrain myself to go to the sauna (e.g. taking the lowest level, even skipping the Aufguss, the German infusion where the temperature is raised gradually, etc.)

Also, IMO your body fat/water/lean muscle ratio may play a role. I once lost 5 kg due to Influenza A, and all my sport achievements as well as my sauna endurance were gone.


Anecdotal, but it took me 6 months after covid for my breathing rate to go back to normal and to be able to do consistent max-out efforts of >190BPM for >5 seconds like previously.

After covid I've found i cannot stand the cold. A friend of mine can't stand alcohol since.

I run them all on an old Pentium J (Atom) NUC with 8GB RAM, so I don't even care. Some Chinese N100 mini PC for $100 is all one needs.

Using local models or external?

If external, aren't they pricey? How many tokens do they generate?

If local, what runs on such hardware that gives reasonable results?


Local models on different machines with multiple RTX Pro 6000s or multiple DGX Sparks or a 512GB RAM Mac Studio; the agents themselves run on that Pentium J NUC and just use exposed endpoints for the local models. Forgejo for Git runs on another server. Therefore I don't really care if that NUC goes kaboom, and I can test everything quickly (OpenClaw, Hermes, Claude Code, Codex, OpenCode, Pi etc.). Or I can just use an OpenRouter API key and access models 10-100x cheaper than Opus.
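
To illustrate the pattern (a sketch only; the hostname, port, and model name are made up, not my actual setup): any OpenAI-compatible client can target a local inference box or OpenRouter just by swapping the base URL, so the agents on the NUC never care where the model actually runs.

    from openai import OpenAI  # any OpenAI-compatible client works

    # Local inference server (e.g. something exposing a /v1 API);
    # hostname, port, and model name below are illustrative.
    client = OpenAI(base_url="http://gpu-box.local:8000/v1", api_key="unused")
    # Or swap one line to go through OpenRouter instead:
    # client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

    resp = client.chat.completions.create(
        model="local-model",  # whatever the endpoint serves
        messages=[{"role": "user", "content": "Summarize this diff."}],
    )
    print(resp.choices[0].message.content)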

They no longer show reasoning traces and are throttling more aggressively.

They never showed full reasoning traces, just post-hoc summaries.

DeepSeek still shows them, it sometimes says "I am ChatGPT", and Claude sometimes says "I am DeepSeek" so the distillation went both ways.

> Qwen/Qwen3.6-35B-A3B is intended as a superior replacement of Qwen/Qwen3.5-27B

Not at all, Qwen3.5-27B was much better than Qwen3.5-35B-A3B (dense vs MoE).


Not sure why you're being downvoted; I guess it's because of how your reply is worded. Anyway, Qwen3.6-35B-A3B should have intelligence on par with a 10.25B parameter model, so yes, Qwen3.5-27B is still going to outperform it in terms of quality of output, especially for long-horizon tasks.
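
For reference, the 10.25B figure is the usual geometric-mean rule of thumb for the "dense-equivalent" capacity of a MoE model (a heuristic, not an official Qwen number):

    from math import sqrt

    total, active = 35e9, 3e9          # 35B total parameters, 3B active per token
    effective = sqrt(total * active)   # geometric-mean heuristic for MoE capacity
    print(f"~{effective / 1e9:.2f}B")  # ~10.25B dense-equivalent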

Re-read that

You should. 3.5 MoE was worse than 3.5 dense, so expecting 3.6 MoE to be superior to 3.5 dense is questionable; one could argue that 3.6 dense (not yet released) will be superior to 3.5 dense.

Ok but you made a claim about the new model by stating a fact about the old model. It's easy to see how you appeared to be talking about different things. As for the claim, Qwen do indeed say that their new 3.6 MoE model is on a par with the old 3.5 dense model:

> Despite its efficiency, Qwen3.6-35B-A3B delivers outstanding agentic coding performance, surpassing its predecessor Qwen3.5-35B-A3B by a wide margin and rivaling much larger dense models such as Qwen3.5-27B.

https://qwen.ai/blog?id=qwen3.6-35b-a3b


This says a slightly different thing:

https://x.com/alibaba_qwen/status/2044768734234243427?s=48&t...

If you look, on many benchmarks the old dense model is still ahead, but in a couple of benchmarks the new 35B demolishes the old 27B. "Rivaling", so YMMV.


That's an oversimplification. The EU is composed of people vetted by lobbyist/old-money groups, elected and approved by member countries. Their primary allegiance is not to the voter.

Management wants to get rid of people; they want to have their "wish-machine" that does what they say without any need to deal with nerds or ethical issues.

Aren't lasers driving the current 32TB+ HDD tech?

yeah but that wasn't a straight upgrade, either. HAMR has all sorts of tradeoffs.

Will RISC-V end up with the same (or even worse) platform fragmentation as ARM? Because of the absence of any common platform standard, we have phones that are only good for landfill once their support lifetime is up, and drivers never getting upstreamed to the Linux kernel (or upstreaming not even being possible due to the completely idiosyncratic platforms and boot protocols each manufacturer creates). RISC-V allows even higher fragmentation in the portions of the instruction set each CPU supports, e.g. one manufacturer might decide MUL/DIV (the "M" extension) are not needed for their CPU.
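
To make the fragmentation worry concrete, here's a minimal sketch (assuming the standard "isa" line that RISC-V Linux kernels expose in /proc/cpuinfo) of how software ends up probing what a given core actually supports:

    # Sketch: list the ISA extensions a RISC-V Linux kernel reports.
    # Assumes the usual "isa : rv64imafdc..." line in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("isa"):
                isa = line.split(":", 1)[1].strip()  # e.g. "rv64imafdc_zicsr"
                single = isa[4:].split("_")[0]       # single-letter extensions
                print("base:", isa[:4], "extensions:", ", ".join(single))
                # a core shipped without "m" has no hardware MUL/DIV
                break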

> platform fragmentation

RISC-V is addressing this issue quite directly. For things like desktops, laptops, SBCs and servers we have the RVA23 profile which defines quite specifically what features a chip must support to ensure code portability.

On top of this, there are platform specifications. For example, the server spec is about to be finalized next month. It extends RVA23 with things like UEFI, SBI, and ACPI to ensure that you can take something like a Linux distro and easily install it on any RISC-V server, like you can in the world of x86-64.

> we have phones that are only good for landfill once their support lifetime is up

RISC-V will probably not solve that problem in general.

First, the ISA cannot really demand that your phone avoid a Broadcom wireless chip that requires proprietary firmware, for example.

Also, the phone vendor can still lock down the devices to prevent running arbitrary code.

Thankfully, the RISC-V world is developing a culture of openness. If a company wants to create a fully “open” phone, they are quite likely to adopt RISC-V. And, because of RISC-V, even the SoC itself could be fully Open Source.

But your typical Android phone is not going to get more Open just because they contain a RISC-V CPU.


The answer is unequivocally yes: RISC-V is designed to be customizable and a vendor can put whatever they like into a given CPU. That being said, profiles and platform specs are designed to limit fragmentation. The modular design and core essential ISA also make fat binaries much more straightforward to implement than on other ISAs.

You can choose to develop proprietary extensions, but who’s going to use them?

A great case study is the companies that implemented the pre-release vector standard in their chips.

The final version is different in a few key ways. Despite substantial similarities to the ratified version, very few people are coding SIMD for those chips.

If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.

The only place I see proprietary extensions surviving is in the embedded space where they already do this kind of stuff, but even that seems to be the exception with the RISC-V chips I've seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).


Yes, extensions are perfect for embedded. But not just there.

Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.


RVA23 is the standard target for compilers now. If you support newer stuff, it’ll take a while before software catches up (just like SVE in ARM or AVX in x86).

If you try to make your own extensions, the standard compiler flags won’t be supporting it and it’ll probably be limited to your own software. If it’s actually good, you’ll have to get everyone on board with a shared, open design, then get it added to a future RVA standard.


Compiling the code is not the issue. The hard part is the system integration, most notably the boot process and peripherals. It's not actually hard to compile code for any given ARM or x86 target; even much less open ecosystems like IBM mainframes have free and open source compilers (e.g. GCC). The ISA is just how computation happens, but you have to boot the system and get data in and out for it to be actually useful, and pretty much all of that involves vendor-specific quirks. It's really only the x86 world where that got so standardized across manufacturers, and that was mostly because people were initially trying to make compatible clones of the IBM PC.

Thanks, that however addresses only part of the problem. ARM also suffers from having no boot/initialization standard: each manufacturer does it their own way instead of what the PC had with BIOS or UEFI, making ARM devices incompatible with each other. I believe the same holds for RISC-V.

There is a RISC-V Server Platform Spec [0] on the way, supposed to standardise SBI, UEFI, and ACPI for server chips, and it is expected to be ratified next month. (I have not read it myself yet.)

[0]: https://github.com/riscv-non-isa/riscv-server-platform


There has been concerted effort to start working on these kinds of standards, but it takes time to develop and reach a consensus.

Some stuff like BRS (Boot and Runtime Services Specification) and SBI (Supervisor Binary Interface) already exists.


PC/x86 was an extreme outlier, sadly, and it was because of the Microsoft/Intel business model. The architecture details were historically mostly decided on by Wintel, yet the system integration was done by many vendors, whose best interest was to stay as compatible as possible. It's unlikely that another platform would be able to reach this state; the PC architecture was subsidized by the M$ software monopoly, which nobody would have wanted to suffer thru again.

> It's unlikely that another platform would be able to reach this state...

Is this really true? The computer ecosystem is more open now than ever. The original PC BIOS (which PC-compatible manufacturers needed to implement) was never an open, documented standard. It was a proprietary, closed system made by IBM. It's pretty fair to say that IBM didn't anticipate a PC/x86 ecosystem developing around their product. They even sued companies who made their own compatible BIOSes (like Corona). Intel didn't really have much to do with the success of the product at that point in time either, much less Microsoft.

In contrast, all of the widely-used modern systems for hardware abstraction (UEFI/ACPI/DeviceTree/OpenSBI/etc.) are open, royalty-free standards that anyone can use. Their implementation in ARM is newer, and inconsistent, but that's only because of how hugely diverse the ARM ecosystem is.


> Is this really true?

I think the issue is that desktop and server computing are “open” in the sense that you have full control over the software you run on them. So people interpret the dominant desktop and server platform architecture (the world of x86-64) as being open.

The embedded world is mostly closed; you are meant to run the software your hardware comes with. The platforms popular there are considered less open (ARM and RISC-V).

Mobile devices like phones and tablets are historically closed devices, regardless of ISA. They are generally getting more closed in the name of security.

It is not the ISA that is “open” but the industry.

That said, in RISC-V, there is a sub-current of openness. I do not think that will overcome the industry tendencies in general, but there will be a small cadre of folks trying to create an open presence in every niche. The good news is that there is nothing to stop them. They will succeed eventually.


The early PC era was a mess, and that's not the period I'm talking about. IBM was clearly not up to the task and Intel didn't care much yet, but Microsoft certainly did a lot for compatibility from the start (i.e. DOS abstracted away a lot of BIOS routines, so it would be easy to port MS-DOS to a non-IBM x86). But after IBM revealed MCA to show just exactly how much they cared about compatibility and platform openness, Intel realized they were missing out and cleaned up the MCA/EISA/VLB mess with PCI. Then Microsoft and Intel jointly released APM in 1992 (which was clearly not enough), and then ACPI in 1996 (which is a total dumpster fire, but a sufficiently functional dumpster fire). I.e. ACPI and UEFI are exactly the product of the monopoly. M$/Intel profited from the abundance of cheapo white boxes, so it was in their best interest to come up with a standard even DELL could implement. The fact that AMD was going to implement ACPI too didn't much bother Intel - they were so dominant that they could afford not to care.

On the other hand, ARM sells the cores to SoC vendors (and doesn't care much what becomes of them), SoC vendors duct-tape the ARM cores to a bunch of Synopsys peripherals and sell the resulting SoCs to smartphone and car makers (and don't care much about the product), and system integrators throw Android on top and sell it to the customers. Then Google, who gets all the cream via Play, hides all the mess behind a thousand layers of Java abstractions.

DeviceTree is an offshoot of Sun's OpenFirmware (and it leaves out all the hard stuff - OpenFirmware had Forth, DeviceTree expects the kernel to support every single brand of fan switch). OpenSBI is a disaster. I'm sorry, but what kind of bright mind came up with the idea of hiding the damn *timer* behind a privilege switch? Timers were enough of a pain point on x86 already before it settled on the userspace-accessible RDTSC. RISC-V SBI? Reproducing x86 one stupid decision at a time.


Just like everything else outside the PC, thanks to clones becoming a thing.

One reason UNIX became widely adopted, besides being freely available versus the other OSes, was that it allowed companies to abstract their hardware differences, offering some market differentiation while keeping some common ground.

Those phones' common ground is called Android with a Java/Kotlin/C/C++ userspace; folks should stop seeing them as GNU/Linux.


> Will RISC-V end up with the same (or even worse) platform fragmentation as ARM?

Sadly, yes. RISC-V vendors are repeating literally every single mistake that the ARM ecosystem made and then making even dumber ones.


Please elaborate.
