
You can use triggers to keep the tag table synchronized automatically.
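For example, a minimal sketch with SQLite (the table names post_tags and tag_counts are made up just to illustrate the idea; adapt to your schema):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE post_tags (post_id INTEGER, tag TEXT);
    CREATE TABLE tag_counts (tag TEXT PRIMARY KEY, n INTEGER NOT NULL DEFAULT 0);

    -- keep tag_counts in sync whenever post_tags changes
    CREATE TRIGGER tag_added AFTER INSERT ON post_tags BEGIN
      INSERT OR IGNORE INTO tag_counts (tag, n) VALUES (NEW.tag, 0);
      UPDATE tag_counts SET n = n + 1 WHERE tag = NEW.tag;
    END;

    CREATE TRIGGER tag_removed AFTER DELETE ON post_tags BEGIN
      UPDATE tag_counts SET n = n - 1 WHERE tag = OLD.tag;
    END;
    """)

    conn.execute("INSERT INTO post_tags VALUES (1, 'linux')")
    conn.execute("INSERT INTO post_tags VALUES (2, 'linux')")
    print(conn.execute("SELECT * FROM tag_counts").fetchall())  # [('linux', 2)]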

Yes, the article is very meandering. But near the end he mentions that the chips Nvidia sold in the last three quarters alone would require about 15 GW of electricity to actually run. But where are all the datacenters? And how long can this go on?

When the ground truth is that someone keeps buying zillions' worth of equipment with investors' money, but the chances of actually using it are slim, never mind turning a profit on it, there just has to be some fraud involved. Probably not on Nvidia's side, but I would not bet on it.


I am going to assume those GPU sales are AI-specific chips and not gaming cards.

A lot of these data centers do seem to be getting built, with many more planned. Either way, Nvidia has sold the cards. I don't necessarily think there's fraud involved, just a lot of demand, speculative or otherwise.

Nvidia's financial statements are externally audited, their inventory is audited and their accounts receivable functions are also audited. I'd put a lot of faith in those audits.


The whole scheme is clearly unsustainable and will end in disaster. That some parts of it are thoroughly audited and trustworthy is not the whole picture.

Why can't investors wait to buy AI chips until they actually have the datacenters and power supply ready? There will be improved chips next year.


On that glorious day when I explained to my boss what a wiki is and that we should have one internally, he fired "viki" into Google, clicked the first result with smoothly honed muscle memory... and got a full screen of poon.

At least you weren't the guy hitting a wall when trying to get a testing library integrated because it was named Testacular

Back in college we had an old program used to analyse oscilloscope data named ANAL.

I studied analytic combinatorics in grad school. Had to be sure not to abbreviate it to "anal comb".

At school we called our module analsyn for syntactic analyser. Good times.

When I told a co-worker about https://pypi.org/project/voluptuous/ he immediately searched for the name alone, got really wide-eyed and closed the tab, then told us not to do the same.

There was a Markdown library called upskirt; the authors were bullied into renaming it. They called it Misaka, because that's an anime character who wears shorts under her skirt.

I asked to have LaTeX installed at one site, several years ago. The first Google results were eye-opening.

I had a student in one of my LaTeX classes back in the 90s who had a “I lust for latex” T-shirt.

Honestly, I'm not sure which interpretation is more concerning.

What was the first result? Mine is Rakuten Viki, a streaming service focused on Asian dramas, which is nothing like what you described.

Are you aware of how HTTPS proxying works? Clients use the CONNECT method, and after that everything is opaque to the proxy. So without MITM you only know the remote IP address.
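Rough sketch with Python's standard library, just to show the shape of it (proxy.example.com:3128 and example.org are placeholders):

    import http.client

    # Connect to the proxy, then ask it to open a tunnel to the real host.
    conn = http.client.HTTPSConnection("proxy.example.com", 3128)
    # set_tunnel() makes the client send "CONNECT example.org:443"; once the
    # proxy answers 200, the TLS handshake and everything after it are just
    # opaque bytes to the proxy -- it only ever saw the requested host:port.
    conn.set_tunnel("example.org", 443)
    conn.request("GET", "/")
    print(conn.getresponse().status)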

There’s another difference which is what I was referring to: in both cases, your proxy has to forge the SSL certificate for the remote server but in the transparent case it also must intercept network traffic intended for the remote IP. That means that clients can’t tell whether an error or performance problem is caused by the interception layer or the remote server (sometimes Docker Hub really is down…) and it can take more work for the proxy administrator to locate logs.

If you explicitly configure a proxy, the CONNECT method can trigger the same SSL forgery but because it’s explicit the client’s view is more obvious and explainable. If my browser gets an error connecting to proxy.megacorp.com I don’t spend time confirming that the remote service is working. If the outbound request fails, I’ll get a 5xx error clearly indicating that rather than having to guess at what node dropped a connection or why. This also provides another way to implement client authentication which could be useful if you have user-based access control policies.

It’s not a revelation, but I think this is one of the areas where trying to do things the easy way ends up being harder once you factor in frictional support costs. Transparent proxying is trading a faster rollout for years of troubleshooting.


No mention of taint mode, where you had to untaint all data coming from the user, at least by comparing it against a fixed string, converting it to a number, or filtering it through a regexp. If this were more broadly adopted, it would save everyone so many headaches.

You don't link to the book; where is it? Did you mean Loch Kelly's The Way of Effortless Mindfulness?

That's correct, it's Loch Kelly's The Way of Effortless Mindfulness.

Here's the Amazon link: https://www.amazon.com/Way-Effortless-Mindfulness-Revolution...


Thanks for pointing it out. I've fixed the title and added a link in the post.

You can enforce a grammar. Llama.cpp can do that, without any need to change models.
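Rough sketch using the llama-cpp-python bindings (the model path is a placeholder; the llama.cpp CLI also takes a grammar file IIRC):

    from llama_cpp import Llama, LlamaGrammar

    # GBNF grammar that only allows the literal strings "yes" or "no"
    grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

    llm = Llama(model_path="model.gguf")  # any stock GGUF model, unchanged
    out = llm("Is the sky blue? Answer yes or no: ",
              grammar=grammar, max_tokens=4)
    print(out["choices"][0]["text"])  # output is constrained to yes/no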

After I had a reckoning with bitrot, I would very much recommend using something with ECC memory for a NAS, and a checksumming filesystem with periodic scrubbing that won't silently get corrupted on you.

Same, but I also discovered a wonderful bonus in the difference between true ECC DDR5 and just the on-chip BS stuff.

ECC DDR5 boots insanely fast since the BIOS can quickly verify the tune passes. This is even true when doing your initial adjustment / verification of manufacturer spec.


> checksumming filesystem with periodic scrubbing

Do you know a system that does this? Looking for this too


ZFS, Btrfs, or SnapRAID in a cron job (not a filesystem, but accomplishes something similar).

ZFS is the “gold standard” here
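The periodic part is basically just scheduling a scrub. A hypothetical sketch for a pool named "tank" (most people just put the zpool command straight into cron or a systemd timer):

    import subprocess

    POOL = "tank"  # placeholder pool name

    # Scrub: ZFS re-reads every block, verifies it against its checksum and
    # repairs it from redundancy (mirror/raidz) if a copy turns out bad.
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # Any checksum errors show up later in the status output.
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True, check=True)
    print(status.stdout)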


SnapRAID is awesome, it's been bulletproof in recovering multiple failed drives for me, but note that you have to have your sync and scrub operations appropriately configured to get bitrot protection.

I have had good experiences with SnapRAID as well. I use this script to run it (in a cron job), which is highly configurable:

https://github.com/auanasgheps/snapraid-aio-script


btrfs and zfs

Came here for this comment. I wouldn't run a NAS without ECC.

Electronics have gotten so cheap recently that selling old hardware to strangers is rarely worth the effort. Then there's the question of what OS you are going to put on an old PC. And even if the buyers are, say, only using a browser and would be okay with Linux, modern browsers need at least 8GB of memory.

I know I am in the minority and my uses/needs/requirements are not average, but I am perfectly fine with running Xubuntu on the following hardware: 1) 4GB 2011 Thinkpad with HDD (yeah really) and 2) 4GB 2009 Phenom desktop (was Win10 until a month ago).

By fine I mean running all of these at the same time: Firefox with several tabs, development tools, Blender and GIMP. All snappy and fast. Even the HDD in the laptop is only an annoyance during/after a cold boot; after that it makes no difference. I have daily driven both for the past 8-15 years. The laptop sits at ~10-15W idle and the i5 in it is a workhorse if needed.

Of course there are uses for better hardware, and I am not dismissing upgrades. But the whole modern hw/sw situation is a giant shipwreck and a huge waste of resources and energy. I've tried very expensive new laptops for work (look up "embodied energy"), and a Windows 11 right-click takes half a second to respond while Unity3D can take several minutes to boot up. It's really sad.

edit: To be honest I have to add a counter-example: streaming >=1080p60 video from YT is kind of a no-no, but that's related to the first sentence of my post.


I am running Win 10 LTSC on an "HP 205 G3 All-in-One Desktop PC" with 4GB of RAM. Not the best experience, but it plays YouTube and can output to HDMI.

I am not saying you are wrong in general.


Then someone finds out you are rewarding outsiders for something they are doing for free anyway. Such cutting into company profit is inexcusable. You are supposed to ride even your own employees raw to maximize profits, not splurge money on some weirdos just like that!
