yorwba's comments

Don't forget that there was an economy before electrification was even a thing. I doubt a Kenyan coffee farmer is using a lot of electricity to grow coffee. Meanwhile a lot of electricity use in richer countries is for household electronics that are convenient but ultimately optional.

The company building the data center probably doesn't want to also build out the necessary support infrastructure if it can get the government to do it instead, and the president's emphasis on what a big ask that is looks like a negotiation tactic to reduce how much assistance the government has to provide.


Yeah, and things that can't be tried out are explicitly excluded by the Show HN rules: https://news.ycombinator.com/showhn.html

OP should come back once there's an actual product, assuming it ever gets past the email harvesting stage.


Linguists had been intensively studying Ubykh for decades by the time the last speaker died. It's rather well-documented for a language with few speakers.

It's an interesting constrained-writing exercise, but I think it demonstrates that syllable count is too simplistic a heuristic for whether children are likely to understand a word (which is in any case context-dependent). For example, the model inserts spaces to break up words that are usually written without one ("near by"), or resorts to contorted expressions built from somewhat obscure monosyllabic vocabulary ("a rule that makes moves in runs that beat a base more apt" is hardly a clear explanation of the REINFORCE algorithm).
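For concreteness, the syllable-count heuristic the exercise leans on can be approximated by counting vowel groups. This is an illustrative sketch, not how the exercise actually counted; the plural-stripping and silent-'e' adjustments are my own crude assumptions, and the heuristic is knowingly imperfect (it misses "-le" words like "table"):

```python
import re

def count_syllables(word):
    """Rough syllable count via vowel groups.
    A crude heuristic, not a dictionary lookup;
    known failure: final '-le' words like 'table'."""
    word = word.lower()
    word = re.sub(r"s$", "", word)  # crude plural/3rd-person strip ("makes" -> "make")
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    # Treat a final silent 'e' as non-syllabic ("make" -> 1, but keep "ee")
    if word.endswith("e") and not word.endswith("ee") and n > 1:
        n -= 1
    return max(n, 1)

def monosyllabic_only(text):
    """Keep only the words the heuristic judges to be one syllable."""
    words = re.findall(r"[a-zA-Z]+", text)
    return [w for w in words if count_syllables(w) == 1]
```

Even this toy version rates the contorted REINFORCE gloss above as entirely monosyllabic, which is exactly the problem: passing the syllable filter says nothing about comprehensibility.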

The CHILDES corpus might provide a more reasonable proxy for how people normally talk to children.


Yes, I definitely take this as a kind of reductio of Grow-Speech, after having defined it rigorously enough to be enforceable, or as chatbots love to say now, 'auditable' (https://gwern.net/grow-speech).

The LLMs follow the rules rigorously (barring a handful of inessential remaining errors like 'wires'), but show that you can easily satisfy the letter and not the spirit of the exercise. Grow-Speech turns out to be flimsy and arbitrary once you start trying to use it seriously for something much more ambitious than https://gwern.net/doc/cs/algorithm/1998-steele.pdf, because you just start reaching for multi-word phrases or obscure Anglo-Saxon words (even if you can't go full Anglish).

When I look back at it now, I realize that Steele spends a lot of the apparently impressive length on fluff or descriptive language, and trades heavily on the fact that we already know what a programming language is or what an integer or an object is. I do not think anyone who doesn't have at least a hazy grasp of 'object' is really going to grasp a definition like:

> An object is a datum the meanings of whose parts are laid down by a set of language rules. In the Java programming language, these rules use types to make clear which parts of an object may cite other objects. Objects may be grouped to form classes. Knowing the class of an object tells you most of what you need to know of how that object acts. Objects may have fields; each field of an object can hold a datum. Which datum it holds may change from time to time. Each field may have a type, which tells you what data can be in that field at run time (and, what is more, it tells you what data can not be in that field at run time).

But when you try to tell people about something genuinely unfamiliar like REINFORCE, the obscurity becomes clear. (I'm going to revise REINFORCE to "a rule that makes moves in runs that beat a base more apt, and moves in runs that fall short less apt"... but it's not that much better, honestly.)
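For readers who don't already know what that monosyllabic gloss is gesturing at: REINFORCE with a baseline is short enough to sketch directly. This is a minimal illustrative example on a two-armed bandit; the reward gap, learning rates, and step count are my own choices for the toy, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-armed bandit: action 1 pays 1.0 on average, action 0 pays 0.2.
def reward(action):
    return rng.normal(1.0 if action == 1 else 0.2, 0.1)

logits = np.zeros(2)   # policy parameters (softmax over two actions)
baseline = 0.0         # running average reward (the "base")
lr = 0.1

for step in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(2, p=probs)
    r = reward(action)
    # REINFORCE with baseline: actions in runs that beat the base
    # become more likely; actions that fall short become less likely.
    grad = -probs
    grad[action] += 1.0                    # d log pi(action) / d logits
    logits += lr * (r - baseline) * grad   # policy-gradient step
    baseline += 0.01 * (r - baseline)      # slowly track average reward

probs = np.exp(logits) / np.exp(logits).sum()
```

After training, nearly all probability mass sits on the better arm; "beat a base" is the `(r - baseline)` factor.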


> In case people no longer remember, when China started to require websites to register for a license before being allowed to operate, it was for "protecting the children" too.

Indeed I do not remember this, nor can I find corroborating evidence that there was much of an effort to justify the requirement to the public at all. As far as I can tell, the government simply decided that they needed more control over the internet, so they made a law to give themselves more control over the internet: https://www.gov.cn/gongbao/content/2000/content_60531.htm That law has no special provisions limited to children that only later got extended to adults. (Meanwhile, restrictions on how long children may play games continue to apply only to children, AFAIK.)


The verbalizer and reconstruction models are both initially finetuned on LLM output from a summarization prompt. The resulting text is not completely unrelated, but mostly wrong: https://transformer-circuits.pub/2026/nla/png/img_18fcfc16e9... The reconstructed activations are also far from matching the verbalizer's input. It's not unusual in machine learning to have results that are shit and SOTA at the same time, simply because there's no other technique that works better.

You should benchmark the retrieval speed of each method in terms of queries per second. I suspect that the bandwidth gain from slightly better compression will be outweighed by decompression being much more expensive.
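A rough way to measure that tradeoff: time random lookups against the same records stored raw versus compressed, and report queries per second. This is a hypothetical harness using an in-memory list and zlib as a stand-in for whatever compression scheme is actually under test; the record shape and query counts are made up:

```python
import random
import time
import zlib

random.seed(0)
# Hypothetical corpus: 1000 repetitive (hence compressible) records.
records = [("word%d " % i * 50).encode() for i in range(1000)]

stores = {
    "raw": records,
    "zlib-1": [zlib.compress(r, 1) for r in records],
    "zlib-9": [zlib.compress(r, 9) for r in records],
}

def qps(store, decompress, n=20000):
    """Queries per second for n random point lookups."""
    idx = [random.randrange(len(store)) for _ in range(n)]
    t0 = time.perf_counter()
    for i in idx:
        data = store[i]
        if decompress:
            data = zlib.decompress(data)
    return n / (time.perf_counter() - t0)

for name, store in stores.items():
    print(f"{name}: {qps(store, decompress=(name != 'raw')):.0f} queries/s")
```

The interesting comparison is whether the higher compression level's bandwidth savings (smaller stored records) survives the extra decompression cost at query time.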

As the Hacks.Mozilla article notes: "We began with small-scale experiments prompting the harness to look for sandbox escapes with Claude Opus 4.6. Even with this model, we identified an impressive amount of previously-unknown vulnerabilities which required complex reasoning over multiprocess browser engine code."


