
> 13 years to get to v0.0.1 is a success?

Wine took roughly the same amount of time to be versioned as well, but no one calls Wine a failure.


First, Wine was widely panned for years before it stopped sucking.

Second, you're simply ignoring that the parent poster mentioned Ladybird, a non-Rust project which is advancing much more speedily than Servo. And I think they have a valid point -- and while the jury is still out, it's possible that in other Rust-centric efforts that have experienced foot-dragging (e.g. WASI), the root cause may be Rust itself.

The parent poster expressed their point somewhat sarcastically, but if I (C++/Python dev, I admit!) were a betting transfem, my money would be on them being right.

That said, I think the Tor project got this decision right. This is as close to an ideal use case for Rust as you can get. Also, the project is mature, which will mitigate rewrite risk. The domain is one where Rust can truly shine -- and it's a critical one to get right.


I'd say that Wine has much less dev effort behind it, and the specs they're up against aren't as public as the web ones, so huge kudos to the Wine team.

That interpretation is too generous; the word "bullshit" is generally a value judgement and implies that you are almost always wrong, even though you might be correct from time to time. Current LLMs are way past that threshold, making them much more dangerous for a certain group of people.

This and add_v3 in the OP fall into the general class of Scalar Evolution (SCEV) optimizations. LLVM, for example, is able to handle almost all Brainfuck loops in practice (add_v3 indeed corresponds to the Brainfuck loop `[->+<]`), and its SCEV implementation is truly massive: https://github.com/llvm/llvm-project/blob/main/llvm/lib/Anal...
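To make that concrete, here is a minimal sketch (the C rendering and the function name are mine, not from the OP) of the kind of loop SCEV can analyze. Compiled with clang at -O2, LLVM can typically compute the trip count (the initial value of p[0]) and collapse the whole loop into straight-line code, roughly p[1] += p[0]; p[0] = 0;.

    /* Hypothetical C rendering of the Brainfuck loop [->+<]:
       move the value in the current cell (p[0]) into the next cell (p[1]),
       one unit per iteration, until the current cell reaches zero. */
    void add_like_loop(unsigned char *p) {
        while (p[0] != 0) {
            p[0] -= 1;  /* '-'   : decrement the current cell */
            p[1] += 1;  /* '>+<' : increment the next cell    */
        }
    }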

Do you have anything to back that up? In other words, is this your conjecture or a genuine observation somehow leaked from DeepMind?

It's just my observation from watching their actual CoT, which can be trivially leaked. I was trying to understand why some of my prompts were giving worse outputs for no apparent reason. 3.0 goes on a long paranoid rant induced by the injection, trying to figure out if I'm jailbreaking it, instead of reasoning about the actual request, but not if I word the same request a bit differently so the injection doesn't happen. Regarding the injections, that's just the basic guardrail thing they're doing, like everyone else. They explain it better than me: https://security.googleblog.com/2025/06/mitigating-prompt-in...

It's a customizable auditor for models offered via Vertex AI (among others), so to speak. [1]

[1] https://docs.cloud.google.com/security-command-center/docs/m...


The racketeering has started.

Don't worry, for just $9.99/month you can use our "Model Armor (tm)(r)*" that will protect you from our LLM destroying your infra.

* terms and conditions apply, we are not responsible for anything going wrong.


I think this is especially problematic for Windows, where a simple and effective lightweight sandboxing solution is absent AFAIK. Docker-based sandboxing is possible but very cumbersome and alien even to Windows-based developers.

Windows Sandbox is built in and lightweight, but it's not easy to use programmatically (like an SSH into a VM).

WSB is great on its own, but it's relatively heavyweight compared to what other OSes offer (namespaces on Linux, Seatbelt on macOS).
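For comparison, here is a minimal sketch of how lightweight the Linux side can be (my own illustration, assuming unprivileged user namespaces are enabled on the distro): drop an untrusted command into fresh user and mount namespaces before exec'ing it. Real sandboxes (bubblewrap, systemd-nspawn, etc.) layer much more on top: remounting /proc, seccomp filters, dropping capabilities.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: run argv[1..] inside new user + mount namespaces.
       Assumes unprivileged user namespaces are enabled; no UID mapping
       is set up, so the child sees itself as an unmapped "nobody". */
    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s cmd [args...]\n", argv[0]);
            return 1;
        }
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0) {
            perror("unshare");
            return 1;
        }
        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 1;
    }

Nothing equivalently small and built in exists on Windows, which is the gap being discussed here.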

I don't like that we need to handle Docker (containers) ourselves for sandboxing such a light task load. The app should provide it itself.

>The app should provide it itself.

The whole point of the container is trust. You can't delegate that, unfortunately; ultimately, you need to be in control, which is why the current crop of AI is so limited.


Fair point.

The problem is that you can't trust the app; therefore it must be sandboxed.

Gemini CLI allows for a Docker-based sandbox, but only when configured in advance. I don't know about Antigravity.

Gemini CLI, Antigravity and Jules.

It's going Googly well, I see!



Thank you. I hope HN starts autodeading posts from Mastodon and other social media, as most of the time it is just random people giving short, humorless commentary on a screenshot; they're the YouTube reaction videos of Hacker News.

I just shared what I found. I don't know why Mastodon comments would be less interesting than Reddit comments.

It's in the guidelines: https://news.ycombinator.com/newsguidelines.html

>Please submit the original source. If a post reports on something found on another site, submit the latter.


Thanks! We don't want a post about a post. We want the fully-resolved canonical post.

It's not just that it's Mastodon comments; it's that it's a Mastodon thread discussing a Reddit thread.

I suggest you try this: click on the gamedev.place link and see the past posts from there. They're good.

Is it just me, or is the OP in that Reddit thread just more AI slop?

There's a pandemic of engagement-bait posts on Reddit now, where posts make up situations that are guaranteed to evoke ridicule or anger, since those often get the most engagement. The OP often responds with affirmative replies, mirroring the comments. Every once in a while a subtle reference to a product is conveniently mentioned.


The account is also just 3 days old and was made for this one post.

It is not wise to brag about your product when the GP is pointing out that the article "reads like PR for Pangram", no matter whether AI detectors are reliable or not.


I would say it's important to hold off on the moralizing until after showing visible effort to reflect on the substance of the exchange, which in this case is about the fairness of asserting that the detection methodology employed here shares the flaws of familiar online AI checkers. That's an important, substantive, and rebuttable point, and all the meaningful action in the conversation is embedded in those details.

In this case, several important distinctions are drawn: being open about the criteria, about properties such as "perplexity" and "burstiness" being tested for, and an explanation of why the tool incorrectly claims the Declaration of Independence is AI-generated (it's ubiquitous). So a lot of important distinctions are being drawn that testify to the credibility of the model, which has to matter to you if you're going to start moralizing.
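For reference, "perplexity" here is presumably the standard language-model metric (my gloss, not the poster's): the exponential of the average negative log-likelihood a model assigns to the text, so lower perplexity means the text looks more predictable to the model and is more likely to be flagged.

    \mathrm{PPL}(x_{1:N}) = \exp\Big(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\Big)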


Only if individual digits can be articulated separately from each other; human anatomy limits what is actually possible. Also, synchronization is a big problem in chorded typing: good typists can type more than 10 strokes per second, but I don't think anyone can type 10 chords (synchronous sets of strokes) per second.

