
I have been doing this for years, especially for libraries (internal or otherwise), anything that's `pub`/`export`, or gnarly logic that makes the intent not obvious. Not _everything_ is documented, but most things are.

I'm doing it because I know how much I appreciate well-written documentation. Also this is a bit niche, but if you're using Rust and add examples to doc-comments, they get run as tests too.
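A minimal sketch of what that looks like, with a hypothetical `add` function (the standalone `main` is only here to make the snippet runnable on its own; `mycrate` is a placeholder crate name):

```rust
/// Adds two integers.
///
/// The fenced example below is compiled and executed by `cargo test`
/// as a "doc-test", so documentation examples can't silently go stale:
///
/// ```
/// assert_eq!(mycrate::add(2, 3), 5);
/// ```
pub fn add(a: i64, b: i64) -> i64 {
    a + b
}

fn main() {
    // Standalone check mirroring the doc example above.
    assert_eq!(add(2, 3), 5);
    println!("doc example holds: 2 + 3 = {}", add(2, 3));
}
```

In a real crate, `cargo test` would compile and run the example inside the doc comment as its own test case.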

Also given we both managed to produce more than one sentence, and include capital letters in our comments, it's entirely possible both of us will be accused of being an AI. Because, you know... People don't write like this, right?


Strict grammar school teachers, who enforced a "Form And Style" level of prose, have become a liability in the AI world!

>Also given we both managed to produce more than one sentence, and include capital letters in our comments, it's entirely possible both of us will be accused of being an AI.

Could anyone explain the esoteric meaning of why people started doing that shit? I have a hypothesis; what's going on is something like this:

1. Prove you are human: write Like A Fucking Adult You Weirdo (internal designator for a specific language register, you know the one)

2. Prove you are human: _DON'T_ write Like A Fucking Adult You Weirdo (because that's how LLMs were trained to write, silly!)

3. ???? (cognitive dissonance ensues)

4. PROFIT (you were just subject to some more attrition while the AI just learned how to pass a lil bit better)

I never thought computer programmers of all people would get trapped in such a simple loop of self-contradiction.

But I guess the human materiel really has degraded since whenever. I blame remote work preventing us from even hypothetically punching bosses, but anyway weird fucking times eh?

Maybe the posts trying to figure "this post is AI, that post is not AI" are themselves predominantly AI-generated?

Or is it just people made uncomfortable by what's going on, but not able to articulate further, jumping on the first bandwagon they see?

Or maybe this "AI-doubting of probably human posters" was started by humans, yes - then became "a thing", and as such was picked up by the LLM?

Like who the fuck knows, but in all honesty that's how I felt about so many things, dating from way before LLMs became so powerful that the above became a "sensible" question to ask...

Predominantly those things which people do by sheer mimesis - such as pop culture.

"Are you a goddam robot already - don't you see how your liking the stupid-making song is turning you into stupid-you, at a greater rate than it is bringing non-stupid-you aesthetic satisfaction?" type of thing -- but then I assume in more civilized places than where I come from people are much more convincingly taught that personal taste "doesn't matter" (and simultaneously is the only thing that matters; see points 1-4... I guess that's what makes some people believe curating AI, i.e. "prompt engineering" can be a real job and not just boil down to you being the stochastic parrot's accountability sink?)

I'm not even sure English even has the notions to point out the concrete issue - I sure don't know 'em.

Ever hear of the strain of thought that says "all metaphysical questions are linguistic paradoxes (and it's self-evidently pointless to seek answers to nonsensical questions)"?

Feels kinda like the same thing, but artificially constructed within the headspace of American anti-intellectualism.

Maybe a correct adversarial reading of the main branding acronym would be Anti-Intelligence.

You know, like bug spray, or stain remover.

But for the main bug in the system; the main stain on the white shirt: the uncomfortable observation that, in the end, some degree of independent thinking is always required to get real things done which produce some real value. (That's antithetical to standard pro-social aversive conditioning, which says: do not, under any circumstance, just put 2 and 2 together; lest you turn from "a vehicle for the progress of civilization" back into a pumpkin)


What?

What?

There are many JS implementations out there. Quality kind of depends on what you need; some engines are more or less complete, and they vary in which quirks they support.

And for example, v8 doesn't make much sense in embedded contexts


There are definitely plenty of other JS engines, but they aren't always up to date on newer JS features. I'm pretty sure this is the 3rd JS engine to fully support the Temporal API (even JSC hasn't shipped it yet).

More like 8th. These pass nearly all Temporal tests as well: v8, spidermonkey, libjs, boa, escargot, kiesel, jint. Almost there: graaljs, yavashark.

Random aside: I've seen a 2015 game be accused of AI slop on Steam because it used a similar concept... And mind you, there's probably thousands of games that do this.

First it was punctuation and grammar, then linguistic coherence, and now it's tiny bits of whimsy that are falling victim to AI accusations. Good fucking grief


To me, this is a sign of just how much regular people do not want AI. This is worse than crypto and metaverse before it. Crypto, people could ignore and the dumb ape pictures helped you figure out who to avoid. Metaverse, some folks even still enjoyed VR and AR without the digital real estate bullshit. And neither got shoved down your throat in everyday, mundane things like writing a paper in Word or trying to deal with your auto mechanic.

But AI is causing such visceral reactions that it's bleeding into other areas. People are so averse to AI they don't mind a few false positives.


It's how people resisted CGI back in the day. What people dislike is low quality. There is a loud subset who are really against it on principle, just as there are people who insist on analog music, but regular people are much more practical; they just don't post about this all day on the internet.

perhaps one important detail is that cassette tape guys and Lucasfilm aren’t/weren’t demanding a complete and total restructuring of the economy and society

An excellent observation. When films became digital, the real backlash came when they stopped distributing film for the old film projectors and every movie theater had to invest in a very expensive DCP projector. Some couldn’t and were forced to shut down.

If I had lost my local movie theater because of digital film, I would have a really good reason to hate the technology, even though the blame is on the studios forcing that technology on everyone.


It is not. People resisted bad CGI. During the advent of CGI, people celebrated masterpieces like The Matrix and even Titanic. They hated, however, The Scorpion King.

Not really. The scale is entirely different. I think less of someone as a person if they send me AI slop.

  > I think less of someone as a person if they send me AI slop.
n=1 but working on side projects for others, i could easily generate ai images (instead of using stock photos) for a client, but i resist because i also feel this but as the sender...

there is the fact that such images 'look ai' but even if it were perfect, idk somehow i feel cheap doing that.


Agreed. Even in low value stuff I’d so much rather use basic stock images, ms paint drawings or almost anything over AI images. Seeing them is almost like being near someone who stinks or is sick/coughing. It’s a very visceral reaction.

No, I don't think most people are really against AI Gen works "on principle". Or at least not in any interpretation of "on principle" that would allow for you to be dismissive of complaints in this way.

I think principles are important. Especially when it comes to art, principle might be all we have. Going back to the crypto example, NFTs were art that real people had made. In some cases, very good art. People railed against NFTs despite the quality of the art. That is being against something on principle. Comparatively, if my local grocery chains were owned by neonazis, I'd have a much harder time of standing on principle, given that doing so may have a negative impact on my ability to survive and prosper.

AI Gen works, on the other hand, most often do not come with readily available marking that it is AI Gen. What people are complaining about is the lack of quality in the work. If they accuse a poorly human-written article of being AI Gen, that's just a mistake. But the general case is a legitimate evaluation of the quality of the material and the conditions under which it was made and presented.

In my own case, while I certainly have plenty of "principled" reasons to dislike AI Gen works, I also dislike it because it's just garbage. Oh yeah, sure, it's impressive that a computer can spit out reasonable content at all. It would equally be impressive for a chimpanzee to start talking in full sentences. That doesn't mean I'm going to start going to the chimpanzee for dissertations on the human condition.


I think literally everyone could agree CGI has been detrimental to the quality of films.

"Literally everyone" can't even agree on whether Polio is bad.

I myself would disagree that CGI itself is a bad thing.


Not just in the obvious ways either, even good CGI has been detrimental to the film (and TV) making process.

I was watching some behind the scenes footage from something recently, and the thing that struck me most was just how they wouldn't bother with the location shoot now and just green-screen it all for the convenience.

Even good CGI is changing not just how films are made, but what kinds of films get shot and what kind of stories get told.

Regardless of the quality of the output, there's a creativeness in film-making that is lost as CGI gets better and cheaper to do.


it may be an unpopular opinion but i feel like that watching any of the marvel movies... its like its just a showcase for green screens and ridiculous rubber-band acrobatics cgi everywhere...

that kind of stuff might work in anime or cartoons, but live action just looks ridiculous to me for the most part.


I could maybe agree in the sense of "has had detrimental effects", but certainly not in the sense of "net detrimental".

Project Hail Mary is a great example of not relying on CGI.

Anecdata-- from me. I think cgi can be a net positive.

90% of the time, you wouldn't know CGI if you saw it. That's the 'good' CGI.

Same thing is true of AI output.


Not the same. The more effort you put into CGI the more invisible it becomes. But you can’t prompt your way out of hallucinations and other AI artifacts. AI is a completely different technology from CGI. There is no equivalence between them.

> But you can’t prompt your way out of hallucinations and other AI artifacts

That's not the case, and hasn't been for some time, but it sounds like your mind's made up.


Hallucinations have been solved?! That’s great news! Must have missed that.

> Hallucinations have been solved?!

Apparently not, because no one but you implied that they had been.

There are prompting strategies that improve the odds greatly, but like the GGP, you've made up your mind, so it's a waste of time to argue otherwise.


i think they are referring to statements that they have "solved" hallucinations and it wont be a problem anymore (which it obviously isn't yet anyways)

[1] https://news.ycombinator.com/item?id=44779198


My guess is that post-training has gotten a lot better in the last couple of years, and what people are attributing to better models is actually just traditional (non-LLM) models placed on top of the LLM, which makes it appear that the model has increased in quality (including seemingly fewer hallucinations).

If this is the case it would be observed with different prompting strategies, when you find a prompt which puts more weight on the post-training models.


I guarantee you have encountered AI content and not realized it was AI. I assume you've heard of the survivorship bias?

I have and I hated it.

The story is that I was getting into a new genre of music, namely Japanese City pop from the 1980s. I was totally unfamiliar with the genre and started listening to it on YouTube. I found one playlist, which I listened to a lot, thinking: “wow, this is very formulaic, and the lyrics are very generic”, but I kind of thought that was just how the genre went. I had finally planned to use it during a small local event, but when I went to find out who the artists were I embarrassingly found out it was all AI generated.

Thing is, in this instance I knew nothing of the source material. When I went to get actual songs, written by actual people, the difference was stark. I would be able to recognize AI-generated City pop in an instant now, 8 months later. This experience kind of felt like I had been scammed. That my ignorance of the genre had been taken advantage of. It was not pleasant.


You don't understand. I mean content that even now, you don't know it is AI.

Obviously you think the AI content that you can identify is bad. But there is content you've encountered that you think is good and not AI content, that actually is AI generated.

That's the survivorship bias.


This sounds dangerously close to a No True Scotsman argument. Any example one could provide, you've teed it up nicely to claim that no, you didn't mean that one, obviously, because you could tell. No, it's some other thing that you haven't found yet. That's the passing-AI.

I had a very similar experience, looking for music to play during D&D sessions. Not paying close attention to the music, it seemed like it fit the bill. Once I started listening more closely, there were lots of issues that became readily apparent.

My dad has also started sharing with me links on Facebook to pop songs that have been re-arranged in different genres. This was a big area of fun for a number of folks in my family several years ago as we discovered YouTube artists like Chase Holfelder who put significant effort into making very high quality rearrangements. But I kept noticing these weird issues in the new songs.

I've gotten to where I can identify an AI-generated song almost immediately: there's a weird, high-frequency hiss in the mix that sounds like heavy noise overcoming compression artifacts, even though the source it's supposedly coming from should be clean. There's a general lack of enthusiasm in the vocals and a boring, nonsensical progression to the lyrics on original arrangements. Sometimes the person generating the song tries to hide that last issue by generating instrumentals only, or they use one of those try-too-hard-to-sound-badass Country Rock genres that are popular on TikTok to stick on top of clips from the TV show Yellowstone (WTF is with that?!), but then when I check the details, there's obviously AI cover art for artists I've never heard of. The accounts will be anthologies full of these artists that have never existed.

So, I know people keep parroting "a good artist can use any tool". But I've yet to see it. All this "democratizing art" (I didn't know anyone was gatekeeping it to begin with; there's certainly been no lack of talent online for years) doesn't seem to be producing results. It becomes pretty obvious very quickly it's all just a pump and dump scheme to Get Them Clicks.


No, there is a very loud minority of users who are very anti-AI, who hate on anything that is even remotely connected to AI and let everyone know with false claims. See the game Expedition 33 for example.

Especially true in gaming communities.

IMO it's a combination of long-running paranoia about cost-cutting and quality, and a sort of performative allegiance to artists working in the industry.


And E33 is also a good example that these users are a minority and effectively immaterial. They don't affect sales or the popular opinion.

People don't care about AI. They only care whether the product is good.


And yet, no game has problems selling due to these reactions. As a matter of fact, the vast majority of people can't even tell if AI has been used here or there unless told.

I reckon it's just drama paraded by gaming "journalists" and not much else. You will find people expressing concern on Reddit or Bluesky, but ultimately it doesn't matter.


All that is needed to solve that is to reliably put an AI disclaimer on things done by AI.

Which of course won't be done because corporations don't want that (except Valve I guess), so blame them.


> all that needs to be done

The honor system is never a sustainable solution. It's not even down to corporate greed, it's just not something that works at scale, especially when there's money to be made, and even more especially when there isn't.


And all that's needed for world peace is for everyone to stop fighting.

Which of course won't happen because we live in reality and not fantasy where we can dream that "people should just do X"


I know this is a spicy take, but it probably just means you're more eloquent in your writing than most netizens...

And that's not really a hard bar to clear if you look at how people write comments online (including places like GitHub).

Anyone that uses punctuation, and capitalises words, probably automatically gets past the 70% confidence line.


It baffles me when I see ostensibly smart people refusing to press shift. Especially programmers. I know you can do it! I've seen you use curly brackets!

I recently battled this and reverted to using DOM measurements. In my case the measurement would be off by around a pixel, which caused layout issues if I tried rendering the text in the DOM. This was only happening on some Linux and Android setups.

To be fair, early wine (when I first tried it) wasn't very usable, and for gaming specifically. So if you were an early enthusiast adopter, you might've just experienced their growing pains.

Also, I assume some Windows version jumps didn't make things easy for Wine either lol


The hype/performance mismatch was significant in the 2000s for Wine. I’m not sure if there was any actual use case aside from running obscure business software.

Yes, there was “the list” but there was no context and it was hard to replicate settings.

I think everyone tried running a contemporary version of Office and Photoshop, saw the installer spit out cryptic messages, and just gave up. Enough time has passed with enough work done, and Wine now supports (or is getting to support) the software we wanted all along.

Also, does anyone remember the rumours that OS X was going to run Windows applications?


I used WINE a lot in the 2000s, mostly for gaming. It was often pretty usable, but you often needed some hacky patches not suitable for inclusion in mainline. I played back then with Cedega and later CrossOver Games, but the games I played the most also had Mac ports so they had working OpenGL renderers.

My first memorable foray into Linux packaging was creating proper Ubuntu packages for builds of WINE that carried compatibility and performance patches for running Warcraft III and World of Warcraft.

Nowadays Proton is the distribution that includes such hacks where necessary, and there are lots of good options for managing per-game WINEPREFIXes including Wine itself. A lot of the UX around it has improved, and DirectX support has gotten really, really good.

But for me at least, WINE was genuinely useful as well as technically impressive even back then.


I remember it being surprisingly decent for games back then. Then a lot of games moved to Steam, which made it way harder to run them in Wine. Of course there was later Proton for that, but not on Mac.

Games are one of the easier things to emulate since gaming mechanics are usually entirely a compute problem (and thus not super reliant on kernel APIs / system libraries). Most games contain the logic for their entire world and their UI. The main interface is via graphics APIs, which are better standardized and described, since they are attempting to expose GPU features.

I worked on many improvements to wine's Direct3d layers over a decade ago... it's shockingly "simple" to understand what's happening -- it's mostly a direct translation.



Also, these apps changed. A lot of Windows programs were simple executables, and I remember for a moment it was very popular for developers to write portable apps that were just a .exe you ran; Excel and other programs worked fine too. But then Microsoft and others started to use MSIX or whatever it's called and more complex executable formats, and it was no longer possible, and Microsoft and Adobe switched to subscription-based systems.

I ran Wine at the end of the 90s to run CS (Half-Life), and I not only had more FPS than on Windows, it was more stable as well.

It took some futzing. The crusty PlayOnLinux UI is permanently etched into my brain.

Transgaming! It worked for one or two games for me, but it was glorious.

Most of the transforms you describe are still unfortunately destructive (i.e. the only way to go back is to undo). I'm not an expert on this, but I think the only way this could be keyframed would be to take snapshots of the pixels and insert the modified raster data as keyframes? I'm not sure there's a good/correct/obvious way to interpolate between, say, a before and after of a liquefy operation the way it currently works. Maybe some of them could store brush+inputs (pressure, cursor movement, etc.), but that seems difficult to work with as an artist. Again, I haven't done much animation (as a dev or artist), so maybe I'm just out of the loop completely.

But yeah I agree with you in principle though, it would be nice if these were non-destructive and could be keyframed.


They are all non-destructive in Krita. Just use a transform mask and go to tool options, select liquefy and after you liquefy however you want you can just hide the transform mask and it stops liquefying the layer.

Yes, Krita has had this feature for years. Non-destructive filters (adjustment layers), too.

GIMP still doesn't have it. Only in 3.0 did it get adjustment layers for filters.


Oh, this is news to me! I've used Krita to paint (recreational noob, not on a professional level) and I never realised this. I'll play with this tomorrow.

No horse in this race, but your phrasing seems a bit weird, honestly... If reduced, your comments read as:

"You don't know about X? Well, at least I know about X and Y..." Doesn't seem like a good faith comment to me either?

And then you say "You misunderstood my intentions so I'm going to disengage". For what it's worth, I didn't interpret your argument as insulting someone, but also it wasn't a useful or productive comment either.

What did you hope to achieve with your comments? Was it simply to state how you know something the other person doesn't? What purpose do you think that serves here?


If AI writes a for loop the same way you would... Does it automatically mean the code is bad because you—or someone you approve of—didn't write it? What is the actual argument being made here? All code has trade-offs; does AI make a bad cost/benefit analysis? Hell yeah it does. Do humans make the same mistake? I can tell you for certain they do, because at least half of my career was spent fixing those mistakes... Before there ever was an LLM in sight. So again... What's the argument here? AI can produce more code, so like more possibility for fuck-ups? Well, don't vibe code with "approve everything", like what are we even talking about? It's not the tool, it's the users, and as with any tool there's going to be misuse, especially new and emerging ones lol


If this is your opinion, I ask you: are you okay with AI reviewing the PRs as well, or do you prefer a human to do it?

Think carefully before responding.


I don't know why you have to qualify your sentence with "think carefully before responding"; it makes it seem like you're setting up some rhetorical trap... But I'll assume it's in good faith? Anyway...

I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).

My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or if they wrote it using Claude Code 9000.

At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.

So in a round-about way, to answer your question: I think AI as part of the review is fine. As impressive as their output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.


It sounds like what you'd send to an LLM lol.

"Think carefully, make no mistakes."


Yeah, it never works though, as you can see from this example.

You should use AI.

The PR touched a lot of internals, including module code, and mirrored the fs APIs. So yes, it was big, but the commit history was largely clean and followed a development story, and it was tested. The code quality was decent too. I didn't review all of it because I don't have a personal stake in this, though.

I suggest EVERYONE in this thread go read the GitHub PR in question. There are some good arguments for and against AI, and what it means for FOSS... But good lord, you will have to sift through the virtue-signalling bullshit and have patience for the constant moving of goalposts.

