Hacker News | gdulli's comments

We should absolutely use the lessons we learned about how the web unfolded to inform our predictions about AI.

Our naivete back in 1996 could be forgiven. Failing to apply the lessons since then, and to anticipate how subsequent technologies will be captured and used against us, is irresponsible.


Okay so give me an application of a web lesson

The only relevant lesson is that predictions are likely to be more wrong than right tbh

The enshittification we're bathing in every day is the lesson that matters.

> AI is just another disruptive technology like the loom, the steam engine or the airplane.

Or social media, or targeted advertising, or fast food.


I do want to know what new movies are coming out. I do want to know what new restaurants open. Advertising is information, most of it isn't useful but some of it is, and a halfway intelligent adult can separate the fact from the opinion.

I don't want ad tech and surveillance capitalism, I don't tolerate services with unskippable ads. Regular TV commercials are skippable with a DVR, which is critical. It's tech companies that turned advertising from sometimes helpful/usually annoying into unacceptable.

There's a game of one-upmanship here to express the least possible tolerance for advertising in general, which I don't get. It's not a cancer on society, it's a necessary thing that's gotten dumb and out of hand.


It's really the whole tech industry as it exists right now and AI is a victim of bad timing. If this AI had been invented 40 years ago there'd have been a lower ceiling on the damage it could do.

Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)


> Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)

Capitalism is a tool and it's fine as a tool, to accomplish certain goals while subordinated to other things. Unfortunately it's turned into an ideology (to the point it's worshiped idolatrously by some), and that's where things went off the rails.


Agree. Capitalism is good in limited domains. Applying it generally is ludicrously stupid and will lead to another revolution in the West unless we get it under control.

Computer graphics have been improving for decades but the uncanny valley remains undefeated. I don't know why anyone expects a breakthrough in other areas. There's a wall we hit and we don't understand our own consciousness and effectiveness well enough to replicate it.

We have credible deepfakes on demand. (To be fair, there have been deceptive photos for as long as photos have existed, but the cost of automating their creation dropping to basically zero has a social impact.)

We can use AI to make video clips to trick boomers on Facebook into thinking Obama eats babies. They already want to believe it. AI isn't outputting real full-length books and movies.

In computer graphics we understand how it works; we just lack the computational power to do it in real time, but with sufficient processing we can produce realistic-looking images with physically accurate lighting. When it comes to cognition, it's a lot of guesswork. We haven't yet mapped out the neuron connections in a brain, and we haven't validated that it works the way popular science writing suggests. We don't understand intelligence, so all we can do is accidentally bumble into it, and that seems unlikely to just happen, especially when it's so hard to compute what we're already doing.
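The claim that "we understand how rendering works" can be made concrete: the simplest physically based reflection model, Lambert's cosine law, is a closed-form equation anyone can evaluate. This is only an illustrative sketch (function and variable names are mine, not from the thread); real renderers evaluate far richer models, but the point is the same: the equations are known, and only the compute is scarce.

```python
import math

def lambert_diffuse(normal, light_dir, albedo, light_intensity):
    """Reflected radiance of an ideal diffuse surface:
    L = (albedo / pi) * intensity * max(0, n . l)
    `normal` and `light_dir` are unit vectors as (x, y, z) tuples."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return (albedo / math.pi) * light_intensity * max(0.0, ndotl)

# Light hitting the surface head-on reflects the most...
head_on = lambert_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 0.8, 10.0)
# ...while a light below the horizon contributes nothing.
below = lambert_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), 0.8, 10.0)
```

Contrast this with cognition: there is no agreed-upon equation to evaluate, which is the asymmetry the comment above is pointing at.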

In the last 12-24 months the price of fast food in particular has risen faster than inflation has for other types of food and goods. Fast food makes no economic sense anymore.

And you're right, the food has gotten worse as well.


What economic sense does fast food ever make? You're paying a premium for a convenience.

It made sense to pay less money for lower quality prepared food. Now the price is comparable to better food.

Paying for convenience does make sense if you value the convenience.


Fast food is gut-wrenching

An elephant in the room is that if you have too much data to process without AI, you have too many results to check for correctness when they come out of the AI.

This has been true since before LLMs, but now many more people and use cases are enabled far more easily. People are undisciplined, quick to take the short-term gains and handwave away the correctness.


It is less of a problem if the output is explicitly marked as AI-generated and unverified, so people can treat it as a rough first draft. But mix AI output with well-vetted human-reviewed data, and you've basically made your entire data set worthless.
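One way to keep AI output from silently polluting a vetted data set, as the comment above suggests, is to attach provenance to every record and filter on it downstream. This is a minimal illustrative sketch (the `Record` class, field names, and labels are hypothetical, not from the thread).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    text: str
    provenance: str  # e.g. "human-reviewed" or "ai-unverified"

def vetted_only(records):
    """Return only the records a downstream consumer may trust;
    AI-generated rows stay available as rough drafts but never
    mix unlabeled into the vetted set."""
    return [r for r in records if r.provenance == "human-reviewed"]

data = [
    Record("Paris is the capital of France", "human-reviewed"),
    Record("plausible but unchecked claim", "ai-unverified"),
]
trusted = vetted_only(data)
```

The design choice is simply that trust is an explicit attribute of the data rather than an assumption about the whole set, so mixing sources doesn't destroy the value of the reviewed portion.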

Modern software engineering culture is a treadmill ever in search of the next best practice that must be applied to a field made up of bespoke scenarios.

So much of the current internet is posts that read as a superposition of sincere and parody, and until that's resolved how do you know how to respond?

If that was a jab at my writing then yes, I am absolutely being sincere, because I am an expert on this topic. LLMs went from being OK at one-shotting a function to being so good at hacking that it's difficult to evaluate them. Prospective customers get back to us after a demo and tell us about the exploits it found on their services that are so vague and technical that they wouldn't think to look for them.

> Prospective customers get back to us after a demo and tell us about the exploits it found on their services that are so vague and technical that they wouldn't think to look for them.

Um, have you actually verified that those are actual exploits then? Vague and technical sounds exactly like a description of AI slop...


Yes, that's how they become customers.

Just wait until you see the same showing up in compliance realms...

Edit: to be slightly less implicit, consider the cargo cult madness that erupts from people thinking they can address risk management and compliance by auto-generating documentation while avoiding the actual legwork.


Why would you give this sort of work to a machine that can't be responsibly used without checking its output anyway?

It's not obvious to me that LLMs can't be made reliable.
