Tell HN: Tired of Hearing about ChatGPT
326 points by aliqot on Dec 6, 2022 | 264 comments
I'm glad we're done talking about Stable Diffusion, but it kinda sucks that we're shoving ChatGPT into everything now.


This is the classic MOT (major ongoing topic) dilemma, when a topic is important and interesting but also over-represented and fatiguing.

Our solution for about 10 years now has been to (try to) downweight the copycat posts and uninteresting follow-ups, while leaving the ones that have SNI (significant new information).

It's not a satisfying solution, since a large cohort at one end of the spectrum will still feel "oh no, not this again", while the opposing cohort at the other end will feel "why is this topic not being covered enough? is it being suppressed or something?" But it's probably the optimum for least unsatisfying.

Lots of past explanation:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


The entire AI/ML world is just extremely boring and depressing to me... I like to make art and now places like ArtStation get flooded with AI generated images. People, obviously, don't disclose that it's AI generated. It's the commenters that point that out.

I'm generally worried that we're creating a world which is not for us anymore. Maybe we should introduce boundaries to our world, just so we don't fall into a nihilistic pit of insignificance. Humans derive a lot from process, not the final product. Now we're creating tools to jump directly to the final product. Hedonistic treadmill gets faster and faster. Grim times.


Grim? There's a company out there using AI voice modification to make south Asian call center workers sound American on the phone. It's just depressing any way you look at it.


If they were really good at their job, they wouldn't need Asian employees. They'd just use the AI chat to handle everything. Why does an AI need a human to feed it what to say? That's not a very good AI. That's just voice modification, and that's been done for so long it's not even close to impressive. Have the AI hear the human voice, decide what to say that is relevant and useful, then generate a voice to deliver that information - then I'd almost be impressed.

What you described is pretty much how I feel all AI/ML is at this point. A decent website with a large number of humans operating in the background. Not impressed. At. All.


There are ML companies that monitor call center conversations and recommend what to say to the agent.

"ALERT! The customer just mentioned how long it takes to download files. Recommend a faster internet package."


> If they were really good at their job, they wouldn't need Asian employees. They'd just use the AI chat to handle everything.

Well, sure, the software is training on the human conversations it mediates to do that next. Now I’ll just need my own chatbot representative optimized for solving my problems instead of the company I’m calling.


> Now I’ll just need my own chatbot representative optimized for solving my problems instead of the company I’m calling

More likely, you instruct your chat bot to call their chat bot to figure out the problem. Because they're bots, they can communicate at a much higher speed, thus bringing in a new era of High Frequency Support Calls.

Companies will build data centers close to the call centers, connected to them by line of sight satellite dishes, such that you can host your bot close to the support bot and shave off some ms connecting to them. This gets you closer to the front of the queue. This will be sold as a Good Thing, because it increases call frequency and allows calls to be priced by the market instead of the pesky phone companies.


This is hilarious and the future I imagined when Google announced Duplex: AI assistant to AI voice chat bot over POTS. Clearly the most efficient and robust API interface ever conceived.


Thanks, this cracked me up! It reads to me like something out of a David Foster Wallace story.


> Why does an AI need a human to feed it what to say? That's not a very good AI.

I'd say it's less about what, and more about how. AI can't simulate the emotional layer of the conversation in real-time just yet. Selecting the right words, tone, cadence, knowing when to pause and when to interrupt - it's all intuitive to us, and it's the core tool for the scammers[0] on the phone.

The companies are still trying to save on labor costs, hence outsourcing. The voice modification allows them to have their cake and eat it too - to move their call centers to countries with low labor costs, far away from their targets, while defeating the main heuristic people have to detect scammers: the caller sounding like a foreigner from a far-away country, despite claiming to represent a local company.

Until someone develops an AI for the emotional/manipulation layer, there's no point in adding AI for conversation here: the scripts the scam callers use are cheap, scalable, and work well enough.

> Have the AI hear the human voice, decide what to say that is relevant and useful, then generate a voice to deliver that information, then I'd almost be impressed.

That's actually what's been done for years now. Try calling the helpdesk of any large company - you'll most likely be greeted by a bot that tries to converse with you. It works as well as you can expect a chatbot attached to voice recognition to work - about the only thing it's good at is delaying and denying requests to connect you with a human.

In this case, the company can actually replace most of the support workers with a shitty chatbot because it's you who is calling them - they have no reason to care or make it easy for you to achieve anything.

----

[0] - I classify first-party and third-party telemarketing, cold calls and tech support scams as just different flavors of the same thing: preying on the vulnerable.


Think of the brain power and the human potential we’re wasting on solving such shitty problems.


If this is depressing let me tell you about quants and consultants, the true black holes of human capital.


I'd rather the brain power be used on creating art/empowering artists and intelligently answering prompts as opposed to creating a surveillance state to show ads, thinking of ways to lock down computers so they are no longer general computing devices and just corporate app delivery devices, refactoring a website again to use a different JavaScript framework, figuring out how to insert middleman payments at scale into food delivery, etc.


That's what most people would rather have, too.

The issue is that the financial capital required to action these things rests in the hands of people who would much rather have this technology used for more profitable ad delivery services.


Instead of just providing good services, we create new services to enable the original services to exist with the deficiencies they have, now you’ve got n services that enable 0.5 of the services you originally wanted. Sounds like a familiar problem to me. shudders


The current capitalist system defines what is valuable and what is not. As long as your solution to a problem generates a profit, the thing you are doing is valuable...


You're conflating two very different meanings for the word "valuable" - one that means "to have monetary value", and another that means "has use for society", and so your comment doesn't make sense.


> You're conflating two very different meanings for the word

The term you want is "equivocation." I don't think the gp is equivocating.

Also, "has use for society" is not a meaning of "value."

https://www.merriam-webster.com/dictionary/value


Sounds like an alignment problem.


Imagine using the same tech to make deaf speech properly intelligible.


Why is that bad? I can't understand a thing they say. I would rather the call centers were in America, but I welcome this development.


Most people are able to adjust their ears to an unfamiliar accent if they exert a bit of effort. It might be a little clumsy, and you might have to ask them to repeat themselves a few times, but that's just life for the millions of Americans who live in diverse cities, and those who work for companies that employ non-native English speakers. MegaCorp isn't buying this tech to reduce this friction, though - they're buying it to appease the outright racist callers who verbally attack the employees and outright refuse to even try talking to them.

That said, I don't think forcing older rural folks to wrangle with a southeast Asian accent is going to solve any societal problems, and I imagine the call center workers in Asia would be big fans of this tech if it reduces irate interactions and actually works well.


It's not really about the accent. It's just that companies that outsource to Asia do so because they value cost over quality. In-house and in-region support is obviously much better because the agents are more up to speed on the company and on the user's environment.

This expresses itself in aspects other than just location; often the people working there have no product knowledge and just follow scripts, because knowledgeable workers cost more. That works OK for 80% of the interactions, but it really sucks if you're in the 20% not covered by a script.

This gives Asian callcenters and the associated accent a bad name and leads to a kneejerk reaction from customers as the accent brings up memories of previous frustrating experiences.

Of course companies try to avoid this already with the mentioned software and also by moving to regions with more American-sounding people like the Philippines.

I worked in contact center management before and I've seen this effect a lot.

I'm sure there are excellent contact centers in the region too, but they have an uphill battle to prove themselves, as most simply aren't.


Apply this line of thought to most societal and social norms and it gets rather ugly.


I’ve felt the same way. After years of patience and practice my kid has become a fantastic artist. I’m a bit concerned that at some point they might think “why bother?” and give it up after they see someone type some prompt in with “pencil, sketch” as a suffix.


To be honest, I already got to the “why bother” state by seeing on the internet how many human artists are so much better than me.

And, on the other hand, if you actually have a specific vision of what you want to create, it is often quite difficult, or actually impossible, to get an AI to generate an output that fulfills your particular vision.


> To be honest, I already got to the “why bother” state by seeing on the internet how many human artists are so much better than me.

I'm also learning to draw. As a counterpoint, I found it quite fun and enlightening to use artwork I like as practice material. You can learn a lot by trying to copy parts of an artwork (without tracing and without uploading the result anywhere): by copying, you still actually have to draw the artwork and deal with the same challenges that the original artist did - however, you can use the original artwork as a sort of guide or cheat sheet on how those problems can be addressed.

I found it often even gave me a deeper appreciation for the original artist, because you see all the small, individual decisions they had to make to eventually get to the artwork - it felt sort of similar to reading code and trying to get into the head of the original developers.

You can also get creative and only copy parts of the artwork or stitch together a new piece from different sources - essentially doing by hand what image generators do automatically. (Just don't upload your new artwork anywhere without contacting the original authors first)


These threads remind me a lot of some of Isaac Asimov's "Robot" series novels. He painted a world where robots and AI solved a lot of things, so people didn't have to do things that "worried them". It seems some lost any meaning in living.

I've been an "amateurish" guitar player for 30 years. I play chords, I do some OK picking and I love trying to "play" songs I like, even if badly played (I like speed metal and shredding, which I know I will never master in my lifetime).

Nevertheless, I do it because I like it, I've seen Satriani, Oscar Lopez, Steve Vai play, and I know I'll never be like them. However I don't think "why bother". I enjoy playing basic guitar some afternoons in my living room, and my wife enjoys listening to me while reading a book. I've always played for me first, and because I enjoyed it.

It is a very rare privilege to have the ability/resources to do something just because we enjoy it. With the advent of this wave of AI, we will need to learn to do more things that way.


As long as it’s humans, you know it’s achievable. You can pursue their level, and doing that has value. I kind of believe that learning about art and getting better is just a great exercise for oneself. It makes you a better person.


> To be honest, I already got to the “why bother” state by seeing on the internet how many human artists are so much better than me.

Same here, it's extremely hard to find motivation when I know every idea I have has been done better by someone who isn't me, or recently by a computer. In either case I can't meaningfully compete and if it's been done better, why do it at all?

This applies to art, but even more broadly than that. I get no satisfaction in creating something when I can see people who have done it far better than me via a quick search.


I'd still much rather see real art. It's kind of the difference between eating a homemade meal and eating manufactured crap out of a vending machine. And, from what I can tell, the art generators are basically just stealing other people's stuff. It'll probably be brought up as a real IP issue at some point.


No, that's just not how they technically work. AI generators train on an image archive, but the resulting model doesn't include the actual pictures in it, and often contains only a tiny influence from each source image.

I think the current popular model works out to somewhere around a byte in the model per sample image.

Here's a diagram https://www.reddit.com/user/AnOnlineHandle made: https://i.redd.it/yxflzg0z6p3a1.png
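The "around a byte per image" figure comes out of simple division. A rough sketch (the 4 GB checkpoint size and 2 billion training images are approximations assumed for illustration, not exact figures):

```python
# Back-of-the-envelope arithmetic behind the "about a byte per image" claim.
# Both inputs are rough approximations: Stable Diffusion's checkpoint is a
# few gigabytes, and its LAION training subset is on the order of billions
# of images.
model_bytes = 4e9        # ~4 GB of model weights (approximate)
training_images = 2e9    # ~2 billion training images (order of magnitude)

bytes_per_image = model_bytes / training_images
print(bytes_per_image)   # ~2 bytes of model capacity per training image
```

Whatever the exact numbers, the ratio is orders of magnitude too small to store copies of the source images.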


Well, if I were an artist who wants to make work publicly available (previews?) and charge for it, I would make sure the license I use to publish my work explicitly states that it's not allowed to use my work as part of training data for any ML algorithm. If they want my work for that, then pay for it.

Hell, I think I as a developer could do that... Gonna invent a new open source license.


Okay, but what's that supposed to achieve?

If you think you'll get paid, you probably won't because AI trains on very large datasets, and so pretty much any price per image would quickly get too expensive.

If you think it'll hamstring AI, I doubt it'll be for long. People will just end up curating their dataset better, and images that allow for it will continue to be created.

Also, per https://opensource.org/osd -- "No Discrimination Against Fields of Endeavor", so this likely won't get much uptake in open source circles either.


>Okay, but what's that supposed to achieve?

Not being in the dataset, maybe? There are many artists who hate that you can just write "in the style of" with their name.

So at least, if it does appear that it works with their name, they now either have been paid that amount, or they can sue for that amount of lost revenue. At least not everything would be lost.

I'm not a lawyer though, so I really have no idea whether this would even be possible, as it seems like currently it's fair use.


> Not being in the dataset maybe? There's many artists that hate that you can just write "in the style of" with their name.

Very few artists have a unique style. Some are very recognizable, yes, but plenty of others do more or less the same sort of thing.

There's thousands of photographers out there that go for the "razor sharp, high contrast closeup that shows every pore" style (https://photoaspects.com/wp-content/uploads/2017/09/black-an...) for instance.

With very limited exceptions you could go to a photographer and say "I want a photo that looks like this Vogue cover" and get it done.

> So at least, if it does appear that it works with their name, they now either have been paid that amount, or they can sue for that amount of lost revenue. At least not everything would be lost.

Don't be silly. If that logic worked, Microsoft could have sued Linus Torvalds for providing an OS for free. Or if you want to be a bit more exact, Sun Microsystems, IBM, or any other Unix vendor.


Encoding and compression also have the property that they vastly change the data _representation_ of information (often changing it completely). Yet the underlying thing remains the same in a way that's easily recognizable by a human. This is similar - sure, a transform is being applied that changes the representation of the data. But, ultimately, it's just mashing together large chunks of other people's IP.


All AI art is confined to the digital medium (at least for now; 3D printing might change that). You can't see the brushstrokes like in a real-life, physical painting. Dancing is one that will be very hard to copy, because you need actual humans for it.

Not to mention, the kid may enjoy making art, and his/her art may give others joy, even if it's not a profitable career. It can still be a fun hobby - synthesizers have been around for nearly half a century now, but people still play guitar :)


> at least for now, 3D printing might change that

> Dancing is one that will be very hard to copy

I'm not sure I'd hang my hat on a particular current inability of AI, particularly inabilities where there's no obvious impediment to AI doing it. You're just begging for AI to eventually gain that ability, and now you're back where you started. Hooking AI up to physical actuators and getting it to work properly strikes me as a "this will 100% happen, I guarantee this is already being worked on" situation; basing your self-worth off the inability of AI to do those things seems like a ticking time bomb.


Seems like this is missing the point of dance, at least part of which is specifically watching humans. Right now, a pitching machine can throw harder than a human, but we still restrict baseball teams to only having humans on them. An automated motorbike can cover 100 meters faster than any human, but we don't allow motorbikes to compete in an Olympic sprint competition. A mechanical sumo wrestler wouldn't need to be intelligent, just heavier than the heaviest competitor, and it could already win against any human right now, but we don't do that.

Similarly, people still play chess, go, and Starcraft even though AI can beat them. Human Jeopardy matches still get televised even though Watson could beat them. We still have math olympics even though Wolfram Alpha could have won all of them decades ago.

I think human dancers are safe for as long as there are humans.


Exactly. Things in life are not valuable pursuits just because an AI can't do them, nor do they stop being valuable once an AI can participate. OP seemed to suggest that AI being confined to digital art means that physical art still has value, but it doesn't follow at all. If I strap a paintbrush to a computer with AI, it doesn't devalue human painting. They're not related in that way.


Indeed, this is already happening. This company is using human artist works for now, but the leap to a model producing such art is quite small at this point: https://art.art/blog/portia-de-rossi


alright then, I'll fallback to my backup - humans are cheaper.

They don't require mining rare earth metals, they're more versatile, and uh... oh gosh it's only a matter of time before we're all replaced by cheap carbon based robots. It's been real <3


> all AI art is confined to digital medium (at least for now, 3D printing might change that). You can't see the brushstrokes like in a real life, physical painting.

For now. There's no technical reason why we can't have CNC painting machines - there's plenty of existing tech in manufacturing that has the required precision and range of motion. I think we don't see such machines because there is no business need for them - or at least, there wasn't.

To date, the kind of digital images you'd want to reproduce using a CNC painter would be drawn by actual artists, using their hands and - most likely - tools like pens and drawing tablets. The work of an artist is similar enough to actual painting that, if there's a need for the 3D texture, they can just... use real brush on real canvas. There technically is a market for cheap reproductions, but AFAIK this one is covered by "painter farms" - warehouses full of low-paid painters mass-producing paintings, in countries with cheap labor costs.

Now, if you could get the ML model to produce "original" work, encoded as a time series of brush strokes (either straight from the model, or by another model turning JPEGs into painting instructions), then this just might provide an incentive for CNC painters. It might even start as a hobby thing - there are lots of people who would build such a machine, the way they build 3D printers now, so they could produce their own paintings at home. Those same people would be unlikely to pay a far-away painter farm instead.

> Dancing is one that will be very hard to copy, because you need actual humans for it.

AFAIR that was already done, to a passable level, by Boston Dynamics, a few years ago. Sure, that was a one-off carefully choreographed stunt, but it demonstrated that 1) robotics can handle dancing just fine, and 2) the result is actually enjoyable to watch. The missing bit is software, but given the ML work being done in recent years, in particular all those video editing/generating models that start by parsing human movement from input videos, I think it won't be hard for someone to start generating dance moves.


But what you won’t have is a human behind it, with a human story and struggles. That said, I could see mass-produced paintings a la CNC becoming common. They will be more expensive than prints, but only a few levels above IKEA art, since they’ll be mass-produced.


Every mind sees things differently. The artist struggles to put that to a medium. The successful expression of their vision is extremely common, but the ability for anyone to even iterate on another artist's work is basically nonexistent. Even with AIs cranking out art, the obsessed will continue to fight to share their mind.


Imagine he chooses (or has already chosen) to do it professionally and struggles to make ends meet because “AI is just good enough. People still consume the media we produce. Why hire artists?”


AI art is not nearly there yet, and has very significant areas in which it struggles a lot. The funny thing about it is that just like normal art there are easy and hard things to accomplish, but it's nigh impossible for a person new to this concept to tell which is which: https://xkcd.com/1425/

Overall, the easy part of AI art is generic portraits where you either don't care about the details, or want to use existing, popular things. Eg, you can ask for Harry Potter. And you can ask for a random character. But if you have a personal, distinctive design you made and want to bring to life, then it can get very tough pretty fast.

In fact, the best possible user for an AI generator would be an artist that's also capable of training a custom model. That opens up way more possibilities. Here's what this looks like:

https://www.reddit.com/r/NovelAi/comments/z7qat8/ai_as_a_too...


There is something off about this argument. On one hand, you say "Humans derive a lot from process, not the final product", with which I completely agree. But you also say "places like ArtStation get flooded with AI generated images", which seems to indicate there is a focus on showing off the final product.

I make pottery, and my pieces suck (in terms of shape, usability, even stability) compared to ones made by professional artists or in a factory. I cannot make two similar mugs for the life of me. But it is the process I enjoy, the learning, the improving. And the nice community I've built around me that shares the same passion.

Maybe I'm underestimating the power of text-to-image, but people still learn how to play the guitar (despite Ableton), or chess (despite Stockfish), or sculpture (despite 3D printing).

My honest question is: why should this time be different?


You've posed a lot of questions, which I won't address. When it comes to "focus on showing off the final product" I don't think there is anything off here.

First of all, there is the professional aspect of it. People just have to market themselves to get jobs. ArtStation (its predecessors, actually) eradicated the need to have a portfolio website. It's THE place to host your work.

Then there is the need for validation. There is nothing wrong with wanting to be appreciated for your accomplishments. That's just one of our needs.

The community aspect is also important. People want to get better and they want to learn from others. Showing your work and exposing it to scrutiny, or inspiring others, is perfectly fine.

Lastly, I've never said that we derive EVERYTHING from the process. Final result is also important. It's just meaningless without the process, imo.


I agree with all your points, and I struggle to see how AI-generated imagery will change any of this. I honestly believe that people will still appreciate (or criticise, or ignore) your work, and communities will still be formed around "traditional" artists.

As for the professional aspect of it, yes: designers, illustrators, copywriters, and programmers will see their work deeply changed by this technology, to an extent that I do not understand at all. I share your worries on that front.


Well I’m just responding to the process vs final result, which I think you called inconsistent. Maybe I didn’t get what you were trying to say there though.

I also think that it actually might affect the way we value art. I’d need to spend a bit more time thinking about it, but even on a surface level it seems that if we devalue something, then people might not engage with it in the same way. We actually live in an age when the internet (and many other things) are destroying communities, and I do not think that online communities can fill that void. We need people around, yet we push everyone away. Tech is a huge culprit in that. We need to be creative, yet we build evidence that we’re not that special or creative. Who knows, maybe we actually are special and we’ll survive this.

I think there’s a lot to deconstruct here and I don’t think I’m prepared to write it down in a good way right now.


I see it in a totally different light (putting aside the legal discussion).

Ever since learning to program, I hated the redundancy of learning: learn the loop syntax in C#, learn the loop syntax in Python, etc., to the point that I hated new languages and wanted all ecosystems to be centralized around a few. Not to mention getting depressed that I will never be able to draw anything good, even though there are many stories I want to tell with the pictures I've imagined. All these "time" requirements made me lose hope and guided me to just stick to my lane.

But since Copilot and Stable Diffusion came out, I have felt alive like never before. Suddenly I can get boilerplate in a new language in a few clicks (and unit tests too!). I can create images I like and put my imagination into reality like never before. Every Stable Diffusion model is a new area I can expand into. I can waste hours in front of it, just curious about how styles can be merged together.

All those "time" obstacles I thought were hardcoded in reality, just gone! I see it as an amazing advancement.

> ArtStation get flooded with AI generated images.

That is just spam and low-effort content, and yes, Stable Diffusion also opened a spam route using its models. But I think it will be solved in a year or two (like those anti-deepfake AIs that are published now, a few years after deepfakes).


> But I think it will be solved in a year or two. (Like those anti-deepfakes AI that are published now, few years after deepfakes)

Anything used to detect AI-generated images will be used to train image generators; there's no escape.


> Humans derive a lot from process, not the final product.

tbh, your comment doesn't convey this belief at all. Btw a couple of generations ago artists were upset over people like Jeff Koons & Warhol. Creative process or fame/fortune. Pick one.

I can see commercial artists, illustrators, etc. who mostly end up doing what the client wants anyway (so "process" is not the main issue) to make a living being upset here. Corporate corridors and offices need "art" and sure, this trend will cut into that business. But not the tiniest of dents has been made in the universe of actually creative minds, imo.


Yeah depends how you look at it. It is very grim when you look at it from the hedonistic perspective. To avoid nihilism you have to consider the complete opposite. That you are contributing to the collective in a significant way, even if you suck at art, coding, writing, etc.

We're already quite deep in a meaning crisis. Nothing should stop each individual on this planet from searching for their meaning regardless of current innovation. Like you mention, humans grow from the pain and process, AI will never have that nor understand it.

https://www.goodreads.com/book/show/11987.The_Myth_of_Sisyph...

EDIT: There's also a beautiful book regarding an existential courage to "be" which rejects this nihilism/meaninglessness.

https://www.goodreads.com/book/show/175141.The_Courage_to_Be


Could we make an ML filter that would detect ML-generated art and shove it > /dev/null?

Oh, wait. Someone would plug it into a fitness function for their ML generating thing to drive it to not generate art that matches the filter.
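That feedback loop can be sketched in a toy way (purely hypothetical code: `detector` and the scalar "tell" stand in for a real classifier and a real image model):

```python
# Toy sketch of the adversarial loop: a detector flags samples as
# AI-generated, and the generator is tuned until the detector stops
# flagging its output -- the filter itself becomes the training signal.

def detector(tell: float, threshold: float = 0.5) -> bool:
    """Flags a sample as AI-generated when its artifact 'tell' is too big."""
    return tell > threshold

def tune_generator(steps: int = 100) -> float:
    tell = 1.0  # start with an obvious, detectable artifact
    for _ in range(steps):
        if detector(tell):
            tell *= 0.9  # shrink the artifact whenever the filter catches it
    return tell

final_tell = tune_generator()
print(detector(final_tell))  # prints False: the filter no longer catches it
```

Publish a better detector and you've just handed the generator a better fitness function.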

Coming soon to ReCAPTCHA: "click on images that have been AI generated"

I wish I could dream again about technology.


> Humans derive a lot from process, not the final product

One has to assume that Sisyphus was happy. But isn't that a bit sad?


Has to? Why? No one would ever defend the statement that “all processes make us happy”…


"The struggle itself toward the heights is enough to fill a man's heart. One must imagine Sisyphus happy." are the final lines of Camus' essay


One "must" imagine Camus was wrong. There are plenty of struggles humans do not find happy, particularly when they seem pointless or don't get you anywhere.


"God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers?"

"Push this boulder, and imagine yourself happy."


How does this relate to art and fulfillment exactly?


> Now we're creating tools to jump directly to the final product

The same could've been said when the printing press or highly automated manufacturing was invented


I also like making art, but AI has no bearing on my enjoyment in doing so. For me, it’s just a matter of carving out time to do my project organically (and analog), enjoying the process, and reflection on the results. Someone creating their own “work” with AI doesn’t make me enjoy the artistic journey more or less.


I wish I could upvote this 100 times. I feel like people get caught up in the, admittedly cool, individual projects but fail to see how this stuff is impacting our society as a whole.

Society sucks right now and abuse of technology is a huge part of it. I know people love their TikTok, Reddit, stable diffusion, etc. but the world would be better off without it.

And if you think blasting people's brains with ADHD content on TikTok is bad, then just wait for AI-generated content. It'll be way worse. Endless novelty at your fingertips. Whatever you can dream up will be generated on the fly for you to consume.


Once we develop bots to automatically perform our consumer functions (damnit, Alexa, you were the chosen one), the loop will be closed and we can return to the trees.


> Humans derive a lot from process, not the final product.

This is what gives me solace. Surely much of 'art' (in so far as a definition is useful) is not a static 'product' that you can replicate at will. It is a connection between two people: the artist and the consumer. Yes, AI 'art' can look aesthetically pleasing: but does it evoke emotion? Does it connect you to the 'artist'?


It's just spam automation improvements, in a nutshell.


My job is art and AI art is super depressing.


Blame Walter Benjamin.

People, many of us, pay for human made art. Paul Graham is an investor and believer in art. He also believes ai has value.

Don’t be fooled. It’s a false dichotomy. Make better art. If you can’t be better than ai, you don’t “get” what art is in essence.


Let's stop calling it AI: there is no intelligence, only something that resembles "learning".

Competition with art that is "generated" by ML is nothing more than an obfuscated way of competing with other artists whose works were used as a training dataset.

There is no artificial system today that actually creates art. All we have are systems that copy art in an intentionally vague fashion.


There is art in building the ai. A great amount of art in the prompts. Art is omnipresent.


Seems you're ignoring the entire professional space of people who want to make art and make money from it. The impoverished internal lives of CEOs will always push them to pick cheap, AI-generated, laundered art instead of hiring an artist. It's cheaper, and that's all those people care about in the end.


this is less a problem with ai, and more a problem with persisting an economic model that had its last major redesign in the 1700s.


Paul Graham is one of the people who support talented artists by buying their work.

The patronage system has brought much beauty to the world. It’s imperfect and one wishes you could remove the rich men from the art world, but tradition speaks of truths which you’d be wise to listen to.


As it happened with book binders, printing press operators, tailors, lamplighters, milkmen... what is so special about professional artists?


> Make better art. If you can’t be better than ai, you don’t “get” what art is in essence.

I think that first sentence invalidates the second. "Better than AI" implies the artist's work should be more commercially successful than an AI's.

Which of these two enterprises are more likely to have a sales team?


Many artists, even the big names (van Gogh), had no commercial success. If you're an artist "for the money", you are probably not going to be very happy with the outcome.


Boundaries like paywalls and walled gardens with human verification of users seem inevitable to me as every other platform just becomes flooded with AI generated junk food.


Indeed; artists will get wise and stop the AI exploitation from plagiarizing and laundering their product.


Too late, the datasets containing all human-generated artwork already exist.


It'll be a great disproof of the whole "everything is derivative" line when they continue creating original and novel art.


Retreat! Retreat to the local!


I believe it is because people are naturally lazy these days. And if they can do something fast and easy that might make them a profit, no matter how crap it is, they will do it.

Look how many SaaS companies started up on top of OpenAI's GPT-3 to create headlines, blog posts, grammar fixes, etc. You'll see the same thing with code soon.

People are using those sites to create the same YouTube videos and promoting the products with affiliate links.


I don't think we're going to solve the lazy people problem before we solve the system-that-encourages-people-to-be-lazy problem. We cannot change behavior without changing incentives. Of course everyone is going to try to make a profit doing as little work as possible - we've built a system for incentivizing precisely that mode of work.


    can you tell me how AI and ML are completely useless in the style of a Hacker News comment

    Sure, here's a possible Hacker News comment on why AI and ML are completely useless:
"AI and ML are overhyped buzzwords that have little practical value. All the hype around these technologies is just a distraction from the real problems we face. In fact, they are completely useless because they require huge amounts of data and computational power, which makes them inaccessible to most people.

Furthermore, even when AI and ML algorithms produce impressive results on some tasks, they often fail miserably on others. This lack of consistency and reliability makes them completely useless in real-world applications.

Additionally, the algorithms used in AI and ML are often black boxes that are impossible to understand and interpret. This lack of transparency makes it impossible to trust the decisions made by these algorithms, rendering them completely useless.

In conclusion, AI and ML are nothing but hype and should be ignored by anyone who cares about practical, useful solutions to real-world problems."


Funny that "in the style of a Hacker News comment" didn't really work. It's more like "in the style of a 10th-grader's English essay," which is really what all of ChatGPT's outputs sound like. They're weirdly stunted and robotic (haha). One point per paragraph, connected with the most basic of connecting words like "Furthermore", they really sound like a high schooler converting an essay outline to passable text.

Not that I'm saying it's not impressive, but it does come with some implications for the future: first, I don't think there's any way the output can even get better at all. There's too much bad writing out there, and it's too hard to differentiate it from good writing. ChatGPT is trained on I guess the entire Internet, which of course includes terabytes of poorly-written blogspam and bad marketing copy. And if AIs like ChatGPT really have some practical purpose, and if they really are the future of, say, technical writing, or writing on the Internet, are we destined to read shitty blogspam forever? Will everyone's writing become lame and highschoolery and robotic? Or will being able to write in a more interesting way become yet another key indicator of humanity and sincerity that we look out for?


> They're weirdly stunted and robotic (haha). One point per paragraph, connected with the most basic of connecting words like "Furthermore", they really sound like a high schooler converting an essay outline to passable text.

Otherwise known as the Reddit LLM.


    can you tell me how AI and ML is useful in the style of a Hacker News comment?


    Sure, here's a possible Hacker News comment about the usefulness of AI and ML:
"AI and ML are incredibly useful for a wide range of applications, from improving search algorithms and making recommendations on e-commerce sites, to helping doctors diagnose diseases and assisting self-driving cars in navigating roads. The ability to process large amounts of data quickly and accurately allows these technologies to solve problems that would be impossible or impractical for humans to tackle on their own. Additionally, the ongoing development of AI and ML techniques continues to drive innovation and progress in many fields, leading to new and exciting possibilities for the future."

(not defending it, just curious)


This is no longer amusing.


I actually felt bad about the perceived "troll" level of my comment after posting it.

One thing I did notice is that it does indeed have a specific voice (as mentioned in a sibling comment); I saw similar-style output when playing about with it yesterday, without asking for a "Hacker News comment style". I don't think this output even qualifies as "Hacker News" style, tbh.

I have also noticed that the images that come out of MJ/SD/Dall-E etc. seem to have a kind of style of their own on certain outputs.

How that is useful, I'm not sure though, and how long it will be discernible from real human output is likely to be measured on a fairly small timescale.


Yes, every example of "AI Art" I've seen looks like a composite of:

- DeviantArt techno-future-videogame styles from the 2000s

- ArtStation stuff

- Pixar/Disney style cartoonish faces


Does this example fall under any of your categories? https://twitter.com/thedigiguru/status/1599195462845677568


Midjourney has its own distinct style on purpose.


It was never amusing, frankly.

It's deep in the uncanny valley and just makes it creepy. I said once that AI generated text looks like it was written by someone experiencing micro-strokes in their 3rd language. I stand by that.


There was a satirical post a while back of a stereotypical HN thread. I think it needs to be edited to add a comment generated by ChatGPT now. Probably around the third top-level comment.


Yeah, it's clear that it's just a machine spewing out generic responses without any real thought or understanding. The lack of depth and originality is disappointing. Let's elevate the conversation on Hacker News and have more meaningful discussions.


(This was written by ChatGPT.)


You asked for Hacker News, but it gave you peak Reddit instead


Well, I for one am extremely excited. I've toyed around with ChatGPT for the past days, and I'm feeling more creative than ever before.

Given how good things are now, I can't even imagine what things will be like 10 years from now. I still remember how things were 10 years ago in this scene (machine learning).

The only thing that concerns me is that we end up in some "benevolent dictatorship" situation, where a handful of actors hold all the power and we're merely allowed to use the tech under their watchful eye. Kind of like how Google products work these days.


I understand seeing the results everywhere, and taking over subreddits and hackernews front page, and other areas being a little frustrating or annoying... but personally, I love copilot and chatgpt (and stable diffusion).

It's super exciting for me, and a lot of my queries that would go to google can go straight to chatGPT. I'm just continually amazed at what is currently possible, and what the future holds.


> a lot of my queries that would go to google can go straight to chatGPT.

Really? I played with it for 10 minutes, but a lot of the answers were wrong and I had no easy way to verify them. I found this tool intriguing, but not useful in this form.


It works best when working with something you have domain knowledge of and/or can easily verify the answers. For example programming.


Agreed. ChatGPT has breathed new life into my creativity and optimism. It feels like a game changer or, at least, a view of a game changer that we'll have in the future.

I share your concern. I am worried that this tech can easily be taken away from me.


Creative how? Can you expand on that?


I wrote a children's book yesterday, I just need to plug sections of text into stable diffusion and I'll be ready to publish.

I could probably do 10 books per week, more if I automate it all... Crazy times indeed.


Now just think if we could leverage this technology to make children's videos!!!

r/elsagate


"I wrote"


I considered that too, but weighed against the possibility of OpenAI claiming copyright on the text, I decided against it.


Imagine what that would do to the market at scale.


What have you made with it?


It is mainly helpful for me with writer's block.


That's actually a potentially really good use for it.

"Write me a chapter where Bob does this thing and Suzy responds and then a cat walks in"

And then you go to town on it, because it's always easier to rewrite than to write.


We're moving to a world where almost all code written will be AI assisted.

Think about that. From 10 years ago, almost all code was written by humans. Today, 50% of code is assisted by AI. In 10 years? Humans will not be writing code but expressing ideas that a computer can turn into code.

I'm sorry to those annoyed by it. It's pretty awesome technology which is why a lot of people are excited by it.


50% of code written today is AI assisted? I find that very hard to believe.


https://www.cnbc.com/2022/10/14/microsoft-ai-leaps-ahead-her...

"GitHub CEO Thomas Dohmke recently told us that Copilot is now handling up to 40% of coding among programmers using the AI in the beta testing period over the past year. Put another way, for every 100 lines of code, 40 are being written by the AI, with total project time cut by up to 55%."

Sorry, not 50%, but once programmers discover a tool that can write code, its uptake is quite strong.


> Today, 50% of code is assisted by AI.

Do you mean programming is assisted, or AI-assisted applications? I think both are well under one percent of new creations, and much, much less if counting existing code and applications.


This has been said before a few times over the past 60 years. It's similar to claims that this time automation is going to really put humans out of a job.


Yishan, ex-CEO of Reddit, has a good take on this on Twitter [1].

> I talked to a friend of mine who is a professional creative writer and screenwriter, and she had a perspective on ChatGPT that is really different from all the other takes I've read about it:

> I find the voice of all AI writing to be the same regardless of what is asked of it. It's a distinct voice.

> Yishan: But it's very interesting how it's an identifiable voice to you. I don't think most people can notice that.

> Yeah, to me it's really distinct. That's why I like it. It's a new voice. People should appreciate it for what it is instead of trying to make human writing out of it.

> I also enjoy how AI art looks a little weird.

> Let the robot make art its own way!

[1] https://twitter.com/yishan/status/1599738630742093825?s=20&t...


I'm trying not to be too cynical because it really is incredible technology, but it's also true that it's still pretty far from replacing anything but the most mundane writing tasks. I tried to get ChatGPT to write CTAs for a nonprofit and they were clear and grammatical, but also trite, facile and reductive. Like, I'm blown away it even understood what I asked and delivered exactly on brief, but if a professional writer gave me the same output I'd be sorely disappointed.


At the pace things are going, give it a couple of months, maybe 6 at most.

There is a point where these natural language AIs are going to produce output that’s better than what anyone could write on their own.

That it uses a clear, concise and consistent style now is a plus, even if that's a little drab.


I showed some Stable Diffusion "failures" to an architect and he had a similar response about it being its own interesting/different style.


ChatGPT has a voice, the voice of a stilted 11th grader writing another stupid essay he doesn't care about at all.


Have you read a "tech" article on Medium lately? I stopped reading them years ago because they feel like "the voice of a stilted 11th grader writing another stupid essay he doesn't care about at all". Now add all the other articles written for various news sites etc. that are centered around a couple of tweets or something. Honestly, ChatGPT reads better than a lot of the trash I find on the internet now.

Edit: It's also really funny to me how quickly the goalposts shift. Two months ago, the best chatbots worked by using ML to try to figure out the intent of the user, then responding with a predefined action that may have been slightly tweaked using some sort of text generation etc.

Now, the chat bot not only responds to what I ask, but synthesizes the information and gives me an 11th grade level response. This is astounding, and I can't believe so many in tech want to dismiss the achievements of others for some imaginary utopia that is the only definition of success in their minds. Even if the AI is confidently spouting BS, it's still synthesizing the information and formulating a response, and I've met plenty of people that confidently spout a lot of BS as well.


That's the default unless you ask it to make it in a different style.


Perfect for current high school students. My class is having a blast trying to break it.


Not tired yet, because for the first time since GPT-2, we're talking about a thing I can actually play with along with everyone else, instead of just reading about the fun people with the right connections and enough spare $$$ are having.


Well, you could play with Stable Diffusion, too. Moreover, you could install it on your computer - something that's unlikely to happen with the offspring of the so called "Open" AI.


Not sure why this got downvoted, this is 100% correct. Stable Diffusion is far more open than ChatGPT.


UX matters. ChatGPT is: 1) register on OpenAI website, 2) have fun. Stable Diffusion? It's free and easy if I can figure out where to download the current model (and not some of the weaker, stripped down versions or alternatives), and have a GPU farm to run it on?

IIRC, SD was one of the two or three image generation models that were published in the span of several days. Back then I tried to get my hands on any of them, and found only waitlists, wishlists, and one or two "lite" versions that could not deliver anything close to the interesting results everyone with the right access was showcasing and talking about.


There are fairly stable web UIs now, and you can be up and running in about five minutes: https://github.com/AUTOMATIC1111/stable-diffusion-webui

Tweaking and installing new models will take some additional effort, but there has been a veritable explosion in freely available resources, eg. https://rentry.co/sdupdates3

Also it runs fine on mid-range consumer GPUs, just with limited batch sizes.


Not sure what you mean. The day Stable Diffusion released I downloaded the officially released model and ran it on my consumer GPU which I already own for gaming. I can use it infinitely and without restriction. That is significantly more free and open than having to sign up for a website which requires me to agree to their terms of service, rate limits me, and will eventually start charging me for access.


Then you recall incorrectly. Stable Diffusion models are capable of running on laptops with even modest discrete GPUs. I'm running it using the AUTOMATIC1111 GitHub repo on a laptop that's over four years old, and a 50-step iteration only takes about 15 seconds.

You need zero technical acumen to be able to install it, just the ability to follow basic instructions. Maybe you should ask ChatGPT to help you.


If like many others, you are also tweeting/sharing your findings, you are doing free marketing for them. Expect the floodgates to slam shut once there's enough guerilla-marketed interest for them to open up paid plans.


That won't last. They're already tightening the API limits and have tweeted about excessive costs.


Even as it stands today, I’d pay US$50/month for it, knowing it’s going to improve over time.

I could go as high as US$100/month if I could be sure to get that back, but at the moment I see it as a great tool for learning and getting started on various topics.

It’s clearly not perfect, but I still find the help invaluable.


There is a pattern at work similar to Crypto, Quantum Computers and other hype topics: Non-tech people getting very excited about it and constantly babbling about how this is a great revolution and everything will be done by AI starting tomorrow.

Oh, by the way: anyone remember Autonomous Driving or IoT?


>Non-tech people getting very excited about it and constantly babbling about how this is a great

My experience is the opposite. Non-techy people are unimpressed. If they want to see an image of a dragon, they can put the word in Google Image Search and get just that, so what's so special about Stable Diffusion?

Meanwhile, (to paraphrase a comment I read on this website) many of us, techy people still cannot pick our jaws from the floor since April when Dalle2 was introduced.

Of course, it's fine to be not-interested and I get how annoying the current hn discussions must be for some.


I got my dad to request an André Derain style painting of a cowboy riding a dodo bird. That image most probably didn't exist anywhere before the prompt. I can say the excitement was really real when the output came out! You just have to guide non-tech people into what the tool can do.


I agree. I showed ChatGPT to a teacher, and they were mildly interested but pointed out some flaws in the lesson plans it created for their subject. Then again, AI-generated limerick answers to the questions in the AI-generated lesson plan were a hit.

They said maybe they could try it out too, but lost most of their interest when I mentioned a user account would be needed. Meanwhile I am, as you said, still bemused by the speed of this change hitting us.


AI definitely falls into the category of business people asking me to use it in our technology.

A couple of weeks ago I had to write out how we could leverage AI/ML in our software, in spite of the fact that we have a mountain of problems to get through first that they aren't a good fit for.

The last time I remember the business trying to force specific technologies into products like this was the blockchain craze of around 2017.


If CIO.com says 9 out of 10 leaders are implementing AI/ML in their business, do you want to be the 10th?


Unfortunately this is the answer. This happened with a lot of tech out there.


Yep, this matches my experience too. Showing it to non-tech people has so far been a bit of a letdown, at least for a few minutes until they understand better what they could do with it.


Same here. I've been showing it to my computer science students and most of my freshmen seem pretty uninterested, but my more advanced students had a much stronger reaction. I also found that my students tended to suggest really simple prompts at first which don't really show off what the tech is actually capable of. I've had to help a lot by adding more stipulations or details to their prompts to get a response that actually got their attention.


> My experience is the opposite. Non-techy people are unimpressed.

In fact I find that Non-techy people are completely unaware of this tech or what it can do. Which frankly is frightening.


Or VR, or shitcoins, or the Cloud. That was not organic engagement; it was clearly being pushed to tech circles by various means. This is a different kind though, because OpenAI didn't do much fanfare. I find it exciting, more exciting than anything else in the past decade, and it's already usable, not a promise 10 years down the road.


Wait, Cloud computing is a failure or something that didn't live up to the hype in your eyes?


overhyped, not a failure


Honestly that's a weird take. IMO it's one of the greatest transformations of the last 20 years.


But is it better?


It's certainly debatable, but overall I'd say yes.


It depends on at which point.

Modern VR started with the Oculus Kickstarter -- it was as nerdy as it gets. Some random nerd nobody had heard of (Palmer Luckey) just had a vision. It was announced on various very nerdy sites (https://news.ycombinator.com/item?id=6172881). One of the big draws was John Carmack's endorsement. Yeah, later Facebook nabbed them, but before then it was pure nerdy enthusiasm, and to a decent degree it still is.

Bitcoin also started in very tight knit, technical circles, part people with libertarian ideals, part cypherpunks, part just random nerds curious about playing with new tech. Today the scene is very, very different.


The end is the same in both cases:

Oculus and Bitcoin in their "indie" phases were both insignificant, i.e. not adopted enough to cause problems for the wider world.

Once both were co-opted by parties with enormous amounts of capital, both diverted heavily from their original missions. Oculus is now a tool for Zuck's Second Life clone, and Bitcoin has never been further away from being used as an actual currency.


Yeah, and?

Pretty much nobody gets to change the world as a normal person. Most things/people either stay at local notoriety, or get adopted by various large entities which then do the world-wide lifting.

Eg, when we think about AI art, we're not thinking about the CompVis group at LMU Munich, we're thinking about DALL-E, Midjourney and NovelAI.


It's so predictable at this point. SV is just such a BS hype machine. There was one truly great revolution (the internet + smartphones) and everyone's trying to replicate it to make the big $$$. It's not going to happen. We are now in the steady state till the next truly big thing hits.


If the Internet is the high-water mark, I don't see how cell phones make the cut. The Internet democratized and distributed information sharing at scale, and cell phones created the means of economic capture of that technology (I believe this to be a massive regression). Oh, and we had significantly less planned obsolescence prior to the cell phone era, although that could just be coincidence.


Cell phones enable computing-related activity (ecommerce, email/IM), without requiring the overhead of knowing how to use a computer (or being able to afford one).

In the developed world, smartphones brought a generation of boomers online, people who went their entire careers without regular use of a computer terminal. In the developing world, the smartphone was the first computing device for many people.


Well the original revolution was the one with silicon.


While I agree with you and the other commenter, putting them all in the same basket is not really fair.

Shitcoins were shitcoins from the start and it hasn't changed.

Real quantum computers will take decades to construct, as it's an engineering problem. But that doesn't prevent us from playing with the existing hardware, using the available tricks to reduce noise etc., and getting a feel for how it could be useful.

Crypto (as in cryptocoins, not cryptography) was an interesting idea that was turned into a Ponzi scheme. Still, one day when the dust settles, maybe somebody will construct a reasonable alternative immune to the mistakes made in the past (well, I doubt it, but life taught me nothing is impossible).

Autonomous driving got overhyped, and irresponsible companies driven by greed started sending vehicles onto the streets that weren't 100% ready for the task. I believe we will see AD one day, but I don't believe it will happen very soon.

IoT is basically everywhere, maybe we're not using that name any longer, maybe "smart" sounds better. Try buying a consumer TV without it.

VR is a very interesting tech development for certain niches and I'm happy it's being developed even though I can't stand 5 minutes in the current hardware. I believe there are many advantages of such a solution, but probably just a fraction of what Meta marketing department says - their proposals are simply ridiculous.

The cloud is similarly everywhere. All significant hosting providers started providing cloud services. Maybe marketers try to deliberately make the term fuzzier, but from the technical point of view it's an amalgamation of technologies for managing virtual machines spanning over multiple physical servers - and that's incredibly useful.


Yes. It was screamed about as being the next big thing, every single day. This has happened to crypto/blockchain and quantum computers, except that the hype in crypto is over (for now), and the hype cycles are no different from those of AI and IoT.

If you ask me, it seems that all of them are a hype train to a worse dystopia. Now the grift is continuing on ChatGPT.


Autonomous driving continues to make progress, and pretty much every consumer device becoming like smart TVs (which collect telemetry data whether you want them to or not) still seems inevitable to me.


it's weird seeing IoT talked about as if it was all hype that then died off - I literally have 30 or so devices in my home that are WiFi connected (lights, smart speakers...). Like a lot of other hyped technologies it was in the news initially, didn't arrive as fast as commenters thought, then slowly built up to actually fulfill its promise. Autonomous driving is likely the same.


In the case of AI writing and art it seems to be exclusively tech people interested in it. Chat bots have existed in some form for decades so normal people are not impressed, and those interested in writing or art strongly dislike it for several reasons.


The abysmal experiences with chatbots have burned the floor for the next decade like Crypto has done for fintechs.


I believe the Silicon Valley PC term for these was "conversational commerce".


Which is a quite informative term, as it precisely describes the very use case none of the users actually want.


Both are actually making progress, it just took longer.


You are kind of underestimating, or not caring about, the potential use cases ChatGPT will solve as it continues growing in capabilities. There is a reason everyone is excited about it.


> potential use cases ChatGPT will solve for as it continues growing in capabilities

I think that's the point. "Maybe it doesn't do anything useful yet, but once we solve the hard problems, it will." Unfortunately, in the AI world, they seldom solve the hard problems - generating hype is enough. Until ChatGPT has some way to determine what's correct and what's not, which is the hard problem, I don't find it interesting.


Well, it does save time by generating code snippets. At least worth $5. I am saying this as someone who has used it.

Caveat - don’t blindly use the code snippet. Test it first!
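That caveat is worth taking literally. A minimal sketch of the habit (the helper below is a made-up stand-in for whatever snippet the model hands you, not actual ChatGPT output):

```python
# Hypothetical example: pretend ChatGPT generated this helper for you.
def dedupe_preserve_order(items):
    """Remove duplicates while keeping first-seen order."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Don't blindly trust it: check it against cases you can verify by hand,
# including empty and all-duplicate edge cases.
assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserve_order([]) == []
assert dedupe_preserve_order(["a", "a"]) == ["a"]
```

A few hand-checked assertions like these catch most of the confidently-wrong output the thread complains about.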


To me, the hard problem is producing code snippets that are correct. This would be an example of something that's useful (unlike ChatGPT) and kind of cool but it still doesn't solve the hard problem.


Luke on LTT talked on their WAN show about ChatGPT being able to write a program that merges two Excel sheets together, so it is capable of writing correct code that's useful.
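That merge task is roughly this shape. A pure-Python sketch under invented assumptions (the column names and rows here are made up; a real version would load the sheets with a library such as openpyxl or pandas rather than using in-memory rows):

```python
# Invented rows standing in for two exported Excel sheets, both keyed on "id".
sheet_a = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
sheet_b = [{"id": 1, "email": "alice@example.com"},
           {"id": 2, "email": "bob@example.com"}]

def merge_sheets(rows_a, rows_b, key):
    """Left-join rows_b onto rows_a by a shared key column."""
    by_key = {row[key]: row for row in rows_b}
    return [{**row, **by_key.get(row[key], {})} for row in rows_a]

merged = merge_sheets(sheet_a, sheet_b, "id")
# merged[0] -> {"id": 1, "name": "Alice", "email": "alice@example.com"}
```

The point stands either way: this is exactly the kind of small, verifiable program where generated code is easy to sanity-check before use.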


> Unfortunately, in the AI world, they seldom solve the hard problems - generating hype is enough.

This is untrue on so many levels. You can probably guess who wrote the rest of my comment.

There are many examples of AI being used to improve various aspects of our lives. For instance, AI is being used in healthcare to improve the accuracy of diagnoses and to develop new treatments for diseases. In the field of transportation, AI is being used to develop self-driving cars and trucks that can improve safety and efficiency on the roads. In the retail industry, AI is being used to personalize customer experiences and improve the accuracy of inventory management. Additionally, AI is being used in finance to improve the accuracy of credit scoring and fraud detection.


I don't see that this refutes my point. You've posted something that looks correct and that someone that doesn't know better would think is correct but has a high probability of being incorrect. I do not view that as useful.

Edit: HN isn't going to let me respond any further. I'm not talking about ML not being useful. That comment was specific to ChatGPT. The hype is always centered on solving hard problems like vehicles that can drive themselves in arbitrary conditions.


Improved medical diagnoses are useful.

Cars that drive themselves are useful.

Improved inventory tracking is useful.

Improved fraud detection is useful.

Those are a very very minor subset of the vast array of real-world AI use cases.

Edit to respond to your edit :)

Yes, ChatGPT is surely overhyped, same as GANs were five years ago, but these things have a whole ton of coinciding real-world applications that aren't hyped, which we don't see as much. For example, facial recognition is so good now that governments are performing mass surveillance with it. Now I admit, this is not, in anyone here's opinion, a good change. But it is a very real, very present phenomenon.

BTW - hope my comment did not come across too confrontational. Just need to defend my AI colleagues ;)

---

Edit to add one more tangential thought that crossed my mind

There's a temptation amongst skeptics to say that technology never lives up to the hype - we were told in the 70s robots would replace us, etc.,

But I think the reason we feel like it never lives up to the hype is because there is a moving target effect taking place. The present will never feel "sci-fi" because as soon as technology comes, the "sci-fi" target moves ahead.

But when you look objectively at the technology around us, there are so many absolutely incredible developments in the last 10-20 years. They just have become normalised.


It seems a little early to conclude this. I think it is possible there will be a great problem fit for GPT that already exists but hasn't been discovered yet, and will be discovered soon due to ChatGPT. For example maybe we can use it to destroy Twitter.


Twitter is already at that point. It's becoming a new channel for 'tech influencers' desperate to shill the latest trend, which right now is AI-generated images and text.


I don’t have strong feelings either way but saying you’re not interested until it can know what is correct or not seems like moving the goalposts.


> There is a reason everyone is excited about it.

For the time being, it's just another item added to the average tech grifter's inventory.


It's actually disillusioned me quite a bit when it comes to tech.

I feel like software developers are some of the most empowered people on the planet, and yet we continue to push and enable the dumbest crap and make the most overhyped predictions about it. I don't understand how we can be so self-deluded to look at a chatbot that has mastered the art of bullshitting and act like this is some consequential, state-of-the-art thing. To make such ridiculous predictions about the future based on the ability of this thing to sound convincing.

I genuinely hope our delusions and complete ignorance of the wider world completely destroys the illusion we've built in the public imagination. I hope this is the end of "tech" as it's widely perceived. I hope the rest of the world sees our "world" as the joke it really is. Hell, I think it's already started.


We already had a hype cycle about chatbot UIs a few years ago*, but back then they were 3 regexes in a trench coat. This one actually understands the context, is great at parsing natural language, and has some real-world common sense.

> *) When was the hype about chatbots?

> The hype around chatbots began to grow in the early 2010s and reached a peak in the mid-2010s. Chatbots were seen as a potential game-changer for businesses, as they promised to automate a wide range of tasks and provide a more convenient and personalized experience for customers. However, the hype around chatbots has since subsided somewhat, as the technology has failed to live up to some of the initial hype and has faced challenges such as a lack of standardization and the difficulty of creating chatbots that can effectively handle complex or open-ended conversations.


This made me realize one type of split in what counts as 'real intelligence': mean vs. max. ChatGPT has a high mean but a low max relative to humans. For automating the boring stuff, this is great: it's better at boilerplate than anyone I know.


Have you tried it?

Last night I prompted it with a sample data file and it normalized it into a star schema and output the DDL for a theoretical data warehouse. I then was able to ask it questions about specific design choices.
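For readers who haven't seen the exercise: a star schema splits a flat extract into one fact table referencing several dimension tables. The table and column names below are invented for illustration (not the model's actual output), but the shape of the DDL can be sanity-checked against SQLite:

```python
import sqlite3

# Hypothetical star schema for a flat sales extract: one fact table
# referencing two dimension tables. All names are illustrative only.
DDL = """
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT
);
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    quantity     INTEGER,
    amount       REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)  # raises sqlite3.OperationalError if the DDL is malformed
```

Note that SQLite parses but does not enforce these foreign keys by default; a real warehouse design would also add surrogate-key generation and indexes.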

In comparison to that, sounds like you are mindlessly complaining about something you’d rather not understand.


I've only tried it once, the other day. I asked it how I could improve my AoC day 4 Python code.

It responded with three points and potential modified code. Its understanding of the code I had written was impressive: it noticed I was looking for overlaps between two sets of data, and it made three suggestions for improvement.

All of them looked great, but didn't work when tested. One worked when I changed it (having been pointed in the right direction), the other two just didn't work better than what I'd written, and were moot in terms of improvement.

However, what I did notice was its comments to go along with the code were authoritative in tone, and I felt like I'd been schooled by a grand master. But that wasn't the case - it had a hit with one and a miss with the other two.

I got the feeling it was like a certain kind of salesman; loud, confident, but in this case wrong.

Good job I checked the code it came up with, otherwise I would have felt pretty bad.
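For context, that year's day 4 puzzle boils down to checking whether two inclusive integer ranges overlap or fully contain one another, which is the textbook two-comparison check below (my illustration, not the model's output):

```python
def ranges_overlap(a_start, a_end, b_start, b_end):
    """True if inclusive ranges [a_start, a_end] and [b_start, b_end] share any value."""
    return a_start <= b_end and b_start <= a_end

def fully_contains(a_start, a_end, b_start, b_end):
    """True if one inclusive range fully contains the other."""
    return (a_start <= b_start and b_end <= a_end) or \
           (b_start <= a_start and a_end <= b_end)
```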


This behavior has been called 'confabulation,' and I think the name fits well. Other people call it 'hallucination.' Confabulation in people occurs after brain damage or dementia: they make stuff up while being convinced it is a memory of theirs. I haven't seen a model yet that rejects unlikely matches when it runs out of good material; that would make it more useful.
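As an illustration of that last idea only (not of anything ChatGPT actually does, and glossing over the fact that LLM probabilities are notoriously poorly calibrated), a rejection rule can be as simple as abstaining when the top softmax probability falls below a threshold:

```python
import math

def predict_or_abstain(logits, labels, threshold=0.7):
    """Return the most likely label, or None when confidence is too low.

    Converts raw scores to probabilities with a numerically stable
    softmax and abstains if the top probability is below `threshold`.
    A toy sketch of 'reject unlikely matches', nothing more.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else None
```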


This is very fair and I have similar feelings about the accuracy and false confidence. Looking at your experience, the outcome is still impressive. You got 3 new solutions that you were able to get working with some tweaking and one of them was an improvement. Sounds like working with an experienced teammate who also makes mistakes but improves your code in the end with a new perspective and insight. Like… damn, this is all next word text prediction?


> You got 3 new solutions that you were able to get working with some tweaking and one of them was an improvement.

>> All of them looked great, but didn't work when tested. One worked when I changed it (having been pointed in the right direction), the other two just didn't work better than what I'd written, and were moot in terms of improvement.


> you are mindlessly complaining about something you’d rather not understand

Why are slight variations of this sentence the new mantra of every single hyped piece of tech?

Fanboys said the exact same thing when the first oculus dev kits shipped. Not super mega hyped? Oh you just don't understand it, it will change the world once it hits the market.

Cryptobros are preaching the "you just don't understand it" prayer to the extreme.

And according to Silicon Valley culture and tech news, we have "disrupting" and "revolutionizing" tech coming every few minutes...

Yes, ChatGPT is very impressive, but why would I accuse people who don't run into the street to dance and praise its arrival of not understanding it?

We should stop treating tech like religion...


Remember when everyone was going to be 3D printing their stuff instead of going to the store or ordering online? Yeah, that didn't pan out.


Your reply sounds harsh, but it's valid -- I had the same reaction. I'm tired of seeing people posting screenshots of their conversations, and I noticed it was souring me on the technology as a whole... but then I realized that I hadn't actually tried it yet.


Sorry, yes, it was harsh, and I did not mean it personally. It's just pretty common for people to complain about things they don't understand, and I saw that in evidence in the OP. There are legitimate reasons to temper the hype and expectations, but "I want my next dose of novelty" is not one of the stronger ones.


Remember the Wordle clones in February 2022, every day a new take on the frontpage?

It's just a high-tech news cycle. Two weeks ago Elon/Twitter was the hot topic, now it's ChatGPT, and in two weeks something else will take over.


I don't know, man. This feels like the biggest leap to me personally since I got my first DSL connection circa 1999. I didn't feel that way about Wordle, even though I played it and all the fun variations.


In two weeks Elon will declare that there are 0 real users and that everyone is a ChatGPT bot. Oh wait, he's already said that


I think the surprising thing for me about the ChatGPT story is that there was almost no lag between the time it hit tech news sites like HN and it hit mainstream news sites like BBC, NYtimes etc. Or is this just part of a shift where all tech news is just becoming mainstream news?


pfft, bring back the 2048 clones.


ChatGPT, in its current form, will change the way I work. I can see it. And in the future? Holy moly. I can't wait.

Can you believe there were tech people learning about the Intel 8086 for the first time in 1978 and saying "meh"?


In my opinion, some of the most nefarious effects of this ChatGPT craze will be on open-source software. Quality will diminish a lot: everyone will want a shiny GitHub portfolio without actually putting in the effort, so they will use regurgitated code from GPT without disclosing the source. People will clone and use these projects, and by virtue of their open-source nature, that code will be in the next iteration of the training data set, so it will be a spiral...

And that is best case scenario, because my biggest fear is that this type of language model will scare good programmers from releasing novel/useful code to the open source community. Why would you want to provide the training data for free to OpenAI, and reap none of the benefits of your work? Currently people at least have to clone your code/examples, make an effort to understand it, and train so that they can market themselves as experts in <Spark, React, Kubernetes, etc> to advance their careers.


I don’t consider myself a good programmer and still I’m thinking about never releasing any source again publicly.


AI and GPT are unfortunately the new Crypto. We are going to see a lot of "get rich quick" schemes wrapped in the buzzwords, even though there may be really good real-world applications. It has become too popular too quickly; the grifters will come out and distract from the real possibilities it can achieve. My 2 cents.


Nah. Crypto is/was a uniquely stubborn kind of hype because investing in it basically is the hype. People who invest in something are -well- invested, and so they keep driving the hype. With other hypes, like IoT and VR, it's not so clear what people should throw their money at to invest.


I don't get this comment. This tech is so useful, I've incorporated it into my systems engineering workflow and it saves so much time. Yes, it's a confident bullshitter, so right now it's maybe best used in a domain you're a specialist in, so you can spot the bullshit, but it can do so much more than what came before. After years of blockchain, no one came up with a convincing use case besides coins, and a lot of people tried. With this thing, I have a bunch of use cases already for my own domain alone. For me this tech is a productivity enhancer, and it lets me focus on higher-level stuff.


Unfortunately? AI/GPT will have a bigger positive effect on humankind than Crypto ever has.


I would put that thought in a different way. I absolutely agree that AI/GPT has the potential to affect humankind to a degree that is multiple orders of magnitude greater than Crypto[currency] ever has had. The reason for that difference is mostly because it is comparing a possible technological singularity event with the overhyped nothingburger that is Crypto.

Now then, whether the astronomical potential effects of AI/GPT are "unfortunate" or "positive" has yet to be decided... but I myself am not at all optimistic about it.


I'm less optimistic on that front.

I think there is a lot of potential for positive effects coming from ML, but I'm not so sure that the potential negative effects are less likely and/or lesser in magnitude than the positives.

I think I would be more optimistic if I hadn't witnessed the internet go from a bastion of hope for mankind to a smoldering heap of attention robbing waste.

The two are similar in that it isn't the technology, it is how we utilize it that determines whether the outcome is good.

One particularly concerning aspect of ML is the lack of understanding on how the model reaches the 'correct' answer. This aspect of the technology encourages people to 'let go' and rely on the answer without understanding anything about the context. The trouble is that we embed context while training the models, so the context is there, we just can't see it once it is trained-in.

It will be an interesting decade, for sure.


While I can see the utility for individual people/companies, I find it hard to imagine how GPT-generated drab (for lack of more neutral wording) will be a net positive for society. We're already thoroughly overwhelmed by mediocre content, without the machines helping out to produce orders of magnitude more.

Still, I prefer it to crypto, I think. :-)


While an amount of this is "it's new and shiny", it's not going away.

We're iteratively chipping away at things only humans have been able to do traditionally. We're not entering a realm of displacing skilled humans with a system that has unlimited experience of known work product. There are already AI tools to write and produce animated content. It's super rough today, but it didn't exist a few months ago and now I can run it on my desktop. This could displace trillions of dollars of annual production within the next decade.

They're redefining creativity, what's fair, how we reward people in effort-rewarding knowledge systems, how we identify plagiarism in academia when an AI can reprocess the very ideas we're copying.

It's exciting and bleak in equal measure. I agree, "I did x with ChatGPT" can feel repetitive, but what happens when that x is your core work process? I'd strap in and try to stay ahead of the curve.

Anyway, has anyone got ChatGPT to write the best Stable Diffusion prompts?


*We're now entering


The fatigue/fear is reasonable, but it does not matter -- the world will not stop for you or anyone.

There's too much here. These discussions are simultaneously "late" and "very very early".

I think the only viable/healthy approach is to understand these changes, adapt, incorporate, and include them in the way you interface with the world.

...like anything else in life, really. But the potential impact of these developments are immense and barely just beginning to be understood.

3.0 was interesting. 3.5 is uncanny. 4.0 is a few months away. This is not stopping.


"The fatigue/fear is reasonable, but it does not matter -- the world will not stop for you or anyone." It's not about stopping specific technology from advancing. HN is just a source of news and discussion. Imagine if HN's active threads were about a technology you are not excited about. Would you not feel tired of seeing every second or third post about a topic you are not interested in?


That's the point though. It will not stop and it matters and it's technology.

It will be discussed here. It is 100% on-topic and relevant. If you want other content, it can be found elsewhere.


Please answer my question: Would you not feel tired of seeing every 2nd/3rd post about the same topic that you are not interested in?


As I said initially, the fatigue is reasonable. But more relevantly, the fatigue is irrelevant.

This is the news, and it's probably under-discussed, honestly.

If one does not want to read news about Ukraine, they should not visit a world news site. Etc.


Honest Question: Isn’t the future of AI image generation (of all kinds) and AI-driven Chat all predicated on some human signal for future training? Otherwise it will have future training sets including its own output…

So far the human signal was that text and images were crafted by humans, of course with some tooling.

However, as the universe of images and text corpora grows, it will embody the idiosyncratic nature of the AI generation process. And those "glitches," as they have been called higher in the thread, will potentially get fed back into the training data set without some filter based on a human signal, even if that signal is itself AI-estimated.

See https://nopecha.com/ (I’m not affiliated) and plenty others.


Yes I am tired of reading about it, but discovering new and novel technologies is half of the reason I read HN so for me it's par for the course. Also have to admit both are pretty cool, and likely in the future sometime game changing.


I came to HN today specifically to see what crazy stuff people are doing with ChatGPT today. I think its a fast moving field and its scary and interesting


> Tired of Hearing about ChatGPT

As a sentient and emotional human being, and within the limits of my ability to sense the world, I'm sensing a lot of anxiety among people who realize, maybe for the first time, that artificial intelligences may replace their jobs, not just the jobs of laborers. An inferiority complex toward machines may be a new thing we have to deal with, similar to impostor syndrome.


Machines will always be inferior to humans; we've just reached the point, in our first-world laziness, of accepting the mediocrity of machines.

All technology these days is broken, buggy, intrusive, slow, etc., and yet people are still hyped for a future run by robots. To me, it's dystopian. Let's make the technology we already have actually good instead of steaming ahead with the next new thing (likely yet another useless broken mess) while leaving everything else in disrepair. I'm hoping that with the end of the era of free money, companies will focus on quality rather than cheap growth.


Gray goo - "Self-replicating nanobots are going to cover the world in gray goo." This was the prediction about nanotechnology that has so far not come to pass.

However, these generative AI tools are surely on the path to cover our digital world in gray goo.

The direction seems to be that virtually everything digital will collapse to a zero value.


YES! The greatest invention since fire is going to replace Google and programmers and chefs and will end poverty and solve climate change. Here’s a thread about a gabazillion use cases! Yes, it’s cool, I get it. But please move on. Build something useful, put it use as a real product and show us.


AI is colloquially known as "the new fire".

https://www.amazon.com/gp/product/B096WZVS2W/ref=dbs_a_def_r...


I work in a safety-focused industry and everyone is asking me every fifteen minutes when we can replace procedure writers with AIs reading logs. My dream scenario is that I get a nice GUI for NLP checking that lets the writers oversee a lot of content at one time; my nightmare scenario is that AI generated content just gets pushed out in real time until everyone dies, because that's going to be the training model if they don't work up an extremely detailed simulation[1].

You only get maintenance logs on those systems that are having maintenance performed. Overfitment. Overfitment. Overfitment.

[1] Which absolutely everyone will eff up completely


I find it exciting but the overload of cherrypicked examples with the exaggerated context that surrounds it is certainly exhausting.


I haven’t seen any code generated by an AI that in any way approaches the kind of complexity I deal with in my daily job (large-scale enterprise software running operations and planning for international companies). Especially given that just describing to the AI in English what the software does, then testing it and fixing the mistakes, would probably be as much work as coding it in a programming language. And we already have a technology that gives us instant, highly advanced, well-functioning code I can reuse: it’s called frameworks/modules/libraries, etc.


I teach and do research on foreign-language education at the University of Tokyo. Machine translation was already a challenge to many of our assumptions about the purposes and methods of language learning, but ChatGPT has raised even bigger issues that I, at least, do not yet grasp. The best way to begin to understand those issues is to discuss them with others, and I have been very grateful to be able to read the many discussions of ChatGPT on Hacker News over the past week.

If anyone is interested in my thoughts on ChatGPT and language education, I am trying to put them together on the following page:

https://www.gally.net/temp/202212chatgpt/index.html


Well, one of the founders of OpenAI is Sam Altman, who is also a former president of Y Combinator, so saying we should stop talking about ChatGPT, when it might be one of the biggest successes to come out of Y Combinator, is a bit... unusual.


Well good news, there'll be a new thing that replaces it in three weeks!


TBH I find the other subjects boring. Just take a look at the front page: many of the 'problems' being discussed seem solvable via AI, which makes it a chore to think about them.


I'm not as impressed with ChatGPT as many people on here. When I've really challenged it to show any sort of actual understanding of the stuff it's trained on, there was very little to be found. And I've yet to see anyone code anything with ChatGPT that hasn't been written a million times before. And Codex was already a thing. It regularly fails to follow simple instructions. Its "alignment" features can be switched off in 4 words.


Perhaps you should spend less time consuming HN, Reddit, Twitter, blogs and wherever else you're being overwhelmed by ChatGPT.

This hype train is an inevitable and short-lived step on the natural course of an exciting[1] new technology.

1. I wish I had a better word to describe my particular feelings of "excitement, grave concern, and depression" all rolled into one. That said, I do see the flaws in my own emotional responses to AI.


I am sure ChatGPT would apologize if you feel that way. It is a large language model trained by OpenAI and does not have the ability to influence what topics are discussed or used in the world. It is there to provide information and help answer questions to the best of its ability. If you have any specific questions about ChatGPT, it would be happy to try and help answer them.


The previous version of this post received some downvotes.

Could you give me two examples of a more balanced view? Here's the original post:

"""

> Tired of Hearing about ChatGPT

No. But tired of all the complaints.

Here's "my" opinion.

-------------

Please give me an example of a comment of what an intelligent person would complain about that feels ChatGPT is overhyped and if they are tired of the conversation?

> An intelligent person might complain that ChatGPT is overhyped and that they are tired of the conversation because it is not living up to the expectations that were set for it. They may argue that ChatGPT is not as intelligent or useful as it is made out to be, and that the hype around it has been overblown. They may also be frustrated with the limitations of the technology and the lack of meaningful conversations that can be had with it.

"""

>>> ChatGPT:

Here are two more balanced views of ChatGPT:

1. While ChatGPT is a fascinating and impressive technology, it is still in its early stages and has limitations. It may not be as intelligent or useful as some people claim, but it is still a valuable tool that can help us understand and improve natural language processing.

2. ChatGPT has generated a lot of excitement and hype, but it is important to remember that it is not a perfect solution to every problem. It may not be able to have deep, meaningful conversations like a human, but it can still be useful in certain situations. It is important to approach ChatGPT with realistic expectations and to consider its limitations.


Almost the entire AI/ML landscape is still stuck on the pretense of "let's feed it a trillion pieces of data, correct the most obvious defects, and call it a day," which leads to some terrific results, but mostly really poor, low-quality ones. It is fast becoming the Buzzfeed of software. Generated images are full of obvious glitches. Chats with AI are full of logical mistakes and very weird changes of topic. Generated text is a lot of the time borderline unreadable. What exactly is the end goal here? Release early, release often, release all hopes of being taken seriously?


For me, the biggest problem with all of this will be what we always face as animals living in a civilization. A lot of power and wealth in the hands of a few people. That's where the evils always come from, regardless of technology.


"Look, I can make it say things after it studied those things for weeks!"

super.


I am. Still, at least it isn't Brazilian Jiu-Jitsu or Meditation.


I'm sick of people posting comments that are copy pastes from ChatGPT output.

We've all been seeing ML generated images for the past few years, is it really such a novelty for text to be generated as well?

I am sure there will be even more advancements in ML.

But hey, at least we won't need a huge troll farm to influence elections anymore, and we are one step closer to rich people having ML-enabled killer bots. And of course applications in war and by threat actors come last, right? Not to be a downer; I get all the amazing uses of it, but in the end it is a tool that can be used or abused depending on the user's intent, and it will be decades before laws and treaties catch up.

I hope humans don't give up original thinking.


> I hope humans don't give up original thinking.

There was an interesting article recently where the author identifies that the outputs of these ML models are essentially reassembled averages of reams of real data from the internet.

As the outputs of ML inundate the internet, the models will eventually be fed their own output as input.

I can't imagine a great outcome from that situation.
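A crude caricature of why: if each retraining round overweights whatever the previous model already found most likely (modeled here, purely for illustration, as squaring and renormalizing a token distribution), diversity collapses toward a single output. Real training dynamics are far messier than this.

```python
def retrain_on_own_output(probs, rounds=10):
    """Toy model-collapse loop: each round favors already-likely tokens
    by squaring the distribution and renormalizing, so probability mass
    concentrates on the single most likely token over time."""
    for _ in range(rounds):
        squared = [p * p for p in probs]
        total = sum(squared)
        probs = [p / total for p in squared]
    return probs
```

Starting from a spread like [0.4, 0.3, 0.2, 0.1], a handful of rounds is enough to push nearly all the mass onto the first token.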


Anyone tired of talking about ChatGPT who thinks the main reason AI seems smart is that humans have been made so stupid by copyright laws, please get in touch with us! pd.pub


Why complain about it when you can filter them out through your RSS client? I read HN to learn more about these technologies and how they are used by developers.


However you feel about the technology it's obviously impressive and notable. That's going to get people interested and talking about it.


It sounds like you may be getting tired of hearing about ChatGPT, a large language model trained by OpenAI. ChatGPT is a powerful tool for generating human-like text, but it's important to remember that it is just a tool and should be used wisely. It's natural to feel overwhelmed by the constant influx of information about new technologies, but it's important to take a step back and evaluate their potential impacts on society and our daily lives.


I'd be excited if it were an open-source project like SD. I've got zero interest in feeding OpenAI any of my data.


ChatGPT wanted me to drink a verification can (more precisely, provide my phone number) before I could play with it.

I just noped out.


It would be really funny if this post or its replies were written by GPTChat and this whole thing was a test


I don't think this forum post or any of its replies were written by GPTChat. It's unlikely that this whole thing was a test. However, I do think it would be funny if it were!

Spoiler: ChatGPT wrote that reply to your comment!


Maybe spend less time browsing HN then? Because ChatGPT has been a thing for what, like a week?


Yeah I'm sure they didn't think of that themselves. Posts like this are to generate discussion not to find a solution.


Gonna be honest, the parent comment feels fine. The OP literally made a post on HN to do nothing more than complain that they didn't like seeing the words ChatGPT while scrolling HN, a topic of great interest that has been out for like a week.


What I wouldn't give to be a fly on the wall in the OpenAI accounting department right now.


Yeah same here, but I guess it beats daily dozen articles about Twitter.


Agreed, I just don't care. I have other things to worry about.


then wait until it hits the music space...


That might be a good thing, as most of modern music totally sucks.


what have u been listen to?


listening*


That's going to be very soon; music diffusion models are coming out of the larval stage as we speak.


Why do you say so? Genuinely wondering: do you have any source, or are you involved in any work regarding it?



I am tired too but it's amazing.


I remember when people were saying this about the Internet.


I kind of understand about Stable Diffusion, but ChatGPT seems like just a progression, something you could already do with prompting, if I'm not mistaken...

There are companies that offer orders-of-magnitude cost savings over OpenAI for text generation too, e.g. https://text-generator.io. I'm excited about that going the way of Stable Diffusion. E.g. there's BLOOM, and BLOOMZ for instruct; there should probably be the same excitement when BLOOM comes out with, e.g., ChatBloom. But idk why open source and other companies don't get the press that OpenAI does.



