This is the same in any field: 99% of human content is terrible, including this comment (and your comment).
The thing is, we need the other 1%. When it is small in absolute numbers you can still kind of find it, but imagine having to look through 1 billion books to find one good book - at that scale it is as good as the 1% disappearing.
Besides finding that 1% of good content, if AI is increasingly used to generate the 99%, then the people who otherwise would have written the 1% might not even bother pursuing the field any more. I am concerned that some subset of new good content is at risk of not even being made any more.
I noticed the /s, but for real, there's no point doing the find step if you have access to an AI. You'll just ask it to generate the story you want.
It is quite astounding how blatantly you missed the point of the entire article. Let me post a snippet in case you skipped over it by accident:
> So books, from Blood Meridian to Hop on Pop, are in part a dialogue between writer and reader. But the machines generating these stories cannot participate in a dialogue. They’re Mechanical Turks, guessing what word comes next based on a mix of complex math and the labor of Kenyan contractors paid less than $2 an hour to make sure the responses aren’t too racist.
You cannot simply ask an AI to generate a story that carries any sort of meaning.
> You cannot simply ask an AI to generate a story that carries any sort of meaning.
First, AI stories don't have to be one-prompt wonders where the model does everything. It can be an iterative, or curated, process guided by a person. Second, AI is perfectly capable of putting together words that carry meaning. What magic sauce does your collection of words have that an AI's wouldn't? They could quite possibly be the exact same words.
I'm a bit more hopeful about auto-generated (Stable Diffusion) illustrations lowering the barrier for a good human writer, but I suppose I care more about the quality of the content than the originality of the images.
Wait until children's books become the next battleground for misinformation/agitprop. You could sneak stuff into images too, I suppose.
Haha, I have most of the negative symptoms and never had any drugs whatsoever.
> Continuous anxiety about literally nothing. I felt like a student who didn’t do their homework, and was dreading their teacher calling on them - for 24 hours a day.
This one is the most fun :)
Kind of like the end of the world is coming and you are supposed to do something, but you don't know what - you just know it's the most important thing that has to be done.
Upvoted because I see your point, although I think you’re being too harsh.
I don’t have kids, but surely you see the design flaw at play here:
I’d want to give my future kid a phone so I can connect with them at will, but at the same time a phone allows others to connect with them (feeding them addictive content to serve ads, for example).
An iPhone has pretty good parental controls, but when it’s time for the child to connect with their peers, the same place they connect with peers is the same place designed to exploit them, with little to no alternative.
It’s like we’ve transported our 11-year-olds into digital pool halls where parental supervision is banned and the pool hall owner keeps serving them drinks.
I won’t be giving my future child a phone before 13, but I have no illusions that they’ll be a normal peer after not having the same access to such things.
It’s like the McDonald’s play pens all over again, but worse!
> An iPhone has pretty good parental controls, but when it’s time for the child to connect with their peers, the same place they connect with peers is the same place designed to exploit them, with little to no alternative.
Try to use it and you will see how bad it actually is: reinstalling an app removes its time limit, any kid can get around it almost trivially, you have to remember to block the web versions of the apps, there is the "+1 minute" button, there are issues with WhatsApp links, etc.
My daughter (12) also just asks for 15 extra minutes of YouTube, then looks at the fingerprint smudges on the phone to guess the Screen Time code, so I have to wipe my fingerprints like I am in a James Bond movie (I changed the Screen Time code about 3 times until I figured out what was happening).
I am trying to teach her self-control, and also to monitor her feed like a hawk and be super critical of what she watches.
Also, once a week we go over all the feeds and try to teach the algorithms to show better things (like/follow some science/code YouTubers, block toxic ones, watch a few videos, etc.).
I also pay for everything I can so she doesn't see ads (including running Pi-hole and ad blockers).
You assume they just reproduce the training set, but when they get big enough they start to "understand" things: when the input is really big, the model can never "guess" the next word in the batch by memorization alone, so it has to "learn" concepts.
No, but it means you should probably have specific evidence of malice before assuming it. It's similar to Occam's Razor: just because "the simplest explanation is usually the best" does not mean that the simplest explanation is always right. But it's usually a good idea to start with the simpler explanation, because a simpler theory is easier to interrogate.
Adam McKay and Charles Randolph wrote a script. Therefore, Jared is wrong about what he is capable of doing and what he is capable of learning. However, Jared is based on a real person - Gregg Lippman. Despite this, after a brief search I found no public information about this brother's arrest. So either Gregg didn't say this, or he said it and it wasn't true, because people in his position would have been told the difference during the course of their lives. Or maybe I didn't search hard enough - but I don't think I should have to search harder, so if you really feel this is strong support, please provide evidence that it is.
I'm more interested in building explanations on reality than supporting my positions with comedic fiction. I'll share some small part of my explanation. I apologize for my lack of brevity.
In game playing we can estimate the time and space required to compute and store perfect play in various games. At around the complexity of chess we reach the point where storing perfect play is physically impossible, because the combinatorial explosion generates more states than there are atoms in our universe to work with. This forces approximation, which forces error, which means that for almost every task in this universe error is not merely present but absolutely demanded. In multi-agent settings these issues only get worse: not only is the state space enormous due to the combinatorics, but the optimal policy is also a function of the other agents' policies. The connecting point is that since our problems are a superset of things we know to be physically unrealizable, it follows that error is inevitable, and therefore observing stupidity relative to theoretical ideals is inevitable.
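To make the storage point above concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are rough, commonly cited estimates rather than exact values (Shannon's ~10^120 estimate of chess game-tree complexity and ~10^80 atoms in the observable universe); only the size of the gap matters for the argument.

    # Rough, commonly cited estimates - not exact figures:
    #   chess game-tree complexity (Shannon's estimate): ~10^120
    #   atoms in the observable universe:                ~10^80
    game_tree_states = 10 ** 120   # lines of play to enumerate for "perfect play"
    atoms_in_universe = 10 ** 80   # optimistic cap on physical storage locations

    # Even storing one state per atom leaves us short by ~40 orders of magnitude.
    shortfall = game_tree_states // atoms_in_universe
    print(f"states needed per atom: ~10^{len(str(shortfall)) - 1}")

Even if the state estimate were off by twenty orders of magnitude in our favor, the conclusion (approximation, and therefore error, is forced) would not change.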
Founder culture is completely at odds with sophisticated cynicism.
- Paul Graham