It’s easy to be against it now because so much content that people recognise as AI is also just bad. If professionals can start to use it to produce content that is actually good, I think opinions will shift.
There are a lot of AI videos that you can very easily tell are AI, even if they are done well. For example, I just saw a Higgsfield video of a kangaroo fighting in the UFC. You can tell it is AI, mainly because it would be an insane amount of work to create any other way. But I think it is getting close to good enough that a lot of people, even knowing it is AI, won't care. Once people are creating genuinely interesting and engaging media with AI, everyone other than the most ardent anti-AI people is going to be fine with it.
I think we will look back at AI "slop" as a temporary point in time where people were creating bad content, and people were defending it as good even when it was not. Instead, as you say, AI video will fall into the background as a tool creators use, just like cameras or CGI. But in my opinion it won't be that people can't tell that AI was used at all. Rather, it will be that they won't care if there is still a creative vision behind it.
At least, that is what I hope compared to the outcome where there are no creators and people just watch Sora videos tailored to them all day.
We already have digital IDs in Australia, and it seems like a natural fit for this. The digital ID doesn't need to share much information with social media companies; it just needs to confirm your age. And then we don't need new third parties holding our personal information.
Also yes, voting is mandatory in Australia. You get a small fine if you don't vote.
It's a very good system. $20 is the right number to get you off the couch, but not so much as to cripple you. There are exceptions if you have a valid reason for not voting. The maximum fine is ~$180 so you can't simply ignore the Elections Commission and hope it goes away.
> These kind of tasks ought to be have been automated a long time ago.
It’s much easier to write business logic in code. The entire value of CRUD apps is in their business logic. Therefore, it makes sense to write CRUD apps in code and not some app builder.
And coding assistants can finally help with writing that business logic, in a way that frameworks cannot.
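To make the point concrete, here is an invented example of the kind of business logic that is easy to write and test in code, but awkward to express in a visual app builder. All names and rules here are hypothetical, purely for illustration:

```python
# Hypothetical CRUD business logic: a discount rule with interacting
# conditions and a cap. Every rule here is invented for illustration.
from dataclasses import dataclass


@dataclass
class Order:
    subtotal: float
    customer_years: int  # how long they have been a customer
    item_count: int


def discount(order: Order) -> float:
    """Return the discount rate for an order.

    Invented rules:
    - loyal customers (3+ years) get 10%
    - bulk orders (20+ items) get an extra 5%
    - the total discount is capped at 15%
    """
    rate = 0.0
    if order.customer_years >= 3:
        rate += 0.10
    if order.item_count >= 20:
        rate += 0.05
    return min(rate, 0.15)


print(discount(Order(subtotal=500.0, customer_years=4, item_count=25)))
```

Even this toy rule, with its stacking conditions and cap, is a few dropdowns too many for most app builders, but it is three `if` statements in code, and a coding assistant can write and unit-test it directly.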
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both cases, it felt like no big new unlocks were made. These releases aren't like OpenAI's o1, which introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, they just brought a new user interface and improved reliability. And yet they mark the biggest increases in my AI usage: each pushed the utility of AI for my work past a threshold, where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
2. Some people have become very tied to the memory ChatGPT has of them.
3. Inertia is powerful. They just have to stay close enough to competitors to retain people, even if they aren’t “winning” at a given point in time.
4. The harness for their models is also incredibly important. A big reason I continue to use Claude Code is that the tooling is so much better than Codex. Similarly, nothing comes close to ChatGPT when it comes to search (maybe other deep research offerings might, but they’re much slower).
These are all pretty powerful ways that ChatGPT gets new users and retains them beyond just having the best models.
All of my family members bar one use ChatGPT for search, or to come up with recipes, or other random stuff, and really like it. My girlfriend uses it to help her write stories. All of my friends use it for work. Many of these people are non-technical.
You don’t get to hundreds of millions of weekly active users with a product only technical people are interested in.
I think the key here is the “if X, then Y” syntax: it seems to be quite effective at piercing the “probably ignore this” system message by highlighting WHEN a given instruction is “highly relevant”.
The skilled AI users are the people that use it to help them learn and think problems through in more detail.
Unskilled AI users are people who use AI to do their thinking for them, rather than using it as a work partner. This is how people end up producing bad work because they fundamentally don’t understand the work themselves.
GenAI isn't a thinking machine, as much as it might pretend to be. It's a theatre kid that's really motivated to help you and memorized the Internet.
Work with them. Let them fill in your ideas with extra information, sure, but they have to be your ideas. And you're going to have to put some work into it: the "hallucinations" are just your intent incompletely specified.
They're going to give you the structure; that's high-probability fruit. It's the guts of it that have to be fully formed in the context before the generative phase can start. You can't just ask for a business plan and then get upset when the one it gives you is full of nonsense.
Ever heard the phrase "ask a silly question, get a silly answer"?
Opus 4.5 seems to think a lot less than other models, so it’s probably not as many tokens as you might think. This would be a disaster for models like GPT-5 high, but for Opus they can probably get away with it.
They also recently lowered the price for Opus 4.5, so it is only 1.67x the price of Sonnet, instead of 5x for Opus 4.
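A quick back-of-the-envelope check of those ratios, assuming per-million-token input rates of roughly $3 for Sonnet, $15 for Opus 4, and $5 for Opus 4.5 (verify against Anthropic's current pricing page before relying on these figures):

```python
# Assumed $/Mtok input prices; check Anthropic's pricing page for
# current numbers before relying on these.
sonnet = 3.00
opus_4 = 15.00
opus_4_5 = 5.00

print(f"Opus 4.5 vs Sonnet: {opus_4_5 / sonnet:.2f}x")  # 1.67x
print(f"Opus 4   vs Sonnet: {opus_4 / sonnet:.2f}x")    # 5.00x
```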