According to the Ofcom regulation checker [1] (linked to by The Register article), the Online Safety Act does not apply to this content.
Here's the most pertinent section (emphasis mine):
> Your online service will be exempt if... Users can only interact with content generated by your business/the provider of the online service. Such interactions include: comments, likes/dislikes, ratings/reviews of your content including using emojis or symbols. For example, this exemption would cover online services where the only content users can upload or share is comments on media articles you have published...
is this legal advice you are offering, as someone practicing law in the uk? because you are all over this thread stating your opinion very confidently.
(conveniently, there is no risk to yourself if you happen to be wrong or misinformed.)
No, I'm not offering legal advice, and neither am I stating an opinion. I'm simply quoting Ofcom, the regulatory body responsible for overseeing this law.
A valid point, and maybe I should have phrased it differently. I've deleted the comment which used the word "misinformed", so as not to cause any confusion.
My point is simply that the Ofcom quote clearly states that user comments on an article are not subject to the Online Safety Act. I assume this is a fact, as it's from the horse's mouth.
Some people appear to be basing their opinions on the assumption that the OSA does apply to such comments (hence my use of the offending word).
>Please note: The outcome of this checker is indicative only and does not constitute legal advice. It is for you to assess your services and/or seek independent specialist advice to determine whether your service (or the relevant parts of it) are subject to the regulations and understand how to comply with the relevant duties under the Act.
I mean even the site itself says it really shouldn't be used for legal advice...
On top of that, none of this matters until the law is settled by case law. Most often it's the first judge, and the appeals that follow, that define how the law is actually applied. Everything before that is bluster and potential risk.
So the story is... a publication that opposes the party currently in power, quoting a few people from the side that's presently out of power, saying that their being out of power is really bad, and we may never recover?
How is this different from the whining we get when the roles are reversed?
I realize you folks hate each other, but it would be nice if either of you could talk about something without turning it into a rant about how great, noble and good your side is and how awful the other side is.
To someone neutral (yeah, humor me), the Trump administration has done far more to demolish the reputation of the US than any other administration in my lifetime (OK, maybe Nixon - I don't remember all that much about him firsthand).
But I would also say that Biden, while not as bad as Trump, was worse than anybody since Nixon.
Which of Biden's policies and actions did you find worse than any since Nixon? And where do you rank the Iraq debacle that Bush started? How about selling arms to Iran to fund the Contras in Nicaragua?
Remember what we're talking about. It's not about their policies per se, it's about what they do to the US's international reputation.
So what did Biden do? The botched withdrawal from Afghanistan was the biggest thing. But his own frailty didn't help (speech fumbling and falling on stairs). Yeah, I know, his personal frailty shouldn't affect the US's reputation. But I think it did.
I mean, yes, the fact that we were leaving at all is due to Trump. (Either credit or blame, depending on whether you think we should have stayed there.) But the absolute debacle of how we left is on Biden. And it's that debacle that tarnished the reputation of the US.
There isn't enough training data though, is there? The "secret sauce" of LLMs is the vast amount of training data available + the compute to process it all.
This is essentially a distillation of the bigger model; you'd wind up surfacing a lot of artifacts from the host model and amplifying them, the same way repeated photocopying compounds errors.
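For concreteness, here's a minimal sketch of what that distillation objective typically looks like (assuming PyTorch; student_logits and teacher_logits are hypothetical placeholders for the two models' outputs):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both output distributions with a temperature, then
        # push the student's distribution toward the teacher's via KL.
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        # No ground-truth labels anywhere in this loss: whatever the
        # teacher gets wrong, the student is trained to reproduce.
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * (t * t)

Since the teacher's output distribution is the only training signal here, each generation inherits the previous one's errors, which is exactly the photocopy effect.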
> Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.
We had the same problem in the early days of calculators. Using a slide rule, you had to track the order of magnitude in your head; this habit let you spot a large class of errors (things that weren't even close to correct).
When calculators came on the scene, people who had never used a slide rule would confidently accept answers that were wildly incorrect. Example: a mole of ideal gas at STP occupies 22.4 liters, so 100 liters is 100/22.4, about 4.46 moles. Typo 22.4 as 2204 and the calculator happily reports 0.0454, off by roughly two orders of magnitude. Easy to spot if you know roughly what the answer should look like, but easy to miss if you don't.
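The slide-rule habit is easy to demonstrate; a toy sketch in Python (the 100 L volume is my assumption, reverse-engineered from the 4.46/0.0454 figures above, and the factor-of-10 tolerance is arbitrary):

    import math

    volume_l = 100.0     # liters of gas at STP (assumed)
    molar_volume = 22.4  # L/mol, ideal gas at STP
    typo_value = 2204    # the 22.4 -> 2204 keying error

    # Slide-rule habit: estimate the order of magnitude first...
    estimate = 10 ** round(math.log10(volume_l / molar_volume))

    # ...then distrust any calculator result that's nowhere near it.
    for divisor in (molar_volume, typo_value):
        moles = volume_l / divisor
        ok = 0.1 <= moles / estimate <= 10
        print(f"{volume_l} L / {divisor} L/mol = {moles:.3g} mol"
              + ("" if ok else "  <- not even close"))

Running it prints 4.46 mol as plausible and flags 0.0454 mol, which is the check the slide-rule generation did in their heads.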
We do know. There have always been ways that people could avoid the painful process of learning, and...they don't learn.
Here's a competing thought experiment:
Jorge's Gym has a top-notch bodybuilding program, which includes an extensive series of exercises that would-be bodybuilders need to do over multiple years to complete the program. You enroll, and cleverly use a block-and-tackle system to complete all the exercises in weeks instead of years.
Playing devil's advocate here: in theory, you could claim that setting up harnesses, targets, verification and incentives for different tasks is itself the learning you're doing. A fair argument can be made that we're just moving the abstraction up a layer: the learning is then not in the specifics of the field knowledge, but in knowing the hacks, monkey patches, incentives and goals to set for the models.
There are flowers that look & smell like female wasps well enough to fool male wasps into "mating" with them. But they don't fly off and lay wasp eggs afterwards.
But there is a distinction we can make between flowers and wasps. If there is no distinction we can make between Schwartz and non-Schwartz, then we are susceptible to the sample problem with or without AI. And if there is a distinction then we can use that distinction to test Bob, and make him learn from his test failures.
But the whole point is that there is a significant difference between Schwartz and non-Schwartz, one that only turns up after they start working for real, producing new work rather than rehashing established material, and that takes years to detect. By that time, Bob's forty.
It isn't a "sample problem" it's a process problem. By perpetually raising the stakes and focusing on metrics (e.g. grades, number of publications for students, graduation rates for schools) we've created and fallen into a Poe's law trap. Adding a new metric isn't likely to help.
What might help? Making the metrics harder to game (e.g. oral exams, early and often), making them more discerning (grade deflation), moving the wrong-track consequences earlier (start holding people back in grade school, make it easier to actually fail out of high school, make getting into college harder, etc.), and changing the cash-cow funding models to remove the perverse incentives.
How do you know that they're stars? I believe they probably are stars as well (by visual comparison with a star chart, suitably rotated), but I've found no source for either claim.
I did find multiple sources, including TFA, for the brightest being Venus.
I’m talking about the grainy noise over all the black parts (actually over the Earth's disk as well), including beyond the window edge. The window edge itself shows up as a denser and brighter stripe of those "stars".
Sped through that, couldn't stomach the whole thing. Is there more to it than "argument by sneering dismissal"? (Basically, so far as I can tell, her point seems to be "this was intended as a joke to see if you're stupid, so if you believe it, you are, neener-neener!")
https://www.theregister.com/2025/02/06/uk_online_safety_act_...