You can have a shared family room computer! It works really well. No screens in the bedroom is a great idea. iPhones with strict Screen Time settings are awesome when the kids are old enough to use a phone for communication but not old enough to handle a phone with games and the full Internet.
My 16 yr old just had his phone update and apply his old screen time settings from 4 or 5 years ago. Sorry kiddo, don’t remember the screen time password.
Now, why did they come back, and why weren't they working before? The restrictions were so full of holes that they never worked as anything more than a speed bump.
And since you have root certs on the devices, you can decrypt traffic, uniquely identify devices, and block internet access from your central management at any time, regardless of whether the phone is on your wifi, a friend's, or mobile data.
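As a rough illustration of the policy layer this implies (assuming devices route all traffic through a central proxy, e.g. via an always-on VPN, and trust a custom root CA), here is a minimal sketch. The device identifiers, class name, and blocklist logic are all hypothetical, not any real product's API:

```python
# Hypothetical sketch: the per-request decision a TLS-intercepting
# proxy (e.g. a mitmproxy-style addon) could make once devices are
# identified, say by client-cert fingerprint. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class FamilyFilter:
    # fingerprint (or other stable identity) -> device name
    devices: dict
    blocked: set = field(default_factory=set)

    def block(self, device: str) -> None:
        self.blocked.add(device)

    def unblock(self, device: str) -> None:
        self.blocked.discard(device)

    def allow_request(self, fingerprint: str) -> bool:
        """Return False if the identified device is currently blocked."""
        device = self.devices.get(fingerprint)
        if device is None:
            return False  # unknown device: deny by default
        return device not in self.blocked

f = FamilyFilter(devices={"ab:cd": "kids-phone", "12:34": "parent-laptop"})
f.block("kids-phone")
print(f.allow_request("ab:cd"))  # False: kids-phone is blocked
print(f.allow_request("12:34"))  # True
```

Because the decision lives on the proxy rather than the phone, it applies on any network the device uses, which is the point being made above.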
This makes me a little sad. There's an ideal built into the Internet, that it has no borders, that individuals around the world can connect directly. Blocking an entire geographic region because of a few bad actors kills that. I see why it's done, but it's unfortunate.
But the numbers don't lie. In my case, I locked down to a fairly small group of European countries and the server dropped from about 1500 bot scans per day to 0.
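The kind of country allow-list described here is usually done with a GeoIP lookup at the edge. A minimal sketch, assuming some IP-to-country resolver (the lookup below is a stand-in dict, not a real geolocation database, and the country list is just an example):

```python
# Illustrative country allow-list filter. In practice ip_to_country
# would be a GeoIP database lookup; here it's a toy dict.

ALLOWED_COUNTRIES = {"DE", "FR", "NL", "BE", "AT", "CH"}  # example list

def make_filter(ip_to_country):
    def is_allowed(ip: str) -> bool:
        country = ip_to_country(ip)  # e.g. a GeoIP lookup
        return country in ALLOWED_COUNTRIES
    return is_allowed

lookup = {"203.0.113.7": "DE", "198.51.100.9": "US"}.get
is_allowed = make_filter(lookup)
print(is_allowed("203.0.113.7"))   # True  (DE is on the allow list)
print(is_allowed("198.51.100.9"))  # False (US is not)
print(is_allowed("192.0.2.1"))     # False (unknown -> deny)
```

Note the default-deny for unresolvable addresses, which is what makes this so effective against scanners, and also what makes it so blunt.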
It's not because of a few bad actors, it's because of a hostile or incompetent government.
Every country has (at the very least) a few bad actors, it's a small handful of countries that actively protect their bad actors from any sort of accountability or identification.
"This is depressing. Profoundly depressing. I look at the statistics board for my reverse-proxy and I never see less than 96.7% of requests classified as bots at any given moment. The web is filled with crap, bots that pretend to be real people to flood you. All of that because I want to have my little corner of the internet where I put my silly little code for other people to see."
There are fundamental differences. Many people expect a positive gradient of quality from an AI overhaul of a project. For translating back and forth, it is obvious from the outset that the gradient of quality is negative, with each round losing a little more (the Chinese whispers game).
200 times?? No, I don't think anybody expected that to produce something good. It's just another attention-grabbing "I did a thing a ridiculous amount of times!" stunt.
You're assuming the author did this only for audience engagement (dozens, nay, scores of blog hits!), and not to gratify their own intellectual curiosity? Sometimes you just want to see what happens.
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
Now you woke me up, but what happened to the Opera 12 browser? (somewhere on the internet... :) "It's not enough that AI distorted reality for millions and spread its poison, eliminating millions of jobs; if content doesn't gain traction and people don't seek validation or stir up drama, the future of warfare is autocomplete": Please complete... junk! The scrawl
Mindless fun is not what Hacker News is for. There are a million other places on the internet for that. Hacker News is special because it has resisted devolving into yet another "post your memes and brain rot here" website for a good long time.
"Now, with LLM-generated content, it’s hard to even build mental models for what might go wrong, because there’s such a long tail of possible errors. An LLM-generated literature review might cite the right people but hallucinate the paper titles. Or the titles and venues might look right, but the authors are wrong."
This is insidious and if humans were doing it they would be fired and/or cancelled on the spot. Yet we continue to rave about how amazing LLMs are!
It's actually a complete reversal of the situation with self-driving car AI. Humans crash cars and hurt people all the time. AI cars are already much safer drivers than humans. Yet we all go nuts when a Waymo runs over a cat, while ignoring the fact that humans do that on a daily basis!
Something is really broken in our collective morals and reasoning.
> AI cars are already much safer drivers than humans.
I feel this statement should come with a hefty caveat.
"But look at this statistic," you might retort, but I feel the statistics people cite are weighted heavily in the autonomous services' favor.
The frontrunner in autonomous taxis only runs in very specific cities for very specific reasons.
I avoid using them in a feeble attempt to 'do my part', but I was recently talking to a friend and was surprised that they avoid these autonomous services because they drive what would be, to a human driver, very strange routes.
I wondered if these unconventional, often longer, routes were taken in order to stick to well-trodden and predictable paths.
"X deaths/injuries per mile" is a useless metric when the autonomous vehicles only drive in specific places and conditions.
To get the true statistic, you'd have to filter the human driver statistics to match the autonomous services' data: things like weather, cities, number and location of people in the vehicle, and even which streets.
These service providers could do this; they have the data, compute, and engineering for it. But they're disincentivized from doing so as long as everyone keeps parroting their marketing speak for them.
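The matching argument above can be made concrete with a toy calculation: compare crash rates only within the strata (city, weather) where the autonomous service actually operates. All numbers below are invented purely for illustration:

```python
# Toy illustration of condition-matched comparison.
# (city, weather) -> (crashes, miles); all figures are made up.

human = {
    ("SF", "clear"):     (40, 10_000_000),
    ("SF", "rain"):      (30,  2_000_000),
    ("Buffalo", "snow"): (90,  3_000_000),
}
autonomous = {
    ("SF", "clear"): (2, 1_000_000),  # the service only runs here
}

def matched_rate(stats, strata):
    """Crashes per million miles, counting only the given strata."""
    crashes = sum(stats[s][0] for s in strata if s in stats)
    miles = sum(stats[s][1] for s in strata if s in stats)
    return crashes * 1_000_000 / miles

strata = autonomous.keys()
print(matched_rate(human, strata))       # 4.0 (humans, matched conditions)
print(matched_rate(autonomous, strata))  # 2.0

# A naive citywide comparison would pool Buffalo snow miles into the
# human figure, inflating the human rate the AI is compared against.
```

The point is not the specific numbers but that the human baseline changes substantially depending on whether you restrict it to the conditions the service actually drives in.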
I don't know why that matters. The city selection and routing are part of the overall autonomous system. People get to where they need to be with fewer deaths and injuries, and that's what matters. I suppose you could normalize to "useful miles driven" to account for longer, safer routes, but even then the statistics are overwhelmingly clear that Waymo is at least an order of magnitude safer than human drivers, so a small tweak like that is barely going to matter.
Well, it would seem these autonomous driving service providers disagree with your claim that it is just a 'small tweak', considering they only operate under these specific conditions when it would be to their substantial benefit to operate everywhere, at all times.
You consider it "sane" to compare the citywide driving statistics of mid-winter Buffalo, New York, with mid-summer San Francisco, California, driving limited to only Market and Van Ness streets?
> AI cars are already much safer drivers than humans.
Nothing like that has been shown. We have a bunch of very "motivated reasoning" kinds of studies, and the best you can conclude from them is that some circumstances exist where AI cars are safer drivers. The common trick is to compare the overall human record with the AI car record in super-tailored circumstances.
They have the potential to be safer drivers one day, if they are produced by companies that are forced by regulation to care about safety.
I'm trying to understand where this kind of thinking comes from. I'm not trying to belittle you, I sincerely want to know: Are you aware that everyone writing software has the goal of releasing software so perfect it never needs an upgrade? Are you aware that we've all learned that that's impossible?
This was basically true until consoles started getting an online element. The up-front testing was more serious compared to the complexity of the games. There were still bugs, but there was no way to upgrade short of a recall.
I'm not saying that this model is profitable in the current environment, but it did exist in the real world at one point. That shows certain processes are compatible with useful products, just maybe not with leading-edge competitive products that need to make a profit today.
I love how LLMs have made everyone forget how hard it is to verify software correctness and how hard it is to maintain existing software. There is endless gushing about how quickly LLMs can write code. Whenever I point out that LLMs make a lot of mistakes, people just wave their hands and say software is easy to validate. The huge QA departments at all software shops would beg to disagree, along with the CVE database, the zero-day brokers, etc. But you know, whatever, they're just boomers, right?