If I had a puzzle I really needed solved, then I would not ask a rando on the street, I would ask someone I know is really good at puzzles.
My point is: For AGI to be useful, it really should be able to perform at the top 10% or better level for as many professions as possible (ideally all of them).
An AI that can only perform at the average human level is useless unless it can be trained for the job like humans can.
> An AI that can only perform at the average human level is useless unless it can be trained for the job like humans can.
Yes, if you want skilled labour. But that's not at all what ARC-AGI attempts to test for: it's testing for general intelligence as possessed by anyone without a mental incapacity.
It seems they don't test for that, since they use the second-best human solution as a baseline.
And that's the right way to go. When computers were about to become superhuman at chess, few people cared that they could beat random people for many years prior to that. They cared when Kasparov was dethroned.
Remember, the point here is marketing as well as science. And the results speak for themselves. After all, you remember Deep Blue, and not the many runners-up that tried. The only reason you remember is because it beat Kasparov.
> The only reason you remember is because it beat Kasparov
There is an additional fascinating aspect to these matches, in that Kasparov obviously knew he was facing a computer, and decided to play a number of sub-optimal openings because he hoped they might confound the computer's opening book.
It's not at all clear Deep Blue would have eked out the rematch victory had Kasparov respected it as an opponent, in the way he did various human grandmasters at the time.
This is supposed to test for AGI, not ASI. ARC-AGI (later labelled "1") was supposed to detect AGI with a test that is easy for humans, not top humans.
> Yes, if you want skilled labour. But that's not at all what ARC-AGI attempts to test for: it's testing for general intelligence as possessed by anyone without a mental incapacity.
Humans without a clinically recognized mental disability are generally capable of some kind of skilled labor. The "general" part of intelligence is independent of, but sufficient for, any such special application.
DaVinci Resolve is the only commercial NLE with any kind of Vulkan support, and even that is experimental.
ProRes decodes faster than realtime single-threaded on a decade-old CPU, too.
It doesn't make sense. It's much different from, say, a video game, where a texture is loaded into VRAM once and then, yes, all the work is done on the GPU. A video has CPU IO on every frame; you are still doing a ton of CPU work. I don't know why people are talking about power efficiency: in a pro editing context, your CPU will be very, very busy with these IO threads, including and especially in ffmpeg, even with hardware encoding/decoding. It doesn't look anything like a video game workload, which is what this stack is designed for.
The 6K ProRes streams that consumer cameras record are still too heavy for modern CPUs to decode in realtime, not to mention the 12K ProRes that professional cameras output.
How do you figure? Have you tried? The CPU is required for IO. Decoding ProRes is pretty simple (that's why you can do it in a shader in the first place), and the CPU will already be touching every byte when you're using Vulkan.
Yes. I get 300fps decoding 8K ProRes on a 4090 and barely 50fps on a Zen 3 with all 16 cores running.
The CPU doesn't touch anything, actually. We map the packet memory and let the GPU read it out directly via DMA. The data may be in a network device, or an SSD, and the GPU will still read it out directly. It's neat.
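To put those two figures in perspective, here's a back-of-envelope sketch of the realtime margin and the compressed IO each decode rate implies. The ~1.9 Gbit/s bitrate for 8K ProRes at 24fps is my own assumption for illustration; real bitrates vary a lot with flavor and content.

```python
# Back-of-envelope: how far past realtime are the quoted decode rates,
# and how much compressed data must the IO path move at each rate?
# ASSUMPTION: ~1.9 Gbit/s stream bitrate for 8K ProRes at 24fps.

PLAYBACK_FPS = 24
BITRATE_GBPS = 1.9  # assumed compressed bitrate, Gbit/s

def realtime_margin(decode_fps, playback_fps=PLAYBACK_FPS):
    """How many times faster than realtime a given decode rate is."""
    return decode_fps / playback_fps

def io_gbytes_per_sec(decode_fps, playback_fps=PLAYBACK_FPS,
                      bitrate_gbps=BITRATE_GBPS):
    """Compressed GB/s the IO path must sustain at that decode rate."""
    bytes_per_frame = bitrate_gbps * 1e9 / 8 / playback_fps
    return decode_fps * bytes_per_frame / 1e9

for name, fps in [("GPU (4090)", 300), ("CPU (16-core Zen 3)", 50)]:
    print(f"{name}: {realtime_margin(fps):.1f}x realtime, "
          f"{io_gbytes_per_sec(fps):.2f} GB/s compressed IO")
```

Even at 300fps the compressed stream is only a few GB/s, well within what DMA from an SSD or NIC can feed without the CPU copying anything.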
50fps is greater than 24fps, which makes it faster than realtime, no?
> We map the packet memory and let the GPU read it out directly via DMA
packets from where, exactly?
Another POV: all the complexity of what you are doing, all the salaries and whatever it takes to do this ProRes thing, is competing with purchasing a $1,000 Mac and plugging a cable in.
If your goal is to use a thing in the Apple ecosystem, the solution is the Apple ecosystem. It isn't to create a shader for Vulkan.
I didn't say that streaming 8K ProRes doesn't make sense, even if it were 60fps. I am saying that it doesn't make sense to do product development on figuring out how to decode this stuff in yet more ways.
How do you figure? Power consumption will be higher. Decompressing on the GPU with shaders lights up the highest power state and every engine (copy, compute, and graphics for your NLE). The CPU will still be running at full speed for IO and all the other NLE tasks. Using the GPU might be more efficient, but decoding faster than realtime doesn't matter, which is my whole point.
If you manage enough diverse servers, then patching will break something critical fairly frequently. Back when I was a sysadmin, Windows updates would break some server every 2 months, and Red Hat every 6 months.
Being able to just reboot the server back into a working state, and then fix it at a later time would have been nice.
It's also a big deal for desktops, especially when they're operated by people who ain't experts at troubleshooting software issues. Aeon's my go-to when setting up computers for non-technical folks specifically because I can have it auto-update fearlessly, knowing that the absolute worst case scenario is having to talk someone through booting into a known-good snapshot.
I started out running FreeBSD on my home servers, then moved to Alpine Linux because all the server software I wanted to run was provided as Docker containers with docker compose examples, so it was just easier. Moving the ZFS pools over to Linux was effortless.
And now I am looking at moving over to k3s (still on Alpine) because everyone is providing Helm charts, so it seems easier.
I really like FreeBSD, but it's just easier to go with the flow.
Agreed. Putting the burden on parents is quite something:
1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but creeps abuse that all the time. Even if you trust them not to do anything "wrong", it's a lot to put on their shoulders.
2. If you want to put restrictions in place, the tools out there are pretty wonky, even if you're an expert. You can set up a child-protection DNS, but most home routers don't make it easy (or even possible) to set a different DNS server, and that's not particularly hard to circumvent anyway. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I have a problem on my plate where I need to migrate one of my kids' accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts the button to add the device seems to be just missing. Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup and configuration you have to do for a reasonable setup is just nuts.
3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.
Some countries are banning social media; sure, that's one thing. But there are a _lot_ of weird places on the internet, and kids will find something else. I for one would appreciate dedicated devices or modes for kids under 18. That would solve all this stuff in a heartbeat.
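On the DNS point in (2): when the router (or a small box on the LAN) happens to run dnsmasq, forwarding everything to a filtering resolver is only a few lines. A sketch, assuming Cloudflare's "for Families" endpoints (1.1.1.3 / 1.0.0.3), which filter malware and adult content:

```
# /etc/dnsmasq.conf
no-resolv        # ignore upstream servers from /etc/resolv.conf
server=1.1.1.3   # Cloudflare for Families (primary)
server=1.0.0.3   # Cloudflare for Families (secondary)
```

Of course, as noted above, this is easy to circumvent: a kid can set a different DNS server per-device unless outbound port 53 is also blocked at the router.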
After struggling with this problem for a while, we started using Qustodio. It's not perfect by any means, but it's the most broadly effective and usable tool for parental control I've found. Loads better than the confusing iOS native screen time tools.