That is the feature that presents your drive as a mounted file system, streaming files as you need them.
It gives me the ease of having access to a huge number of files stored in my gdrive without having to worry about the space they take up locally, or about moving files up and down.
Actually, what solutions to that might already exist? I don't really use the web UI of gdrive so much as use it as a cloud disk drive.
They are intentionally making something like Bloomberg TV, with a very specific tech news audience, borrowing some of the twitch streamer playbook (growing via clipping) but with the look and feel of cable news shows.
They mention Squawk Box on CNBC many times in the interview as competition, and say they have no problem filling ad inventory for their 3+ hours of programming a day.
FYI: this is not Asimov's Science Fiction, the pulp sci-fi magazine found alongside Analog Science Fiction and Fact at convenience stores near me, but something else.
AGI’s 'general' is the wrong word, I think. Humans aren’t general, we’re jagged: strong in some areas, weak in others, and already surpassed in many domains.
LLMs are way past us at languages, for instance. Calculators passed us at calculating, and so on.
> If LLMs cannot learn to beat not-that-difficult of games better than young teens, they are not intelligent.
I agree, with unresolved questions. Does it count if the LLM writes code which trains a neural network to play the game, and that neural network plays the game better than people do? Does that only count if the LLM tries that solution without a human prompting it to do so?
I disagree that LLMs cannot solve "unsolved problems." This is already happening, and at a fundamental level in mathematics and medicine (the fields most demanding when it comes to quality).
The idea that we haven't taught LLMs to come up with new answers... That doesn't even sound plausible. Just crank up the temperature, and an LLM will throw out so many ideas you'll exhaust yourself trying to sort through them.
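The "crank up the temperature" knob is just softmax temperature at sampling time. Here is a minimal pure-Python sketch (toy logits; `sample_with_temperature` is a hypothetical helper for illustration, not any particular library's API): low temperature concentrates the distribution on the top-scoring token, high temperature flattens it so unlikely tokens — "wilder ideas" — get picked far more often.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits via softmax with temperature.

    Low temperature -> almost always the top-scoring index;
    high temperature -> near-uniform, so unlikely indices appear often.
    """
    rng = rng or random
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(logits) - 1                 # guard against float rounding
```

The trade-off is exactly the one described above: raising the temperature buys diversity at the cost of coherence, which is why the filtering step matters more than the generation step.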
So what haven't we taught LLMs?
- Have we not taught them to "filter"? We just haven't equipped them with experience and intuition, because we only feed them either "absolute fakes" or "verified facts." We don't feed them the actual path of problem-solving and research; those datasets simply don't exist.
- Have we not taught them to "double-check"? They are already excellent at verifying the credibility of our work.
- Have we not taught them to "defend" their ideas? They can lay out ironclad logic and spot potentially "flaky" logic better than any human.
- Have we not taught them to "publish" and "present to the scientific community"? It's just that the previous steps aren't fully polished yet.
And if you look at the question of "creating completely new ideas" from this angle and in this level of detail... To me personally, it doesn't seem at all like LLMs are incapable of this kind of work.
We simply haven't taught them how to do it yet, purely because we don't have a sufficient volume of the right training materials.
Compared to AI, our brains are orders of magnitude more energy efficient.
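As a rough back-of-envelope on that claim (my figures, not from this thread: a resting human brain draws on the order of 20 W, a single H100-class datacenter GPU has a TDP around 700 W, and a typical node packs 8 of them; power draw is only a proxy, since work-per-joule is the real question):

```python
# Rough power-draw comparison; all figures are approximate public numbers.
BRAIN_WATTS = 20        # resting human brain, common estimate
GPU_WATTS = 700         # one H100-class GPU at full TDP
GPUS_PER_NODE = 8       # typical training/inference node

node_watts = GPU_WATTS * GPUS_PER_NODE
ratio = node_watts / BRAIN_WATTS
print(f"one 8-GPU node draws about {ratio:.0f}x a human brain")
```

And that is before counting cooling, networking, and the fact that a single query may touch many such nodes.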
Frontier science doesn’t necessarily mean it’s meaningful; there are a bunch of problems that are tedious to solve with existing patterns. Really tedious. So a human prompting an AI can get them solved, but at the same time that solution might not matter at all, ever.
I think the real question should be how much does it help when it matters, moneywise.
Like it can create an app, but can it create an app that makes money and somebody cares about?
Like the number of times a human has to intervene to get an app that makes just 10k MRR is probably in the thousands, so how are we really close to AGI?
I’ve tried and promoted the Ralph Loop, but I learnt that the loop just keeps overcomplicating stuff; then you try to simplify the overcomplication, and it simplifies the wrong things and enforces the wrong things, and in the end you cannot move until a human goes in and untangles it properly.
So your definition of intelligence would be exactly equal to a human, or some subset of them you choose? Could a dog solve ARC-AGI? Probably not, but I would not say dogs lack intelligence. Same with a fruit fly. What if the calculator is powered by actual living neurons? I think you need to pin down where you think the difference between an organic machine and intelligence lies before making blanket statements.
A modern LLM in a loop with a harness for memory and behavior modification in a body would probably fool me.
"a harness for memory": so it still requires external tools to work well. The whole point of this benchmark is to validate that these systems can solve problems without any sort of outside help.
What are you suggesting, that we rename it? To me the fundamental question is this:
Do we still have tasks that humans can do better than AIs?
I like the question. I think another good test is "make money". There are humans that can generate money from their laptop. I don’t think AI will be net positive.
I’ve tried to create a Polymarket trading bot with Opus 4.6. The ideas were full of logical fallacies and many many mistakes.
But I’m also not sure how they would compare against an average human with no statistics background.
I think it’s really about establishing whether by AGI we mean better than the average human or better than the best human.
I don't have a good alternative sadly. Human Equivalent Intelligence? ChatGPT suggests "Systems that increasingly Pareto-dominate human intelligence across domains". Not so catchy.
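That "Pareto-dominate" phrasing has a crisp meaning worth spelling out: at least as good in every domain, and strictly better in at least one. A toy sketch (the domain names and scores below are made up purely for illustration):

```python
def pareto_dominates(a, b):
    """True if score vector `a` is >= `b` in every domain and
    strictly greater in at least one (classic Pareto dominance)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Toy domain scores: [arithmetic, translation, spatial reasoning]
ai    = [99, 90, 40]
human = [60, 70, 80]

pareto_dominates(ai, human)  # False: humans still win on spatial reasoning
```

Which captures the "jagged" point from earlier in the thread: an AI can be far ahead on average and still not Pareto-dominate, as long as one domain remains where humans win.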
The "things that currently make money" definition is interesting. Because they are exactly the things automation can't currently do: if they could be automated, the price would tend to 0 and nobody could make money at them.
I’d actually focus on something else entirely here.
Let's be honest: we are giving LLMs and humans the exact same tasks, but are we putting them on an equal playing field? Specifically, do they have access to the same resources and behavioral strategies?
- LLMs don't have spatial reasoning.
- LLMs don't have a lifetime of video game experience starting from childhood.
- LLMs don't have working memory or the ability to actually "memorize" key parameters on the fly.
- LLMs don't have an internal "world model" (one that actively adapts to real-world context and the actual process of playing a game).
... I could go on, but I've outlined the core requirements for beating these tests above.
So, are we putting LLMs and humans in the same position? My answer is "no." We give them the same tasks, but their approach to solving them—let alone their available resources—is fundamentally different. Even Einstein wouldn't necessarily pass these tests on the first try. He’d first have to figure out how to use a keyboard, and then frantically start "building up new experience."
P.S. To quickly address the idea that LLMs and calculators are just "useful tools" that will never become AGI—I have some bad news there too. We differ from calculators architecturally; we run on entirely different "processors." But with LLMs, we are architecturally built the same way: a neural network that processes information and makes decisions. This means our only real advantage over them is our baseline configuration and the list of "tools" connected to our neural network (senses, motor functions, etc.). To me, this means LLMs don't have any fundamental "architectural" roadblocks. We just have a head start, but their speed of evolution is significantly faster.
This list of Do and Don'ts now reads like a bad Claude.md file to me.
Don't insinuate that someone else must have broken that. It was you.
Do run the linter
Don't commit throw-away code
Do write a test case
Don't write a comment describing every single function
Seriously, run the linter. And fix the issues.
It is your fault.
I thought it was just a cluster in the Big 5: very low agreeableness, low conscientiousness, and extremely low neuroticism.
People high in neuroticism struggle to learn from their actions; conscientiousness is thinking things through, I think. All of this per a book on Big 5 personality tests.