They've all avoided loading up their LLMs with ads to this point. That is going to change dramatically over the next 2-3 years. All of them will be loaded with ads, and Google will partake as expected given their ad network & capabilities in that realm. They'll match GPT's ad roll-out.
Where do I find the paid option? I cannot find it on their product page.
There are only two options I can see: one "Available at no charge" and another "Coming soon - For organizations".
Can you upgrade in the IDE? It would be strange that Google has a performance problem for paid users while I do not experience any such issues at all with Claude and Codex.
- "summarize the discussions on hacker news of last week based on what I would find interesting".
- "Plan my summer vacation with my family, suggest different options"
- "Look at my household budget and find ways to be more frugal."
There are thousands of things I can think of when it comes to how an agentic OS would work better than the current screen-and-keyboard paradigm. All of these are things I could do with Claude or Codex now, and some of them I already do with these tools.
>What specifically does an agentic OS UX look like beyond giving claude access to local files and a browser?
Providing the structure of a unified framework: APIs, safeguards, routing to the appropriate model or pipeline, and controlled access to devices and data. The capability is already there. What’s missing is a sane permission system that operates at the level of intent. Having used OpenClaw, that’s IMO the missing piece. It’s a fun experience, but in its current state I would not trust it to autonomously run any meaningful part of my life.
UX-wise, chat is kind of a crutch. It’s slow and inherently limiting. I imagine something closer to a natural, ongoing conversation paired with an execution layer: some sort of approval or review dashboard where planned actions are ready for approval or returned for refinement before they happen. Probably with a conservative moderator agent in the loop that flags things based on preferences and hard-coded policies.
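To make the "permission system at the level of intent" idea concrete, here is a minimal sketch of how such a moderator might classify planned actions before they run. All names (`Intent`, `moderate`, the policy fields) are illustrative assumptions, not any real agent framework's API:

```typescript
// Hypothetical intent-level permission gate: each planned action is tagged
// with an intent, checked against user policy, and either auto-approved,
// queued for human review, or rejected outright.

type Intent = "read" | "purchase" | "communicate" | "delete";

interface PlannedAction {
  description: string;
  intent: Intent;
  estimatedCostUsd?: number; // only meaningful for purchases
}

type Verdict = "auto-approve" | "needs-review" | "reject";

interface Policy {
  autoApprove: Intent[]; // intents the user trusts unconditionally
  spendLimitUsd: number; // purchases above this are refused outright
}

function moderate(action: PlannedAction, policy: Policy): Verdict {
  // Hard-coded policy: destructive actions never run autonomously.
  if (action.intent === "delete") return "reject";
  if (action.intent === "purchase") {
    // Purchases always land on the review dashboard, within a budget cap.
    return (action.estimatedCostUsd ?? Infinity) <= policy.spendLimitUsd
      ? "needs-review"
      : "reject";
  }
  return policy.autoApprove.includes(action.intent)
    ? "auto-approve"
    : "needs-review";
}

const policy: Policy = { autoApprove: ["read"], spendLimitUsd: 50 };

console.log(moderate({ description: "summarize inbox", intent: "read" }, policy));
console.log(moderate({ description: "book hotel", intent: "purchase", estimatedCostUsd: 30 }, policy));
console.log(moderate({ description: "wipe old backups", intent: "delete" }, policy));
```

The point is that decisions key off *what the action means* (its intent and stakes), not which file or API it touches, which is the granularity current permission prompts operate at.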
Calling it an OS isn’t accurate, I agree. But that's how people will perceive it. Most people already think of the application layer on Android as "the OS," not the kernel or drivers. This will be the first-class interface on your device, so that’s what it gets called. It doesn’t mean browsers or dedicated applications go away.
Three years ago I would not have thought the IDE would stop being the application I spend most of my time in. Now it’s mostly a passive code viewer and Git browser.
Compare that to everyday workflows. Researching anything still feels incredibly antiquated. Buying a phone, planning a vacation, comparing options means opening dozens of tabs, copy-pasting specs or prices into spreadsheets, reading through fine print, dealing with low-quality or honestly untrustworthy reviews, checking distances manually on maps. It’s boring and tedious work.
Meanwhile, in a professional life, these systems already behave like a team of secretaries: always available, reasonably competent, and scalable. Not perfect, but easily good enough to offload a huge amount of cognitive overhead.
What I'm trying to say is that the long path is "get shit done". No work is completed by reading AI summaries of informative content. It's just productivity porn.
I think you misunderstood. It is not based on "feel", and the greater procedural leeway they give the party with the perceived or actual weaker argument is not for appearance's sake alone. A judge wants both parties to make their best case, and if one party fails to do so, the judge tries to help them along.
Most experts in that field do not have access to a quantum computer. For the longest time it was a very theoretical field.
Having access to a physical machine will not help you for 99% of the knowledge you can acquire in that field right now.
Test your code against Firefox. I've encountered many bugs (mostly small, non-breaking issues) in major enterprise SPAs that happened because the developers only tested in Chrome. Little things come to mind, like spans acting as checkboxes (why?) that could only be selected via keyboard, or custom tooltips not aligning with their elements. But also breaking issues, like using APIs that only just landed in Chrome, or assuming Chrome's non-standard behavior is the actual standard across all browsers.
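The "APIs that only just landed in Chrome" failure mode can be avoided with plain feature detection. A sketch, using `showOpenFilePicker` (the File System Access API, which currently ships in Chromium-based browsers but not in Firefox) as the example; the helper names are mine:

```typescript
// Feature-detect instead of assuming every browser is Chrome.
// A pure, testable check: does this window-like object expose the API?
function hasFilePicker(win: object): boolean {
  return "showOpenFilePicker" in win;
}

// Branch on the check; the fallback path (<input type="file">) works in
// every browser, so Firefox users get a working flow instead of a crash.
function describeStrategy(win: object): string {
  return hasFilePicker(win)
    ? "use showOpenFilePicker()"
    : "fall back to <input type='file'>";
}

// Simulated: a Chromium-like window exposes the method, a Firefox-like one doesn't.
console.log(describeStrategy({ showOpenFilePicker: () => {} }));
console.log(describeStrategy({}));
```

In a real page you would pass `window` itself; the same `in`-operator pattern covers any API with uneven browser support.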
You can find a picture of what the data you capture looks like in the "Taking Data" page in the tutorials.
Unfortunately the "Analysis" page is "under construction". But you won't get photographs, like you would get from an optical telescope.
At least in principle it is indeed possible to process data from a collection of radio telescopes into images, and they often overlay the data on optical images to see regions of radio emission. Whether that's within reach of amateur equipment is another story that I don't have insight into.
Yes! Radio astronomy at most frequencies is totally feasible 24 hours a day, as long as you’re not pointing too close to the Sun.
Source: I’m a radio astronomer.
Now add to that people who use machines with different language settings. My own devices are in English, but clients provide machines in German, Italian, or Dutch. With icons and limited language skills it did not matter. But finding an item without knowing exactly which term is used is really frustrating.
What do you mean by vitriol? The handling of personal data by the United States government is just an unsolved problem that has to be dealt with. How is Google going to lock the data in the EU in a way that it can't be grabbed by the unchecked US surveillance apparatus?
Can't you be worried about both? In the EU you at least theoretically have legal recourse. But these issues go hand in hand, as surveillance data sharing programs like Five Eyes have shown.
Or I'm just not in the mood right then. If I don't like something, I want a way to tell the engine that, and I don't want to be afraid of skipping my favorite songs just because I want to listen to them at another time.
I think algorithms are great, but I feel really frustrated and helpless because it has somehow become impossible to explicitly give them signals or tweak them. It's like living with someone who always tries to guess your needs but whom you are not allowed to talk to. That's just a broken system.
Do you know if the algorithm takes that into account?
I also became suspicious of the skip button's behavior. I sometimes skip songs by artists I really like, and I observed that I stopped getting them recommended until I explicitly played one of their songs. Of course this could just be random luck, but it made me wary of skipping my favorite songs; instead I open the playlist and select the next song.