Hacker News | saintfire's comments

The API offers that. Pay X per month, get Y tokens. Then you can look at all the graphs of money being deleted by OpenClaw, for transparency.

People want a free lunch. If the API were cheaper than the subscription, then everyone would use the API. Instead people flock to an apparently unsustainable price at a fixed monthly rate; presumably subsidized by others who don't use their full capacity every month.
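The subsidy argument above is just a break-even calculation. A minimal sketch, with made-up prices (these are NOT any provider's real rates):

```python
# Hypothetical numbers for illustration only -- not real API or plan pricing.
API_PRICE_PER_M_TOKENS = 10.00   # dollars per million tokens, pay-as-you-go
SUBSCRIPTION_PRICE = 20.00       # dollars per month, flat rate

def cheaper_plan(tokens_per_month: float) -> str:
    """Return which plan costs less for a given monthly token usage."""
    api_cost = tokens_per_month / 1_000_000 * API_PRICE_PER_M_TOKENS
    return "api" if api_cost < SUBSCRIPTION_PRICE else "subscription"

# Below the break-even point (2M tokens/month here), pay-as-you-go is cheaper;
# above it, the flat rate wins -- so heavy users are carried by light ones.
print(cheaper_plan(500_000))     # light user -> "api"
print(cheaper_plan(50_000_000))  # heavy user -> "subscription"
```

With these numbers the break-even point is 2M tokens/month; anyone below it who pays the subscription anyway is effectively subsidizing those above it.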


This webpage performs shockingly badly on my phone for a (basically) empty page with a video.

I guess it's like their phones. Got too old, slow 'er down.


Not sure AI is ready to replace anyone, but that doesn't seem to be the roadblock.

Iranians are also making bank. Why kick a hornet's nest when you're winning?

> Why kick a hornet's nest when you're winning?

Tell that to Trump and his glorious way of bombing Iran. Nothing against the idea itself, the Mullahs all but asked for it to happen.

But the execution? That was a level of dogshit I haven't seen in my lifetime, lol. Even Russia was better prepared for their invasion of Ukraine.

Both Trump and Netanyahu had a somewhat solid incentive not to get utterly wasted in the next elections. Instead they went into one of the most ill-prepared wars in modern history, with results that may seriously upend the global economy, if not lead us to WW3 outright.


"... agreed to a permanent prohibition barring them from misrepresenting how they use and share personal data. "

So... Their punishment for breaking the law is having to promise to follow the law going forward?

I wish I had that superpower, too.


But this time it's permanent!

Was about to post just this. What kind of joke is this?

I download all apps on my phone from the bleeding edge of npm. /s

When npm has supply chain attacks it's still news.

On the Google Play Store it's actually noteworthy when an app isn't some level of malware loaded with ads and questionable permissions.


Assuming you can catch every new bug it introduces.

Both assumptions being unlikely.

You also end up with a codebase you let an AI agent trample until it was satisfied: ballooned in complexity and full of redundant, brittle code.


You can have an AI agent refactor and improve code quality.

But do you have any code that has been vetted and verified to see if this approach works? This whole agentic code quality claim is an assertion, but where is the actual proof?

If it can be trained with reinforcement learning then it will happen

Did we have code quality before LLMs?

Funnily enough I've literally never seen anyone demo this, despite all the other AI hype. It's the one thing that convinces me they're still behind.

It’s agents all the way down - until you have liability. At some point, it’s going to be someone’s neck on the line, and saying “the agents know” isn’t going to satisfy customers (or in a worst case, courts).

> until you have liability

And are you thinking this is going to start happening at some point, or what?

The letters I get every other month telling me I now have free credit monitoring because of a personal info breach seem to suggest otherwise.


A firm has very different amounts of time, ability, and money than an individual to spend on following up on broken contracts.

Sure it can. It's not like humans aren't already deflecting liability or moving it to insurance agencies.

> It's not like humans aren't already deflecting liability

They attempt to, sure, but it rarely works. Now, with AI, maybe it will, but that's sort of a worse outcome for the specific human involved: "If you're just an intermediary between the AI and me, WTF do I need you for?"

> or moving it to insurance agencies.

They aren't "moving" it to insurance companies, they are amortising the cost of the liability at a small extra cost.

That's a big difference.


At some point, the risk/return calculus becomes too expensive for insurance companies.

Usually that's after the premiums become too high for most people to pay.


Just today I had an agent add a fourth "special case" to a codebase, and I went back and DRY'd three of them.

Now, I used the agent to do a lot of the grunt work in that refactor, but it was still a design decision initiated by me. The chatbot, left unattended, would not have seen that it needed to be done. (And when, during my refactor, it tried to fold the fourth case back in, I had to stop it.)

(And for a lot of code, that's ok - my static site generator is an unholy mess at this point, and I don't much care. But for paid work...)
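A hypothetical sketch of the kind of refactor described above: three near-identical special cases collapsed into one data-driven path, with only the genuinely different case keeping its own branch. The names and rules here are made up for illustration.

```python
# Before: repeated special-case branches -- the kind of duplication an
# unattended agent tends to accumulate by adding "one more case".
def shipping_cost_before(country: str, weight_kg: float) -> float:
    if country == "US":
        return 5.0 + 1.2 * weight_kg
    if country == "CA":
        return 7.0 + 1.2 * weight_kg
    if country == "MX":
        return 8.0 + 1.2 * weight_kg
    return 15.0 + 2.5 * weight_kg  # the genuinely different fourth case

# After: the three parallel cases become one table lookup; only the
# truly distinct case keeps its own branch.
BASE_RATE = {"US": 5.0, "CA": 7.0, "MX": 8.0}

def shipping_cost_after(country: str, weight_kg: float) -> float:
    if country in BASE_RATE:
        return BASE_RATE[country] + 1.2 * weight_kg
    return 15.0 + 2.5 * weight_kg
```

The design decision is spotting that three of the four branches share a shape; that pattern recognition, not the mechanical rewrite, is the part the agent wouldn't have initiated on its own.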


I got an email about it.

It's sort of a moot point since the whole thing is for goodwill anyway.

They freely scraped licensed code and semi-private data across the internet and now they're pretending that they need to license anything.

If a court rules that they had to license the data in the first place, then the whole industry would actually have to start following the law.


The people pushing this technology, which accelerates climate change, have lobbied the government to circumvent the typical roadblocks society creates to limit reckless development. Incidentally, the same people who talk about how dangerous AI will be for society assure us not to worry: they're going to be the ones to deliver it safely.

Now, I don't believe AI will ever amount to enough to be a critical threat to human life, you know, beyond the immense amounts of wasted energy they propose to convert into something more useful, like a market crash or heat and noise, or both.

Not sure how you can call someone opposed to any of that "anti-civilisational" matter-of-factly.


The painful irony of bragging about (or lamenting) your new model's cybersecurity capabilities in the very announcement that leaked because of poor cybersecurity.

Can't wait for the "$LAST_MODEL was amazing but this is the one that will change everything."


> The AI lab left the material, including what appeared to be a draft blog post announcing a new model, in an unsecured, public data lake

My tinfoil theory is that it was left by them to be discovered by the public.


I believe a lot of 'leaks' you see these days are, at least somewhat, intentional.

The irony of bragging about how dangerous to cybersecurity it is, given all the holes punched by the current generations.
