Hacker News | zmmmmm's comments

> building products around claude -p

But OpenClaw is not a product. It's just a pile of open source code that the user happens to choose to run. It's the user electing to use the functionality provided to them in the manner they want. There's nothing fundamental distinguishing a user running claude -p inside OpenClaw from one running it inside their own script.

I've mostly defended Anthropic's position on people using the session ids or hidden OAuth tokens etc. But this is directly externally exposed functionality and they are telling the user certain types of uses are banned arbitrarily because they interfere with Anthropic's business.

This really harms the concept of it as a platform - how can I build anything on Claude if Anthropic can turn around and say they don't like it and ban me arbitrarily.


Claude Code is not a platform and you’re not meant to be building on it. Netflix is also not a platform and you shouldn’t be running code (open source or not) to mass download Netflix movies either.

It's a reasonable comment, and I should be clear, I don't expect it to be a platform. But I do expect to be able to use its advertised features for any reasonable purpose they can support.

Where it leaves me is sort of like the DoD: nobody should use Claude for anything. Because Anthropic has established as a principle here that if they don't like what you do, they will interfere with your usage. There is no principle to guide you on what they might not like and therefore ban next. So you can't do anything that you need to be able to rely on. If you need to rely on it, don't use Claude Code.

And to be clear, I'm not arguing at all against using their API per-token billed services.



Is using claude -p supposed to be dangerous? Could someone's usage be mistaken for OpenClaw or something similar?

If yes, why do Anthropic provide this CLI flag?


Given it is reported to be successfully targeting Israel with cluster munitions in its warheads, I am curious what stops Iran from targeting US ships even far outside the strait? I would have thought that if you could send multiple missiles with cluster bombs simultaneously at short notice, it would be very difficult to counter and would impose catastrophic cost.

Is anti-missile defense on ships just that good that no amount of simultaneous missiles and decoys can overcome it?


The chances of a ballistic missile hitting a ship - a small, moving target in the middle of the sea - are negligible. And a 4kg bomblet wouldn't do much damage anyway.

Have you seen the 'no smoking' signs on those vessels?

Sinking a US ship would be a drastic escalation. Iran has done a lot of damage to US assets but inflicted few casualties, demonstrating both capability and restraint. If they destroyed the American boomers' few remaining illusions of supremacy by sinking a ship and potentially killing hundreds of crew, the loss of face would likely instigate a drastic response that could lead to a worst-case scenario. Much better for Iran to keep playing bloody knuckles and force the US and Israel to beg for peace when their missile defenses and appetite for war run dry.

Interesting to see three entirely different responses to my question - but I think I believe this one the most. Not necessarily that they could be successful in attacking (who knows), but that trying would escalate things on the wrong timeline for them. At this point, they actively want to drag this out.

My sense at the moment is they are pursuing a "humiliation" strategy where they will persuade Trump to withdraw by making it too embarrassing to continue. For that, all they have to do is make him look impotent, which they achieve by continuously provoking just enough to force a response (either military, or Trump to issue yet another TACO threat he can't carry through with) but then popping up a few days later with a new attack showing it didn't work.


It's a waste. Iran can't win against the US military. They'll win by being as disruptive as possible, for as long as possible. They'll keep sporadically launching rockets until they get what they want or the global economy collapses. This whole situation perfectly illustrates that wars are won with intelligence, not by gung-ho "warrior ethos" morons like Hegseth.

Are you saying Iran, a country that has been sanctioned to hell for almost half a century, with a defense budget of at most $30B, is outsmarting our $2T/year military, which we consider to be the greatest in the world? That can't be. That's literally the only thing that makes this nation "great". That would imply that our country is being led by morons.

Who says they didn't? Although not widely reported in mainstream US media, there are lots of claims online that US Navy ships were hit by missiles, including a clip from Trump himself. Why are the Ford and Lincoln so far away?

It's an interesting strategy, but I see a pretty big risk in them leaning into it like this. We already have a vibe in my circles like the old "Gmail vs Yahoo" thing, where if you saw someone had a Yahoo mail address you assumed they were technologically illiterate. Similarly, it's already mildly embarrassing to say you used ChatGPT for something. It's not unrecoverable, but it's a pretty steep slippery slope they probably don't want to be anywhere near if they care about enterprise.

After the DoD moves? It is not just the technologically illiterate; it is part of the US culture war. OpenAI is the MAGA brand, like Tesla.

What an absurd leap. OpenAI is the MAGA brand because they simp for government contracts like any massive company would?

Did you miss the cancel/unsubscribe gpt boycott? It was only about a month ago. Many people I know cancelled/unsubscribed. To be fair though, most people I have talked with needed almost no encouragement to move to anthropic or google (better products, easy to switch etc). Consumer sentiment can change quickly.

Many people in your bubble cancelled.

Perhaps, but it was pretty widely reported on, if you care to look.

Caitlin Kalinowski and other OpenAI employees resigned because of it [1].

ChatGPT uninstalls rose by 295%, downloads fell 13% on day one and a further 5% the next day [2].

One-star reviews spiked 775% overnight, then doubled again the following day [2].

1.5 million users joined the QuitGPT boycott within days [1].

Claude rose to #1 most downloaded app in the App Store and US usage rose by 51% [2].

New customers are now choosing Claude over OpenAI 70% of the time [1].

And much more. I think it was just your bubble that didn’t cancel it.

[1] https://letsdatascience.com/blog/altman-called-the-pentagon-...

[2] https://www.ibtimes.co.uk/openai-backlash-pentagon-partnersh...


I'm aware that it happened. You seem under the impression that this is some kind of mass exodus based on people you know.

Uninstalls up 300%! What's the baseline?

> downloads fell 13% on day one and a further 5% the next day

Dramatic falloff of new downloads after one day (still plenty of new downloads). Day 3 was likely negligible and, I bet, it was back to normal less than a week after when the story left the news cycle.

> 1.5 million users joined the QuitGPT boycott within days

That's both very few people and a completely meaningless number since all it requires is checking a box. Did anyone verify they were actually human?

> Claude rose to #1 most downloaded app in the App Store and US usage rose by 51% [2].

> New customers are now choosing Claude over OpenAI 70% of the time [1].

Which has nothing to do with cancellations.

> And much more. I think it was just your bubble that didn’t cancel it.

Most people in my bubble have no idea any of this happened and are just using free chatgpt tier if they use it at all. That seems much more representative given your provided statistics of the 1.5m person boycott.


Ahh I see, you possess the superior bubble, how silly of me!

I didn't say that, I just brought that up to contrast it to yours.

The strongest part of my argument concerns your cited 1.5m number. That's not a lot of people, especially when you consider that signing a petition requires no action beyond signing, and there's no way to verify the signatures.

I'm just not seeing how any of this harmed OpenAI more than a government contract helps.


Like Anthropic notably refused to.

I couldn't read the article, but I'm curious what the definition of "smart" is. Because if that is the exact wording, then it seems extremely broad and likely to capture some unintended cases.

These kinds of blanket bans are going to pose some real problems for the tech, because people who wear prescription glasses will often get their prescription built in. So you can't just take them off - you need them to see. And then there is another subset of blind and deaf users who are even more dependent on them. What are these people going to do once a non-trivial number of places ban wearing them at all?

I think the tech industry is far behind the eight ball on this. To their credit Meta actually did a half decent job out of the gate designing sensor-gated recording lights into the Raybans. But it's not enough. There needs to be an industry wide agreement on a standard where something like a bluetooth beacon can shut off recording. Then maybe you have a chance of this category not becoming Google Glass 2.0. Otherwise I'm struggling to see how this ship won't sink.


The important part of the article:

> From then on, any eyewear with video and audio recording capability will be forbidden in all of the First Judicial District buildings, courthouses, or offices, even for people who have a prescription. Other devices with recording capabilities like cell phones and laptops continue to be allowed inside courtrooms but must be powered off and stowed away.

It's defined as having recording capability, which is quite a reasonable restriction to make, IMO.


That's actually not too bad - it leaves space for devices that do have cameras or microphones for other reasons, as long as they don't persist the output. So you could do real time recognition for assistive devices etc.

What about facial recognition? Even without persistence that’s a big deal for juries.

I think it's a very bad idea for a prescription glasses wearer to have only a single pair of glasses where that single pair has a built in camera.

It's not just "having" them though, it's carrying them everywhere and constantly swapping over to your dumb glasses as you walk in and out of places that don't like the smart ones.

Which is sort of my point: when the main purpose is convenience, if you have to do something inconvenient to use it then you've killed the thing altogether. So if manufacturers want this to fly, they need to sort out the privacy question before there's a sign in every public place saying "no recording glasses". If I were in Meta's position, I'd be going to regulators to ban glasses without an externally controlled hard shutoff mechanism.

It might seem a trivial thing currently, but some of these factors will be the ultimate determinants of exactly how much utility humans can get out of AI. If it can't see what you can see, it can't help you with that.


> [W]hen the main purpose is convenience, if you have to do something inconvenient to use it then you killed the thing altogether.

Funny. Because UV-activated darkening lenses inevitably fail in a half-darkened state, I have a pair of always-dark prescription sunglasses and prescription -er- clearglasses. I can tell you from personal experience that it's inconvenient to carry both and swap between the two as my location and the time of day changes, and yet... somehow there's still a solid market for always-dark prescription eyeglasses.

Weird, innit?


I’ve thought about that before. On one hand: “I need these to see.” Other: “No, you need some glasses to see. Picking these as your only pair was bad decision making.”

It sounds like OP is talking about having this extra pair with them where they go, not just having a pair in general.

Which is a fair expectation IMO. There are plenty of places where it's not appropriate to record that they might encounter in the course of a normal day.

If they can afford stupid "smart" glasses they can afford dumb glasses.

> There needs to be an industry wide agreement on a standard where something like a bluetooth beacon can shut off recording.

Yes, this is a great idea. Hardware hackers can then quickly clone these beacons and spam $5 glass hole blockers everywhere.


> Then, ask if this bombing was part of group A or group B.

False dichotomies are a common (and sometimes useful) rhetorical method for arguing your way to a moral justification, but that doesn't make them reflect reality.

There is no A and B. You want to force a situation where B is pure good intent, and we either have to choose that or choose A, where there is only bad intent. The reality is, this war is about ego, power and money as much as it is about any "good intent". The decisions to start the war were made with full knowledge of the risks and costs it would entail, with almost all of those externalised onto people other than those making the choices.

Nobody making those choices should get to just opt out of moral responsibility with some easy "A / B" logic.


I'm missing something basic here... what does it actually do? It executes a prompt against a git repository. Fine - but then what? Where does the output go? How does it actually persist whatever the outcome of the prompt is?

Is this assuming you give it git commit permission and it just does that? Or it acts through MCP tools you enable?


MCP tools. We're bundling some MCP tools and giving them to it here, pretty cool stuff.

wasn't MCP a critical link in the recent litellm attack?

And if it was?

It's a bit like asking if "an API" was a critical link in some cybersec incident. Yes, it probably was, and?


I'd say it's more like intentionally choosing naive string interpolation for SQL queries over a trusted library's parameter substitution. Both work.

There is no "parameter substitution" equivalent possible. Prompt injection isn't like SQL injection, it has no technical solution (that isn't AGI-complete).

Prompt injection is "social engineering" but applied to LLMs. It's not a bug, it's fundamentally just a facet of its (LLM/human) general nature. Mitigations can be placed, at the cost of generality/utility of the system.
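For contrast, the SQL half of the analogy does have a clean technical fix, which is exactly what's missing on the LLM side. A minimal illustrative sketch (using Python's stdlib sqlite3, not anything from the thread): interpolation lets input rewrite the query's structure, while parameter substitution keeps it as pure data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"

# Naive interpolation: the input becomes part of the query's structure.
rows = conn.execute(f"SELECT name FROM users WHERE name = '{evil}'").fetchall()
print(len(rows))  # 1 -- the OR clause smuggled in by the input matched every row

# Parameter substitution: the input stays data; the query shape is fixed.
rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(len(rows))  # 0 -- no user is literally named that string
```

A prompt has no comparable boundary between "instructions" and "data", which is why the parent says there's no parameter-substitution equivalent.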


> It's not a bug, it's fundamentally just a facet of its (LLM/human) general nature

Fair enough but then that means that MCP is not "a bit like asking if "an API" was a critical link in some cybersec incident"

Because I can secure an API, but I can't secure the "(LLM/human) general nature."


MCP itself is just an API. Unless the MCP server had a hidden LLM for some reason, it's still a piece of regular, deterministic software.

The security risk here is the LLM, not the MCP, and you cannot secure the LLM in such a system any more than you can secure the user - unless you put that LLM there and own it, at which point it becomes a question of whether it should've been there in the first place (and the answer might very well be "yes").


We used to do automated security audits weekly on the codebase and post the results on Slack.

So is the Slack posting an MCP tool it has, or a skill it just knows?

In Claude it is a "connector", which is essentially an MCP tool.

I think the whole point of this is that they shouldn't be excluding Anthropic as an entity; they should be evaluating all suppliers on equal terms, on the basis of whether they satisfy requirements or not. If it is a requirement that suppliers be able to conduct mass domestic surveillance, then they should put that into their contract with Palantir, not "You can't use Anthropic".

So I agree with you, it ought to be illegal for them to tell a supplier what other suppliers to use. But that is exactly the larger point here in the first place that they should not be doing that at all.


The government cannot conduct massive domestic surveillance in any case, that’s illegal. Other vendors are mature and serious enough to understand that the government is subject to American law and must operate under American law. They’re mature and serious enough to understand that it is the exclusive right of the judicial branch to make determinations around whether the law has been violated or not. They’re mature and serious enough to understand that the DoD has a mandate to pursue its mission to the fullest extent allowable by the law, and it is the sole responsibility of the DoD legal team to determine whether they are operating safely within the bounds of the law.

Anthropic is uniquely interested in introducing itself as an external enforcer of US law, a sort of belt-and-suspenders approach, where the Department is not only subject to operate under the constitution and the laws from the legislative branch, but also subject to anthropics interpretation of whether they are operating under the constitution and the laws from the legislative branch.

The department of defense does not want to engage in massive domestic surveillance beyond what the law allows them to do. They have signed agreements with OpenAI and other vendors which reiterate that they do not wish to use AI systems for massive domestic surveillance. These terms were unsatisfactory for Anthropic, for whatever reason.

The problem is not the terms of the agreement. It’s the people and the way they conduct business. It’s the fact that they’ve expressed a willingness to hold their product (or future products) hostage, at the cost of DoD operational excellence. It’s the fact that they’re training a specific model variant for government usage with extra guardrails and limitations and values.

Above all else, it’s the fact that they want to leverage their position as a leading AI company to influence government policy. This is not how a serious reliable partner of the government behaves. The problem from the DoDs perspective is the company itself and the people in charge of it.


I don't think a lot of what you are citing is true or valid - but for the sake of argument, everything valid that you are expressing can be and should be put into terms that don't relate specifically to Anthropic. The government just needs to state what its requirements are and then treat all parties equally. Anything else is crony capitalism.

The government has stated what its requirements are: “all lawful use”. Anthropic is uniquely unwilling to agree to that.

So that should have been the end of it - why didn't the government just do that and leave it there? The gap between the means available to achieve the requirement they needed and the action they actually took amounts to a harm to Anthropic, for which they may have the right to pursue compensation.

Again, you're ignoring the entire background of this dispute: Palantir. Once DoD has established that Anthropic is an unreliable partner and is liable to act adversarially, they need a legal mechanism to prevent Palantir (and all companies like Palantir) from taking a dependency on Anthropic. This is what that looks like.

Ceasing to contract with them directly doesn’t change the fact that Anthropic wishes to leverage itself to influence the government. That doesn’t go away. The problem is not with closing all direct contracts between the Pentagon and Anthropic, those don’t matter, it’s with closing all their channels of influence into DoD as a subcontractor.

Similarly to how DoD refusing to buy from Huawei doesn’t protect DoD from their prime contractors buying Huawei gear, they need a supply chain risk designation to ensure they are protected.


Well, you've zeroed in on the part that I just don't accept at all here:

> is liable to act adversarially

...

> wishes to leverage itself to influence the government

This goes way beyond the above requirement of "all legal purposes", and I haven't seen anything that remotely supports these in the public evidence. In fact there's a lot of evidence to support the opposite view.


> Anything else is crony capitalism.

Are you new here?!


Seems like it's going to be a tough sell to get people to want to write

    (tc/select-rows ds #(> (% "year") 2008))
instead of

    filter(ds, year > 2008)
They seem to ignore the existence of Spark, so even if you specifically want to use the JVM, it feels clearer and simpler:

    ds.filter(r => r.year > 2008)

You're right, that is longer! I get why though; `filter` is a clojure.core function name people don't necessarily feel comfortable shadowing, and the Clojure and Spark versions make it clear what's a symbol in local scope versus a field in the dataset. I don't think it'd be hard to make a little wrapper for this sort of thing though! Here's an example which turns any symbols not in local scope into field lookups on an implicit row variable.

    (require '[clojure.walk :refer [postwalk]])

    (defmacro filter
      [ds & anaphoric-pred]
      (let [row-name (gensym 'row)
            pred     (postwalk (fn [form]
                                 (if (and (symbol? form) (nil? (resolve form)))
                                   `(get ~row-name ~(str form))
                                   form))
                       anaphoric-pred)]
        `(tc/select-rows ~ds (fn [~row-name] ~@pred))))
Now you can write

    (filter ds (> year 2008))
And it'll expand to the tc form:

    (pprint (macroexpand '(filter ds (> year 2008))))
    => (tc/select-rows ds (fn [row2411] (> (get row2411 "year") 2008)))

In my experience the advantage comes when you have a few more lines of code

The Clojure pipelining makes code much more readable. Granted, dplyr has pipes too, but tidyverse pipes always felt like a hack on top of R (though my experience is dated here), while in Clojure I always feel like I'm working with the fundamental language data types/protocols. I can extend things in any way I want.


Couldn't agree more. R and dplyr's ability to pass column names as unquoted objects actually reduces cognitive load for new people so much (pure anecdata, nothing to back this up except lots of teaching people).

And that's on top of the vastly simpler syntax compared to what's being shown here


> vastly simpler syntax

I've been programming for decades. I've used dozens of different, at times enormously esoteric, languages. At one point I built ERPs in a language where operators were abbreviated Russian terms. After just a few years of using Lisp dialects, I am absolutely convinced: there's no simpler and more readable syntax than Lisp's. Anyone who doesn't see that has, in my eyes, just not made the distinction between familiarity and simplicity.

They're measuring how quickly their eyes can parse something they've already seen a thousand times, and calling that readability. But readability isn't recognition speed - it's the cognitive distance between the code and the computation it describes. And on that measure, Lisp is essentially lossless. There's no syntactic residue. No ceremony the language demands for its own sake. What you write is the structure of the thing, all the way down.

"You get used to it. I don't even see the code. All I see is blonde, brunette, redhead..." I don't look at Matrix feeling puzzled anymore. I see the truth.

People who bounce off the parentheses are reacting to something real: it doesn't look like what they already know. But that's not the language failing them. That's just the last bit of the old syntax dying. Give it a few months of structural editing and a proper REPL workflow, and you won't see parentheses anymore - you'll see shape. You'll see depth. And going back to anything else will feel like someone handed you a map drawn in crayon and called it a feature.


Even though I'm firmly in the familiarity camp, I love this description of what it's like to be fully lisp-pilled. I definitely want to try it out one day.

One doesn't need to jump through weird hoops to give it a try. VSCode's Calva extension has a great quickstart guide. Or you can just install Clojure and run it - it drops you into a REPL. Once you're ready to get serious, grok some basic structural editing commands (grabbing an expression and moving it somewhere else would suffice) and the REPL-driven workflow (Lispers typically don't type into the REPL; they eval things directly where the source is).

> familiarity vs simplicity

Love this, I've never heard it put that way before.


Rich Hickey did, in his "Simple Made Easy" talk.

In practice we use `ds/filter-column` and `ds/filter` much more than `select-rows`.

The sell isn't about typing a few more or a few less characters, it's about doing data science functionally.


If only there was some kind of tool that could answer helpful questions about technology instead of needing a cheat sheet.

> Walmart will embed its own chatbot, Sparky, inside ChatGPT. Users will log into Walmart, sync carts across platforms, and complete purchases within Walmart’s system.

The enshittification is upon us.


Hah, Clippy's cousin Sparky: every once in a while, after ChatGPT answers a question, it'll say "Looks like you still have stuff in your Walmart cart. Would you like me to complete that checkout for you? Also, Walmart-brand diapers are on offer this week, shall I add those to your cart?"

“You are an unfit mother.”

https://youtu.be/lajnHjRp9Z0

