I've got two garages full of '80s and '90s HP lab equipment, and most of it even works. In that era, HP had the best hardware design/production capability in the world.
Unfortunately, in the same era, their software was almost always complete crap. I think the same rigid processes and controls that allowed them to make great hardware were the reason their software was awful. Their rigid processes made changing the software difficult, so it was harder for the devs to improve (and they usually didn't bother).
The problem with fair use is that the rules are subject to challenge and interpretation. Defending an argument for fair use costs a lot of money, and involves significant risk.
The content creators know this, and they'll leverage their money and legal teams to sue for copyright violation, ignoring fair use. Fair use is a valid defense, but the defense must be presented and adjudicated, and that takes time and money.
How many years of support does Apple guarantee with the Neo? At some point in the future, even though the hardware is fine, it will be unsupported, and potentially vulnerable to whatever exploits are built into its version of macOS.
Perhaps by that time the M3 will have better Linux support, but dealing with the 8GB memory size limitation will become more difficult as time ticks by.
I'm a Nevada resident, and a supporter of US/local manufacturing, but these guys have been denying that their products have design defects despite clear evidence.
Unfortunately, I expect them to now be sued into oblivion.
I ditched my two Ring (Elite PoE) doorbells and replaced them with Reolinks a little over a year ago. They cost less, perform better, support open standards for video streaming, and don't partner with law enforcement to surveil your neighborhood.
The only drawback I've found is that despite their IP65 weatherproof rating, I've had two failures caused by rain. (In each case, the microphone was permanently affected.) They've been good about issuing RMAs and sending replacements, but I guess I'll need to start paying for replacements if they begin failing after the warranty period.
Even with their IP rating, I made sure to install them under eaves (with 2 ft overhangs). I looked at the microphone location (@unboxing) and immediately thought "that cannot be waterproof!"
This system was recommended by a trusted EE/hn guy, and his have been functional since ~2016 (no RMA, to my knowledge; but his are also well under eaves).
ESL readers: "eaves" (noun) is the part of a roof that overhangs the walls
I was an early adopter on many platforms, and used the same three letter handle on each. I've had the same thing happen to me, even with an account that was being actively used. There's nothing that you can do about it. It's their platform and they can grab your handle if they want it.
It's really kind of weird that so many people think it's just fine for services to take over people's accounts.
Of course, they can literally do whatever they like, it is their platform.
But it would be nice if everyone considered what it would be like for a platform to just arbitrarily nuke their account one way or another.
There's probably a lot of "well, they wouldn't do that, I don't have a valuable named account, and I'm a user in good standing," but in reality they can do it for whatever reason they like and there are no actual guardrails--so anyone's account is equally at risk if they decide to.
Another example of using children to shield a military target. (The USA has done the same thing in the past. For example, Aviation High School was located on the same parcel as TRW.)
In the case you're presumably referring to, no fighter pilots were involved. A Tomahawk missile was launched from a US naval vessel, aimed at a military site next door to the girls school.
From Anthropic's lawsuit brief:
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” and “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
Unfortunately for Anthropic, the Constitution does not require the government to purchase goods or services from a company that has made a public declaration that it will not allow its AI models (specifically Claude) to be used for mass domestic surveillance or to power fully autonomous weapons, if such a declaration goes against the government's contract requirements.
The government, and not the contractor, should have control over the scope of its use of products purchased from a supplier. If a supplier wants to retain such control and restrict functionality, deeming them a supply-chain risk seems appropriate.
> Unfortunately for Anthropic, the Constitution does not require the government to purchase goods or services from a company that has made a public declaration that it will not allow its AI models (specifically Claude) to be used for mass domestic surveillance or to power fully autonomous weapons, if such a declaration goes against the government's contract requirements.
Obviously, if the contract requirements themselves are lawful, the government has the power to purchase only those goods and services that meet the requirements, and to not purchase those that do not.
But that's irrelevant, because the "supply chain risk" designation is not needed if the government is merely trying to assure that the goods and services in a contract meet the requirements of the contract; it is a separate legal provision with separate purposes that would be superfluous for the purpose described.
If the government is using the "supply chain risk" designation as a backdoor way to rewrite all previously-entered, still-in-force defense contracts to retroactively add new requirements incompatible with the use of Anthropic software, given the limitations on the service Anthropic is willing to provide, that also is not what the "supply chain risk" designation exists for, and, even if it were to seem facially within the statutory purpose of the authority, would raise 5th Amendment takings issues.
> If a supplier wants to retain such control and restrict functionality, deeming them a supply-chain risk seems appropriate
Anthropic provides their product under specific terms; if the government doesn't accept those terms, then there's no deal, simple as that. That's how basic contracts work. Not sure why you think that has anything to do with a supply chain risk.
Steel-manning a bit. AFAIK one major issue was that Palantir relied on Claude under the hood. If that's true, the designation makes some sense. Essentially "given our dealings with Anthropic, we don't want our suppliers using them for products we buy either." Hence the "chain."
But who knows. None of us are at the table, and there’s probably classified stuff anyway, so as an observer it’s tough to take a position based purely on facts.
In an ideal world, it sounds like Anthropic should not accept the military’s terms, and consequently no supplier will accept Anthropic’s terms, and everybody will get what they want.
DoW probably did this because the government has existing contracts, and Anthropic will not perform to their requirements. It's a way for the government to invalidate the contracts, and avoid the problem in the future.
Given that the Anthropic announcement of their policy update to their AI being used in the way it was already being used happened less than two weeks ago, how could the government have had a right of refusal at the time the contracts were issued (probably years ago)?
> DoW probably did this because the government has existing contracts, and Anthropic will not perform to their requirements.
The opposite: DoD did this because Anthropic was performing to the contract (which codified the restrictions in question), and the government was violating the contract, and sought to violate it more.
> Given that the Anthropic announcement of their policy update to their AI being used in the way it was already being used happened less than two weeks ago
The policy excluding use of Anthropic's Claude service for domestic mass surveillance of civilians and fully-autonomous killbots is at least a year old, and codified in the contract terms the government signed back then.
After the signing, the government decided they wanted domestic mass surveillance of civilians and fully-autonomous killbots, and thus regretted their agreement to those contract terms, and thus is now unilaterally cancelling their agreement. Simple as that.
I compared their original policy, which DoW was not violating, against their revised policy from less than two weeks ago, which they released after unfruitful negotiations with DoW. They don't say the same thing.
Mass domestic surveillance of civilians and fully-autonomous killbots were excluded from the contract signed about a year ago.
That is why the DoD sought to force Anthropic to modify the contract to include those two things, with no other changes.
Even though the terms of the contract prevail over the general public Terms Of Use, one can look at the general public TOU from before the contract was signed [0] and see that it explicitly forbade mass domestic surveillance of civilians:
"Do Not Use for Criminal Justice, Law Enforcement, Censorship or Surveillance Purposes"
So basically, before any of the code even runs, this environment begins by gobbling up more than the total RAM that most of my first computers had (SYM-1, IMSAI 8080, Ferguson Big Board, Kaypro II, and CCS S-100 Z-80). All of these systems were 8-bit, with various RAM sizes from 8KB to 64KB. That was the maximum RAM available, and it was shared by the OS and the applications.
What's the purpose of making such a comparison? The implication is that we're being wasteful, but I'm not certain that's the point you're trying to make.
That was close to my point. RAM prices are up 400% over last year. I know that 72KB seems like a pittance today, but any waste is bad -- especially in embedded environments.
I'm not really even saying this is wasteful. Maybe a static allocation would make more sense for critical exception handling code...