
"Sometimes you have to keep thinking past the point where it starts to hurt." - Fermi

The lack of UL approval is a concern. This thing draws over 500 watts and runs hot.

Is it really not UL approved? Hard to call it a premium product in that case.

> If China gets bogged down in Taiwan...

Look at the geography. Taiwan is a long, narrow island. All the important parts are in a narrow plain on the west side, facing China. There's only about 20km of depth from the sea.

The war in Ukraine is like fighting over Iowa, one farm at a time. Taiwan is not like that.


Bro: China literally has not fought a war in nearly half a century. Taiwan is rich; there would be lots of urban combat, lots of jungle and mountains, a resistant population, and motivated, rich allies. China doesn't remotely have the naval capability to fully blockade the island, and Xi's favorite pastime is disappearing experienced generals. A full-on China/Taiwan war will never happen, and if it did, it would kill 30 years of industrial and geopolitical progress.

> There's only about 20km of depth from the sea

Don’t underestimate the stopping power of water. An invasion of Taiwan would be China’s first combined-arms assault with a critical amphibious component.

> war in Ukraine is like fighting over Iowa, one farm at a time. Taiwan is not like that

Wide-open plains are traditionally easier for large armies to conquer than mountains.


YC's previous recommendation was to use Silicon Valley Bank. That ended well.

What's the context here? When, where and for what did they recommend SVB?

(FWIW, it did end well, as going with a relatively large federally insured bank meant that no one lost any money during the crash)


SVB was considered the "standard" bank for all startups for decades so it's not surprising that YC would give the same advice. If you run a startup out of a normal bank sometimes you get weird glitches: https://mitchellh.com/writing/my-startup-banking-story

Of course today startups are probably using Mercury/Ramp/whatever.


I don't see any weird glitches there

Chase did what they were asked for years,

up to the point they were told there was fraud going on, at which point the walls went up,

which is entirely to be expected.


SVB depositors were mildly interrupted, no doubt, but there's little reason not to exercise extreme moral hazard in banking. OPM will bail you out via FDIC. Theoretically that has a limit but in practice FDIC usually will bail out the full balances even over the nominal limit.

If I had an FDIC account I would basically want a bank that invests my money in the most wildly hazardous ways with the most reckless financial controls to give the max returns and flexibility, then let everyone else bail me out if it went south.


Go back to the SVB failure threads here and observe the freak out before the decision was made to reimburse deposits above FDIC limits. Sometimes you’re lucky, but luck is not effective risk management.

> OPM will bail you out via FDIC.

I'm waiting for the demands for a bailout when the next big stablecoin goes bust. Especially if it's Trump's.[1]

[1] https://finance.yahoo.com/news/trump-usd1-stablecoin-hits-5b...


This is terrible for SpaceX. They're doing a great job. Musk has left running it to Gwynne Shotwell, who really is a rocket scientist. Now SpaceX has an AI business unit they don't need, a new money drain, and more attention from Musk.

Should have merged xAI into Twitter. A failure there would not be a major setback.


xAI bought Twitter a bit under a year ago: https://www.bbc.com/news/articles/ceqjq11202ro

Wait, does this mean SpaceX owns Twitter/X now?

This, as others have noted, is not a right to repair; it's a right to pollute.

There's a push for agricultural right to repair.[1] That falls under the Federal Trade Commission, not the Environmental Protection Agency, and it's about parts and tool availability for the whole machine, not just the diesel powertrain. This new announcement has zero effect on that.

Here are the controls of a modern John Deere combine.[2] Very little of that has anything to do with engine control. It's being able to fix that, and all the sensors and actuators connected to it, that's important. Diesels are rather reliable by now.

[1] https://nationalaglawcenter.org/ftc-files-suit-against-john-...

[2] https://www.deere.com.au/assets/images/region-4/products/har...


One of the weird things is the way manufacturers used emissions regulation as an excuse to lock their engines down.

Older engines had no computers, but computer control enables things needed to meet emissions standards, like variable valve timing. Engines are also required to detect faults in their emissions control systems and throw an error code.

So they brought in ECUs, but they DRMed them. They said it was so people couldn't tamper with emissions, but they locked everything down to the point where you now have to take the machine to the dealer for servicing.

Do they need DRM to prevent people from tampering with emissions? No; they could have ringfenced the emissions controls, like some devices with radio transmitters do. Does DRM in the ECU prevent people from tampering with emissions controls? Also no. There are people doing EGR deletes and whatnot on Tier 4 engines all the time.

But they were able to use emissions as the wedge to do what they really wanted to do: make their equipment not third-party serviceable.

By the way, I think this is also a microcosm of the failure of the liberal order. The government should have cracked down hard on manufacturers doing this to begin with. Allowing manufacturers to turn their customers into serfs "in the name of emissions" just ended up discrediting emissions control.


Most people today use Kynar hook-up wire for this sort of thing. Even WalMart stocks it.

Go carefully there. Kynar tends to wilt at soldering heat. Press insulated wires together while they're still hot and suddenly they can be kissing; a short only one wire-diameter wide is easy to overlook when you're inspecting your work.

I find it worthwhile to use Teflon-insulated wire here. When I'm building a prototype, the last thing I want to have to distrust is my construction.


True. Kynar is good for toughness around square-cornered wire wrapping posts. Teflon will survive a touch of a soldering iron.

I found my wire-wrapping gun recently. If I put in new C batteries it might work.


This sounds close to Russell's "class of all classes" paradox. Is it?

Yes, the type-theoretic analog of Russell's (set-theoretic) paradox is Girard's paradox (as mentioned in the abstract).

This is incorrect. The set-theoretic paradox it's analogous to is the inability to make the set of all ordinals. Russell's paradox is the inability to make the set of all sets.

Since we're being pedantic, Russell's paradox involves the set of all sets that don't contain themselves.

Technically speaking, because it's not a set, we should say it involves the collection of all sets that don't contain themselves. But then, who's asking...

The paradox is about the set of all sets that do not contain themselves, or a theorem that such a set does not exist.

In the context of the paradox you need to call it a set; otherwise it would not be a paradox.


I'm asking. What prevents that collection from being a set?

This is the easiest of the paradoxes mentioned in this thread to explain. I want to emphasize that this proof uses the technique of "Assume P, derive contradiction, therefore not P". This kind of proof relies on knowing what the running assumptions are at the time that the contradiction is derived, so I'm going to try to make that explicit.

Here's our first assumption: suppose that there's a set X with the property that for any set Y, Y is a member of X if and only if Y doesn't contain itself as a member. In other words, suppose that the collection of sets that don't contain themselves is a set and call it X.

Here's another assumption: Suppose X contains itself. Then by the premise, X doesn't contain itself, which is contradictory. Since the innermost assumption is that X contains itself, this proves that X doesn't contain itself (under the other assumption).

But if X doesn't contain itself, then by the premise again, X is in X, which is again contradictory. Now the only remaining assumption is that X exists, and so this proves that there cannot be a set with the stated property. In other words, the collection of all sets that don't contain themselves is not a set.
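
For concreteness, here's a minimal sketch of that exact argument in Lean 4. This is my own formalization, not something from the thread: membership is abstracted as an arbitrary predicate mem on some type S, so it captures the shape of the argument rather than any particular set theory.

    -- Assume some X such that, for every Y: Y is in X iff Y is not in Y.
    -- Instantiating the hypothesis at Y := X yields False, so no such X exists.
    theorem russell {S : Type} (mem : S → S → Prop)
        (X : S) (h : ∀ Y, mem Y X ↔ ¬ mem Y Y) : False :=
      -- First, X can't contain itself: if it did, h says it doesn't.
      have hX : ¬ mem X X := fun hXX => (h X).mp hXX hXX
      -- But then h says X does contain itself. Contradiction.
      hX ((h X).mpr hX)

Note that no classical axioms are needed; the contradiction falls out of instantiating the hypothesis at X itself, exactly as in the prose proof above.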


Let R = {X : X \notin X}, i.e. the set of all sets that do not contain themselves. Now, is R \in R? That holds if and only if R \notin R, which clearly cannot be.

Like the barber who shaves exactly those men who don't shave themselves.


They redefined sets specifically to exclude that construction and related ones.

The paradox. If you create a set theory in which that set exists, you get a contradiction. So the usual "fix" is to disallow that collection from being a set (because it is "too big"), and then you can form a theory that is consistent as far as we know.
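
Concretely, the standard fix in ZF is the separation (subset) axiom schema: you may only carve a subset out of an already-existing set A, i.e. form {x \in A : P(x)}. A sketch of why that blocks the paradox:

    Let R_A = {x \in A : x \notin x}          (allowed by separation)
    Then: R_A \in R_A  iff  (R_A \in A  and  R_A \notin R_A)
    If R_A \in A, then both R_A \in R_A and R_A \notin R_A give contradictions,
    so R_A \notin A.
    Since A was arbitrary: no set contains every set.

Russell's argument survives, but it now proves a theorem (there is no universal set) instead of a contradiction.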

Perhaps I overstated how related the two are. I was pulling mostly from the Lean documentation on universes [0]:

> The formal argument for this is known as Girard's Paradox. It is related to a better-known paradox known as Russell's Paradox, which was used to show that early versions of set theory were inconsistent. In these set theories, a set can be defined by a property.

[0]: https://lean-lang.org/functional_programming_in_lean/Functor...


It is the Burali-Forti paradox (1897), predating Russell's paradox.

Game Developer used to run post-mortems of major projects, both successful and unsuccessful. Those were concrete. This is vague and abstract.

Flying magazine has been publishing "I learned about flying from that" since 1939, and those articles are still good.


The author says it's too long. So let's tighten it up.

A criticism of large language models (LLMs) is that using them can deprive us of cognitive skills. Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us; we will be able to think about other things.

My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".

Masley writes that it's "bad to outsource your cognition when it:"

- Builds tacit knowledge you'll need in future.

- Is an expression of care for someone else.

- Is a valuable experience on its own.

- Is deceptive to fake.

- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.

How we choose to use chatbots is about how we want our lives and society to be.

That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.


I think that this summary is oversimplifying: the rest of the blog post elaborates on how the author and Masley have completely different interpretations of that bullet-point list. The rest of the text is not only examples; it elaborates on the thought processes that led him to his conclusions. I found the nuanced treatment of the two opposing interpretations, not the conclusion, the most enjoyable part of the post.

(This comment could also be shortened to “that’s oversimplifying”. I think my longer version is both more convincing and enjoyable.)


I feel like your comment is in itself a great analogy for the "beware of using LLMs in human communication" argument. LLMs are, in the end, statistical models that regress to the mean, so by design they flatten out our communication, much like a reductionist summary does. I care about the nuance we lose when communicating through "LLM filters", but apparently others don't.

That makes for a tough discussion, unfortunately. I see a lot of value lost by having LLMs in email clients, and I don't observe the benefit; LLMs are a net time sink because I have to rewrite their output myself anyway. Proponents seem not to see any value loss, and they do observe an efficiency gain.

I am curious to see how the free market will value LLM communication. Will the lower quality and higher quantity be a net positive for job seekers sending applications or sales teams nursing leads? The way I see it, either we end up in a world where, e.g., job matching is almost completely automated, or we find an effective enough AI spam filter and are effectively back to square one. I hope it's the latter, because agents negotiating job positions is bound to create more inequality, with all the jobs going to applicants who hire the most expensive agents.

Either way, so much compute and human capital will go wasted.


> Proponents seem to not see any value loss, and they do observe an efficiency gain.

You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.

If you're in customer support and have to deal all day with dumbasses who are too stupid to read the fucking instructions, I imagine being able to type that out, then have the AI strip the profanity and the insults, would be rather cathartic. Then swap "read the manual" out for something that's actually complicated to explain.


> You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.

Anyone semi-literate can write down what they're feeling.

It's sometimes called "journaling".

Thinking through what they've written, why they've written it, and whether they should do anything about it is often called "processing emotions."

The AI can't do that for you. The only way it could would be by taking over your brain, but then you wouldn't be you any more.

I think using the AI to skip these activities would be very bad for the people doing it.

It took me decades to realize there was value in doing it, and my life changed drastically for the better once I did.


I don’t understand this summary. Isn’t it a summary of the author's recitation of Masley's position? It's missing the part that actually matters: the author's position and how it differs from Masley's.

Yep - it honestly reads like an LLM summary, which often misses critical nuances.

I know, especially with the bullet points.

The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.


I am curious if you understand how disrespectful your comment is.

It's as dismissive as spitting in someone's face.


It actually isn’t very long. I was expecting it to be much longer after the author’s initial warning.

This here is why I always read the comments /first/ on HN
