Look at the geography.
Taiwan is a long, narrow island. Everything important sits on a narrow plain along the west coast, facing China, with only about 20 km of depth from the sea.
The war in Ukraine is like fighting over Iowa, one farm at a time.
Taiwan is not like that.
Bro: China literally has not fought a war since 1979. Taiwan is rich; it would mean lots of urban combat, lots of jungle and mountains, a resistant population, and motivated, rich allies. China doesn't remotely have the naval capability to fully blockade the island, and Xi's favorite pastime is disappearing experienced generals. A full-on China/Taiwan war will never happen, and if it did, it would wipe out 30 years of industrial and geopolitical progress.
SVB was considered the "standard" bank for startups for decades, so it's not surprising that YC would give the same advice. If you run a startup out of a normal bank, sometimes you get weird glitches: https://mitchellh.com/writing/my-startup-banking-story
Of course, today startups are probably using Mercury/Ramp/whatever.
SVB depositors were mildly interrupted, no doubt, but there's little reason not to lean into extreme moral hazard in banking. Other people's money will bail you out via the FDIC. Theoretically that has a limit, but in practice the FDIC usually makes depositors whole even above the nominal limit.
If I had an FDIC-insured account, I would basically want a bank that invests my money in the most wildly hazardous ways, with the most reckless financial controls, to give the maximum returns and flexibility, and then let everyone else bail me out if it went south.
Go back to the SVB failure threads here and observe the freak-out before the decision was made to reimburse deposits above FDIC limits. Sometimes you're lucky, but luck is not effective risk management.
This is terrible for SpaceX. They're doing a great job. Musk has left the running of it to Gwynne Shotwell, who really is a rocket scientist. Now SpaceX has an AI business unit it doesn't need, a new money drain, and more attention from Musk.
Should have merged xAI into Twitter. A failure there would not be a major setback.
This, as others have noted, is not a right to repair; it's a right to pollute.
There's a push for agricultural right to repair.[1] That falls under the Federal Trade Commission, not the Environmental Protection Agency. It's about parts and tool availability for the whole machine, not just the diesel powertrain. This new announcement has zero effect on that.
Here are the controls of a modern John Deere combine.[2] Very little of that has anything to do with engine control. What matters is being able to fix all of that, plus the sensors and actuators connected to it. Diesels themselves are rather reliable by now.
One of the weird things is the way manufacturers used emissions regulation as an excuse to lock their engines down.
Older engines had no computers, but computer control enables things needed to meet emissions standards, like variable valve timing. Engines are also supposed to detect faults in their emissions control systems and throw an error code.
So they brought in ECUs, but they DRMed them. The stated reason was to keep people from tampering with emissions controls, but they DRMed everything, to the point where you now have to take the machine to the dealer for servicing.
Do they need DRM to prevent people from tampering with emissions? No; they could have ringfenced just the emissions-related parts, the way some devices with radio transmitters do. Does DRM in the ECU actually prevent people from tampering with emissions controls? Also no. People do EGR deletes and the like on Tier 4 engines all the time.
But they were able to use emissions as the wedge to do what they really wanted to do: make their equipment not third-party serviceable.
By the way, I think this is also a microcosm of the failure of the liberal order. The government should have cracked down hard on manufacturers doing this from the beginning. Allowing manufacturers to turn their customers into serfs "in the name of emissions" just ended up discrediting emissions control.
Go carefully there. Kynar insulation tends to wilt at soldering heat. Press insulated wires together while they're still hot and suddenly the conductors can be kissing; a short only one wire-diameter long is easy to overlook when you're inspecting your work.
I find it worthwhile to use Teflon-insulated wire here. When I'm building a prototype, the last thing I want to have to distrust is my own construction.
This is incorrect. The set paradox it's analogous to is the inability to form the set of all ordinals. Russell's paradox is the inability to form the set of all sets.
Technically speaking, because it's not a set, we should say it involves the collection of all sets that don't contain themselves. But then, who's asking...
This is the easiest of the paradoxes mentioned in this thread to explain. I want to emphasize that this proof uses the technique of "Assume P, derive contradiction, therefore not P". This kind of proof relies on knowing what the running assumptions are at the time that the contradiction is derived, so I'm going to try to make that explicit.
Here's our first assumption: suppose that there's a set X with the property that for any set Y, Y is a member of X if and only if Y doesn't contain itself as a member. In other words, suppose that the collection of sets that don't contain themselves is a set and call it X.
Here's another assumption: Suppose X contains itself. Then by the premise, X doesn't contain itself, which is contradictory. Since the innermost assumption is that X contains itself, this proves that X doesn't contain itself (under the other assumption).
But if X doesn't contain itself, then by the premise again, X is in X, which is again contradictory. Now the only remaining assumption is that X exists, and so this proves that there cannot be a set with the stated property. In other words, the collection of all sets that don't contain themselves is not a set.
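For anyone who wants to see that two-step argument machine-checked, here's a minimal sketch in Lean 4. The type S and the relation mem are hypothetical stand-ins for "sets" and membership, not anything from a library, and hX is the assumed property of X:

    -- Hypothetical setup: a type S of "sets" and a membership relation mem.
    -- hX assumes X has the paradoxical property: Y is in X iff Y is not in Y.
    theorem russell {S : Type} (mem : S → S → Prop)
        (X : S) (hX : ∀ Y, mem Y X ↔ ¬ mem Y Y) : False :=
      -- Step 1: assuming "mem X X" yields "¬ mem X X", and applying one to
      -- the other gives False; hence ¬ mem X X.
      have h : ¬ mem X X := fun h' => (hX X).mp h' h'
      -- Step 2: from ¬ mem X X, the premise gives mem X X; contradiction.
      h ((hX X).mpr h)

Nothing set-theoretic is used here; the contradiction falls out of the assumed property alone, which is why the only escape is to deny that such an X exists.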
Let R = {X : X \notin X}, i.e. the collection of all sets that do not contain themselves. Now, is R \in R? This holds if and only if R \notin R, which clearly cannot be.
Like the barber who shaves all, and only, the men who don't shave themselves.
The paradox. If you create a set theory in which that set exists, you get a contradiction. So the usual "fix" is to disallow it from being a set (because it is "too big"), and then you can form a theory that is consistent as far as we know.
Perhaps I overstated how related the two are. I was pulling mostly from the Lean documentation on Universes. [0]
> The formal argument for this is known as Girard's Paradox. It is related to a better-known paradox known as Russell's Paradox, which was used to show that early versions of set theory were inconsistent. In these set theories, a set can be defined by a property.
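For what it's worth, Lean's own way out is a universe hierarchy: every Type u lives in the next universe up, and "Type : Type" is simply rejected. A quick sketch (the expected output in the comments reflects my understanding of Lean 4):

    -- Each universe lives one level up; none contains itself.
    #check Type      -- Type : Type 1
    #check Type 1    -- Type 1 : Type 2
    -- There is no "Type : Type"; that restriction is what blocks Girard's
    -- paradox, much as "no set of all sets" blocks Russell's.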
Game Developer magazine used to run post-mortems of major projects, both successful and unsuccessful. Those were concrete. This is vague and abstract.
Flying magazine has been publishing "I Learned About Flying From That" since 1939, and those articles are still good.
The author says it's too long. So let's tighten it up.
A criticism of large language models (LLMs) is that relying on them can deprive us of cognitive skills.
Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us; we will be able to think about other things.
My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".
Masley writes that it's "bad to outsource your cognition when it:"
- Builds tacit knowledge you'll need in future.
- Is an expression of care for someone else.
- Is a valuable experience on its own.
- Is deceptive to fake.
- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.
How we choose to use chatbots is about how we want our lives and society to be.
That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.
I think this summary is oversimplifying. The rest of the blog post elaborates on how the author and Masley have completely different interpretations of that bullet-point list. The rest of the text is not only examples; it elaborates on the thought processes that led him to his conclusions. I found the nuanced treatment of the two opposing interpretations, not the conclusion, the most enjoyable part of the post.
(This comment could also be shortened to “that’s oversimplifying”. I think my longer version is both more convincing and enjoyable.)
I feel like your comment is itself a great analogy for the "beware of using LLMs in human communication" argument. LLMs are, in the end, statistical models that regress to the mean, so by design they flatten out our communication, much like a reductionist summary does. I care about the nuance we lose when communicating through "LLM filters", but apparently others don't.
That makes for a tough discussion, unfortunately. I see a lot of value lost by having LLMs in email clients, and I don't observe the benefit; LLMs are a net time sink because I have to rewrite their output myself anyway. Proponents seem to see no value loss, and they do observe an efficiency gain.
I am curious to see how the free market will value LLM communication. Will lower quality at higher quantity be a net positive for job seekers sending applications, or for sales teams nurturing leads? The way I see it, either we end up in a world where, e.g., job matching is almost completely automated, or we find an effective enough AI spam filter and we're effectively back to square one. I hope it's the latter, because agents negotiating job positions is bound to create more inequality, with all jobs getting filled by the applicants who hire the most expensive agents.
Either way, so much compute and human capital will go to waste.
> Proponents seem to not see any value loss, and they do observe an efficiency gain.
You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.
If you're in customer support and have to deal all day with dumbasses who are too stupid to read the fucking instructions, I imagine being able to type that out and then have the AI strip the profanity and the insults would be rather cathartic. Now substitute something actually complicated to explain for "read the manual".
I don't understand this summary. Isn't this a summary of the author's recitation of Masley's position? It's missing the part that actually matters: the author's position and how it differs from Masley's.