Hacker News | da_chicken's comments

USB-C is rated for 10,000 connection cycles, while Lightning is rated for 40,000. But if you disconnect and reconnect your phone 4 times a day, every day, that's about 1,460 cycles a year, so 10,000 is enough for just under 7 years. And Lightning was introduced in 2012, while USB-C arrived in 2014. In those days, the average lifespan of a smartphone was 2.5 years. Even today, the software is only supported for 7 years at most. You don't need a connector that's going to last nearly 30 years.

And the additional durability of Lightning is itself not free. It's not cheaper than USB-C; quite the opposite. That additional cost means it either uses more resources to manufacture, or more resources to make the manufacturing tooling. So it's just wasteful. Lightning is "physically superior", but USB-C is better engineering.

Apple knows that. So Apple chose to go with Lightning because it was theirs, not because it was better. Because it's not really better. Not better for the customer. Or really better for business. Apple chose vendor lock-in.

Worse than that, Apple's connectors are higher durability, but their cabling itself is awful. I work at a K-12 and we were in an iPad and Chromebook pilot back in the mid 2010s that ran about 4-5 years. We had a fleet of 3500 of each. The iPads saw less than half the usage hours as the Chromebooks, but had something like triple the incidence of cable replacement. The cable insulation splits. The plasticizers degrade, the cables get really sticky or oily, and then they split and expose the braided grounding sheath. That braided cable will shock you. That was true for both student and staff devices. So they had these wonderful connectors, but the cables still failed at effectively five or six times the rate of the alternative. And since they were proprietary, you couldn't just buy a better cable made by someone else! You had to buy the same cable that you knew was going to fail!


> And since they were proprietary, you couldn't just buy a better cable made by someone else! You had to buy the same cable that you knew was going to fail!

Codswallop! Aftermarket Lightning cables were readily available shortly after Apple first started using the port.

Agreed though, the Apple-branded cables that came with the device are terrible, and I always just threw them straight in the bin.

And connection cycles are the wrong metric for USB-C vs Lightning. The correct metric is how many side-force removals the port can withstand, and at what force.

My experience shows that for USB-C the answer is wildly insufficient, whereas for Lightning it’s sufficiently high that it won’t be a concern.


No, that seems unlikely. They committed the cardinal sin of stealing from the rich.

Also probably why SBF has yet to be pardoned.

He was a big supporter of the Democratic Party, which would not necessarily lead to a pardon from the Republican administration.

Eric Adams is a Democratic politician whose charges Trump's DOJ dropped in exchange for political favors from Adams. For the right bargain they don't even care about the party.

He supported both parties.

Trevor Milton received an unconditional pardon for his Nikola fraud last year.

Trump has no problem selling pardons to people who stole from the rich. It's a big club, and he's open for business.


I think it's important to remember that we're not perceiving some fundamental aspect of light. We're perceiving how the photosensitive portions of our retina convert light to stimulus, and how our brains construct a meaningful image from that stimulus in our mind.

Like, film photography doesn't happen in the lens or in the world. It happens in that photosensitive chemical reaction, and in the decisions of the photographer.


> how our brains construct

is the only part, i.e., we perceive what the brain predicts, no more, no less. Optical illusions demonstrate this well.

Sometimes that prediction (our perception) correlates with the light reaching the retina. But it is a mistake to think that we can perceive it directly. For example, we do not see the black hole in our field of vision where the retina has no receptors (the blind spot, a consequence of our eyes' construction).

Another example that makes the point clearer: there are no "wetness" receptors at all, but we perceive wetness just fine.


It’s an important point: all our sensations are interpretations of readings from various sensing abilities.

Which is why it can be so easy to produce false sensations of many things. It’s like tricking your fridge into turning the light off by pressing the little switch instead of closing the door. The fridge isn’t detecting when the door is closed; it’s detecting when that switch is pressed and interpreting that as meaning the door is closed. However, that interpretation may not always be correct.


It reminds me of how vinyl records are fairly lossy, but they provide a superior experience in some cases because those limitations have been accounted for during the mastering process.

It's an entire pipeline from photomultiplier to recording medium to the inverse process and everything is optimized not for any particular mathematical truth but for the subjective experience.


Vinyl is sometimes preferred because people like white noise, same as tube amps.

Granted, some CDs are mastered like garbage, and that led to some bad press for a while. But you can master a CD so that it sounds exactly, as in mathematically exactly, like a vinyl record, if so desired.

It is also possible to make a digital amplifier that sounds identical to vacuum tubes.

Humans have well and truly mastered the art of shaping sound waveforms however we want.


I mean, I've always thought the kinetic experience of vinyl was the point: my childhood memory is the excitement and anticipation of carefully putting the needle on the lead-in and hearing the subtle pops and scratches that meant it was about to start.

The whole physical enterprise has a narrative and anticipation to it.


Not to mention the wider context of starting off by opening a beautifully designed record sleeve, and the chances are that people choosing to listen to vinyl are doing so on a beautifully engineered sound system that cost as much as a car when it was released 50 years ago, or on a turntable setup that's designed for them to interact with.

You could add all of that to CD: bigger packaging for "audiophile pressings", a play ritual, extra distortion and compression (especially in the low end), limited dynamic range, minimal stereo separation, even a little randomness so each listening experience was slightly different.

This is consumer narcissism. It's the driver behind Veblen signalling: the combination of collecting physical objects, nostalgia, and the elevated taste and disposable wealth required to create a unique shrine to the superior self.

Buying houses, watches, cars, vinyl, yachts, jets, and politicians is all the same syndrome.

Some people take it further than others.


You could add the audio distortion. You couldn't add the ability to place it on your DJ turntable or vintage record player (which you might have paid a small fortune for or obtained from Dad or a car boot sale). The CD is also unnecessary to obtain the music anyway.

Tbh freshly pressed vinyl is a significant way down the food chain from new cars, never mind jets and conspicuous-consumption fine art, and the demographics that buy it don't necessarily have more disposable income than the demographics with Spotify subscriptions hooked up to a mid-range modern sound system. If you really want to go full Veblen, you can probably buy an NFT to get all the bragging rights of having signalling money to waste, without the inconvenience of actually having anything to look after or listen to :)


> carefully putting the needle on the lead-in and hearing the subtle pops and scratches

Led Zeppelin III actually used that lead-in as part of the music experience, and the original CD pressing didn't capture it. I've heard CD pressings (even the name remains from vinyl) that do capture it; I don't know when that started.

> CD pressings (even the name remains from vinyl)

The name comes from the CDs being manufactured by pressing into a master mold to create the pits. Replicated (mass-manufactured) audio CDs are pressed, not written with a laser like duplicated ones (CD-R/RW).


Most records these days use CDs as masters, sadly.

No. A friend of mine worked at United Record Pressing. The majority of the masters they received from customers were commercial CDs. No special master.

Are you referring to the loudness wars?

If you pay attention to cats, you figure out they are fuzzy little “difference engines.” They seem to be hyper-tuned to things that change.

For example, if I move a small item in the corner of my room, the next time the cat walks in, he’ll go straight to it, and sniff around.

I have a feeling that cats’ eyes have some kind of “movement sensors” built in. Maybe things that move look red, and most of the background looks grey.


Even human eyes have some areas, outside the fovea centralis, that are very sensitive to motion even in low light. In the dark you will see motion out of the corner of your eye but you will only see pitch black if you stare in that direction.

The other part you mention is more interesting, I noticed it too. That must be a mechanism in the brain rather than the eye. It’s like the cat keeps a “snapshot” of that place to compare against next time it comes by. This might also explain why they take the same route all the time, maybe it gives them a good reference against the old snapshots.


>> If you pay attention to cats, you figure out they are fuzzy little “difference engines.”

> That must be a mechanism in the brain rather than the eye

Check out "A Thousand Brains: A New Theory of Intelligence" [1] by Jeff Hawkins [2], of PalmPilot fame. This theory postulates, in part, and with evidence, that brains are continuously comparing sensory input and movement context with learned models. I found the book to be mind-blowing, so to speak ...

[1] https://www.amazon.com/Thousand-Brains-New-Theory-Intelligen...

[2] https://en.wikipedia.org/wiki/Jeff_Hawkins


I still see value in the numbering.

Breaking 1NF is essentially always incorrect. You're fundamentally limiting your system, and making it so that you will struggle to perform certain queries. Only break 1NF when you're absolutely 100% certain that nobody anywhere will ever need to do anything even slightly complex with the data you're looking at. And then, probably still apply 1NF anyways. Everyone that ever has to use your system is going to hate you when they find this table because you didn't think of the situation that they're interested in. "Why does this query use 12 CTEs and random functions I've never heard of and take 5 minutes to return 20,000 rows?" "You broke 1NF."
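To make that concrete, here's a minimal sketch of the classic 1NF break and its fix; the table and column names are hypothetical:

    -- Breaks 1NF: a list packed into one column.
    CREATE TABLE Student (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      phone_numbers VARCHAR(255) NULL -- '555-0100,555-0101,...'
    );

    -- 1NF: one value per row, queryable and indexable.
    CREATE TABLE StudentPhone (
      student_id INT NOT NULL,
      phone VARCHAR(32) NOT NULL,
      PRIMARY KEY (student_id, phone),
      FOREIGN KEY (student_id) REFERENCES Student (id)
    );

Against the first design, "every student with phone X" is LIKE-pattern gymnastics that can't use an index; against the second, it's a plain indexed lookup.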

2NF is usually incorrect to break. Like it's going to be pretty obnoxious to renormalize your data using query logic, but it won't come up nearly as frequently. If it's really never going to come up that often in practical terms, then okay.

3NF and BCNF are nice to maintain, but the number of circumstances where they're just not practical or necessary starts to feel pretty common. Further, the complexity of the query to undo the denormalization will not be as obnoxious as it is for 1NF or 2NF. But if you can do it, you probably should normalize to here.

4NF and higher continue along the same lines, but they increasingly get into what feel like pretty arbitrary requirements, or situations where the cost you're paying in indexes starts to become higher than the relational algebra benefits. Your database disk usage by table report is going to be dominated by junction tables, foreign key constraints, and indexes, and all you're really buying with that disk space is academic satisfaction.


> Your database disk usage by table report is going to be dominated by junction tables, foreign key constraints, and indexes, and all you're really buying with that disk space is academic satisfaction.

FK constraints add a negligible amount of space, if any. The indexes they require do, certainly, but presumably you're already doing joins on those FKs, so they should already be indexed.

Junction tables are how you represent M:N relationships. If you don't have them, you're either storing multiple values in an array (which, depending on your POV, may or may not violate 1NF), or you have a denormalized wide table with multiple attributes, some of which are almost certainly NULL.
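A minimal sketch of the junction-table approach, assuming hypothetical Student and Course tables already exist:

    -- M:N between students and courses via a junction table.
    CREATE TABLE StudentCourse (
      student_id INT NOT NULL,
      course_id INT NOT NULL,
      PRIMARY KEY (student_id, course_id), -- also serves lookups by student
      KEY idx_course (course_id),          -- for lookups by course
      FOREIGN KEY (student_id) REFERENCES Student (id),
      FOREIGN KEY (course_id) REFERENCES Course (id)
    );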

Also, these all serve to prevent various forms of data anomalies. Databases must be correct above all else; if they're fast but wrong, they're useless.


> Junction tables are how you represent M:N relationships.

Yeah, the problem is that when you get to 4NF+, you're often looking at creating a new table joining through a junction table for a single multi-valued data field that may be a single value a plurality or a majority of the time. So you need the base table, the junction table with at least two columns, and the actual data table.

So, you've added two tables, two foreign key constraints, two primary key indexes, potentially more non-clustered indexes... and any query means you need two joins. And data validation is hard because you need to use an anti-join to find missing data.

Or, you can go with a 1:N relationship. Now you have only one more table, at the cost of potentially duplicating values between entities. But if we're talking about, say, telephone numbers? Sure, different entities might share the same phone number. Do you need a junction table so you don't duplicate a phone number? You're certainly not saving disk space or improving performance by doing that unless there are regularly dozens of individual records associated with a single phone number.
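A minimal sketch of that 1:N version (hypothetical names; one extra table, no junction, duplicate numbers across entities are just duplicate rows):

    CREATE TABLE EntityPhone (
      entity_id INT NOT NULL,
      phone VARCHAR(32) NOT NULL,
      PRIMARY KEY (entity_id, phone),
      FOREIGN KEY (entity_id) REFERENCES Entity (id)
    );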

And if the field is 1:1... or even 90% or 95% 1:1... do you really need a separate table just so you don't store a NULL in a column? You're not going to be eliminating nulls from your queries. They'll be full of LEFT JOINs everywhere; three-valued logic isn't going anywhere.

> Databases must be correct above all else; if they're fast but wrong, they're useless.

Yeah, and if they're "correct" but you can't get it to return data in a timely manner, they're also useless. A database that's a black hole is not an improvement. If it takes 20 joins just to return basic information, you're going to run into performance problems as well as usability problems. If 18 of those joins are to describe fidelity that you don't even need?


Right. But faceting data is also part of what a good database designer does. That includes views over the data; materialisation, if it is justified; stored procedures and cursors.

I've never had to do 18 joins to extract information in my career. I'm sure these cases do legitimately exist but they are of course rare, even in large enterprises. Most companies are more than capable of distinguishing OLTP from OLAP and real-time from batch and design (or redesign) accordingly.

Databases and their designs shift with the use case.


> I've never had to do 18 joins to extract information in my career.

Really? You're not representing particularly complex entities with your data.

I work on a student information system. 18 joins isn't even weird. If I want a list of the active students, the building they're in, and their current grade level, that's a join of 8 tables right there. If I also want their class list, that's an additional 5 or 6. If you also want the primary teacher, add another 4. If you want secondary staff, that's another 5.
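To give a feel for the shape (these table names are made up for illustration, not the vendor's actual schema), "active students with building and grade level" ends up looking something like:

    SELECT s.last_name, s.first_name, b.name AS building, gl.name AS grade_level
    FROM student s
    JOIN enrollment e         ON e.student_id = s.id
    JOIN school_year sy       ON sy.id = e.school_year_id
    JOIN calendar c           ON c.id = e.calendar_id
    JOIN building b           ON b.id = e.building_id
    JOIN grade_level gl       ON gl.id = e.grade_level_id
    JOIN enrollment_status es ON es.id = e.status_id
    JOIN district d           ON d.id = b.district_id
    WHERE es.is_active = 1
      AND sy.is_current = 1;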

The whole system is only around 500 GB, but it's close to 2,000 tables. Part of the reason is tech debt and archaic design from the vendor, but that's just as likely to reduce the number of tables as it is to increase them. The system uses a monolithic lookup table design, and some of the tables have over 300 columns. If they were to actually properly normalize the entire system to 3NF, I have no doubt that it would be in the hundreds of thousands of tables.


> joining through a junction table for a single multi-valued data field

I may be misunderstanding you, but to me it sounds like you're conflating domain modeling with schema modeling. If your domain is like most SaaS apps, then Phone, Email, Address, etc. are probably all attributes of a User, and are 1:N. The fact that multiple Users may share an Address (either from multiple people living together, or people moving) doesn't inherently mean you have an M:N relationship that you must model with schema. If you were using one of those attributes as an identity (e.g. looking up a customer by their phone number), that still doesn't automatically mean you have to model everything as M:N - you could choose to accept the possibility of duplicates that you have to deal with in application code or by a human, or you could choose to create a UNIQUE constraint that makes sense for 99% of your users (e.g. `(phone_number, deactivated_at)` enforces that a phone number is only assigned to one active user at a time), and find another way to handle the rare exceptions. In both cases, you're modeling the schema after your business logic, which is IMO the correct way to do so.
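A sketch of that last option (MySQL-flavored, names hypothetical). One caveat worth flagging: MySQL, and Postgres by default, treats NULLs in a unique index as distinct, so a NULLable deactivated_at would not actually enforce uniqueness among active users; a sentinel value for "still active" sidesteps that:

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      phone_number VARCHAR(32) NOT NULL,
      -- Sentinel '9999-12-31' means "still active"; a NULL here would make
      -- the unique index a no-op, since NULLs compare as distinct.
      deactivated_at DATETIME NOT NULL DEFAULT '9999-12-31 00:00:00',
      UNIQUE KEY uq_active_phone (phone_number, deactivated_at)
    );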

I apologize if I came across as implying that any possible edge case means that you must change your schema to handle it. That is not my design philosophy. The schema model should rigidly enforce your domain model, and if your domain model says that a User has 0+ PhoneNumber, then you should design for 1:N.

> And if the field is 1:1... or even 90% or 95% 1:1... do you really need a separate table just so you don't store a NULL in a column? You're not going to be eliminating nulls from your queries. They'll be full of LEFT JOINs everywhere; three-valued logic isn't going anywhere.

If the attribute is mostly 1:1, then whether or not you should decompose it largely comes down to semantic clarity, performance, and the possibility of expansion.

This table is in 3NF (and BCNF, and 4NF):

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(254) NOT NULL,
      phone VARCHAR(32) NULL
    );
So is this:

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(254) NOT NULL,
      phone_1 VARCHAR(32) NULL,
      phone_2 VARCHAR(32) NULL
    );
Whereas this may violate 3NF depending on how you define a Phone in your domain:

    CREATE TABLE User (
      id INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(254) NOT NULL,
      phone_1 VARCHAR(32) NULL,
      phone_1_type ENUM('HOME', 'CELL', 'WORK') NULL,
      phone_2 VARCHAR(32) NULL,
      phone_2_type ENUM('HOME', 'CELL', 'WORK') NULL
    );
If a Phone is still an attribute of a User, and you're not trying to model the Phone as its own entity, then arguably `phone_1_type` is describing how the User uses it (I personally think this is a bit of a stretch). Similarly, it can be argued that this design violates 1NF, because `(phone_n, phone_n_type)` is a repeating group, even if you've split it out into two columns. Either way, I think it's a bad design (adding two more columns that will be NULL for most users to support a tiny minority isn't great, and the problem compounds over time).
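If it did need to grow, the usual fix is to pull the repeating group into its own table; a sketch along the lines of the hypothetical schemas above:

    CREATE TABLE UserPhone (
      user_id INT NOT NULL,
      phone VARCHAR(32) NOT NULL,
      type ENUM('HOME', 'CELL', 'WORK') NOT NULL,
      PRIMARY KEY (user_id, phone),
      FOREIGN KEY (user_id) REFERENCES User (id)
    );

No NULL padding, any number of phones per user, and the type rides along with the number it describes.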

> If it takes 20 joins just to return basic information, you're going to run into performance problems as well as usability problems. If 18 of those joins are to describe fidelity that you don't even need?

The only times I've seen anything close to that many joins are:

1. Recreating a denormalized table from disparate sources (which are themselves often not well-constructed) to demonstrate that it's possible.

2. Doing some kinds of queries in MySQL <= 5.7 on tables modeling hierarchical data using an adjacency list, because it doesn't have CTEs.

3. When product says "what if we now supported <wildly different feature from anything currently offered>" and the schema was in no way designed to support that.

Even with the last one, I think the most I saw was 12, which was serendipitous because it's the default `geqo_threshold` for Postgres.


I really don't think the collectivist societies are that far ahead. People just invent out groups. India's castes, China's Uyghurs, Japan's castes and treatment of Korea and China, etc. Religious out groups, ethnic out groups, cultural out groups, linguistic out groups, etc. The list is just as long.

This sounds like a big whataboutism, or at least an oversimplification. The opposite of an individualistic society could be western socialism.

> Religious out groups, ethnic out groups, cultural out groups.

Yeah, unfortunately seen this on all corners of the political spectrum, hidden or not.


> Where did the above commenter say “genius?”

It's the transparent subtext. Like, blindingly transparent.

GGP's comment is talking about how CEOs are special or different from the rest of us. That is, that they're not just evil, but that they're evil geniuses. It's just Great Man Theory with a Snidely Whiplash costume.


No, data exfiltration is just as lucrative as crypto.

We are unfortunately long past the point where viruses would frequently be merely annoying.


How do you pay data exfiltration ransoms or purchase stolen data? My take is that if you remove crypto, you will greatly hamper these transactions.

Just about every exploited site I've had to deal with has been running some form of crypto miner.

Sure, because there's no reason not to, and because crypto mining is noisier than data exfiltration.

That doesn't mean it's the most lucrative revenue stream.


More than that, people today seem to be saturated with sarcasm.

It's especially tragic with younger people who seem to have no experience with handling genuine sincerity. They laugh nervously at it, as if they're unfamiliar with how to handle someone saying what they actually think and feel.


It's fully scripted. The hokum is pre-planned.

The questions from the news agencies and their responses are not scripted. I encourage you to listen to the Q&As the astronauts and ground crew had during the mission and judge their character on that. You won't find any public figure / politician with any amount of media training that even comes close to their level of genuine humanism, humility, and professionalism.

> To me it's not clear what the problem is that would require a redesign.

The interface is still bad. Teaching people to use git is still obnoxious because it's arcane. It's like 1e AD&D. It does everything it might need to, but it feels like every aspect of it is bespoke.

It's also relatively difficult to make certain corrections. Did you ever accidentally commit something that contains a secret that can't be in the repository? Well, you might want to throw that entire repository away and restore it from a backup before the offending commit because it's so difficult to fix and guarantee that it's not hiding in there somewhere while also not breaking something else.

It's also taken over 10 years to address the SHA-1 limitation, and it's still not complete. It's a little astonishing that it was designed so tightly around SHA-1 never being a problem that it has taken this long just to allow a different hashing algorithm within the same basic design.


> Well, you might want to throw that entire repository away and restore it from a backup before the offending commit because it's so difficult to fix and guarantee that it's not hiding in there somewhere while also not breaking something else.

I'm not a git expert but I can't imagine that's true.


It’s not; you just need to force push or generate a new key…

Perhaps proving the point here. That's not enough to eliminate the secret; the dangling commit will persist. Though this might be a nitpick, it's rather hard to get it from the remote without knowing the SHA.

> generate a new key

Is absolutely the right answer. If you pushed a key, you should treat it as already compromised and rotate it.


You also need to clear the caches of the remote

Yeah it doesn't seem hard to rewrite the commit history

Of course it's not true. Look into git filter-branch. I had to use it once when a developer checked in a whole bunch of binaries and created a PR which ended up being merged. I had to rewrite the history and delete the files from it; just deleting the files in a new commit would not suffice because they were still in the git history and were taking too much space.
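For anyone needing this today: git-filter-repo (a separate install) is what the git project now recommends over filter-branch for history rewrites. A sketch, assuming the offending path is known (the file name here is a placeholder):

    # Modern route: drop the path from all of history.
    git filter-repo --invert-paths --path assets/huge-binary.bin

    # Classic built-in route, slower and officially discouraged:
    git filter-branch --force --index-filter \
      'git rm --cached --ignore-unmatch assets/huge-binary.bin' \
      --prune-empty -- --all

    # Either way, the rewritten history must be force-pushed.
    git push origin --force --all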

The interface can be independent of the implementation. Under the hood git does everything you need. If learning to use it at a low level isn't appealing, then you can put an interface on top which is more ergonomic.

> Under the hood git does everything you need

No it doesn't. Git is buggy. It also doesn't work for anything that's not a text file. It is unbelievably slow.


> It also doesn't work for anything that's not a text file.

You can define a custom git merge driver to teach git how to handle your proprietary format.

Edit: after a quick google search I found the following curated list, https://github.com/jelmer/awesome-merge-drivers
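For reference, wiring one up is just two pieces of config; the driver name and command below are placeholders, not a real tool:

    # .gitattributes: send *.dat files through a custom merge driver
    *.dat merge=mydriver

    # .git/config (or ~/.gitconfig):
    [merge "mydriver"]
        name = merge driver for our proprietary format
        driver = my-merge-tool %O %A %B
        # %O = ancestor, %A = current version (result is written here),
        # %B = the other branch's version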


Git still doesn't work well with non-text data, including being incredibly slow. There's a reason why game studios use things like Plastic SCM and Perforce.

There may be situations where the git defaults aren't ideal.

I found that for the special scenario of game development git-lfs did the job quite well for me.
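In case it helps anyone, the setup is small (the tracked patterns are just examples):

    git lfs install                  # one-time, per machine
    git lfs track "*.psd" "*.fbx"    # store matching files as LFS pointers
    git add .gitattributes           # the tracking rules live here
    git commit -m "Track art assets with LFS"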

> Git still doesn't work well with non-text data

Seems like you are either mishandling git in your situation or you require another tool (different merge driver or difftool?). But I would argue that in either case git infrastructure is not "buggy" as you suggest, nor does it need a rewrite like the original article suggests.

It works as intended and additionally it provides you with the hooks and possibilities to adapt it to your workflow, for example handling large binary format files.

Perhaps for your use case you would be better off using an alternative, for example OneDrive for Business, Plastic SCM, Perforce, Google Drive, or an internal file server. That doesn't mean that git should be rewritten to fit your needs.

It feels like you want a regular sedan to both race in F1 and carry the same load as a lorry, use a specialized tool for your needs.


I mean, this can go both ways. "Git solves the problems I care about and any problem outside of that is a misuse of Git" versus what I'm stating.

> Seems like you are either mishandling git in your situation

It's not my fault that Git has become the standard for source control even though not all source is text-based. All the tools integrated with Git, like GitHub, diffing, merging, etc. are based upon text being the norm.

> There may be situations where the git defaults aren't ideal.

Certainly, and that's the point.


> Git is buggy

Citation needed on this one. Every problem I've ever seen arise with git came from someone not understanding the model or not knowing all the commands. Those don't make it better, but they don't mean it's buggy either.


I'm a huge fan of lazygit

> Did you ever accidentally commit something that contains a secret that can't be in the repository?

What do I need to do, on top of a git force push and some well-documented remote reflog/gc cleanup, that I can’t find with a single search/LLM request? Are we at the point where we don’t have enough developers who can do this without feeling it’s a burden? Or are we at the point where this level of basic logic is not needed to implement anything production-ready?
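For reference, the flow in question, with the caveat that this only clears your own clone; the remote's reflog and caches are the host's to expire (GitHub, for instance, has a support process for it), and the only real fix for a leaked secret is rotating it:

    # After rewriting history (rebase -i, filter-repo, etc.):
    git push --force origin main

    # Locally, drop references to the old commits and collect the garbage.
    git reflog expire --expire=now --all
    git gc --prune=now --aggressive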


> What do I need to do, on top of a git force push and some well-documented remote reflog/gc cleanup, that I can’t find with a single search/LLM request?

This is a self-defeating argument. You're essentially saying we shouldn't improve something because it can be done with a handful of commands (you already know btw) and prompting an LLM.

> Are we at the point where we don’t have enough developers who can do this without feeling it’s a burden?

The no true Scotsman.

> Or are we at the point where this level of basic logic is not needed to implement anything production-ready?

Not sure how this fits in with the rest honestly.

It was never about whether it was possible. It was about how it's being done. Juniors (and even seniors) accidentally check in secrets. Arguing that there shouldn't be a simpler way to remove an enormous security flaw feels a bit disingenuous.


  gitcli != git
If you want to create (or use) another git client that makes removing a secret easy for you and your team, you are free to do so.

> It was never about whether it was possible. It was about how it's being done.

That's what I was saying originally: no need to change the infrastructure, but you can change how you interact with it.

> Arguing that there shouldn't be a simpler way to remove an enormous security flaw feels a bit disingenuous.

First of all, skill issue: educate your employees. Secondly, this is well considered and a huge part of why git is preferred over older systems like SVN or SCCS, especially in an open source context where you are distributing your code through unknown channels and where the publisher might have moved on.

Perhaps Git is not the best VCS for your situation. But I think that if you try other options you will run into bigger problems; there is a reason git became the standard in the industry.

[0] https://git-scm.com/book/en/v2/Getting-Started-What-is-Git%3...

[1] https://git-scm.com/book/en/v2/Getting-Started-The-Command-L...


> First of all, skill issue: educate your employees. Secondly, this is well considered and a huge part of why git is preferred over older systems like SVN or SCCS, especially in an open source context where you are distributing your code through unknown channels and where the publisher might have moved on.

If your entire argument rests on people being perfect, it's a trash argument.

Implying that accidents don't happen when you have skills is absurd.


No, I’m saying that you can do this without replacing git. You can make it simpler even without replacing git. Aka you just did a strawman, if you are really into these. Also you answered to me in an authoritative way, when even according to you, you don’t understand my comment. You can figure out a logical fallacy name for this. And also of course a nice fallacy fallacy.

Btw, I’m also saying that anyone who cannot find out how to solve this right now with git shouldn’t be allowed anywhere near a repo with write permission, no matter whether you use git or not. At least until now, this level of minimal logical skill was a requirement for being able to code. And btw, regardless of the tool, the flow will be the exact same: ask a search engine or an ML model, and run the commands. The flow has been like this for decades at this point. So those minimal logical skills will be needed anyway.

The main problem is when they don’t even know that they shouldn’t push secrets. You won’t be able to help that with any tooling, at least not at the git level.


> Aka you just did a strawman,

That's not what a strawman is.

> Also you answered to me in an authoritative way, when even according to you, you don’t understand my comment.

No, I didn't understand what referring to production-ready code has anything to do with making mistakes in source control.

> And also of course a nice fallacy fallacy.

You keep using words you don't understand.

> The main problem is when they don’t even know that they shouldn’t push secrets. You won’t be able to help that with any tooling, at least not at the git level.

You're not actually suggesting you become immune to making mistakes after a certain level of experience, are you? That would be insane.


There was a long time when somebody answered me with far-right tactics. In this space they are rarer, for obvious, proven reasons. And before you come saying that you are not far-right: I didn’t say that, and you cannot prove or disprove it anyway. Even on HN, the Overton window has moved towards the far right, as almost everywhere, so the value of a self-claim is not larger than zero, and since a large part of society intentionally uses it to muddy the waters, its value on the internet is even negative.

> That's not what a strawman is.

Let’s see Wikipedia.

> A straw man fallacy (sometimes written as strawman) is the informal fallacy of refuting an argument different from the one actually under discussion, while not recognizing or acknowledging the distinction.

Let’s see the relevant part of your comment:

> You're essentially saying we shouldn't improve something because it can be done with a handful of commands

Once again

> an argument different from the one actually under discussion

And once more

> You're essentially saying

So, the exact thing for which I used the term, because I didn’t say that, essentially. But luckily for us, you just proved that you intentionally misrepresented my comment, and that you intentionally argued against something which is not there. Btw, with that you proved again that you use the fallacy fallacy, and intentionally.

I have yet to see a case where somebody says, “you’re essentially saying”, and it’s not a strawman.

Let’s move on…

> you don’t understand my comment

And immediately after that, you said that:

> I didn't understand

I know that, from your viewpoint, this can be difficult, but maybe you don’t understand something because you don’t understand something. Maybe you didn’t understand a single sentence from my original comment, and that’s why you don’t understand even the connections between them. Maybe you didn’t understand the general meaning of my comment, and thus you picked something which your mind cannot comprehend why it’s there, because you didn’t understand even the basics. You “essentially said” that you didn’t understand my comment in general, by doing a strawman with “essentially saying”, but as you proved, it was intentional, so I have no idea about your real thoughts.

It’s possible that the general meaning of my comment is fine, but there is an error in the specifics. Maybe even the general meaning is problematic. One thing is for sure: we will never know, because you still haven’t asked anything, thus revealing that you don’t even want to know. You proved again, btw, that it’s intentional, because you still haven’t attacked its general meaning.

> You're not actually suggesting

As we could see from the definition, this is a strawman. And it is your first question, so here is the answer: no, and my comment is completely rational regardless of the answer. In other words, if the answer were yes, my comment’s rationality wouldn’t have changed even a bit. And since it’s obvious that the answer is no for everybody, and it’s orthogonal to my comment, this is a strawman. The point obviously wasn’t to get an answer.

There are two options at this point:

- your arguments are not in good faith

- you do this because you lack skills and/or knowledge

In both cases, any further discussion is pointless.

If it’s the latter one, I recommend learning more before you ever feel the urge to use any fallacy’s name again. You’re not there yet, but I’m glad that you are interested in avoiding them. A better way of thinking isn’t to avoid specific logical fallacies, but all of them, even those which don’t have names. Also, using their names won’t ever lead to real discussions in any environment. They can be effectively used only to avoid them, for people who cannot think without logical problems as a principle.

If it’s the former, which you proved almost once per sentence, then I hope we will return to a positive-sum game at some point, one which corrects this behavior with acceptable tools, and that you’ll learn from them. Until then, you will unfortunately keep making the same mistakes, regardless of what I say. I could have proved my points with Lean, and you would still not change a single bit.


> The interface is still bad.

This is not the problem they are redesigning for; they are redesigning the infrastructure. GitHub is a live example of a different interface on top of git, and that is working fine (though some may have their complaints with it), no redesign of git's underlying "infrastructure" needed.

> Did you ever accidentally commit something that contains a secret that can't be in the repository?

This is an inconvenience for secrets (which have become more commonplace since the creation of git), but by my understanding this was a very deliberate choice in the design of git. It guarantees integrity in the distributed source. For example, you can check the hash of the last commit and be sure that your mirror did not inject malicious code.


    git rebase -i <one commit before your mistake>
    git push origin mainline -f

    git log --all --reflog -- path/to/secret-file

Git filter rule

There are currently over 1,000 companies involved in lawsuits against the US government right now even if we restrict ourselves to just tariff lawsuits.

And the government is attempting "corporate murder" on precisely one of them. Wanna guess which one?
