Hacker News

> Sorry where is the non-sequitur?

It is in your original statement (abbreviated):

If there were an easily exploited buffer overflow in <our program>, ambulances and fire engines <for our customers> would never be called to their destinations nor arrive on time.

One doesn't follow from the other. Your statement implies there are bad actors in existence who would actually exploit such flaws in order to disrupt the service. That may or may not be true; therefore it does not follow.

> The previous poster purported that I mustn't have much "real world" experience in coding, and I responded that I and my team wouldn't be entrusted to successfully maintain a program and network that coordinates emergency services first responders over a large land mass and number of people if that was the case.

That's not what I actually replied to, but it's also a non-sequitur. It would be entirely plausible that you could be a reckless band of cowboy coders that happens to be able to produce a working product that is nevertheless full of theoretically exploitable flaws that just happen to not get exploited, because nobody cares. Also, whoever "entrusted" you may themselves be entirely reckless or incompetent.

I'm not saying that any of this is the case, I'm simply saying you can't logically deduce one from the other. Therefore it doesn't really work as an argument.



> One doesn't follow from the other. Your statement implies there are bad actors in existence that would actually exploit such flaws in order to disrupt the service.

Rubbish. If the service in question was directly accessible from a public network and its source code known and published, it would be attacked in less than 5 minutes. That is the assumption that causes millions of dollars to be spent on securing the application and the environment every year. We are not prepared to take that risk, and we secure appropriately and expensively.

> Also, whoever "entrusted" you may themselves be entirely reckless or incompetent.

That is how employment works in the wider world of Internet Widgets etc. but not in critical government regulated infrastructure like emergency services.

Yes, despite our best efforts we are unable to prove 100% that the application has no externally exploitable bugs, so we don't claim that it is bug-free, and we don't go and make it accessible to untrusted sources of input.


> If the service in question was directly accessible from a public network and its source code known and published, it would be attacked in less than 5 minutes.

You just keep going with the non-sequitur, but now you're also moving the goalpost. Your original statement didn't contain anything about "directly accessible from a public network" (which I assume isn't the case) or "source code known and published" (which you said wasn't the case).

And it is still a non-sequitur. Why would anybody expend non-trivial resources to find an exploitable flaw in an open-source codebase just to bring down an emergency service? There's no reasonable way to profit from that. If you're some foreign hacker, you might want to find the exploit, but you wouldn't attack right away. Again, I'm not saying nobody would possibly do it, but one does not automatically follow from the other.

Heartbleed really is the best example of how a critical piece of software infrastructure (including a lot of government regulated infrastructure) had a publicly visible flaw that went undetected (and presumably unexploited) for several years.

> That is how employment works in the wider world of Internet Widgets etc. but not in critical government regulated infrastructure like emergency services.

I don't know man, I'm here arguing with this guy who doesn't seem to grasp a basic concept in logical reasoning, yet he's working on critical government infrastructure.


> Heartbleed really is the best example of how a critical piece of software infrastructure (including a lot of government regulated infrastructure) had a publicly visible flaw that went undetected (and presumably unexploited) for several years.

It's already been discussed in this thread how old OpenSSL is - it dates back to 1998, when the risk of buffer overflows was not as well understood or publicised as it is now. It has also been pointed out that code audits of OpenSSL were carried out as a result, which has led to both forks and patches.

In the context of the new one-line graphics format proposed by the OP in 2019, which had an obvious bug written straight into its specification, this thread has now reached absurd levels.

> Again, I'm not saying nobody would possibly do it, but one does not automatically follow from the other.

Ok. I'll tell my customer we don't need all the firewalls, IDS, code inspection, risk assessment and hardening because zeroname on HN told me it was OK.

> I don't know man, I'm here arguing with this guy who doesn't seem to grasp a basic concept in logical reasoning, yet he's working on critical government infrastructure.

I'm not at all offended by your ridiculous insults. I feel rewarded every day that my team and I are working on something worthwhile, along with the white-knuckled fear that goes with it on occasion, when we worry whether we've missed something.

Let's just both be thankful you don't live in an area where my team provides emergency services then.

Again, have a nice day, "man".


> It's already been discussed in this thread how old OpenSSL is...

That's completely irrelevant to the point I am making. I'm using it as a counterexample to your reasoning that if code for (your) critical infrastructure was published, any obvious flaws in it would almost immediately be discovered and exploited. That's completely orthogonal to how old or poorly written the codebase is.

> Ok. I'll tell my customer we don't need all the firewalls, IDS, code inspection, risk assessment and hardening because zeroname on HN told me it was OK.

Well, I didn't. You're clearly not only unwilling to engage in logical reasoning, you also seem to lack basic reading comprehension.

> I'm not at all offended by your ridiculous insults...

I don't mean it as an insult, but I'm really losing my patience here. Your ego is so tied to being right that you can't admit to having made a little logical blunder there. Logical fallacies are actually really common, there's no shame in them. In fact, I'm prone to arguing the same If-this-then-that fallacies myself.

Either way, if you really care to be right, it doesn't matter what your job is or who gave it to you and how smart and diligent your team is. You just have to get your propositional logic correct.

Simple example:

False: If I leave the door unlocked, people will come and steal my stuff. (Non sequitur)

True: If I leave the door unlocked, a thief will find it much easier to come in and steal my stuff.


From Wikipedia: the Heartbleed bug was introduced into OpenSSL in 2012 and publicly disclosed in April 2014.

The Morris worm was released in 1988.

People are prone to make logical errors and to overlook stuff, even when they are aware of the risks. Thinking is hard, and every little bit of help is useful, either from peers or from tools.

Do not take what zeroname wrote as an insult. He simply pointed out the mistake you made (indeed, what you wrote does not follow: it may be possible, even probable, but it does not follow automatically).

You can look at all of this as peer review, just not of code but of thought and logical argumentation.


"You're clearly not only unwilling to engage in logical reasoning, you also seem to lack basic reading comprehension."

- that is not simply pointing out mistakes; those are insults, and they have no place on HN. I don't think zeroname came off better in that exchange at all, in terms of who was trying to communicate rather than just appear superior. There was a crossed wire, and condescending lectures in basic logic mixed with insults did not help.



