
Battle tested in the past != fit for future wars. The nature of systems has evolved. Attackers are more sophisticated, have better tools and probably understand C code better than most who still write it. So our tools need to evolve too. Admittedly rewriting full libs into another language sounds scary, but hey - since when did fear of the future stop it from coming?


I'm unconvinced by the argument that C is unfit for future wars.


It's simple:

1. Memory safety issues are the cause of a very large number of security vulnerabilities (often most of them for projects written in C or C++, depending on the software).

2. Memory safety-related issues have a relatively high probability of being turned into remote code execution, one of the most severe outcomes, if not the most severe.

3. C and C++ projects have been empirically observed to have orders of magnitude more memory safety problems than projects written in other languages do.

4. The additional classes of security vulnerabilities that managed languages tend to foster do not have the combined prevalence and severity of memory safety problems.

So, we would be better served in security by moving to memory-safe languages.


> C and C++ projects have been empirically observed to have orders of magnitude more memory safety problems than projects written in other languages do.

Note that this includes projects in languages that are themselves written in C or C++, which shows that there's some value in confining the unsafe code to a small and well-tested core library (in this case, the language runtime). Honestly, it seems like 50% of the value just comes from not using C strings, since pretty much every other language has its own string library that does not use null-termination.
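For a concrete contrast: Rust's `&str` carries an explicit length instead of relying on a NUL terminator, so an embedded zero byte is just data, not an end-of-string marker (a minimal sketch; `visible_len` is a made-up helper):

```rust
// Rust strings are (pointer, length) pairs - no NUL terminator, so an
// embedded zero byte cannot silently truncate the string.
fn visible_len(s: &str) -> usize {
    s.len() // counts every byte, including embedded NULs
}

fn main() {
    let tricky = "user\0admin"; // C's strlen() would report 4 here
    assert_eq!(visible_len(tricky), 10);
    println!("len = {}", visible_len(tricky));
}
```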


Memory-safe programs require one of:

1) Runtime overhead for some form of GC (D, Lisp, etc.)

2) Rephrasing a program to satisfy a memory constraint checker (Rust)

3) Disciplined memory usage (e.g. the NASA C coding guidelines)

We don't have enough experience with option 2 to know whether it will create new classes of bugs. We also don't understand the knock-on effects of managing memory differently: will functionally identical programs require more or fewer resources, more or fewer programmer-hours, etc.?
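As a small illustration of what "rephrasing to satisfy the checker" in option 2 can look like (a hypothetical sketch, not a claim about any real codebase):

```rust
// The borrow checker rejects holding a reference into a Vec while
// pushing to it, so the imperative pattern gets restructured.
fn double_first_then_push(v: &mut Vec<i32>) {
    // Rejected by the checker:
    //   let first = &v[0];
    //   v.push(*first * 2); // error: cannot borrow `*v` as mutable
    // Accepted rephrasing: copy the value out before mutating.
    let first = v[0]; // i32 is Copy, so no borrow outlives this line
    v.push(first * 2);
}

fn main() {
    let mut v = vec![3, 1];
    double_first_then_push(&mut v);
    assert_eq!(v, vec![3, 1, 6]);
    println!("{:?}", v);
}
```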

Rust may very well be the future, but we don't know for sure yet.

One thing we do know: options 1 and 3 have been available for years, but not widely utilized. What lessons can we learn from this fact to apply to Rust?


> We don't have enough experience with 2 to indicate whether it will create new classes of bugs.

What classes of security bugs could possibly arise from Rust's ownership discipline?


Logic bugs. Failure to correctly adapt imperative algorithms while still satisfying the constraint checkers.

Not all security bugs are related to memory. Many are related to improperly written algorithms (most crypto attacks), or improperly designed requirements (TLSv1).

Even Heartbleed was primarily due to a logic bug (trusting tainted data) instead of an outright memory ownership bug.
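To make that concrete: even with the logic bug of trusting a tainted length, safe Rust turns the out-of-bounds read into an explicit failure rather than a memory leak to the attacker (a sketch with a made-up `echo_payload`, not the actual OpenSSL code):

```rust
// Heartbleed-style shape: the peer claims a payload length, and the C
// code copied that many bytes without checking the real buffer size.
// In safe Rust, slicing with a tainted length cannot read past the
// buffer; it must be handled explicitly.
fn echo_payload(buf: &[u8], claimed_len: usize) -> Option<&[u8]> {
    buf.get(..claimed_len) // None instead of an out-of-bounds read
}

fn main() {
    let payload = b"hat";
    assert_eq!(echo_payload(payload, 3), Some(&payload[..]));
    // Attacker claims 64KB; get() refuses rather than leaking memory.
    assert_eq!(echo_payload(payload, 65535), None);
    println!("ok");
}
```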

Does Rust automatically zero out newly allocated memory? Honest question, I don't know the answer.


> Logic bugs. Failure to correctly adapt imperative algorithms while still satisfying the constraint checkers.

Oh, also: If you're implying that Rust's ownership discipline can create security bugs where there were none before, I consider that a real stretch. I'd need to see an actual bug, or at least a bug concept, that Rust's borrowing/ownership rules create before accepting this.


> Not all security bugs are related to memory. Many are related to improperly written algorithms (most crypto attacks), or improperly designed requirements (TLSv1).

Nobody is saying that Rust eliminates all security bugs. Just a huge number of the most common ones.

> Does Rust automatically zero out newly allocated memory? Honest question, I don't know the answer.

Effectively, yes, in safe code: Rust doesn't blanket-zero heap allocations, but the compiler rejects any read of uninitialized memory, so you can never observe stale allocator contents without `unsafe`.
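Concretely: safe Rust never lets you read allocator leftovers, whether or not the bytes were literally zeroed (a minimal sketch using standard `Vec` behavior):

```rust
fn main() {
    // Allocates capacity, but exposes zero readable bytes: the
    // uninitialized region is unreachable from safe code.
    let v: Vec<u8> = Vec::with_capacity(1024);
    assert_eq!(v.len(), 0);
    assert!(v.capacity() >= 1024);
    // When zeroed memory is actually wanted, it is requested explicitly.
    let z = vec![0u8; 16];
    assert!(z.iter().all(|&b| b == 0));
    println!("ok");
}
```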


> Not all security bugs are related to memory.

This is a problem that will be present equally in all languages.

Perhaps less so in languages with a better type system, but that doesn't affect Rust since there aren't any _systems_ languages with a better type system.


You most definitely know a lot more about code than me, so I'm not challenging you at all. But my contention is that robustness is a function of resilience. C has a class of errors which are hard to spot for foot-soldiers, and sometimes even generals, and which can leave deadly chinks unspotted for a long time. If Rust attempts to do away with those specific types of errors altogether, what's wrong with that? The cost is rewriting code, and if that seems like a challenge worth attempting to some people, I can't fault them. And in the process maybe we will discover more bugs, or maybe something better. It's evolution, no?


It's not a bad thing that Rust addresses these issues. That's good, and essential for newer languages---it wouldn't make sense to not try to solve these problems in a language that intends to be lower-level (like Rust), relatively speaking.

The paper on the Limits of Correctness that I mentioned above does a good job at arguing my point. Even if you rewrote glibc in a language like Coq and formally proved its correctness, that doesn't mean it's "correct" in the sense of behaving like glibc---the specification you proved it against could itself contain logic errors.

So you might gain confidence (or guarantees) in rewriting glibc in Rust, but in rewriting it you potentially introduce a host of new issues.



