> And that kind of attitude is why we still have security bugs.
Absolute poppycock. We could all program in completely memory-safe languages tomorrow, with only one signed integer type, and crap programmers would still find a way to write security holes into their programs.
I'm in no way condoning the use of C/C++ forever; the writing is on the wall for those languages, much as I love them.
But programmers have to learn first and foremost to take responsibility; if you're writing code that runs with elevated privileges then BE CAREFUL. If you're writing code that is reading data from an untrusted source (disk, network or otherwise) then BE CAREFUL. Hell, being careful even if you think the data source can be trusted is a good starting point - defensive programming 101.
We cannot blame our tools forever, but we can improve them.
I'd add "adopt an information-theoretic approach" to security analysis. (Which is basically what taint analyzers do.) Think through how systems/components/libraries/functions can and do interface with each other, and try to secure those points. (Make them type-safe, make them strict, report meaningful errors ["expected this but got that" is a million times better than "invalid input"], so they will be easy to maintain and to make even more secure.) Try to extract out these parameters as much as possible so you can avoid impedance mismatches across the interfaces.
Also, checklists. Checklists are good. And an inventory of used components, and their versions. (This makes it easy to do a CVE review from time to time, and then to automate the review eventually, so only the list maintenance will remain manual.)
Defense in depth, but not through obscurity. (There is usually low-hanging fruit: enforce the use of password managers, invest in centralized credential storage, don't overdo password expiration and 2FA. Security training is also a good idea, but the real goal is to nurture a security-aware office/team culture.)
Social engineering [or just plain old laziness] is still a serious threat.
Timeboxing. Set aside 1-2 days every month to work on meaningful security-conscious goals. Try to lay the groundwork for that library upgrade that has been overdue for years, try to make systems reproducible (also good for DR), try to add a few simple validations here and there against local file inclusion (or whatever comes up during the month, or during the checklist review).
Also, accept that maintaining network-facing systems has an inherent ongoing cost. (Unless you want your iToaster to eventually end up as part of a botnet.) Sometimes we have to let things go and accept that some business models (or hobbyist projects) are not worth doing sanely and securely.
That is a nice list and all, and I would add that if you can't use a memory safe programming language then you should look closely at the compiler flags in use as well.
But if you need all that to spot the obvious issue in the OP's original specification ... then wow.
> But programmers have to learn first and foremost to take responsibility;
That's not going to happen as long as all our licenses include something like "This software is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
> if you're writing code that ...
Being careful (I'd add: learning how to be careful) is a good platitude, though you might as well just say so without the ifs. Especially with open source, you don't always know the context in which your program will run. There's the responsibility of the dev, but also the responsibility of downstream users. When companies wrap a CLI program behind a web server, they're asking for trouble.
Not failing on the data parse example above is expected of competent C/C++ programmers. I do think these days if you're going to use such languages outside of well-defined low-security-impact targets like non-networked games you're morally obligated to be better than competent. I can't change what exists though.
Managed languages help so much. I do think Java achieved its mission statement of dragging many C++ programmers halfway to Common Lisp (if halfway is a GC) and preventing many security bugs, but if more people were to go all the way, I think Naggum points to a big obstacle I don't know how to overcome: "What has to go before Common Lisp will conquer the world is the belief that passing unadorned machine words around is safe just because some external force has «approved» the exchange of those machine words."
Everyone makes mistakes, but not if the tools they use don't allow those mistakes. Yes, if everyone working on it were perfect then you would be correct; thankfully, perfection is not a requirement for any profession.
There are parts of some jobs where perfection is required. People working at height don't drop anything (and there are cases where safety lines aren't possible). Surgeons don't get to just say "oops, my finger slipped" when holding a scalpel near your heart.
In "From the Earth to the Moon" [1], someone explains that the main problem of the Apollo 1 fire was that nobody thought to label that test "hazardous". Sitting on the ground was supposed to be safe.
We shouldn't expect perfection when writing C code, but we should be clear that writing C is a hazardous activity, not to be taken lightly. My surgeon is a human being, so I don't expect him to never make a mistake, but he's also a skilled and careful professional, and I do expect him to never make a 101-level mistake while holding a scalpel.
"There are cases where safety lines aren't possible" in programming, too, but such cases should be as rare as possible. Coding C/C++ software in this day and age (or software in any other language lacking extensive safety properties, for that matter) is the IT equivalent of working at height and without safety lines 100% of your working life, purely for shits and giggles.
I've made this point before too. Not only is perfection expected elsewhere, it's routinely achieved. Not always but routinely. My example is a Cirque Du Soleil performance. Seeing the routine lack of screwing up points to some actionable advice too: practice and train, study, have competence rankings.
New tools = New security bugs we still need to find.
If you know your tools, there is less risk that you produce security bugs: you know the side effects of all the commands and functions. If you use a black box and trust it fully, you will (unknowingly) find ways to create new security holes no one ever thought of before.
I'm not saying that we should not innovate, just that we should not rush headfirst into the blue sky, thinking everything is fine just because we use the new shiny thing where we supposedly can't ever make bugs.
Agreed. Though using conceptually better tools (parser generators) usually has real gains. (Even if it makes the system a bit more rigid. You usually can't just put an "if" in the middle of generated code. But this seems like a sane trade off.)
Unfortunately there are a lot of scenarios where C/C++ are still the only practical choices. No other language even comes close to their ubiquity AND performance at the same time. I'm sure the day will come though.
And there are those working on mitigating some of the worst issues with those languages, maybe someday they will bear practical fruit as well.
Even though you can still screw up as a programmer in a better tool, you should still pick that tool if it reduces the security risk by a number of percent over another tool. (As a Swede would say, "Think of the percentage.")
So why doesn't everyone go with the better tool? A large problem is that experienced programmers encourage new programmers to use the older and less secure tools. I guess in a way to stay relevant.
Just this past week there has been an article about C programming almost every day on the front page here at Hacker News.
> Even though you can still screw up as a programmer in a better tool, you should still pick that tool if it reduces the security risk by a number of percent over another tool.
I agree in general, but not necessarily if the "better tool" doesn't run on or generate code for your target platform, or doesn't meet your performance requirements, or memory constraints, or the requirement to interface with other languages via a common ABI, etc.
And, in those situations, you need to BE CAREFUL.
> A large problem is that experienced programmers encourage new programmers to use the older and less secure tools. I guess in a way to stay relevant.
Oh I get it. Blame the older generation who wrote the platforms & tools that gave you a job in the first place. If that doesn't work, blame the tools. Blame anything but yourself for writing shit code. I see.
Sure, there are always practical limitations, but I think that is less of a problem today than it used to be.
We have newer languages, and a lot of languages have also gotten better at interfacing with lower-level libraries.
Experienced programmers can be a huge asset and at the same time a curse; there is no contradiction there. And I'm not arguing for a revolution to throw out all of what has been gained in software, I'm just saying that new projects should leave the old tools behind.
I am of course also guilty of proselytizing bad ideas and writing bad code.