It's interesting to compare exploit tech from today to how simple things were back in '97. In '97, a complex exploit meant you couldn't have lowercase ASCII equivalents in your shellcode. Now, you scan the target for info leaks that will disclose stack cookies, and spray the heap to increase the odds that you'll hit your exploit in randomized memory, and forge stack frames that will return to mprotect or VirtualProtect, and and and.
So much of "security research" over the last 10 years has basically danced around a fundamental fact: if you lose control of the runtime memory of your program, you lose control of the whole program.
If you get lucky, your target won't bother with all these new-fangled protections and will run on WinXP (no ASLR), and the only thing you'll have to contend with is a non-executable stack. Our original exploit for Green Dam used a heap spray only to make writing it easier, and the underground exploit uses a really awful behavior of IE w.r.t. running .NET DLLs -- http://www.milw0rm.com/exploits/8938 . Other than that, there are no defenses to speak of.
In one of my freshman CS courses, one of our last assignments was to read this paper (as well as a more approachable tutorial written by our TAs) and carry this out on a toy architecture (LC3) that we had been working with for the whole course. It was probably the most interesting assignment we had that semester.
Funny how the first example contains a subtle bug: the string is never null-terminated. If the stack wasn't zeroed prior to running that program, you may not only overflow 'buffer' on purpose, you may overflow it a lot further than you'd think by just reading the code.
I remember reading this paper as an undergrad at Rutgers. We had to do a stack-smashing project based on this with different stack protection methods (canaries/ASLR).
This was probably one of the most important things I learned in school.
I remember reading this and finding an error in one of the examples.
In addition to learning about the stack, I also realized that superheroes are human, too. <3
Amongst other things, for users on gcc (>= 4.1), the `-fstack-protector' flag may be enabled by default (many distributions turn it on); by passing `-fno-stack-protector' explicitly, you effectively turn off the compiler's stack canary protection.
Likewise, by setting /proc/sys/kernel/randomize_va_space to 0, you effectively turn off Address Space Layout Randomization (ASLR).
If, by now, you're still unable to produce a segmentation violation, look into Exec Shield's stack markings and similar mechanisms, which are easy enough to identify via a search engine.
Otherwise, it should be trivial to overflow a stack-based buffer via the conventional techniques, provided the code lacks bounds checks.
Stack-based buffer overflows pose as big a problem now as they did 13 years ago, and probably more so.
The flags for re-enabling an executable stack (i.e. disabling the non-executable stack protection) are:
"-Wl,-z,execstack" "-Wa,--execstack"
I also added -U_FORTIFY_SOURCE in the Makefile for the exploit project that the security course I'm TAing is currently working on. Not sure if that was entirely necessary.
Look at the output of `objdump -d prog | grep call | grep _chk | wc -l'; if the value returned is zero, you haven't got FORTIFY_SOURCE enabled, and adding -U_FORTIFY_SOURCE would be unnecessary.
However, this is dependent on the implementation's restrictions and defaults.
That's limited to the Microsoft platform for now, isn't it? Or does the term also apply to Java?
One of the things that I've been wondering about, that maybe you can answer, is this: if languages such as JavaScript, Lisp, Python and so on (as opposed to, say, C) treat functions as first-class citizens, doesn't that open up a completely new can of worms with respect to security?
I guess that is a Microsoft term, but basically, if you can't cast an integer to a pointer, you have managed code, and you have eliminated many classes of very severe security vulnerabilities. (I assume, of course, that code like "foo[42]" is logically two unsafe typecasts; pointer to integer, some integer math, and then integer to pointer. The third operation is the security problem.)
I don't think managed code is vulnerable to any additional types of exploits, other than holes in the runtime. But remember, if there is a hole in the runtime, you patch the runtime and Every Program Ever Written gets that fix. Everyone knows how to avoid buffer overflows in C, but sometimes they mess something up and the app becomes vulnerable. The only way to fix that is to find the problem in the app and fix it... and hope that you got all of them this time.
There are plenty of ways managed code can be insecure, of course. I am trying to get root access on my Archos media player, and it is pretty easy because of their poor programming. There is a bash script running as root that downloads a named file from the Internet and puts it in a directory. Except oops, I control the Internet connection and the content that script downloads. I also control the filesystem that it writes to. So while I don't have root, I can get the system to overwrite any file I want, including /etc/shadow, and now I do have root. (Except, the filesystem is read-only, so this doesn't actually work. They got very very lucky. But their code is horribly crappy C, so it should be easy to exploit when I feel like spending time on it.)
A strange question. Almost every language we use today is "managed", by the Microsoft definition; "Managed C++" is a Microsoft term for bare-metal C++ with extra runtime protections. Of course Java is "managed".
Yes, "managed C++" is very poorly managed. You can still do things like:
char foo[42];
foo[-4] = 0xcafebabe;
and watch your program crash and burn. (Sometimes, though, you get an exception you can catch with try{}/catch{}, instead of an actual segv. It would be hilarious... if only the app I was working on did not use this behavior as part of its string-parsing routine...)
Managed C++ isn't really designed to bulletproof C++; it's designed to make it possible to field C++ code without allowing attackers to upload their own programs into your program's memory.
Kicking Managed C++ for not being C# is a bit of a cheap shot.
Still subject to attacks on the management system, which is itself written for the existing architecture. A true Harvard architecture, rather than Von Neumann, might help, since it discriminates between code and data.
There was a frantic race to get the first overflow out after 8lgm capped off their run of zero-days with an announcement of a Sendmail 8.6.12 remote that relied on a syslog() overflow (yes, in 1995, syslog(3) had an overflow). I was sitting next to Pieter when he wrote part of the tutorial you linked to. I was pretty young (maybe 19?) but even so, it was a pretty electric time to be involved in security.