It's really a tie between "int", "long", and DWORD (though thankfully DWORD never made it into a language standard).
I would put "char" in the running now that we (thankfully) live in an increasingly UTF-8 world, but at least it's more or less universally understood to mean an 8-bit entity (even though this is not strictly true). It certainly isn't "a character".
long is definitely worse than int. At least you know int is never 64-bit on desktops, and it comes in handy enough for things like logarithms. long being either 32-bit or 64-bit on the same platform depending on the compiler just feels like insanity.
Even programmers who are roughly aware of integral promotion rules may not have connected all the dots to realize the answer is implementation dependent.
If sizeof(long) == sizeof(unsigned int), long cannot represent every unsigned int value, so 1L * 2U is computed in the common type unsigned long (same range as unsigned int), meaning the result is "false".
If sizeof(long) > sizeof(unsigned int), 1L * 2U results in a signed long, meaning the result is "true".
Wow, that is cruel! Somehow I just assumed int would get promoted to long regardless of their size difference. That this isn't the case almost seems like a language defect!