
Seems like it boils down to using unsigned when you need to treat numbers as fields of bits, and signed when you need to do arithmetic.


Not experiencing undefined behavior is a desirable thing for arithmetic, so unsigned has some appeal for arithmetic too, I'd say.


That seems like a false sense of safety: when doing arithmetic, your program is most likely not going to be handling overflows correctly anyway. Also, with unsigned ints, any code involving subtraction can produce subtle bugs that only surface when one operand happens to be larger than the other. I'd recommend sticking with signed ints for all arithmetic code.


It might be pretty safe if you're only ever adding, multiplying or dividing with other unsigned numbers. Once you start doing subtraction it could get ugly.


> when doing arithmetic, your program is most likely not going to be handling overflows correctly anyway

Well with that attitude!



