Because for a texture size it makes sense: it can never be negative, and if it's 0 then it's an error.
An up-to-date Visual Studio 2017 actually wouldn't let me compile your code because of the unary minus on an unsigned, which it treats as an error, while GCC and Clang both don't warn about it even with -Wall and -Wextra. All three facts astonish me: GCC and Clang for not warning about it at all, and VS for making it a hard error (forcing a cast) rather than just a warning.
Unsigned is sometimes avoided because of:
1. Optimization - unsigned has well-defined (wrapping) overflow behavior, signed doesn't, so the compiler is allowed to assume a signed value never exceeds 2^(bits-1)-1, e.g. that a loop counter never wraps around, and optimize accordingly.
2. Some sort of compatibility, e.g. Java doesn't even have proper unsigned types, just some helper functions; C# (the language) has them but .NET doesn't really, since .NET is 'CLS compliant', which rules out unsigned in public APIs because not all .NET languages have to support them (so List<T>'s indexer and Count both use signed ints).
3. Cramming error codes or special values into a return value or argument in statically typed languages, e.g. many system functions on many OSes return -1 (or any negative value) on error; in Lua, requesting LUA_MULTRET (which is -1) results means 'give me as many as there are' (a concrete number pads missing results with nil, so you can't just pass 999999 to get them all); PHYSFS (a game filesystem library in C) reports -1 as the size of a file that can't be accessed, and so do the LCL fileutils (from a Free Pascal GUI and utils library).
Places where unsigned is used sometimes get around that last use by giving the maximum possible value a special meaning - std::string::find returns std::string::npos, which is exactly that. An easy way to get the maximum value of an unsigned type is to assign/cast -1 to it, which is why it astonishes me that VS2017 marks -1u as an error in its default settings. I can understand it for variables, but for a constant - or at least for -1u, which is a common idiom, just like how the integer literal 0 is treated specially and is plainly assignable to pointers while other integers aren't - I'd expect it to compile without pragmas disabling warnings and errors (then again, they also deprecate and fail to compile stuff like sprintf by default).
And of course for various crypto or bitwise work people use unsigned (Java instead uses int or long plus special functions that manipulate them, or just relies on the fact that Java mandates two's complement for its signed ints). I'm personally not sure which approach I like more - unsigned because it makes semantic sense, or signed because it's useful and has fewer gotchas - and I keep changing my mind, but I usually lean towards ints. Even Stroustrup apparently said not to use unsigned unless you have a specific need, and I totally agree with Sutter's point: needing a size on the order of 2^63 is so insane and beyond the capacity of entire data centers, let alone single files, that at that level a single bit doesn't make or break you - you'll get caught by something else anyway, like the sum of two such file sizes overflowing even unsigned 64-bit:
https://www.nayuki.io/page/unsigned-int-considered-harmful-for-java