To elaborate on the sibling's compile-time vs. run-time answer: if it fails at compile time you'll know it's a problem, and then have the choice to not enforce that check there.
If it fails at run time, it could be the reason you get paged at 1am because everything's broken.
It’s not just about safety, it’s also about speed. For many applications, constantly checking values at runtime is a bottleneck they can't afford.
Like other sibling replies said, subranges (or more generally "refinement types") are more about compile-time guarantees. Your example illustrates a potential footgun well: a post-validation operation can unknowingly violate the invariant.
It's a good example for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of writing a function that accepts an `int` and returns an `int` or throws an exception, create a new type that enforces "`int` less than or equal to 200":
    class LEQ200 {
        final int value;
        private LEQ200(int value) { this.value = value; }

        static LEQ200 validate(int age) {
            if (age <= 200) return new LEQ200(age);
            throw new IllegalArgumentException("age must be <= 200");
        }

        LEQ200 add(int n) { return validate(this.value + n); }
    }

    LEQ200 works = LEQ200.validate(200);
    // LEQ200 fails = LEQ200.validate(201); // throws at runtime
    // LEQ200 hmmm = works + 1;             // compile error in Java
    LEQ200 hmmm = works.add(1); // re-validates: throws an exception, or could return Haskell's Either / Rust's Result instead
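The Either/Result alternative mentioned above can be sketched in plain Java with `Optional` (the closest stdlib analogue); the `Leq200` class and `parse` method names below are my own illustration, not from any library:

```java
// Non-throwing variant: parse() returns Optional instead of throwing,
// closer in spirit to Haskell's Either / Rust's Result.
import java.util.Optional;

final class Leq200 {
    final int value;
    private Leq200(int value) { this.value = value; }

    // The only way to get a Leq200 is through parse(), so the
    // invariant "value <= 200" holds for every instance.
    static Optional<Leq200> parse(int age) {
        return age <= 200 ? Optional.of(new Leq200(age)) : Optional.empty();
    }

    // Post-validation arithmetic must re-parse, so it cannot
    // silently break the invariant.
    Optional<Leq200> add(int n) {
        return parse(this.value + n);
    }
}

public class Demo {
    public static void main(String[] args) {
        Optional<Leq200> works = Leq200.parse(200);
        System.out.println(works.isPresent());                        // true
        System.out.println(Leq200.parse(201).isPresent());            // false
        System.out.println(works.flatMap(w -> w.add(1)).isPresent()); // false: 201 > 200
    }
}
```

The payoff is that the failure mode is in the return type: callers are forced by the compiler to handle the empty case, instead of discovering an exception at 1am.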
Something like this can be simulated with Java's classes, but it's certainly not ergonomic and very much unconventional. It pays off when you want a lot of compile-time guarantees, since it reduces the risk of doing something like `hmmm = works + 1;`.
This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether it ends up ergonomic or performance-friendly depends on the language's own support.