> Why can't they be useful for a subset of programmers on a subset of projects?
They can, but that would hardly make a deep impact on the industry. And my question would then be why do you think they'll be useful only for a subset of programmers and a subset of projects?
> Types can be used to encode complicated invariants of software systems, which will automatically be machine checked throughout the entire lifetime of your project.
I know they can, but so can other verification mechanisms that are less intrusive, or even pluggable types, like those that have recently been added to Java 8.
One point of difficulty -- that the article is talking about -- is precisely those invariants that are really hard to enforce, like "every request X will be handled within 300 ms", or "the server will never respond Y if the user has never asked for Z", or "the system will never contact the database if the operator has turned on the offline flag". Similar (though less sophisticated) kinds of guarantees are made and verified by Esterel, without requiring the programmer to supply the proof. Other approaches include behavioral programming[1] (made by the same people behind the research that led to Esterel), which simply lets you specify a rule -- "if the offline flag is on, disable calls to the database" -- that will be enforced no matter how hard lower-priority rules try to access the database.
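To make that last rule concrete, here's a deliberately toy Haskell sketch of the idea (my own illustration, not the behavioral programming framework's actual API, which is much richer): a higher-priority rule can veto events that lower-priority code tries to trigger.

    -- Toy sketch only: a "rule" is just a predicate saying whether an event may fire.
    data Event = CallDatabase String | Respond String
      deriving (Eq, Show)

    type Rule = Event -> Bool  -- True = the rule allows the event

    -- Highest-priority rule: when the offline flag is on, block all database calls.
    offlineRule :: Bool -> Rule
    offlineRule offline (CallDatabase _) = not offline
    offlineRule _       _                = True

    -- An event fires only if every rule, in priority order, allows it,
    -- so lower-priority code simply cannot reach the database while offline.
    dispatch :: [Rule] -> Event -> Maybe Event
    dispatch rules ev
      | all ($ ev) rules = Just ev
      | otherwise        = Nothing

The point is that the rule is stated once, declaratively, and enforced globally -- no types involved.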
Types are just one approach to verification. There are invariants they excel at proving, and others at which they don't do nearly as well. People who research type systems should, of course, explore everything that's possible with them, but software developers should realize that types are not the be-all and end-all of writing verifiably correct programs, and that for some program properties there are better approaches than types.
> They model invariants, help design and allow easy reasoning over large software projects.
I agree in theory. But I also think they may help up to a certain point and then start hurting. The problem is, this has never been put to the test.
> But I also think they may help up to a certain point and then start hurting. The problem is, this has never been put to the test.
You're right that the current cutting edge of types in programming hasn't been put to the test yet. Scala and OCaml and Haskell all get varying degrees of industry use, but it's nothing compared to C++, Java, Python, etc. So it's clear we haven't come anywhere close to the point where the industry uses proper types at large.
I guess my point is this: Why then do you think at a certain point they start hurting? It's not like the industry has even gotten close to using types to their full potential. We've certainly never reached that "certain point" where they start hurting.
> Why then do you think at a certain point they start hurting
Oh, it's just a guess, but partly because of Haskell and its success at being continuously rejected by the industry for two decades now (it was touted as the next big thing when I was at university over 15 years ago). Few if any languages as famous as Haskell have tried so clearly to break out of academia and still seen so little use in the industry.
And the problem is less with types themselves than with the referential-transparency-everywhere and non-strict-evaluation approach. The result is that Haskell allows -- nay, demands -- a lot of reasoning about the program upfront, at the expense of reasoning about it as it runs, with a debugger and a profiler (because of the complicated execution model). I also think Haskell places an unreasonable burden of thinking about the types vs. the algorithm and the domain.
As to Scala, because it's such a "multi-paradigm" language, it's pretty hard to know exactly how it's used in the industry. There are certainly those who use it anywhere from Haskell-like with scalaz to Java-like, only without having to write constructors and getters (in fact, I believe Spark is written closer to the latter style). But even if Scala is to be counted as an advocate for rich typing, its success record is pretty spotty (and, again, I wouldn't necessarily blame that on the type system).
I could, of course, be very wrong. One thing is certain: what a type system can do is a comp-sci question; how useful it is is a psychological question, and trying to argue about the usefulness of type systems (or any programming approach) as if it were a mathematical problem is just silly.
> We've certainly never reached that "certain point" where they start hurting.
I think Haskell is beyond that point, although I still believe a language with a type system as sophisticated as Haskell's might not be (because getting more for the same price might be worth it). By that I mean that Haskell types -- just like the article said -- waste a lot of mental power in the wrong places (i.e. on proving trivial stuff). I don't know what that imagined language would look like, but it would probably have some effect types (and like the author of the article has noted, there's a huge difference between being able to simulate effect systems and effect systems controlling actual effects).
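For what it's worth, here's a minimal sketch of what I mean by simulating an effect system in Haskell today (the MonadDB class is hypothetical, just for illustration): a function's signature either admits the database effect or rules it out, and the compiler checks that at every call site -- but only to the extent that all real effects are actually routed through such classes.

    -- Hypothetical effect class: computations tagged MonadDB may talk to the database.
    class Monad m => MonadDB m where
      queryDB :: String -> m [String]

    -- Polymorphic in *any* monad, so this handler cannot call queryDB at all.
    handleOffline :: Monad m => String -> m String
    handleOffline req = pure ("cached: " ++ req)

    -- This handler's type admits database access, checked at every call site.
    handleOnline :: MonadDB m => String -> m String
    handleOnline req = do
      rows <- queryDB ("SELECT name FROM users WHERE id = " ++ req)
      pure (unwords rows)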
> I also think Haskell places an unreasonable burden of thinking about the types vs. the algorithm and the domain.
When you are encoding your logic in types, the distinction between types and "the algorithm and the domain" becomes very blurred, if the two aren't outright the same thing. I wouldn't want to be programming in any other language if I knew very little about the domain.
But "encoding your logic in types" is often non-trivial and very different from thinking about the algorithm itself. Of course, most algorithms require dependent types (at the very least!) to be fully encoded, so you can't really encode them in Haskell...
[1]: http://www.wisdom.weizmann.ac.il/~bprogram/more.html