There is so much good stuff going on in projects like Midori that a new systems language (or a massive overhaul of C#) has to come out of Microsoft soon.
The biggest design mistake in Java was the omission of stack-allocated value types; the equivalent mistake in C# was the hard distinction between value and object types. If something can be done on the stack, it should be done on the stack - the biggest lie we were told was that "short-lived objects are cheap".
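A minimal C# sketch of the distinction being described (type names here are made up for illustration):

```csharp
using System;

// A value type: instances live inline - on the stack as locals,
// or inline inside arrays/objects - with no per-instance heap header.
struct PointStruct { public int X, Y; }

// A reference type: every instance is a separate heap allocation
// that the GC must eventually collect.
class PointClass { public int X, Y; }

class Demo
{
    static void Main()
    {
        // No heap allocation: p lives on the stack.
        var p = new PointStruct { X = 1, Y = 2 };

        // Heap allocation: q holds a reference to a GC-managed object.
        var q = new PointClass { X = 1, Y = 2 };

        // The copy semantics differ too: structs copy by value...
        var p2 = p;
        p2.X = 99;
        Console.WriteLine(p.X);  // still 1

        // ...while assigning a class copies only the reference.
        var q2 = q;
        q2.X = 99;
        Console.WriteLine(q.X);  // now 99
    }
}
```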
Dunno - MS seems to be doubling down on C++. They've essentially abandoned the F# team, it seems. They haven't updated the CLR format since adding generics in v2. The C# team seems sort of satisfied, mostly focusing on tooling. After this many years, still no tuples or pattern matching? Come on.
Or maybe I'm in a total echo chamber. Working with a client today, they were astonished to find out that:
var x = new List<int> { 1, 2 };
var y = x;
x.Clear(); // This affects y too - x and y reference the same list.
Absolutely astounded. They also had no concept of local variable scope, thinking they had to manually clear out the List so the next invocation of the function wouldn't have the old data. They aren't fresh devs. They've had professional programming jobs. They've built a successful company based on their code. It does non-trivial work (not just a web UI on a DB) and processes many $/sec. In fact, I'm impressed how successful they are.
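For anyone puzzled by the scope confusion: a local declared inside a method is a fresh variable on every call, so there is nothing to manually clear. A sketch (method and names are made up):

```csharp
using System;
using System.Collections.Generic;

class ScopeDemo
{
    // Each call creates a brand-new List<int>; the previous call's
    // list becomes unreachable and is simply garbage collected.
    static List<int> BuildResults(int n)
    {
        var results = new List<int>();   // fresh on every invocation
        for (int i = 0; i < n; i++)
            results.Add(i * i);
        return results;                  // no manual Clear() needed
    }

    static void Main()
    {
        Console.WriteLine(BuildResults(3).Count); // 3
        Console.WriteLine(BuildResults(2).Count); // 2, not 5
    }
}
```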
So who knows, maybe MS's overwhelming feedback is from people that fundamentally don't understand what's going on, and they're the ones with money?
To be fair, I've seen successful C projects with lines like: buffer = malloc(....); // gets memory
Ridiculous as it looks, I hope it means the guy made sure things worked correctly no matter how ignorant he was of some aspects - which, if true, is a good quality. No matter the tools, it's the properties of the system you build that count.
I think a divide between value and reference types in C# is inevitable. There are semantic differences between value and reference types in C# that I don't think you can elide in a way that's both reasonable for the tooling author and conceptually clear to the end user.
It's mostly an issue of poor escape analysis and the tendency to heap-allocate data that could go on the stack. This isn't a requirement of the language/runtime (I believe), so it could be improved - but maybe some property or feature of the language is what makes escape analysis hard?
I think it's more than that, though, much as it is in something like C++. A struct in C# can be thought of as a C++ class with no virtual methods (where interfaces are handled either through static linkage in a local scope or via boxing, I forget the exact mechanism), and this has different implications for dereferencing and the like. Think about an array of structs, which is laid out internally as a contiguous block of n * sizeof(SomeStruct) bytes, versus an array of class instances, which is an array of references.
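A rough sketch of that layout difference (type names are illustrative, and exact layout is ultimately up to the runtime):

```csharp
using System;
using System.Runtime.InteropServices;

struct Vec3 { public float X, Y, Z; }       // 3 * 4 bytes, stored inline
class Vec3Class { public float X, Y, Z; }   // a heap object per instance

class LayoutDemo
{
    static void Main()
    {
        // Vec3[] is one contiguous block of n * sizeof(Vec3) bytes:
        // element data lives inline, which is great for cache locality.
        var structs = new Vec3[1000];
        Console.WriteLine(Marshal.SizeOf<Vec3>()); // 12

        // Vec3Class[] is an array of references; each slot is null until
        // you allocate a separate heap object for it, and those objects
        // can end up scattered across the heap.
        var classes = new Vec3Class[1000];
        Console.WriteLine(classes[0] == null);     // True: just a reference slot
        for (int i = 0; i < classes.Length; i++)
            classes[i] = new Vec3Class();          // 1000 separate allocations
    }
}
```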
(n.b.: when I write C#, it's for game development, and I am constantly thinking about structs versus classes and I'm very careful about them, so my perspective is probably not the same as your average C# developer.)
I think it was a research project only - i.e., a project from which features would be ported into production tech. I'm sure something found its way into C#, but I'm hoping for more. C++ isn't the horse to bet on...