From where I sat in Windows Security (I wrote much of the BitLocker UI), here are a couple of anecdotes:

I remember one of the base team architects laughing and bragging about how and why they killed managed code and banned it from the OS. The technical reasons were valid, but the attitude was toxic. I think most or all of the generation of architects that had come from DEC have now moved on, but they also took tribal knowledge with them. I wonder how much they were encouraged to find other things to do because of this type of attitude (which not all of them had, btw).

Moving on

The Avalon team appeared to be drinking their own Kool-Aid. They had charts on the wall showing their performance goals and progress, but all the performance measurements on those charts were annotated as "estimated". I remember an internal alpha whose execution time grew polynomially with the number of elements in the logical tree. It felt like the design was good, but the correspondence of the implementation to the design was missing.

When Avalon was booted, we were told to write UI in a broken framework called DirectUI, which had a convoluted history, an arrogant maintainer, and weirdly mismatched documentation. Much was documented that wasn't implemented, and the standard answer to missing features was "add it yourself"; but if I pushed a code change to shell code (where it lived in the codebase), it would typically cause a forward integration from winsec to base to be rejected.

Good times



> had a convoluted history, an arrogant maintainer, and a weirdly mismatched documentation. Much was documented that wasn't implemented, the standard answer to missing features was "add it yourself"

Sadly, this is how some open-source projects are run as well.


Worse, I've seen open-source project maintainers tell mere users (as in, non-programmers just using the software) to 'add it yourself'. OK, the maintainers can't really know the user's background, but still, such an attitude raises some questions.


I also have this problem with the "where's your pull request" trend lately. Yes, I use open-source project X. When I detect a bug, I'll definitely try to isolate the cause and reduce it to a minimum, and then log a GitHub issue with clear instructions on how to reproduce it. As a developer, I feel that this is massively helpful; if only the bug reports I get in my professional life were like this, instead of "yeah, there was an error - I have no idea how".

But no, I'm not going to put in 30+ hours to download, build and try to figure out where the bug is in the actual code. There are people who are in a better position to do that. If everyone who has a car problem had to submit a diagram to the factory on how they would change the car's design, we'd all be spending a lot of time doing stuff we're not good at.


If someone sends a "Please add X, KTHXBYE", then I can see why a simple "Add it yourself" might be the response.

I offer here a response that you can apply as a mental sed stage when reading mail from people who work for free and have little time:

Add it yourself -> Thanks for your suggestion. I will make a note of it, but it is unlikely that I will have the time and resources to implement this. You are more than welcome to submit a well-tested patch.


To be fair, most OSS maintainers are doing the work for free. If they don't see the priority in the feature you want, it's not a bad thing for them to ask you to do it yourself.


From following multiple blogs and articles, that was the opinion I had formed regarding Longhorn's failure: sabotage caused by internal politics between Microsoft teams, especially WinDev and DevTools.

Now with .NET Native, C++/CX and UWP, one could say the design of Longhorn has been vindicated.

The COM-based infrastructure could already have been a thing, had there been a desire to collaborate during Longhorn's development.


The summary of the managed-code-in-the-OS debacle is probably:

Rarely do you get something for free, and assuming that this time is different is more likely hubris.

And when you do get something for "free", it's more about making trade-offs that no one notices (JIT) or doing incredibly clever things in a layer abstracted enough that people won't have to concern themselves with what you did (language and compiler design).


You are forgetting a very important factor: politics trumps technical achievements.

As a side note, all modern mobile OSes have languages with runtimes, whether AOT- or JIT-compiled, but they all required architects with the guts to push them through.


OSes with managed runtimes and OSes that are built on managed runtimes are two different things (viz. the spelunking Android had to do in order to get rendering performance down to acceptable latencies).


Objective-C also has a runtime.


We're talking about two different things. I'm talking about OS components being written in managed vs unmanaged code. You're talking (I believe) about whether or not an OS has a first-class managed runtime for apps.

The author's assertion is that blindly using managed code in performance-critical components is a stupid idea, and I agree.

From the article, WinFS (the filesystem), Avalon (the UI compositor and renderer), and Windows Communication Foundation (networking) were all rewritten in the then-new/unstable C# for Vista.

Given the fate of those projects, I'd say history bears it out as a stupid idea.


NeXT device drivers were written in Objective-C; check the Driver Kit API.

The projects were killed for political reasons, not technical ones. Especially after what was being done with Midori, which was also killed by management in spite of its technical achievements.


Objective-C is a superset of C, though. If you avoid the (dynamically dispatched) message passing, and if you construct/destruct objects manually, you get the same performance as C. It's trivial to move the performance-critical bits into the pure-C subset of Objective-C.

You can't really do this with a managed (VM) language.
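
To make that concrete, here's a minimal sketch of the pattern (the names are hypothetical, not from any real codebase): the hot loop lives in the pure-C subset, so there's no objc_msgSend dispatch or retain/release traffic on the fast path, and an Objective-C method simply calls into it.

    /* Hypothetical sketch: the hot path is a plain C function over a plain
       C struct, so calls are statically dispatched and inlinable, just like
       any other C code. An Objective-C method would simply call mesh_scale()
       on a buffer it owns; only the cold, convenient parts stay
       object-oriented. */
    #include <stddef.h>

    typedef struct {
        float x, y, z;
    } Vec3;

    void mesh_scale(Vec3 *vertices, size_t count, float factor)
    {
        for (size_t i = 0; i < count; i++) {
            vertices[i].x *= factor;
            vertices[i].y *= factor;
            vertices[i].z *= factor;
        }
    }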


> check Driver Kit API.


I remember when managed code was killed. We could never get hangs and quality issues to go away. There was an amazing long write-up on the fundamental issues. They were unfixable, so it had to go.

Everyone trying to drive to a stable release was happy to see it go, because otherwise our job was impossible.


Can you talk more about that era? I'm really interested.


People rarely talk about what went well in Vista.

There was an effort to componentize Windows. This was extremely hard to do; while the kernel itself had a very clean architecture, the layers on top were less so.

"Longhorn Basics" this was a list of fundamental stuff that every team had to either cover or explain why they didn't (Accesibility, I18N, security, etc.) this was a great way of making sure that these cross cutting concerns where handled consistently across an enormous team

Testing: it may be hard to see this from the outside, but the testing was off the hook. Hardcore static and dynamic analysis ran against everything.

Edit: also BitLocker!


> testing

If you look at Vista as a hardware-compatibility beta for 7, I think it was a resounding success. They pushed out an operating system with a new, incompatible driver model, had to wait for driver manufacturers to catch up (giving the impression that Vista was the problem), and then, by the time every driver was rewritten, 7 came off as a godsend. 7 wouldn't be 7 without the "disaster" before it. It also let them use home users as a giant testing base for enterprise customers.

And don't forget UAC, which broke a ton of code that expected admin rights.


UAC was a huge deal for me. I had to write UI that needed privilege in an unprivileged process.
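
For anyone curious what that looks like in practice, the usual post-Vista pattern is to keep the UI unprivileged and launch a separate elevated helper only for the privileged operation. A rough sketch, with a made-up helper name; the real BitLocker UI may well have used a different mechanism (such as a COM elevation moniker):

    /* Rough sketch of launching an elevated helper from an unprivileged UI
       process. The "runas" verb triggers the UAC consent prompt; the helper
       executable name is hypothetical. */
    #include <windows.h>
    #include <shellapi.h>

    BOOL RunElevatedHelper(const wchar_t *args)
    {
        SHELLEXECUTEINFOW sei = { sizeof(sei) };
        sei.fMask        = SEE_MASK_NOCLOSEPROCESS;
        sei.lpVerb       = L"runas";                 /* request elevation  */
        sei.lpFile       = L"privileged-helper.exe"; /* hypothetical name  */
        sei.lpParameters = args;
        sei.nShow        = SW_HIDE;

        if (!ShellExecuteExW(&sei))
            return FALSE;          /* user may have declined the UAC prompt */

        WaitForSingleObject(sei.hProcess, INFINITE);
        CloseHandle(sei.hProcess);
        return TRUE;
    }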


> And don't forget UAC, which broke a ton of code that expected admin rights

What do you mean?


Prior to Vista, a lot of code was written on the assumption that the process would be run with administrator privileges. While this wasn't the original security model for Windows NT, 16-bit Windows and Windows 95 et al. didn't have a security model as such, so code that had been written for those operating systems was particularly problematic.

It was definitely the case that a large number of Windows users, possibly a majority, would have their user account be a member of the Administrators group.

UAC changed the way processes were started to use a lower privilege level by default, or to pop up a permissions dialog if higher permissions were needed (based on a manifest associated with the executable).

In parallel, the code samples on MSDN that showed how to detect whether the current process had admin permissions were subtly incorrect, so programs that had been written to do the right thing depending on privilege level would fail unexpectedly on Vista.
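
For reference, a hedged sketch of the kind of check later samples recommended: asking the token itself via CheckTokenMembership against the BUILTIN\Administrators SID, rather than walking the token's group list by hand (which could mishandle deny-only entries and UAC's filtered tokens). This is an illustration, not the exact MSDN sample:

    /* Sketch: ask whether the calling thread's effective token is a member
       of BUILTIN\Administrators. Under UAC this returns FALSE for a filtered
       (non-elevated) token even if the user is in the Administrators group,
       which is usually what callers actually want to know. */
    #include <windows.h>

    BOOL IsRunningAsAdmin(void)
    {
        SID_IDENTIFIER_AUTHORITY ntAuthority = SECURITY_NT_AUTHORITY;
        PSID adminGroup = NULL;
        BOOL isMember = FALSE;

        /* Build the SID for BUILTIN\Administrators. */
        if (AllocateAndInitializeSid(&ntAuthority, 2,
                                     SECURITY_BUILTIN_DOMAIN_RID,
                                     DOMAIN_ALIAS_RID_ADMINS,
                                     0, 0, 0, 0, 0, 0, &adminGroup))
        {
            if (!CheckTokenMembership(NULL, adminGroup, &isMember))
                isMember = FALSE;
            FreeSid(adminGroup);
        }
        return isMember;
    }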


Very interesting! I feel like all the OS components are now written in DirectUI too. Did it turn out to be the way to go, or did it happen because it was shoved down everyone's throat?


AIUI, DUI was the standard toolkit for first-party Windows UI from XP through 8.0 (this includes the "Metro" shell components in that release, like the Start screen). Starting with 8.1, new components have been written using the same XAML toolkit that's promoted for third parties, but there's still lots of older DUI stuff in the OS, like File Explorer and the taskbar.


I left the Windows division shortly before Vista shipped, so I couldn't say. But from native code, the choices were to do things directly on the Win32 API or to use DirectUI, which, for all its faults, was much easier to use and produced much more modern UI/UX.

There were also the obvious advantages of everyone using the same technology. At the time, the look and feel of a Windows release was a closely guarded secret until fairly near the release, so the team that owned that experience needed to be able to apply that theme to everything fairly quickly.


Ahh, DUI. I had forgotten about that.



