A lot of 'real' (cough) hackers like to look down on MS, but Raymond Chen is one of those guys I'd hate to get into a technical argument with: he has experience, is extremely sharp, and is very sarcastic. Those three attributes make Old New Thing my favorite MS blog, even though I don't write Win32 anymore.
If anything, Windows' level of backward compatibility is a giant cautionary tale: enable poor behavior from devs, and it will proliferate. You cannot trust app devs to do the right thing; they need to be forced to, whether by gatekeepers at app stores or by OS restrictions. It is a tragedy of the commons. Whether it's inane programs inserting themselves into the systray, 'preloaders' for bloated apps (which slow startup), browser extensions, Explorer add-ons, or other garbage, app devs still seem to do a fantastic job of gunking up a Windows install.
This is why it's a bit of a blessing that webapps can't do much; because the more powerful they become, the more annoying and inane they will be.
I'm really amazed that Windows has maintained substantial binary compatibility for so long. To be honest, when I see "one way they could've done it was by squeezing the flag into the one byte of padding between widgetLevel and needs_more_time" I think nooope, nonstandard, can't do it, not my problem. Worrying about such things constantly throughout huge projects like MSVC over decades of time must add a whole new level to the difficulty of those engineers' jobs.
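For anyone who hasn't read the post, here's a rough sketch of the kind of hack being described; the member names come from the quote, but the types are invented for illustration:

    // v1 of a hypothetical public structure: the compiler inserts padding after
    // widgetLevel so that needs_more_time is naturally aligned.
    struct GizmoV1 {
        char widgetLevel;        // offset 0
        /* 3 bytes of padding */
        int  needs_more_time;    // offset 4
    };                           // sizeof == 8 on a typical ABI

    // v2 sneaks a new flag into one of those padding bytes. Every offset that
    // binaries compiled against v1 depend on, and sizeof itself, stay put.
    struct GizmoV2 {
        char widgetLevel;        // offset 0, unchanged
        bool is_urgent;          // offset 1: previously padding
        /* 2 bytes of padding */
        int  needs_more_time;    // offset 4, unchanged
    };                           // sizeof still 8

    static_assert(sizeof(GizmoV1) == sizeof(GizmoV2), "layout must not change");

A static_assert like the one at the end is the usual way to keep yourself honest about this kind of thing across releases.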
On the other hand, it's a service to the user that old software keeps running. People who once invested in a copy of Photoshop don't have to pay for upgrades they don't need. Or, in my case, I still use Ecco Pro to outline ideas, which saw its last update in 1997, is orphaned without source, and for which I've found no alternative that matches my exact flow.
That's Windows' strength as well as its weakness. For example, I've heard that the Windows Shell team (of which Raymond Chen is a proud member) has to carry the baggage of backward compatibility, a heritage code base, and so on, simply because some forsaken customer might be using a particular feature. It's really hard to innovate in an environment like that.
Yeah, you can't simultaneously say "Developer, don't do this!" and then "Well, to maintain backwards compatibility with developers who did it, we're going to let you do it. BUT DON'T DO IT!"
MS is in a tight spot. They have this maintenance mess because of their otherwise admirable dedication to backwards compatibility.
Microsoft's dedication to backward compatibility is why Windows ruled the desktop world for several decades.
Apple's utter contempt for backward compatibility is why people who care about their sanity hate developing products for OSX. Which API is Apple going to deprecate and screw me over with today?
Microsoft always went above and beyond the call of duty to support hardware and software (with runtime code modification and more). This allows programs that should have failed epically to run flawlessly on everything from Windows 95 to Windows 7.
If Android even attempted a fraction of what Windows did, fragmentation would not be a problem. Android is a perverse mix of Apple and Microsoft though. It's a pain in the ass to maintain products for Android and it's even worse for iOS.
> This allows programs that should have failed epically to run flawlessly on everything from Windows 95 to Windows 7.
The "upgrade through every version of Windows" videos show Reversi and MS-DOS Executive running on Windows 8. You do have to run 32-bit Windows, though, to get 16-bit compatibility.
I know there were a couple of pretty significant glibc upgrades, and IIRC the binary format changed at some point from something earlier to ELF. Hrm. "something earlier" seems to have been the a.out format, based on Eric Youngdale's 1995 Linux Journal article:
While the kernel developers never break compat, userland developers are much happier to, which leads to all sorts of issues. Even if your a.out file is statically linked, it might depend on a service that's since been replaced.
This is why, when it comes to old (2000-era) Linux games, it's much easier to just run the Windows version in Wine than to go through all the hoops of gathering old libraries and getting the sound working.
Old Linux games use OSS, which you may be able to get running. They may also require obsolete XFree86 extensions for direct framebuffer access, which modern hardware will not allow.
If you are extra unlucky, they may require direct console framebuffer access, and unless you also have the old hardware with old drivers, you are not going to get that working on a modern system.
>Apple's utter contempt for backward compatibility is why people who care about their sanity hate developing products for OSX. Which API is Apple going to deprecate and screw me over with today?
You mean like the countless new API versions and frameworks that MS introduces year after year as "THE" way to write Windows programs?
Sure, they might retain binary compatibility, but if you want to future-proof your stuff and continue to get updates and changes, you have to jump to the new shiny.
This is compounded by the strange fact that Microsoft's own development tools produce programs which you cannot give to other people to run on a standard Windows install.
If I compile on Linux, I link against the libstdc++ version du jour and the binary will just run on a recent distro. If it doesn't, you end up in the same spot - you need to install the appropriate .so version, but most of the time things just work.
So Microsoft's stuff links against a system DLL, but I'm not allowed to do that because they can't guarantee full backwards compatibility, so they give me explicitly-numbered DLLs to link against. So why aren't those part of the standard install? It comes on a DVD! It has gigabytes of stuff, surely a few megabytes more won't hurt. It even comes with the .Net runtime and you'll get the latest one pushed to you by Windows Update. It's almost as if they want to push people towards writing C# instead of C++, which due to being a managed language doesn't end up in the situation where you're writing to strange private struct fields... oh...
Just imagine you're linking against the MSVC10 runtime and want to distribute your program to people who run Vista. How could the runtime even be part of the system? It didn't exist back then.
So you have to bundle your dependencies anyway, and you need to do that with everything else you use as well. The OS does nothing else: it ships with the dependencies it needs. If there was no program in a standard Windows install that used the C++ runtime (very unlikely, I know), there probably wouldn't be a C++ runtime distributed with it (well, apart from that pesky problem Raymond acknowledges where people link against DLLs included in the system that are not system DLLs – probably the same reason why the VB6 runtime is still part of Windows).
So why do I need to distribute it? Why can't it be part of the standard install machinery "oh btw my program needs MSVCR10.DLL be a dear and fetch it from Windows Update"? Heck, this could be embedded in a manifest inside the .exe - "I need these MS libs with these versions" and when you double-click Explorer could parse that, check that those are installed and if not launch a "this program requires additional components installed from Windows Update, please wait".
And I don't want them to ship future versions of the runtime that don't exist yet, just the current ones. So when I build a .exe and give it to someone on the same OS they can run it.
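For what it's worth, half of this machinery already exists: a manifest embedded in the .exe can name the CRT assembly it wants, and WinSxS will bind to it if that version is installed; it just won't fetch it from Windows Update for you. A sketch of how a VC9-era project declares the dependency (treat the version and publicKeyToken values as illustrative rather than gospel):

    // main.cpp: embed a manifest dependency on the VC9 CRT side-by-side assembly.
    // MSVC's own headers inject something very similar automatically.
    #pragma comment(linker, "/manifestdependency:\"type='win32' name='Microsoft.VC90.CRT' version='9.0.21022.8' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b'\"")

    int main() { return 0; }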
What's bizarre is that it's easier to create a .NET program that runs on every supported version of Windows without having to worry about installing the prerequisites.
Simply compile to .NET 3.0. You can do this even on Visual Studio 2013. The resulting program will run on Windows Vista and Windows 7 without any install. It will also run on Windows 8 and 8.1, which bundle .NET 4, because Windows will automatically install .NET 3.5 when your program attempts to run.
This scenario doesn't work for Visual C++ programs. Each new version of Visual C++ ships with a new runtime library, and it's neither bundled nor automatically installed by Windows.
In other words, .NET executables are more portable than Visual C++ executables! Talk about bizarre.
They could set up Visual Studio to not compile code that does X, so that current binaries that do X still work, but you can't ship any new versions of your product until you stop doing X in it.
This is why I'm against the commonly praised "be conservative in what you send, be liberal in what you accept" principle. Being liberal in what you accept only gives you compatibility nightmares in the future. Another perfect example of this is HTML, and it is the reason why every browser has to reimplement all the bugs of Internet Explorer 6 in order not to "break the internet". Really, why should a browser try to correct a corrupt document with trailing tags and open attributes? Beginner developers/designers will simply think "looks good on my machine, deploy and forget" and then we are stuck supporting that kind of corruptness forever.
"But, of course, if that had happened, maybe the web would never have taken off like it did, and maybe instead, we’d all be using a gigantic Lotus Notes network operated by AT&T. Shudder."
I'd even go as far as to suggest the opposite "be liberal in what you send [1], be conservative in what you accept (and fail as noisily as possible)".
[1]: by this I mean that it's ok if your "liberal" output (by "liberal" I usually mean "assuming newer features not yet widely adopted in other apps it needs to talk with") forces the user to upgrade other components to the latest versions in order to use yours, because if you're in a choice of "f_g the downstream guys" or "making trouble for the upstream guys", you should always choose "f_g the downstream guys", because here "f_g" them only means the minor inconvenience of forcing them to upgrade to decent versions and moves everything forward in the process, and has the added side effect of forcing them to also do their damn security upgrades :)
You must be my doppelganger, because I've been saying exactly this for some time. Internet Explorer caused so much trouble partly because it accepted so much bad HTML. It would close tags for you, even.
IE didn't start that, it had to accept that shit because everything else did long before it. I'm not even sure what browser started it. I'm positive that the first graphical browser I used (Cello) did, though.
I'm not even sure it was really considered a bad thing at the time.
> P: Paragraph mark. The empty P element indicates a paragraph break. The exact rendering of this (indentation, leading, etc) is not defined here, and may be a function of other tags, style sheets etc.
To wit, the well-formed, "good" example:
<h1>What to do</h1>
This is one paragraph.< p >This is a second.
< P >
This is a third.
Yes, with spaces and alternating case. This was the standard before IE6 came out, so it's not that crazy that they closed up some bold and italics tags. I'm not entirely sure if there were (are?) any other SGML dialects that were quite as loosely defined as early "HTML" was.
I think IE6 did a lot of awful things, but the only real "damage" done was in borking the CSS implementation (especially the box model). The fact that it rendered crap HTML sort-of-ok wasn't all that bad.
It was the crazy things it did with, let's say... specially crafted HTML that led to a lot of problems. And MS FrontPage's insistence on adding COLOUR="#000000" (black) to all documents, and assuming BACKGROUND was set to white, did a lot to break semantic mark-up.
Oh yeah, I'm not saying IE6 didn't do a lot wrong, just that this wasn't one of them.
But I also think it's worth remembering that IE5/6 also featured a lot of early experiments in things that are now considered essential components in the web. XMLHttpRequest was a huge advancement, and while ActiveX was obviously crappy it was at least part of an attempt at creating a more dynamic web.
The automatic tag closing behaviour is even specified now.
But look at it another way: imagine browsers accepted only valid HTML, and every browser had some bugs in what it considered valid. Writing a document so that it displays at all might already be a challenge. Add to that that not everyone is on the latest browser version (that was probably even more true in the 90s) and you get all the current pains, just with worse symptoms. I'm not really sure that's better.
I may be wrong here, but isn't implicit tag closing a consequence of HTML deriving from SGML? Mandatory tag closing is something that came with XML, AFAIK.
Yes, but then nobody implemented the SGML definition of HTML correctly. According to the standard, browsers should have parsed
<a href=foo/bar/baz>qux</a>
as
<a href="foo">bar</a>baz>qux</a>
since it did not only include optional closing tags, but also short tags. (This is incidentally why several SGML-based HTML validators used to complain about "end tag for element A which is not open" in the presence of href attributes without quotes.)
IETF groups continue to perpetuate this junk, too. It opens up security holes, as you can end up with a "confused deputy" situation. A proxy might read certain header fields one way because the spec encourages the proxy to "infer" the "meaning" of creative headers. But a server behind the proxy might interpret things differently.
I love Raymond Chen's Old New Thing, and also Mark Russinovich's Sysinternals - because of these two (and a real job involving Windows) I actually enjoy it when I can solve the next "weird" Windows problem some of my coworkers have (but then again, I don't know as much as I wish I did - still, it's fun overall).
And while I disagree with Raymond on the MSVCRT.DLL topic, I can also see his point (and I might've taken it, if I were working for Microsoft).
> This is why it's a bit of a blessing that webapps can't do much; because the more powerful they become, the more annoying and inane they will be.
They can already do enough. My browser could be "my agent on the internet" but instead it's become "in control of the enemy, something I must neuter" to stop some website owner from ruining my computer's context menus, clipboard, playing unwanted sound, downloading and playing/displaying unwanted noise or adverts, taking me away from pages I was looking at, intercepting my mouse clicks and neutering them.
None of these things should ever have been at the control of the website.
This is part of what's driving the rise of feudal app store models. Users like them too-- at least most users who aren't concerned with (or don't understand) things like openness and freedom. They prune the ecosystem of shitware to at least some extent.
Personally I think OSes need to evolve toward a model where apps are more strictly containerized, something like Docker (Linux) but for everything. The user should still be able to admin their own machine, but this permission should be forbidden to apps without explicit user approval (and a big red warning).
That's the way OSX is RIGHT NOW. As a User I can run any program I want and mucky-muck my crap up. But Apps from the Mac App Store live in neat little bundles and are forbidden to do the mucking around in OS bits. So the OS is an "appliance" that I can just replace at a whim. If I go off the tracks to load apps that NEED to overwrite OS space, then I know who did it.
Personally, I buy App Store versions whenever possible and I have next to zero problems with my Macs, an order of magnitude fewer than even with my locked-down PC at work. VERY few apps need to color outside the lines, and I like the push for them to explain themselves or get blasted by the next OS update.
I wonder if we will see app stores move more towards the package manager model we see in most Linux distros, where users can easily add third party repositories, remove first party repositories, and have dependencies handled automatically.
The discussion in the comments is interesting. MinGW, the compiler for VLC, LibreOffice and most other FOSS projects on Windows does exactly what Raymond says not to do. In fact, the entire purpose of MinGW is to make GCC able to target msvcrt.dll. Developers and users love this, since they don't have to distribute and install the CRT with the program. Developers following the GPL aren't even allowed to distribute the CRT with their program, so they have to use one already present on the system.
Though Raymond is correct. The MinGW guys should probably "write and ship their own runtime library," or at least use the msvcrt from ReactOS or Wine. That should make it possible to statically link to it.
Having said that, Microsoft probably won't change their msvcrt.dll in a way that breaks MinGW software, since their users will complain that they broke VLC.
> MinGW, the compiler for VLC, LibreOffice and most other FOSS projects on Windows does exactly what Raymond says not to do.
While this is mostly true, because of the VLC on WinRT (Windows Metro) project, VLC can now also link, in a MinGW environment, to MSVCR80, 90, 100 and 110.
The main problem with all of those is that we cannot redistribute MSVCR*.dll for licensing reasons.
MinGW relies on dynamic linking? I'm surprised at that because when I ran a test just now, Microsoft C++ compiles Hello World to an 84k executable whereas MinGW generates a 261k executable. I always assumed MinGW was doing the right thing and linking statically against (its own version of) the standard library; if not, where does the extra file size come from?
That's probably because you wrote C++ and are using C++ streams. Try just using printf. The binary, after stripping, will be much smaller (I think less than 20k). You can use Dependency Walker to check which DLLs you are linking against.
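If you'd rather stay on the command line, objdump from binutils shows the same import information Dependency Walker does; a rough sketch (commands are the usual MinGW/MSYS ones):

    // hello.cpp: minimal program for checking what a MinGW build imports.
    #include <cstdio>

    int main() {
        std::printf("hello\n");
        return 0;
    }

    // Typical workflow:
    //   g++ -Os -o hello.exe hello.cpp
    //   strip hello.exe                        (drops symbols and debug info)
    //   objdump -p hello.exe | findstr "DLL"   (or grep in an MSYS shell)
    // The "DLL Name:" lines list what the binary is dynamically bound to,
    // msvcrt.dll among them.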
Ah, I was using C but I didn't realize I needed to strip the binary; after doing that it comes down to 39k, which is smaller than the Microsoft equivalent. So you're right, it does seem to be using dynamic linking :(
For comparison, VC6 defaults give 40K (statically linked), switching to dynlink shrinks it to 16K, add a few more switches and it drops to 1.5K.
Is it possible to do that with the latest versions (i.e. linking to their CRT instead of MSVCRT.DLL), or are they incapable of doing it for some odd reason? These are the options I used:
Well, yes and no. Yes, it does link to msvcrt.dll, but it also brings in its own runtime statically (mingwrt). Standard MinGW links libgcc dynamically, but you can use TDM GCC if you want to link it statically. I can't think of a bad reason not to use TDM GCC.
71, 80, 90, 100, 110, 120 - did I miss any? - six different "C" runtime versions (apart from side-by-side sub-versions) for a compiler released over a span of 10 years.
It's not that I like the MSVCRT runtime. It's just that I have to target it. Any popular commercial product that has some form of plugin architecture through DLLs (Autodesk, for example) more or less requires one to compile its own plugins with the exact version the main application was compiled with.
It's a bit of a strange moment - one developer cries that OpenSSL shouldn't have used its own malloc implementation, and then another cries: don't expose a malloc/free interface (do say your_api_malloc, your_api_free instead), and that way you can target any "C" runtime.
Now these are two completely different things, but not so much. What if, say, OpenSSL used the runtime's "malloc" - which version of MSVCRT.DLL would they have targeted? Does anyone really expect to target all these different versions and all these different compilers, when you can't even find the free versions through MSDN anymore?
(Now I'm ignoring the fact that you can't easily hook malloc and replace it with a "clear-to-zero after alloc" function, but that's just a detail.)
What I'm getting at is that there are too many C runtimes; hell, DirectX was better!
I only wish MS had somehow made MSVCRT.DLL the one and only DLL for "C" (C++ would be much harder, but it's doable).
> Any popular commercial product that has some form of plugin architecture through DLLs (Autodesk, for example) more or less requires one to compile its own plugins with the exact version the main application was compiled with.
Maybe Autodesk requires that, but it's not true in general. Modules using different versions of the C runtime can coexist happily in the same process and talk to each other using plain function calls, COM, or whatever else you come up with. You can't share FILE*, malloc()ed memory, or other CRT-specific objects between them, but you don't have to: use LocalAlloc (on the process heap or an explicitly shared heap), the COM allocator, or something like that.
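A sketch of what that boundary discipline can look like (the names are hypothetical; the pairing of allocation and release inside one module is the point):

    // widget_api.h: a hypothetical plugin interface. Nothing CRT-specific
    // (FILE*, malloc()ed blocks, std::string) ever crosses the DLL boundary.
    #pragma once
    #include <objbase.h>   // CoTaskMemAlloc / CoTaskMemFree

    extern "C" {
        typedef struct Widget Widget;                 // opaque to the host

        // Created and destroyed inside the same DLL, hence the same CRT heap.
        __declspec(dllexport) Widget* WidgetCreate(void);
        __declspec(dllexport) void    WidgetDestroy(Widget* w);

        // When the host must own a buffer, hand it out from an allocator both
        // sides share; the host frees this one with CoTaskMemFree.
        __declspec(dllexport) wchar_t* WidgetName(const Widget* w);
    }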
Even if that were possible, you still need a linker that will accept it. Also, recent versions of MSVC inject a pragma that records the version of the MSVCRT DLL, so once a .c/.cpp file has been compiled against a specific version, that version is baked into the .obj file, and if you try to link it with another version the linker will complain. Recently they added the same thing for debug/release builds (NDEBUG, _DEBUG), so you can't mix them - though I think that was done only for C++, because of the STL.
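The pragma in question is, as far as I can tell, #pragma detect_mismatch: each .obj records a name/value pair and link.exe refuses to combine objects whose recorded values differ. A sketch of an SDK using the same mechanism for its own invariants (the key and value strings here are made up):

    // sdk_guard.h: included by every translation unit built against the SDK.
    #pragma once

    #if defined(_MSC_VER)
    // Record which toolset and CRT flavour this object was built with; the
    // linker errors out if two objects disagree.
    #pragma detect_mismatch("example_sdk_toolset", "vc120")
    #if defined(_DEBUG)
    #pragma detect_mismatch("example_sdk_crt", "debug")
    #else
    #pragma detect_mismatch("example_sdk_crt", "release")
    #endif
    #endif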
I grew up with Windows and Windows dev tools and I always had a pragmatic appreciation for them, but lately I have become quite bitter and cold towards Windows and Windows developers, and this happened long before I started using alternatives. At one point it was very hard to avoid the feeling that something, somewhere deep in Windows is rotten. With every new Windows version from WinXP on, I had the feeling that I was being sold a slightly improved technological improvisation with a slightly better theme applied to it. Visual Studio and the .NET framework tried hard and nearly succeeded in hiding the messy roots of the problem, but the stench of thousands of leaky abstractions can still be felt every time you open one of the many doors leading to Windows' basement. The C/C++ foundation of Windows (the shaky ABI) only makes it worse, and in my opinion many major problems of modern software can be traced to unsolved problems in C/C++. I have a dream (a delusion :D) that despite the accelerating patches applied to both (coincidence?) of these technologies (C89, 99, 11, 1X; C++98, 03, 11, 17; Windows Vista, 7, 8, 8.1, 9), they will both end in the greatest "oh, f*ck it!!" in the history of software and be abandoned for something much better, which I hope will crystallize out of these ~70 years of computer science.
Why can't Windows officially include standard versions of this library? You know, like Windows already does with .NET versions since Vista, or like every other OS does with libstdc++ or libc++? Forcing every C/C++ program to bundle its own MSVCRTXX.dll is pretty silly.
I must confess to statically linking my Windows binaries. It's better than making the user click through extra installers-within-installers to install the right MSVC redistributable.
It also seems to paradoxically improve load times. Shrug.
The CRT can be installed silently just like by any other self-respecting installer out there, and the tool you use to create installers normally supports this as well. I mean, even something as basic as WinRAR allows you to create silent extract-and-install combos.
I don't think it's so much about how you (as an active developer) do it - granted, having to redistribute your app every time any of (say) 10 bundled dependencies needs an update is an inconvenience - the bigger problem is when you have some old software (without vendor support) that is statically built with some overflow "built in" from an old version of a library.
Granted, at some point patches probably won't be backported, but it is convenient to be able to upgrade libssl, restart a few services and be done.
Of course, if you bought that software under GPL, you might be able to fix the issue yourself...
There's little paradoxical about it: if you link against 20 DLLs, that turns into 20 directory lookups and then potentially random paging across the entire drive, whereas a .EXE is more likely to appear contiguously on the drive, and after the first page of it is loaded, the OS readahead mechanism might opportunistically snarf something like the next 256kb worth of pages for free as part of the same IO.
Ignoring IO, and depending on the size of the app (particularly for C++ apps), the loader might also be spending a huge amount of time on symbol fixups.
Yes, resorting to static linking is a pretty good solution for executables. Pity about the bloat, but it works! How would static linking work with libraries? Would you get multiple heaps?
Sometimes you do get multiple heaps, but it can be made to work if one is careful not to allocate on one heap and then free on the other heap. One example is a C/C++ COM add-in for Excel, say - instead of freeing the Excel-allocated memory directly, the add-in DLL invokes AddRef/Release appropriately on the IUnknown interfaces of the various objects it gets handed by Excel. That can be automated a bit with ATL and CComPtr. A similar factory / smart-pointer API can be made to work for any DLL that wants to maintain its own heap.
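Roughly what that looks like with ATL; a sketch, and whether you take a new reference or Attach() to the incoming pointer depends on who owns it:

    #include <atlbase.h>   // CComPtr

    // The add-in never calls free() or delete on anything Excel handed it; the
    // lifetime goes back through the object's own IUnknown, so whichever heap
    // Excel allocated from, Excel's own code is what releases it.
    void UseWorkbook(IDispatch* rawWorkbookFromExcel) {
        CComPtr<IDispatch> wb(rawWorkbookFromExcel);   // AddRef on construction
        // ... invoke methods through wb ...
    }                                                  // Release in the destructor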
> Sometimes you do get multiple heaps, but it can be made to work if one is careful not to allocate on one heap and then free on the other heap
Agreed, and it's sad to think how much confusion and grief would have been saved if they'd just made this a mandatory policy from day 1.
Windows developers regularly jump through an entire maze of hoops -- and make their users do the same -- just so they can call free() on a pointer they got from some random DLL, or do other goofy things that they shouldn't be allowed to do.
I've used delhanty's example in a large codebase and it worked well.
Another approach is to create a static library that overloads global memory operators (new, delete, and their variations). Link this static library to every DLL that you make, and have the implementation of these overloaded global memory operators use the heap of a single DLL. There may still be multiple heaps, but only one will be used.
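A minimal sketch of that second approach, here routing everything to the process heap rather than to one particular DLL's CRT heap (same idea, fewer moving parts):

    // one_heap.cpp: compiled into the static library that every DLL links.
    #include <windows.h>
    #include <new>

    void* operator new(std::size_t n) {
        void* p = HeapAlloc(GetProcessHeap(), 0, n ? n : 1);
        if (!p) throw std::bad_alloc();
        return p;
    }

    void operator delete(void* p) noexcept {
        if (p) HeapFree(GetProcessHeap(), 0, p);
    }

    // A real build also needs operator new[]/delete[], the std::nothrow_t
    // overloads and, from C++14 on, the sized delete variants.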
> You can do the same with WinSxS and manifests, but again too many lazy developers.
I'm surprised Microsoft hasn't put a feature in the "Customer Experience Improvement Program" that scans the applications installed on a system for what versions of DLLs they try to load, collates all that data together, and then automatically generates "default" manifests for those apps, locking them to the specific versions of DLLs they were already requesting+using anyway. It'd be like wrapping each installed app in a Docker container based on the current working system config.
Microsoft C++ supports static linking with the standard library, which you should be using for release builds. That way, your program will always be using the exact version of the standard library that it was tested with, and it's guaranteed not to interfere with anything else on the target system.
What can you do for the situation where you ship a library instead of an executable?
My team's product is moving to statically link libstdc++ and libgcc, thanks to the GCC Runtime Library Exception. It will allow us to use C++11 on old systems. We are grateful that libstdc++ is backwards compatible to GCC 3.4 (other libstdc++'s can link against us)!
I'm looking into statically linking musl libc so our product could (in theory) run on any 2.6/3.0 kernel. =)
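For reference, the relevant GCC driver flags; a sketch, and exact results depend on the GCC and musl versions on the build box:

    // hello.cpp: a trivial C++11 program to sanity-check the link strategy.
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};            // C++11 list-initialisation
        for (int x : v) std::cout << x << ' ';
        std::cout << '\n';
    }

    // g++ -std=c++11 -static-libgcc -static-libstdc++ hello.cpp -o hello
    //   libgcc and libstdc++ are folded into the binary; glibc stays dynamic,
    //   so `ldd hello` should list libc/libm but no libstdc++.
    // A fully static musl build needs a musl-targeted toolchain (musl-gcc for C,
    // or a musl cross g++ for C++) plus -static.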
glibc itself is not supposed to be linked statically, because it itself dlopens other shared libraries - for PAM, for nsswitch, etc. These are system-specific (as configured by the admin), so there is no one size fits all.
You may statically link another libc, but then users may wonder why your app does not work as it should when they configure nsswitch, for example, and your app does not respect it.
On the other hand, there's the issue stated in one of the other comments: if an application breaks because the internals of a library were changed, that's none other than the application's fault.
All the occasions where we had an application crash only on a client's machine were always due to bugs in our code, not in the platform (not saying that it can't happen, but it seems rare for the type of app we do). So I'd rather expose bugs by 'surrendering' to the target machine completely than simply never find them.
As Raymond Chen explains at length in a few of his posts, this does not matter. Sure, there's a sense in which the bug is your fault for not being infallible, but what of that? We already knew we aren't infallible. The point is, that kind of live fire debugging punishes your users, not you. And as long as you keep doing it that way, it's going to end up hitting your users in a situation where you aren't around to fix the bug.
I did say you should use static linking in the release build. If you want to keep virtual machines with every version of Windows and/or the run-time library you can lay hands on in order to test debug builds of your program against them so you can find the bugs instead of letting users trip over them, there's certainly nothing wrong with that.
OTOH by linking statically you cannot distribute CRT bug fixes without building new binaries. Ideally the CRT DLLs are installed with Windows Installer and can then be updated via Windows Update.
I think the answer is "no" because Linux distros generally recompile the world with each new major standard library version. If any C standard library gurus are reading this, feel free to chime in!
When the glibc guys changed the memcpy implementation, which changed its behavior for undefined cases, it broke Flash Player and Linus got pissed off. AFAIR they reverted the change.
IIRC, they later used symbol versioning so that only newly compiled programs get the real memcpy. Older programs get a fake memcpy which simply calls memmove, which is what Flash should have been calling instead of memcpy.
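The distinction in one small example (overlapping ranges are undefined behaviour for memcpy but guaranteed to work for memmove):

    #include <cstring>
    #include <cstdio>

    int main() {
        char buf[] = "abcdef";
        // std::memcpy(buf + 1, buf, 5);  // overlap: undefined behaviour; it only
        //                                // happened to work with the old glibc
        std::memmove(buf + 1, buf, 5);    // overlap: well defined
        std::puts(buf);                   // prints "aabcde"
        return 0;
    }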
You're basically right; there have been a few incompatible changes (e.g. a.out -> ELF, glibc 1 -> 2), but I believe that in all cases there was already a version numbering system in place such that both could be present at the same time.
Additionally, all the distros have update and packaging mechanisms. While you can install off disc, Linux is a lot more... internet-native than Windows.
The Linux world largely ignores binary compatibility. If you want old binaries to work on Linux, the API this is most achievable with is ... win32. As provided by Wine.
Could symbol versioning, like ELF has, help the situation? I know that glibc has made backwards incompatible changes and they up the symbol version when they do so. I don't know if that handles changes in struct sizes though.
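For what it's worth, you can even pin a reference to an older symbol version from your own side; a sketch, where the GLIBC_2.2.5 node is the x86-64 baseline and worth double-checking with `objdump -T` against your libc:

    #include <cstring>

    // Bind this file's memcpy references to the old version node instead of the
    // newest one the local glibc exports (memcpy@GLIBC_2.14 on newer systems).
    // GCC may still inline small fixed-size copies, in which case no libc symbol
    // is referenced at all.
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(int argc, char**) {
        char dst[16] = {0};
        const char src[16] = "hello";
        std::memcpy(dst, src, (argc % 8) + 5);   // non-constant size, so a real call
        return dst[0] == 'h' ? 0 : 1;
    }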
Sorry, but I'm not going to link to a huge convoluted mess of variously versioned DLLs just to get standard C library functions. Maybe the situation is different with C++ (probably due to no real ABI standard), but the C library functions shouldn't change since they were standardised. If an application breaks because the internals of a library were changed, that's none other than the application's fault.
"If an application breaks because the internals of a library were changed, that's none other than the applications' fault."
I agree 100%, it's certainly the application's fault, but Microsoft has some very large corporate customers that it wants to keep happy, so in some cases it will eat the cost of cleaning up after sloppy developers.
I don't work for Microsoft, but the company I work for (we write expensive enterprise software) has a similar practice of bailing out customers who get into trouble by relying on undocumented behaviors... as long as the customer is important enough that alienating them could cost us significant future revenue. Being willing to provide support like this can differentiate you from your competitors. Not to mention that you can also charge customers extra for this higher level of support.
"Sorry, but I'm not going to link to a huge convoluted mess of variously versioned DLLs just to get standard C library functions."
You don't really have to do that. If you build your application with Visual Studio 2013, you link against the version of the runtime library that it uses, and your application's installer installs the redistributable DLL for that version (if it's not already installed on the customer's machine). You don't have to deal with any other versions.
> and your application's installer installs the redistributable DLL for that version
I don't want an installer, nor to include the redistributable which is around 2 orders of magnitude larger than the application itself! I want a single tiny binary, just download and run on any system, and that's what using MSVCRT allows.
Sure, and for that upside you pay the downside of not knowing when that ever-changing dependency may inadvertently break your application. There's no free lunch.
- I write my applications to run anywhere and not require any installation. There is a huge convenience factor in this. As a corollary, they also don't require uninstallation, since they do not "embed" themselves into the system (registry, config files, additional files elsewhere, etc.)
- It makes for a bigger download, of code and data that would never be used again, with one extra step the user has to take before use, and an extra step for me to build the installer, too.
I can see the advantages of an installer for large applications that need to be "installed" since they modify various other things in the system, but that's not the type of applications I work with.
The problem with that is that your software is a bad citizen in the Windows ecosystem. Even a one-shot tool can be pushed out or run via an MSI package with no install. And you get the benefit of being able to easily throw it out via GPO, pull in dependencies like VCRT versions, etc. MSI packages are pretty small as well - our current VSTO package, which has about 25,000 LoC in it, is only 400k.
> your software is a bad citizen in the Windows ecosystem
According to MS, who seem to be trying to force the idea that all software needs to be "installed" and have an overly complex installer/uninstaller. I care more about the flexibility and convenience of my users than about what MS thinks (which is ultimately heading towards locking users into a walled garden, so it's no surprise that "just download and run a self-contained executable anywhere" is perceived as somewhat of a threat).
> And you get the benefit of being able to easily throw it out via GPO, pull in dependencies like VCRT versions, etc.
Those are not needed for what I'm doing, so they're not benefits to me. But I guess if your idea of "pretty small" is 400k, then that's not really a concern...
> If an application breaks because the internals of a library were changed, that's none other than the application's fault.
The issue is that it becomes Microsoft's problem if popular apps break during OS upgrades. Breaking a high-profile app and saying "it's the app's fault" is a good way to attract angry blog posts on the top of HN.
Not so - I don't think - because so many types in the C standard library are value types. Say you have an array of struct stat (the first thing that springs to mind) - now your program depends on the size of that structure. When its size changes, your code will stop working. The standard doesn't guarantee the size of things, only what members they need to have.
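A concrete illustration of how the size leaks into the compiled program (nothing here is specific to any particular libc):

    #include <sys/stat.h>

    // Both the array stride (sizeof(struct stat)) and the offset of st_size are
    // fixed in this binary at compile time. If a later library version changed
    // either, the loop below would silently read garbage; that is why such
    // structures effectively cannot change once they are part of the ABI.
    static struct stat entries[64];

    long long total_size(int n) {
        long long sum = 0;
        for (int i = 0; i < n && i < 64; ++i)
            sum += static_cast<long long>(entries[i].st_size);
        return sum;
    }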
It is the application author's fault but it becomes the customer's problem. This is what Raymond repeats over and over. And just telling the authors to fix their problems is no solution either because they are sometimes long out of business but companies still love to use the software that JustWorks™ even though it is 20 years old.
It's enough when your application uses several third-party SDKs or plugins, each built with a different MSVC version.
Sometimes this has interesting consequences. For example, you aren't going to get PyQt5 for Python 2.x, because the binaries (builds from python.org and ActiveState) for Python 2.x are built with MSVC9 and Qt5 requires MSVC10 at minimum.
"This meant not changing the size of the class (because somebody may have derived from it) and not changing the offsets of any members, and being careful which virtual methods you call" scares me.
I understand DLL technology has been around for a couple of decades now and that it solved many difficult problems when introduced, but, still, this looks insanely fragile.
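A sketch of why that constraint exists, with invented names (the class bodies are inline only to keep the sketch compilable; in reality they'd live in the DLL):

    // v1 of the DLL's public header, which the app was compiled against.
    class Gadget {
    public:
        virtual void Render() {}   // vtable slot 0
        virtual void Update() {}   // vtable slot 1
        int level = 0;             // offset baked into the app's code
    };

    // Code in the app, built years ago and never recompiled:
    class MyGadget : public Gadget {
        int myExtra = 0;           // laid out right after sizeof(Gadget) as of v1
    };

    // If v2 of the DLL appends a data member to Gadget, sizeof(Gadget) grows and
    // myExtra in the old binary now overlaps it. If v2 inserts or reorders
    // virtual functions, the vtable slots the old binary calls through point at
    // the wrong functions. Hence: freeze the layout, or version the interface.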
Sure, just don't link against the C++ runtime library and you're good. Of course, you won't have anything said runtime lib provides, then. Or link the runtime statically, as suggested in a few other posts. Your binary size will increase then, though.
That's what the article is about: why they decided not to have VS link against the OS runtime.
> At some point, the decision was made to just give up and declare it an operating system DLL, to be used only by operating system components.
That makes it clear enough that it was a choice they made, with the big reason being the effort required to maintain compatibility across so many systems and compilers. My knowledge of it never really got past the poking-with-a-stick stage, but apparently (at least some of) the Windows SDK compilers were able to link against the system runtime well after the switch was made in VS.