
there is basically no situation in which it is important to optimize for binary size; embedded, sure, but nowhere else

the infrastructural model i'm describing doesn't require applications to run in browsers, nor does it imply that applications are slower; quite the contrary, statically linked binaries tend to be faster

the model where an OS is one-to-many with applications works fine for personal machines; it's no longer relevant for most servers (shrug)



> there is basically no situation in which it is important to optimize for binary size; embedded, sure, but nowhere else

Not disagreeing that there are many upsides to static linking, but there are (other) situations where binary size matters: rolling updates (or scaling horizontally) where the total time is dominated by copying the new binaries out to every host, for example.
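
A rough back-of-envelope sketch of that (made-up numbers; the only point is that binary size sets a hard floor on rollout time once the copy dominates):

    # if a rolling update is gated on copying the new binary to every host,
    # binary size sets a hard floor on rollout time (all numbers made up)
    binary_gb = 2.0          # size of the release binary
    fleet = 5_000            # hosts to update
    egress_gbit_s = 10.0     # total egress bandwidth of the artifact store

    total_bits = binary_gb * 8e9 * fleet
    copy_minutes = total_bits / (egress_gbit_s * 1e9) / 60
    print(f"copy-time floor: {copy_minutes:.0f} minutes")   # ~133 min for these numbers

    # halving the binary halves that floor, before any batching or health checks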

> the model where an OS is one-to-many with applications works fine for personal machines; it's no longer relevant for most servers

Stacking services with different usage characteristics to increase utilization of the underlying hardware is still relevant. I wouldn't be surprised if enabling very commonly included libraries to be loaded dynamically could save significant memory across a fleet... and while the standard way this is done is fragile, it's not hard to imagine something that could be as reliable as static linking, especially in cases where you're using something like buck to build the world on every release anyway.
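
For a rough sense of how much one Linux host already saves from page sharing (shared .so text, other file-backed mappings), compare summed Rss against summed Pss: Rss counts shared pages once per process, Pss splits them among the sharers. A sketch, assuming /proc/<pid>/smaps_rollup exists (Linux 4.14+) and you can read other processes' entries (usually root):

    import glob

    def read_kb(path, field):
        """Return the kB value of e.g. the 'Rss:' line, or 0 if unreadable."""
        try:
            with open(path) as f:
                for line in f:
                    if line.startswith(field + ":"):
                        return int(line.split()[1])
        except OSError:
            pass
        return 0

    rss = pss = 0
    for path in glob.glob("/proc/[0-9]*/smaps_rollup"):
        rss += read_kb(path, "Rss")
        pss += read_kb(path, "Pss")

    print(f"sum Rss {rss // 1024} MiB, sum Pss {pss // 1024} MiB, "
          f"roughly {(rss - pss) // 1024} MiB avoided by sharing")

On a host running many dynamically linked processes that gap can be substantial; with all-static binaries most of it goes away, apart from sharing between instances of the same binary.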


it was never relevant for servers. and there are probably still fewer servers than end-user systems out there; that's certainly true if you include mobile (there are arguments for and against doing that).


servers vastly, almost totally, outnumber end-user systems, in terms of deployed software

end-user systems account for at best single-digit percentages of all systems relevant to this discussion

(mobile is not relevant to this discussion)


binary size is also memory size. memory size matters. applications sharing the same libraries can be a huge win for how much stuff you can fit on a server, and that can be a colossal time/money/energy saver.

yes: if you're a company that tends to only run 1-20 applications, then no, the memory savings probably won't matter to you. that matches quite a large number of use cases. but a lot of companies run way more workloads than anyone would guess. quite a few just have no cost control and/or just don't know, but there are probably some pretty sizable potential wins. it's even more important for hyper-scalers, where they're running many, many customer processes at a time. even for companies like facebook, i forget the exact statistic, but sometime in the last quarter there was a quote saying something like >30% of their energy usage was just powering ram. willing to bet they definitely optimize for binary size. they definitely look at it.
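
to see it per process: executable mappings that come from .so files are backed by the same page-cache pages in every process that loads that library, while each distinct static binary ships (and caches) its own copy of that code. a quick linux-only sketch over /proc/self/smaps:

    # sum the executable, file-backed mappings of the current process, grouped
    # by file: everything ending in .so here is shared with any other process
    # that maps the same library
    sizes = {}
    with open("/proc/self/smaps") as f:
        current = None
        for line in f:
            fields = line.split()
            if "-" in fields[0] and len(fields) >= 6 and "x" in fields[1] \
                    and fields[5].startswith("/"):
                current = fields[5]              # header of an executable file mapping
            elif line.startswith("Size:") and current:
                sizes[current] = sizes.get(current, 0) + int(fields[1])   # kB
                current = None

    for path, kb in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print(f"{kb:8d} kB  {path}")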

there's significant work being put towards drastically reducing the disk/memory footprint across multiple containers, for example. composefs is one brilliant, very exciting example that could help us radically scale up how much compute we can host. https://news.ycombinator.com/item?id=34524651

i also haven't seen the other, very critical type of memory mentioned: cache. maybe we can just keep paying to add DRAM forever and ever (especially with CXL coming over the horizon), but the SRAM in your core complex will almost always be limited (although word is Zen4 might get within striking distance of 1GB, which is EPIC). static builds are never going to share cache effectively: each statically linked binary carries its own copy of common library code at its own addresses, so the instruction cache ends up holding duplicates per application. the most valuable, most expensive memory on the computer is trashed & wasted by static binaries.

there's really nothing to recommend about static binaries, other than that they're extremely stupid-simple: requiring not a single iota of thought to use is the primary win. (things like monomorphic optimization can be done in dynamic libraries with various metaprogramming & optimizing runtimes, hopefully ones that don't need to keep respawning duplicate copies ad nauseam.)

i do think you're correct about the dominant market segment of computing, & you're speaking truthfully to a huge % of small & mid-sized businesses, where the computing needs are just incredibly simple & the ratio of processes to computers is quite low. their potential savings are not that high, since there's just not that much duplicate code to keep dynamically linking. but i also think that almost all interesting upcoming models of computing emphasize creating many more, smaller, lighter processes, that there are huge security & manageability benefits, and that there's not a snowball's chance in hell that static-binary-style computing has any role to play in the better possible futures we're opening up.


you're very sensitive to the costs of static linking but i don't think you see the benefit

the benefit is that a statically linked binary will behave the same on all systems and doesn't need any specific runtime support beyond the bare minimum

this is important if you want a coherent deployment model at scale -- it cannot be the case that the same artifact X works fine on one subset of hosts, but not on another subset of hosts, because their openssl libraries are different or whatever
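
concretely, "needs runtime support from the host" shows up in the ELF itself: a dynamically linked executable carries a PT_INTERP program header naming the loader (e.g. /lib64/ld-linux-x86-64.so.2) that every host has to provide and keep compatible, plus whatever shared libraries it resolves; a fully static binary has no such header. a minimal sketch, handling 64-bit little-endian ELF only:

    import struct, sys

    PT_INTERP = 3   # program header type for the "interpreter" (dynamic loader) path

    def interp_path(path):
        """Return the requested loader path, or None if the binary is fully static."""
        with open(path, "rb") as f:
            ident = f.read(16)
            if ident[:4] != b"\x7fELF" or ident[4] != 2 or ident[5] != 1:
                raise ValueError("only 64-bit little-endian ELF handled in this sketch")
            f.seek(32)                                   # e_phoff
            (e_phoff,) = struct.unpack("<Q", f.read(8))
            f.seek(54)                                   # e_phentsize, e_phnum
            e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
            for i in range(e_phnum):
                f.seek(e_phoff + i * e_phentsize)
                p_type, _, p_offset, _, _, p_filesz, _, _ = \
                    struct.unpack("<IIQQQQQQ", f.read(56))
                if p_type == PT_INTERP:
                    f.seek(p_offset)
                    return f.read(p_filesz).rstrip(b"\0").decode()
        return None

    loader = interp_path(sys.argv[1])
    print(f"needs host loader: {loader}" if loader else "fully static, no loader needed")

point it at /bin/ls on a typical glibc distro and you'll usually get a loader path back; point it at a musl-static build or a CGO_ENABLED=0 go binary and you typically get nothing -- that's the artifact that behaves the same everywhere.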

static linking is not stupid; it doesn't mean that hosts can only have like 10 processes on them, and it doesn't imply that the computing needs it serves are simple -- quite the opposite

future models of computing are shrinking stuff like the OS to zero; the thing that matters is the application. security (in the DLL sense you mean here) is not a property of a host, it's a property of an application. it seems pretty clear to me that static linking is where we're headed -- see e.g. containers



