There was a time when this debate was bigger. The world seems to have shifted towards architectures and tooling that either do not allow dynamic linking or make it harder. This compromise makes life easier for the maintainers of the tools / languages, but it takes choice away from the user / developer. But maybe that’s not important? What are your thoughts?
I have yet to find a memory-hungry program whose appetite is caused by its dependencies rather than its data. And frankly, the disk space taken by all its libraries is minuscule compared to graphical assets.
You know what really aggravates the issue? When the program doesn’t work because of a dependency. And this happens often across all OSes; threads about it are a dime a dozen in forums. “Package managers should just fix all the issues”. Until they don’t: wrong versions get uploaded, libraries fail to compile, environment problems, etc. etc.
So to me, the efficiency argument for dynamic linking doesn’t really cut it. A bloated program is more efficient than a program that doesn’t work.
This is not to say that dynamic linking shouldn’t be used. For programs doing any kind of elevation or administration, it’s almost always better from a security perspective. But for general user programs? Static all the way.
I read an interesting post by Ben Hoyt this morning called The small web is a beautiful thing - it touches a lot on this idea. (warning, long read).
I also always feel a bit uncomfortable having any dependencies at all (left-pad, never forget), but runtime ones? Those I really like to avoid.
I have Clipper-compiled executables written for clients 25 years ago that I can still run in a DOS VM in an emergency. They used a couple of libraries written in C for fast indexing etc., but all statically linked.
But the Visual Basic/Access apps from 20 years ago with their dependencies on a large number of DLLs? Creating the environment would be an overwhelming challenge.
I kind of agree with your points, but I think there has to be a distinction of libs. Most deps should be static IMHO. But something like OpenSSL I can understand if you go with dynamic linking, especially if it’s a security critical program.
But for “string parsing library #124” or random “gui lib #35”… Yeah, go with static.
string parsing library #124
this could also become a major security problem, tho.
Great point. Sometimes the benefit of an external dependency being changeable is a great feature.
I can’t not upvote someone who brings Clipper to the table :)
Us looking at developers still on dBase III ~inserts Judgmental Volturi meme~
But for general user programs? Static all the way.
Does it include browsers?
deleted by creator
The user never had much choice to begin with. If I write a program using version 1.2.3 of a library, then my application is going to need version 1.2.3 installed. But how the user gets 1.2.3 depends on their system, and in some cases they might not be able to get it at all unless they grab a Flatpak or AppImage. I suppose it limits the ability to write shims over those libraries if you want to customize something at that level, but that’s a niche use case that most people aren’t going to need.
In a statically linked application, you can largely just ship your binary and it will just work. You don’t need to fuss about the user installing all the dependencies at the system level, and your application is prone to fewer user problems as a result.
Only if the library is completely shitty and breaks between minor versions.
If the library is that bad, it’s a strong sign you should avoid it entirely since it can’t be relied on to do its job.
Not to disappoint you, but when I installed an HL1 build from 2007, I had a lot of library versions that did not exist back in 2007, and it works just excellently.
Shared libraries save RAM.
Dynamic linking allows working around problematic libraries, or even adding functionality, if the app developer can’t or won’t.
Static linking makes sense sometimes, but not all the time.
Shared libraries save RAM.
Citation needed :) I was surprised, but I read (sorry, I can’t find the source again) that in most cases dynamically linked libraries are loaded by only one process, and usually by very few. That makes the RAM gain much less obvious. In addition, static linking allows inlining, which in turn allows aggressive constant propagation and dead code elimination, on top of LTO. All of this decreases the binary size, sometimes in non-negligible ways.
I was surprised, but I read (sorry, I can’t find the source again) that in most cases dynamically linked libraries are loaded by only one process, and usually by very few.
That is easily disproved on my system by
cat /proc/*/maps
Someone found the link to the article I was thinking about.
Ah, yes, I think I read Drew’s post a few years ago. The message I take away from it is not that dynamic linking is without benefits, but merely that static linking isn’t the end of the world (on systems like his).
if the app developer can’t or won’t
Does this apply if the app is open source?
In practical terms, often yes. It can be easier to just
LD_PRELOAD
something than to maintain your own patched version of an RPM / APT package for example.
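To illustrate, a minimal LD_PRELOAD shim might look like the following sketch (not production code; the file name and the choice of wrapping puts are arbitrary, and RTLD_NEXT is a glibc/_GNU_SOURCE extension):

```c
// shim.c - a minimal LD_PRELOAD interposer (illustrative sketch).
// It wraps libc's puts() and logs each call before forwarding it.
// Build: gcc -shared -fPIC shim.c -o shim.so -ldl
// Use:   LD_PRELOAD=./shim.so some_program
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int puts(const char *s) {
    // Find the "next" puts in link order, i.e. the real one in libc.
    int (*real_puts)(const char *) =
        (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
    if (!real_puts)
        return -1; // should not happen while libc is loaded
    fprintf(stderr, "[shim] puts(\"%s\")\n", s);
    return real_puts(s);
}
```

Dropping a patched function in this way is often far less effort than rebuilding and maintaining a forked package.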
Not exactly, shared libraries save cache.
Personally, I prefer static linking. There’s just something appealing about an all-in-one binary.
It’s also important to note that applications are rarely 100% one or the other. Full static linking is really only possible in the Linux (and BSD?) worlds thanks to syscall stability - on macOS and Windows, dynamically linking the local
libc
is the only good way to talk to the kernel. (There have been some attempts made to avoid this. Most famously, Go attempted to bypass linking
libc
on macOS in favor of raw syscalls… only to discover that when the kernel devs say “unstable,” they mean it.)

There’s just something appealing about an all-in-one binary.
Certainly agree. I remember the days when you could just copy a binary from one computer to another and it would just work™. Good times…
Disk is cheap, and it’s easier to test exact versions of dependencies. As a user, I’d rather not have all my non-OS stuff mixed up.
From my understanding, unless a shared library is used by only one process at a time, static linking can increase memory usage by duplicating that library’s code segment in every process. So it is not only about disk space.
But I suppose for an increasing number of modern applications, data and heap are much larger than that (though I am not particularly a fan …)
The gains in RAM are not even guaranteed. See my other comment.
Look at how bloated android applications are
I misread disk as dick
You might want to have a look at this:
Nice link - it’s good to see some hard data when most of the discussion around this is based on anecdotes and technical trivia.
That’s misleading though, since it only looks at one side and ignores, e.g., the much faster development speed that dynamic linking can provide.
Nothing prevents you from using dynamic linking while developing and static linking with aggressive LTO for public releases.
True, but successfully doing dynamically linked old-distro-test-environment deployments removes the real reason people use static linking.
Thank you so much. I read this when it was written, and then totally forgot where I read that information.
Can we get weighted by size?
Dynamically linked all the way; you only have to update one thing (mostly) to fix a vulnerability in a dependency, not rebuild every package.
Disk space and RAM availability have increased a lot in the last decade, which has allowed the rise of the lazy programmer, who’ll code without caring (or, increasingly, without knowing) about these things. Bloat is king now.
Dynamic linking allows you to save disk space and memory by ensuring all programs use the single version of a library lying around, so there’s less testing. You’re delegating version tracking to the distro package maintainers.
You can use the dl* family to better control what you load, and if the dependency is FLOSS, the world’s your oyster.
Static linking can make sense if you’re developing portable code for a wide variety of OSs and/or architectures, or if your dependencies are small and/or not that common or whatever.
This, of course, is my take on the matter. YMMV.
Except with dynamic linking there is essentially an infinite amount of integration testing to do. Libraries change behaviour even when they shouldn’t and cause bugs all the time, so testing everything packaged together once is overall much less work.
Which is why libraries are versioned. The same version can be compiled differently across OSes, yes, but again, unless it’s an obscure closed library, in my experience dependencies tend to be stable. Then again, all the dependencies I deal with are open source, so I can always recompile them if need be.
More work? Maybe. Also more control and a more efficient app. Anyway, I’m paid to work.
More control? If you’re speaking from the app developer’s perspective, dynamic linking very much gives you less control of what is actually executed in the end.
The problem is that the application developer usually thinks they know everything about what they want from their dependencies, but they actually don’t.
Static linking can make sense if you’re developing portable code for a wide variety of OSs
I doubt any other OS supports Linux syscalls.
It seems the world has shifted towards architectures and tooling that does not allow dynamic linking or makes it harder.
In what context? In Linux, dynamic links have always been a steady thing.
We could argue semantics here (I don’t really want to), but tools like Docker / containers, Flatpak, Nix, etc. essentially use a sort of soft static link: the software is compiled dynamically, but the shared libraries are not actually shared at all beyond the boundary of the defining scope.
So while it’s semantically true that dynamic libraries are still used, the execution environments are becoming increasingly static, defeating much of the point of shared libraries.
but tools like Docker / containers, Flatpak, Nix, etc. essentially use a sort of soft static link: the software is compiled dynamically, but the shared libraries are not actually shared at all beyond the boundary of the defining scope.
This garbage practice is imported from Windows.
That may well be, but it doesn’t really change anything, does it?
In Linux, dynamic links have always been a steady thing.
Hot take: This is only still the case because the GNU libc cannot be statically linked easily
Some languages don’t even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, a whole binary - looking at you, Rust!) at once. Do note that “static linking” has shades of meaning: it applies to “link multiple objects into a binary”, but often that is excluded from the discussion in favor of just “use a .a instead of a .so”.
Dynamic linking supports a much faster development cycle than static linking (which is faster than whole-binary-at-once), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don’t care about security, so I’m talking about annoyance instead. Some realistic numbers here: dynamic linking might be “rebuild in 0.3 seconds” vs static linking “rebuild in 3 seconds” vs no linking “rebuild in 30 seconds”.
Dynamic linking is generally more reliable against long-term system changes. For example, it is impossible to run old statically-linked versions of bash 3.2 anymore on a modern distro (something about an incompatible locale format?), whereas the dynamically linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.
Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there’s nothing wrong with RPATH if you’re not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).
Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you’ve given up on the notion of a “single source of truth”. If you actually read the man pages for the tools you’re using, this is very easy to do, but a lack of such basic abilities is common among proponents of static linking.
Again, keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.
The big question these days should not be “static or dynamic linking” but “dynamic linking with or without semantic interposition?” Apple’s broken “two level namespaces” is closely related but also prevents symbol migration, and is really aimed at people who forgot to use
-fvisibility=hidden
Even if you do use static linking, you should NEVER statically link to libc
This is definitely not sound. You should never statically link against glibc, as glibc does some very unsound things under the hood, like loading NSS modules. Statically linking against a non-bloatware libc is fine in most cases, as kernel interfaces rarely break; or rather, because kernel devs go to extreme lengths not to break user space, and they do a fantastic job of it.
The problem is that GLIBC is the only serious attempt at a libc on Linux. The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had worldbreaking standard-violating bugs marked WONTFIX. While I can no longer name similar catastrophes, that history gives me little confidence.
There are some lovely technical things in MUSL, but a GLIBC alternative it really is not.
I would not agree with the “only serious attempt” part. The problem that most other libcs are not drop-in replacements has little to do with standard compliance and a lot to do with the fact that software is so glued to glibc behavior that you would have to be bug-for-bug compatible to achieve that goal, which imo is not only unrealistic, it’s also very undesirable.
The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had worldbreaking standard-violating bugs marked WONTFIX.
Can you share a link? I’d be genuinely interested.
DNS-over-TCP (which is required by the standard for all replies over 512 bytes) was unsupported prior to MUSL 1.2.4, released in May 2023. Work had begun in 2022 so I guess it wasn’t EWONTFIX at that point.
Here’s a link showing the MUSL author leaning toward still rejecting the standard-mandated feature as recently as 2020: https://www.openwall.com/lists/musl/2020/04/17/7 (“not to do fallback”)
Complaints that the differences are just about “bug-for-bug compatibility” are highly misguided when what’s missing is useful features, let alone standard-mandated ones (e.g. the whole complex math library is still missing!).
Ah, yes, okay, that drama.
NEVER statically link to libc, and probably not to libstdc++ either.
This is really only true for
glibc
(because its design doesn’t play nice with static linking) and whatever macOS/Windows have (no stable kernel interface, which Go famously found out the hard way.) Granted, most of the time those are what you’re using, but there are plenty of cases where statically linking to MUSL
libc
makes your life a lot easier (Alpine containers, distributing cross-distro binaries).
You can statically link half a gig of Qt5 into every single application (half a gig for the calendar, half a gig for the file manager, etc.) or keep them a normal size. Also, if there is a new bug in OpenSSL, it is not your headache to monitor vuln announcements.
This compromise makes it easier for the maintainers of the tools / languages
What do you mean? Also, how would you implement plug-ins in a language that explicitly forbids dynamic loading, assuming such a language exists?
Depending on which is more convenient and whether your dependencies are security-critical, you can do both on the same program. :D
The main issue I was targeting was how modern languages do not support dynamic linking, or at least do not support it well, hence sorta taking away the choice. The choice is still there in C from my understanding, but it is very difficult in Rust for example.
Yeah, you can dynamically link in Rust, but it’s a pain because you have to use the C ABI, since Rust’s ABI isn’t stable, and you miss out on exporting fancier types.
Just a remark: C++ has exactly the same issues. In practice, both clang and gcc have good ABI stability, but not perfect, and not with each other. In any case, templates (and global mutable statics, for most use cases) don’t work through FFI.