Cross-Platform – Why Aren’t Fat Binaries Widely Used?

Tags: binary, cross-platform, operating-systems

As far as I know, so-called "fat binaries" (executable files that contain machine code for multiple systems) are only really used on Apple computers, and even there it seems they were only adopted because Apple needed to transition from PowerPC to x86.
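
For concreteness, here is a minimal sketch of how such a fat (universal) binary is produced on macOS: the same source is compiled once per architecture and the slices are combined into a single Mach-O file. The file name hello.c is just an example; the clang -arch and lipo commands are the standard Apple toolchain ones, and the loader picks the matching slice at run time.

    /* hello.c -- compiled into both slices of one universal binary:
     *
     *     clang -arch x86_64 -arch arm64 -o hello hello.c   # build both slices
     *     lipo -info hello                                   # list the slices
     *
     * Each slice contains different machine code; the loader picks the
     * one matching the CPU it is running on. */
    #include <stdio.h>

    int main(void) {
    #if defined(__arm64__) || defined(__aarch64__)
        puts("running the arm64 slice");
    #elif defined(__x86_64__)
        puts("running the x86_64 slice");
    #else
        puts("running some other architecture");
    #endif
        return 0;
    }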

These days a lot of software is cross-platform, and it seems like making a single fat binary would be in many ways simpler than keeping track of a dozen or so different downloads for each combination of operating system and architecture, not to mention somehow conveying to the customer which one they want.

I can come up with plenty of guesses as to why this approach never caught on, for instance:

  • A lack of cross-compilation tools making multi-OS binaries infeasible
  • You need to test the code on each OS anyway, so you already have to have systems that can compile natively for each OS
  • Apparently 32-bit programs "just work" on 64-bit machines already
  • Dynamic linking works differently on each OS, so a "fat library" might not work even if a "fat application" would (the sketch after this list shows how OS-specific even simple code gets)
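
To illustrate that OS-specific point, here is a hedged sketch (not any real product's code) of why the operating-system axis is the hard one: even trivial code calls entirely different system APIs per OS, links against different system libraries, and ends up in a different executable format (PE vs. ELF vs. Mach-O). A fat binary only addresses the CPU-architecture axis.

    /* Same program, two OS-specific code paths: the Win32 build calls
     * Sleep() from kernel32, the POSIX build calls sleep() from libc.
     * The resulting executables differ in format, libraries and APIs,
     * not just in machine code, so no "fat" container helps here. */
    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    static void pause_one_second(void) {
        Sleep(1000);        /* Win32 API, takes milliseconds */
    }
    #else
    #include <unistd.h>
    static void pause_one_second(void) {
        sleep(1);           /* POSIX API, takes seconds */
    }
    #endif

    int main(void) {
        puts("waiting...");
        pause_one_second();
        puts("done");
        return 0;
    }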

But since I always work with a library or framework that hides all these OS-specific and architecture-specific details from me, I don't know how true any of that really is, or whether there are even more issues I don't know about. So, what are the actual reasons why fat binaries aren't generally used to create multi-architecture and/or multi-OS software (outside of Apple)?

Best Answer

A fat binary approach makes most sense if:

  1. Both architectures coexist on the same system
  2. Everything else is more or less the same for all architectures

That's why they are not used for cross-platform code (neither criterion applies), or to support different Linux distributions with one binary (criterion 1 doesn't apply; criterion 2 applies only to a certain degree).

On Linux, both criteria would still apply if you want to support both 32-bit and 64-bit on a single Linux distribution. But why bother, if you already have to support multiple distributions?
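
As a small illustration of that 32/64-bit case (file names here are hypothetical): the same source can be built twice on one distribution, e.g. with gcc -m64 and gcc -m32 (the latter requires the distribution's 32-bit multilib packages), but Linux has no standard container format to hold both results in a single file, so they ship as separate executables or separate packages.

    /* size.c -- built twice on the same distribution:
     *
     *     gcc -m64 -o size64 size.c
     *     gcc -m32 -o size32 size.c    # needs 32-bit multilib support
     *
     * The result is two separate ELF files; there is no widely
     * supported "fat ELF" to bundle them into one. */
    #include <stdio.h>

    int main(void) {
        /* prints 8 for the 64-bit build, 4 for the 32-bit build */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        return 0;
    }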

On Windows, the transition from 16-bit to 32-bit happened initially with the introduction of Windows NT, which was a major deviation from the 16-bit Windows world in many regards (virtual memory, multi-user access control, API changes...). With all these changes, it was better to keep the 32-bit and 16-bit worlds separate. NT already had the concept of "subsystems" to support different OS "personae" (Win32, POSIX), so making Win16 a third subsystem was a straightforward choice.

The Win32 to Win64 transition didn't involve similarly major changes, but Microsoft used a similar approach anyway, probably because it was tried and proven.
