This content originally appeared on DEV Community and was authored by TheBrokenRail
Let's say you just made an amazing application for Linux, and now you want to release it. It uses C and C++ for maximum performance, and you've tested it extensively. Everything seems perfect! Now you just have to compile and package it, but there is one big problem: most of your target audience uses ARM and your computer runs x86_64. Thankfully, you hear of an amazing solution: cross-compiling! A way to compile for a different system than the one you are building on. Then you realize that there are a million different ways to cross-compile and almost all of them are terrible.
In this article, I will be analyzing the pros and cons of different cross-compiling methods, along with my proposed solution for making it easier. And of course, much of this is just my opinion. There is also a pretty good chance I missed something; if I did, please let me know.
Option #1: Debian Multiarch
When Debian Multiarch works well, it is one of the easiest ways to cross-compile I have ever used. It works by adding a foreign architecture to APT and then installing the foreign libraries right alongside your native ones, allowing you to cross-compile just like you would compile normally.
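For example, targeting arm64 from an x86_64 Debian machine looks roughly like this (the package and library names are just an illustration, assuming OpenSSL as the dependency):

```sh
# Tell dpkg/APT about the foreign architecture, then refresh the package lists.
sudo dpkg --add-architecture arm64
sudo apt update

# Install a cross-toolchain plus the arm64 build of a library you depend on.
sudo apt install crossbuild-essential-arm64 libssl-dev:arm64

# Cross-compile almost exactly like a native build.
aarch64-linux-gnu-gcc main.c -lssl -lcrypto -o myapp
```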
Pros
- Extremely easy when functional
- No need to compile libraries from source
Cons
- Debian only
- Only supports targeting your current distribution version
- Some packages conflict with foreign versions, preventing Multiarch from working properly
- Ubuntu puts all non-x86 packages in a separate repository, requiring you to manually add the Ubuntu Ports repository
- Root filesystem pollution
Option #2: Sysroot/NDK
This option is typical for devices like the Raspberry Pi, and for Android. The device manufacturer distributes the needed compilers and system libraries all in one big package. This works well for smaller programs, but becomes annoying once you need additional libraries.
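As a rough sketch, using such a package usually boils down to pointing the bundled (or a matching) cross-compiler at the vendor's sysroot; the paths and target triple below are assumptions, not taken from any specific SDK:

```sh
# Path to the unpacked vendor sysroot (illustrative).
SYSROOT=$HOME/sdk/sysroot

# GCC cross-compiler shipped with (or matching) the package.
aarch64-linux-gnu-gcc --sysroot="$SYSROOT" main.c -o myapp

# Clang can reuse the same sysroot by naming the target triple explicitly.
clang --target=aarch64-linux-gnu --sysroot="$SYSROOT" main.c -o myapp
```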
Pros
- Just download and run the package
- No need to check compiler version or compatibility
Cons
- Unless a library is included in the package, you must either download it manually or compile from source
- Not all device manufacturers or distribution maintainers provide this package
Option #3: Docker/chroot + QEMU
This option involves compiling your program inside a Docker image or chroot of your target environment, usually using QEMU user-mode for emulation. There are also a few other variations of this method, all using some program or another to create an isolated container of your target to compile in.
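One common setup on an x86_64 host looks something like this (the image names and packages are only an example):

```sh
# Register QEMU user-mode emulators with the kernel's binfmt_misc handler.
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Run an arm64 Debian container and build inside it; everything in the
# container (APT, the compiler, the build system) runs under emulation.
docker run --rm --platform linux/arm64 -v "$PWD":/src -w /src debian:bookworm \
    sh -c "apt update && apt install -y build-essential && gcc main.c -o myapp"
```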
Pros
- Easy
- If you need a library, just install it inside of your image
Cons
- Requires Docker or a chroot
- Unless your image is the same architecture as your host, everything must be emulated:
  - APT
  - Clang
  - CMake
  - Etc.
General Cross-Compiling Issues
There are also some issues common to almost all of the options.
Selecting The Target In The Build System
Almost every single build system has a different way to select your build target. And almost all of them have weird quirks and exceptions.
GNU Autoconf usually uses --host (--target only matters when the program being built, such as a compiler, generates code itself). CMake uses various variables like CMAKE_<Language>_COMPILER, usually specified in a CMAKE_TOOLCHAIN_FILE. Makefiles usually use CC, CXX, LD, etc., but they are by no means reliable. And that isn't even counting all the weird custom build systems that projects like OpenSSL use.
Out of all the build systems that I have tried to cross-compile with, CMake is by far my favorite because its variables are far more strictly enforced. If you set CMAKE_C_COMPILER, it will try to use that compiler. Unfortunately, this reliability comes at the expense of a little flexibility, preventing CMake from doing things like compiling parts of a project for the host architecture and then running them to assist compilation for the target architecture later.
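For reference, a minimal toolchain file for an aarch64 Linux target might look like the sketch below; the compiler names assume the Debian cross-toolchain packages are installed, so adjust them for your setup:

```sh
cat > aarch64-toolchain.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Look for libraries and headers only in the target environment,
# but keep finding build-time programs on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF

# Configure and build with the toolchain file.
cmake -B build -DCMAKE_TOOLCHAIN_FILE=aarch64-toolchain.cmake
cmake --build build
```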
Libraries
Unless you compile inside an isolated container, obtaining copies of all the needed libraries is annoying. Making sure they won't conflict with your main system, or compiling them from source, is not an enjoyable experience. And if you do compile from source, you need to remember to add the correct linker and include paths, not to mention keep all your build scripts up to date.
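One way to keep those paths manageable is to point pkg-config at the target's .pc files instead of the host's; the sysroot layout below is an assumption:

```sh
# Make pkg-config resolve flags against the target sysroot, not the host's /usr.
export PKG_CONFIG_SYSROOT_DIR=$HOME/sysroots/arm64
export PKG_CONFIG_LIBDIR=$HOME/sysroots/arm64/usr/lib/aarch64-linux-gnu/pkgconfig

# The include and linker paths then come from the target's openssl.pc.
aarch64-linux-gnu-gcc --sysroot="$HOME/sysroots/arm64" main.c \
    $(pkg-config --cflags --libs openssl) -o myapp
```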
My Proposed Solution
My proposed solution is a program like debootstrap: you specify your target distribution and all the libraries you need, and it builds a package like the one in option #2, containing all your compilers and libraries. Then you just add this to your PATH and tell your build system what compiler to use. (If this already exists, please tell me.) Unfortunately, this solution is not without its flaws. Whoever maintains it would also have to maintain a compiler for every supported host system, and probably two compilers if both GCC and Clang were supported. They would also have to bundle several different package management systems and convince them all to install into a specific directory without altering the external system.
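Today you can approximate part of this by hand with debootstrap and a cross-compiler, though it is nowhere near a single polished tool; the suite, architecture, and paths below are only an example:

```sh
# Build an arm64 Debian root filesystem to use as a sysroot. On an x86_64 host,
# this and the chroot step below need QEMU user-mode emulation registered via
# binfmt_misc (e.g. the qemu-user-static package).
sudo debootstrap --arch=arm64 bookworm "$HOME/sysroots/arm64-bookworm" \
    http://deb.debian.org/debian

# Install extra target libraries with the rootfs's own APT.
sudo chroot "$HOME/sysroots/arm64-bookworm" apt install -y libssl-dev

# Point the host's cross-compiler at the rootfs as a sysroot.
aarch64-linux-gnu-gcc --sysroot="$HOME/sysroots/arm64-bookworm" main.c -o myapp
```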
Conclusion
Cross-compiling is not perfect, but it is also not the most urgent problem. If you follow the "happy path", like Debian Multiarch, things tend to work fairly well; it is only after you go off that path that things start breaking. Right now, the most reliable way to cross-compile is, unfortunately, to emulate the target system. Thank you for reading.