I prefer amd64 as it's so much easier to type and scans much more easily. x86_64 is just awkward.
Bikeshedding, I guess, and in the abstract I can see how x86_64 is better, but pragmatism > purity, and you'll take my amd64 from my cold dead hands.
As for Go, you can get the GOARCH/GOOS combinations from "go tool dist list". Can be useful at times if you want to ensure your code cross-compiles in CI.
rustc: `rustc --print target-list`
golang: `go tool dist list`
zig: `zig targets`
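Building on the `go tool dist list` tip, here's a minimal sketch of a CI cross-compile check. The grep filter is purely illustrative; in practice you'd list the platforms you actually support:

```shell
# Sketch: try building the module for a subset of Go's supported
# GOOS/GOARCH pairs. The filter below is an example, not a recommendation.
for target in $(go tool dist list | grep -E '^(linux|darwin|windows)/(amd64|arm64)$'); do
  GOOS="${target%/*}" GOARCH="${target#*/}" go build ./... || exit 1
done
```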
As the article points out, the complete lack of standardization and consistency in what constitutes a "triple" (sometimes actually a quad!) is kind of hellishly hilarious.
But for the rest of us, I'm so glad that I can just cross compile things in Go without thinking about it. The annoying thing about setting up cross compilation with GCC is not learning the naming conventions; it's getting the right toolchains installed and wired up correctly in your build system. Go just ships that out of the box, and it's so much more pleasant.
It's also one thing that's great about Zig. Using Go+zig when I need to cross compile something that includes cgo is so much better than trying to get GCC toolchains set up properly.
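For the curious, the Go+zig combination usually boils down to pointing cgo's C compiler at `zig cc` with an explicit target. This is a sketch; the triple and output name are illustrative:

```shell
# Sketch: cross-compile a cgo-using Go program for linux/amd64 from any
# host, using zig as the C cross compiler (zig bundles clang plus libc
# headers and stubs for many targets).
CGO_ENABLED=1 \
CC="zig cc -target x86_64-linux-gnu" \
GOOS=linux GOARCH=amd64 \
go build -o myapp .
```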
x32 support has not been removed from the Linux kernel. In fact, we're still maintaining Debian for x32 in Debian Ports.
I actually do have working code for the triple-to-TargetInfo instantiation portion (which is fun because there's one or two cases that juuuust aren't quite like all of the others, and I'm not sure if that's a bad copy-paste job or actually intentional). But I never got around to working out how to actually integrate the actual bodies of TargetInfo implementations--which provide things like the properties of C/C++ fundamental types or default macros--into the TableGen easily, so that patch is still merely languishing somewhere on my computer.
> Why the Windows people invented a whole other ABI instead of making things clean and simple like Apple did with Rosetta on ARM MacBooks? I have no idea, but http://www.emulators.com/docs/abc_arm64ec_explained.htm contains various excuses, none of which I am impressed by. My read is that their compiler org was just worse at life than Apple’s, which is not surprising, since Apple does compilers better than anyone else in the business.
I was already familiar with ARM64EC from reading about its development from Microsoft over the past few years, but I had not come across the emulators.com link before. It's a stupendous (long) read and well worth the time if you are interested in lower-level shenanigans. The truth is that Microsoft's ARM64EC solution is a hundred times more brilliant, and a thousand times better for backwards (and forwards) compatibility, than Rosetta on macOS. Rosetta gave the user a far inferior experience compared to native code: it executed (sometimes far) slower, prevented interop between legacy and modern code, and left app devs having to do a full port just to use newer tech (or even just to have a UI that matched the rest of the system). And it was always intended as a merely transitional bit of tech, to last the few years it took for native x86 apps to be developed and usurp the old PPC ones.
Microsoft's solution has none of these drawbacks (except the noted lack of AVX support). It doesn't require every app to be 2x or 3x as large as a sacrifice to the fat-binaries hack; it offers a much more elegant path for developers to migrate their code (piecemeal or otherwise) to a new platform when they don't know whether it will be worth their time and money to invest in a full rewrite; it lets users keep using all the apps they love; and it maintains Microsoft's very well-earned reputation for backwards compatibility.
When you run an app for Windows 2000 on Windows 11 (x86 or ARM), you don't see the old Windows 2000 aesthetic (and if you do, there's an easy way for users to opt into newer theming rather than requiring the developer to do something about it) and you aren't stuck with bugs from 30 years ago that were long since patched by the vendor many OS releases ago.
It does work on Linux, the only kernel that promises a stable binary interface to user space.
I assume it works with an all-targets binutils build. I haven't seen anyone building their cross-compilers in this way (at least not in recent memory).
> No idea what this is, and Google won’t help me.
It seems that Kalimba is a DSP, originally by CSR and now by Qualcomm. The CSR8640 uses it, for example: https://www.qualcomm.com/products/internet-of-things/consume...
VE is harder to find with such a short name.
I can't remember what the fifth one is, but yeah... insane system.
Thanks for writing this up! I wonder if anyone will ever come up with something more sensible.
- https://mcyoung.xyz/2021/06/01/linker-script/
Every time I deal with target triples I get confused and have to refresh my memory. This article makes me feel better in knowing that target triples are an unmitigated clusterfuck of cruft and bad design.
> Go does the correct thing and distributes a cross compiler.
Yes but also no. AFAIK Zig is the only toolchain to provide native cross compiling out of the box without bullshit.
Missing from this discussion is the ability to specify and target different versions of glibc. I think only Zig even attempts this, because Linux's philosophy of building against the local system's libraries is an incomprehensibly bad choice. So all these target triples are woefully underspecified.
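As a concrete illustration of what Zig attempts: its triples can carry a glibc version directly. The file name and version here are just examples:

```shell
# Sketch: build against glibc 2.28 regardless of the host's system glibc.
zig cc -target x86_64-linux-gnu.2.28 -o hello hello.c

# Compare: the unversioned gnu target uses zig's default glibc choice.
zig cc -target x86_64-linux-gnu -o hello hello.c
```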
I like that Rust at least defines its own clear list of target triples that are more rational than LLVM's. At this point I feel like the whole concept of a target triple needs to be thrown away. Everything about it is bad.
* https://en.wikipedia.org/wiki/Endianness#Hardware
Is there anything in wide use that isn't little-endian? IBM's stuff?
Network byte order is BE.
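One way to see the difference between wire order and host order, assuming a POSIX `od` is available:

```shell
# The bytes 01 02 03 04 arrive on the wire in exactly that order
# (big-endian for a 32-bit value). Reinterpreting them as a native
# 32-bit integer exposes the host's endianness.
printf '\001\002\003\004' | od -An -td4
# Big-endian host:    16909060  (0x01020304)
# Little-endian host: 67305985  (0x04030201)
```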
This is technically incorrect. The 286 had protected mode. It was a 16-bit protected mode, being a 16-bit processor. It was also incompatible with the later protected mode of the 386 through today’s processors. It did, however, exist.
That's a wild take. I think it's pretty universally accepted that GCC and the GNU toolchain are what made this ubiquitous.
Also, the x32 ABI and support for it are still around; I don't know where the author got that notion.
Why isn't it called wasm32-none-none?
https://git.savannah.gnu.org/cgit/config.git/tree/
The `testsuite/` directory contains some data files with a fairly extensive list of known targets. The vendor field should be considered fully extensible, and new combinations of known machine/kernel/libc shouldn't be considered invalid, but anything else should have a patch submitted.
> After all, you don’t want to be building your iPhone app on literal iPhone hardware.
iPhones are impressively powerful, but you wouldn't know it from the software lockdown Apple keeps them under.
Example: https://www.tomsguide.com/phones/iphones/iphone-16-is-actual...
There's a reason people were clamoring for Apple to make ARM laptops/desktops for years before Apple finally committed.
So, it turns out, a lot of people actually do call it x64, including the author's own friends! It's just that the author dislikes it. Disliking something is fine, but why claim something you know first-hand to be false?
Also, the actual proper name for this ISA is, of course, EM64T. /s
> The fourth entry of the triple (and I repeat myself, yes, it’s still a triple)
Any actual justification, beyond bald assertion of personal preference? Just call it a "tuple" or something...