by iforgotpassword
6 subcomments
- The other issue is that people seem to just copy configure/autotools scripts over from older or other projects, either because they're lazy or because they don't understand them well enough to write their own. The result is that even in relatively modern code bases that only target something like x86, arm, and maybe mips, and only gcc/clang, you still get checks for the size of an int, or for which header is needed for printf, or for whether long long exists... And then the entire code base never checks the generated macros in a single place, uses int64_t, and never checks for stdint.h in the configure script...
by creatonez
1 subcomment
- Noticed an easter egg in this article. The text below "I'm sorry, but in the year 2025, this is ridiculous:" is animated entirely without JavaScript or .gif files. It's pure CSS.
This is how it was done: https://github.com/tavianator/tavianator.com/blob/cf0e4ef26d...
- I did something like the system described in this article a few years back. [1]
Instead of splitting the "configure" and "make" steps though, I chose to instead fold much of the "configure" step into the "make".
To clarify, this article describes a system where `./configure` runs a bunch of compilations in parallel, then `make` does stuff depending on those compilations.
If one is willing to restrict what configure can detect/do to writing header files (rather than setting variables examined/used in a Makefile), then one can instead have `./configure` generate a `Makefile` (or in my case, a ninja file), so that the "run the compiler to see which defines to set" step and the "run the compiler to build the executable" step both happen in a single `make` or `ninja` invocation.
The simple way here results in _almost_ the same behavior: all the "configure"-like stuff runs, and then all the "build" stuff runs. But if one is a bit more careful/clever and doesn't make every `<real source>.c` compilation depend on the entire "config.h", then one can start to interleave the work perceived as "configuration" with the work seen as "build". (I did not get that fancy; a minimal sketch of the basic approach follows the link below.)
[1]: https://github.com/codyps/cninja/tree/master/config_h
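A minimal sketch of the shape this takes, assuming a generated Makefile (the probe and macro names here are illustrative, not taken from the linked project):

  # config.h is assembled from per-probe fragments, so a single `make -j`
  # runs the feature probes and the real compiles together.
  CC ?= cc

  config.h: stdatomic.frag
  	cat stdatomic.frag > config.h

  # Probe: does <stdatomic.h> compile? Record the answer in a fragment.
  stdatomic.frag:
  	printf '#include <stdatomic.h>\nint main(void){return 0;}\n' > probe.c; \
  	if $(CC) -c probe.c -o /dev/null 2>/dev/null; \
  	then echo '#define HAVE_STDATOMIC_H 1' > $@; \
  	else echo '/* no stdatomic.h */' > $@; fi

  # Real objects depend on config.h (or, fancier, on individual fragments,
  # which is what would let configuration and build work interleave).
  main.o: main.c config.h
  	$(CC) -c main.c -o $@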
by epistasis
4 subcomments
- I've spent a fair amount of time over the past decades to make autotools work on my projects, and I've never felt like it was a good use of time.
It's likely that C will continue to be used by everyone for decades to come, but I know that I'll personally never start a new project in C again.
I'm still glad that there's some sort of push to make autotools suck less for legacy projects.
- And on macOS, the notarization checks for all the conftest binaries generated by configure add even more latency. Apple reneged on their former promise to give an opt-out for this.
by fishgoesblub
0 subcomments
- Very nice! I always get annoyed when my fancy 16-thread CPU is left barely used, with one thread burning away and the rest sitting and waiting. Bookmarking this for later to play around with whatever projects I use that still use configure.
Also, I was surprised when the animated text at the top of the article wasn't a gif, but actual text. So cool!
- Autoconf can use cache files [1], which can greatly speed up repeated configures. With a cache, each test is run at most once.
[1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...
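Typical usage with the standard flags (-C is shorthand for --cache-file=config.cache):

  $ ./configure -C    # first run: executes the checks, writes config.cache
  $ ./configure -C    # later runs: most answers come straight from the cache
  # a cache file can also be shared between build trees, with care:
  $ ./configure --cache-file=../shared.cache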
- I get the impression configure not only runs sequentially, but incrementally, where previous results can change the results of tests run later. Were it just sequential, running multiple tests as separate processes would be relatively simple.
Also, you shouldn’t need to run ./configure every time you run make.
by SuperV1234
1 subcomment
- CMake also needs this, badly...
by moralestapia
3 subcomments
- >The purpose of a ./configure script is basically to run the compiler a bunch of times and check which runs succeeded.
Wait, is this true? (!)
by kazinator
1 subcomment
- I've implemented a configuration caching mechanism for myself (in one important project) which stores configuration artifacts in a cache directory, keyed by commit hash. It works as a git hook:
$ git bisect good
Bisecting: 7 revisions left to test after this (roughly 3 steps)
restored cached configuration for 2f8679c346a88c07b81ea8e9854c71dae2ade167
[2f8679c346a88c07b81ea8e9854c71dae2ade167] expander: noexpand mechanism.
The "restored cached configuration" message is from the git hook. What it's not saying is that it also saved the config for the commit it is navigating away from.I primed the cache by executing a "git checkout" for each of a range of commits.
Going forward, it will populate itself.
This is the only issue I would conceivably care about with regard to configure performance. When not navigating in git history, I do not often run configure.
Downstream distros do not care; they keep their machines and cores busy by building multiple packages in parallel.
It's not ideal because the cache from one host is not applicable to another; you can't port it. I could write an intelligent script to populate it, which basically identifies commits (within some specified range) that have touched the config system, and then assumes that for all in-between commits, it's the same.
The hook could do this. When it notices that the current sha doesn't have a cached configuration, it could search backwards through history for the most recent commit which does have it. If the configure script (or something influencing it) has not been touched since that commit, then its cached material can be populated for all in-between commits right through the current one. That would take care of large swaths of commits in a typical bisect session.
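A post-checkout hook along these lines might look roughly like this (a sketch under my own assumptions; the cache location and the list of cached files are guesses, not the commenter's actual script):

  #!/bin/sh
  # .git/hooks/post-checkout: $1 = previous HEAD sha, $2 = new HEAD sha
  prev=$1 new=$2
  cache=.git/config-cache
  # save the configuration of the commit we are leaving
  mkdir -p "$cache/$prev"
  cp config.status config.h "$cache/$prev/" 2>/dev/null
  # restore the configuration of the commit we arrived at, if cached
  if [ -d "$cache/$new" ]; then
      cp "$cache/$new"/* . &&
      echo "restored cached configuration for $new"
  fi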
by gorgoiler
10 subcomments
- On the topic* of having 24 cores and wanting to put them to work: when I were a lad the promise was that pure functional programming would trivially allow for parallel execution of functions. Has this future ever materialized in a modern language / runtime?
x = 2 + 2
y = 2 * 2
z = f(x, y)
print(z)
…where x and y evaluate in parallel without me having to do anything. Clojure, perhaps?

*And superficially off the topic of this thread, but possibly not.
- Man, I've spent way too many hours wrestling with build systems like autotools and cmake, and they both make me want to toss my laptop sometimes - it feels way harder than it needs to be, every time. You ever think we'll actually get to a point where building stuff just works, with no endless config scripts or chasing weird cross-platform bugs?
by redleader55
4 subcomments
- Why do we even need to run most of the things in ./configure? Why not just have a file in /etc, updated whenever packages are installed, that ./configure can read to learn the relevant facts about the environment? Obviously it would still allow setting various things with parameters and would still create a Makefile, but much faster.
- autotools is a complete disaster. It’s mind boggling to think that everything we build is usually on top of this arcane system
- "./configure" has always been the wrong thing for a very long long time. Also slow...
- (Luckily?) With C++ your build will nearly always take longer than the configuration step.
- As a user I highly appreciate ./configure for the --help flag, which usually tells me how to build a program with or without particular functionalities which may or may not be applicable to my use-case.
by tekknolagi
1 subcomment
- This doesn't mention another use of configure: manually enabling or disabling features via --with-X. I might send in a PR for that.
by BobbyTables2
0 subcomment
- I was really hoping he worked some autoreconf/macro magic to transform existing configure.ac files into a parallelized result.
Nice writeup though.
by Chocimier
1 subcomment
- It is possible in theory to speed up existing configure scripts by switching the interpreter from /bin/sh to something that scans the file, splits it into independent blocks, and runs them in parallel.
Is there any such previous work?
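In plain sh, the transformed script would amount to something like this (purely illustrative; the checks and names are made up, and no such interpreter is being claimed to exist):

  # run blocks known to be independent as background jobs, then join
  check_header() {  # $1 = header, $2 = macro to define
      printf '#include <%s>\nint main(void){return 0;}\n' "$1" > "$2.c"
      if cc -c "$2.c" -o /dev/null 2>/dev/null
      then echo "#define $2 1" > "$2.result"
      else echo "/* no $1 */" > "$2.result"
      fi
  }
  check_header stdatomic.h HAVE_STDATOMIC_H &
  check_header sys/random.h HAVE_SYS_RANDOM_H &
  wait
  cat *.result > config.h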
- I actually think this is possible to improve if you have the autoconf input files. You could parse them to find all the checks you know can run in parallel, and run those.
- What I'd like to see is a configure that guarantees that if configure succeeds, the build will succeed too.
- Wrong solution. Just run ‘configure -C’ so that it caches the results, or reuses the cache if it already exists.
And of course most of the time you don't need to rerun configure at all, just make.
- Good idea!
- Is this really a big deal given you run ./configure once?
It's like systemd trading away determinism for boot speed, when it takes 5 minutes to get through the POST.