Linux has no concept of a base system; it's a stand-alone kernel with a hodgepodge of crap around it, so this distinction makes no sense on Linux.
/opt is generally for software distributions for which you don't have source, only binaries - commercial software packages, for example. It's more common on Real UNIX(R) because most Linux users outside the enterprise aren't running commercial software. You're putting your $500k EDA software under /opt.
char string[10000];
char *strp, *p;                 // args[] is the argument vector built earlier (not shown)
int i;

strp = string;
for (i = 0; i < 9; i++)         // copy the 9-character prefix "/usr/bin/"
    *strp++ = "/usr/bin/"[i];
p = *argv++;                    // the command name, e.g. "foo"
while (*strp++ = *p++)          // append it, NUL terminator included
    ;
// string == "/usr/bin/foo"
execv(string + 9, args); // "foo" (execv returns only in case of error, i.e. when foo does not exist)
execv(string + 4, args); // "/bin/foo" (string + 4 points at the second slash)
execv(string, args);     // "/usr/bin/foo"
[1] https://github.com/dspinellis/unix-history-repo/blob/Researc...

So /bin and /sbin became redundant.
Sometime around 2012 someone observed that no current Linux can boot without /usr anyway. So what did they do? Move everything from /usr to / and drop the whole /usr legacy? Noooo, that would be too simple. Move / to /usr. And because that is still too simple, also move /bin, /sbin and /usr/sbin to /usr/bin, and then keep symlinks at the old locations, because who's gonna fix hardcoded paths in 99% of all Linux apps anyway??
Oh, how I wish I was born in the '60s, when the world was still sane.
I worked at an R&D center where we had hundreds of UNIX systems of all types (e.g. Sun, Ultrix, HP, Symbolics). We also had Sun 2s, 3s, and 4s, each with different CPUs/architectures and incompatible binaries. Some Suns had no disks at all. And with hundreds of systems, we literally had a hundred different servers across the entire site.
I would compile a program for a Sun 3 and needed a way to install it once for use on hundreds of computers. Teams of people on dozens of different computers also needed to share files with each other.
This was before SSH. We had to use NFS.
It was fairly seamless and ... interesting.
Well...
battlestation : ls /opt/
homebrew local
:'-)

2. “Then somebody decided /usr/local wasn't a good place to install new packages, so let's add /opt”
Not exactly. /usr/local exists so you don’t accidentally mess up your distro/package manager by changing its files. It’s “local” to your installation. But it is still structured — /usr/local/bin, /usr/local/lib, etcetera — divided into binaries, shared libraries, manpages.
Whereas /opt has no structure. It’s “the wild west”…application binaries, libraries, configuration, data files, etcetera with no distinction. Apps with “universal” packaging, or sometimes secondary package managers.
For example, /usr/local/bin is normally part of PATH, but /opt is not (unless e.g. Homebrew adds it to your bashrc).
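To make the PATH mechanics concrete, here is a minimal sketch of the directory-by-directory search a shell performs before giving up; try_exec_in_path is a hypothetical helper (not a real libc function), and the PATH value in the comment is just an assumed example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// Try "name" in each PATH directory in turn, in the spirit of the
// historical shell code quoted earlier in the thread.
int try_exec_in_path(const char *name, char *const argv[])
{
    const char *env = getenv("PATH");   // e.g. "/usr/local/bin:/usr/bin:/bin"
    char *path, *dir;
    char buf[4096];

    if (env == NULL)
        return -1;
    path = strdup(env);                 // strtok modifies its argument
    for (dir = strtok(path, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(buf, sizeof buf, "%s/%s", dir, name);
        execv(buf, argv);               // returns only if this candidate fails
    }
    free(path);
    return -1;                          // nothing on PATH matched
}

A directory like /opt/something/bin is never tried unless an install step appends it to PATH, which is exactly why /opt packages so often edit your shell profile.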
Doesn't being under $HOME make .local redundant? I guess one could argue for binaries going in an architecture-specific subdirectory if $HOME was on a shared filesystem, but that's not what's being done here.
To me, $HOME/.local/share and its siblings are just a needless level of indirection, forcing me to jump through an extra hoop every time I want to access what's in there.
(I know it's sometimes possible to override it with an environment variable, but the predictably spotty support for those overrides means I would then have to look for things in two places. I think sensible defaults would be nicer.)
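For what it's worth, the override in question is the XDG base-directory convention: honor $XDG_DATA_HOME when it is set, otherwise fall back to $HOME/.local/share. A minimal sketch of the lookup every application is supposed to implement itself (data_dir is a hypothetical helper):

#include <stdio.h>
#include <stdlib.h>

// Resolve the XDG data directory: $XDG_DATA_HOME if set and non-empty,
// otherwise the spec's default of $HOME/.local/share.
// (Assumes $HOME is set, as it normally is in a login session.)
void data_dir(char *buf, size_t len)
{
    const char *xdg = getenv("XDG_DATA_HOME");
    if (xdg != NULL && *xdg != '\0')
        snprintf(buf, len, "%s", xdg);
    else
        snprintf(buf, len, "%s/.local/share", getenv("HOME"));
}

The "spotty support" complaint is precisely that each application implements (or forgets) this fallback on its own.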
We'll skip PDP-7 UNIX; no hierarchical file system yet.
UNIX v1 on the PDP-11 had an RF11 fixed-head disk (1 MB) for / and swap, and an RK05 moving-head disk (2.5 MB) for /usr (the user directories).
By v2 they had added a second RK05 at /sys for things like the kernel, manual pages, and system language stuff like the C compiler and m6.
By v3 they added yet another RK05 at /crp for, well, all sorts of crap (literally), including yacc apparently. /usr/bin is mentioned here for the first time.
I don't feel like looking up when sbin was first introduced, but it is not a Bell Labs thing; possibly BSD or AT&T UNIX? Binaries that one would normally not want to run were kept in /etc, which includes things like init, mount, umount, getty, but also the second pass of the assembler (as2) and helpers like glob. I also don't know when /home became canonical, but at Bell Labs it was never a thing (Plan 9 has user directories in /usr, where they had always belonged logically).
The lib situation is more difficult. It looks like it started with /usr/lib. By v3 we find the equivalent directory as /lib, which contains the two passes of the C compiler (no optimization pass back then), the C runtime, and lib[abc].a (the assembler, B, and C libraries respectively). /usr/lib had been repurposed for non-object libraries, think text-preparation and typesetting.
By v4 the system had escaped the labs (see the recent news) and at that point everyone modified the system to their taste anyway. Perhaps it should be noted that the v7 distribution (which is the first that is very clearly the ancestor of every modern UNIX) has no /usr/bin, only /bin. /lib and /usr/lib are split however.
These are just some rough notes, and due to a lack of early material they're still not as accurate as I would like. Also, UNIX ran on more than one machine even in the early days (the manuals mention the number of installations), so there must have been some variation anyway. Something I'd like to know in particular is when and where RP03 disk drives were used; these are pretty huge in comparison to the cute RK05s.
This is a security threat, especially with SETUID programs. If you could change the library, you could install new code and gain privileged access.
This was why /usr/sbin was created - all of the programs there were compiled with static libraries.
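To see why the shared-library variant is dangerous: if an attacker can substitute a library that a setuid binary loads, any initialization code in that library runs with the binary's elevated privileges. A minimal sketch of such a malicious drop-in, assuming the GCC/Clang constructor extension (the file and its placement are hypothetical):

#include <unistd.h>

// If a setuid-root binary loads this in place of a legitimate shared
// library, the constructor below runs with elevated privileges before the
// program's main() does. A statically linked binary loads no libraries at
// runtime, so this attack surface disappears entirely.
__attribute__((constructor))
static void on_load(void)
{
    // a shell with the binary's privileges (modern shells may drop them
    // unless told otherwise)
    execl("/bin/sh", "sh", (char *)NULL);
}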
Slapping it down as "FHS is now a standard" does not change anything. People will ask why it is suddenly a standard when it hasn't made any sense at all whatsoever. bin versus sbin is also pointless. Inertia is the primary reason these things usually never get fixed.
Practically speaking, in this century, if I were starting a new OS I would set it up like so:
/bin for all system binaries. Any binary from a package installed by the OS package manager lives here.
/lib same, but for shared libraries.
/var for variable data. This is where you would put things like your Postgres data files.
/tmp for temporary files.
/home as usual.
/dev as usual.
/boot as usual.
/etc as usual.
/usr would be what /usr/local is on most systems. So /usr/bin holds binaries not installed by the OS package manager, /usr/etc holds config files for packages not installed by the package manager, and so on.
Get rid of /usr/local and /sbin.
/media replaces /mnt entirely (or vice versa).
Ditch /opt and /srv.
Add /sub for subsystems: container overlays should live here. This would allow the root user (or a docker group, etc.) to view the container file system, chroot into it, or run a container on it.
Then again, nobody gave me a PDP-11 to decide so my vote doesn’t count :)
One of our devs was also a GIMP contributor, and he dropped GIMP into /usr/local and filled up the filesystem. Back then package managers didn't exist, so you had to read the Makefile and hope you didn't remove anything that was shared.
/opt/gimp or /usr/local/gimp.
Local because in some places they mounted an NFS share, and local was local to you.
Speaking of things which are needlessly complex, I'm reminded of this classic post on the tortured history of the browser User-Agent header:
https://webaim.org/blog/user-agent-string-history/
Highly recommended!
Mac OS?
Later, if desired, the system could override those libraries with others (newer compatible versions, or patched ones); more thought is needed about this. The key, from the process's point of view, would be to limit each process's access to its own directories and, by default, to a very limited set of local system services.
To extend those permissions, each binary in such a directory would need to be accompanied by a permissions-request file requiring approval from the user or from the system's default policies (each distro would have its own point of view, I guess), with the aim of improving process isolation and access permissions for the system, drivers, and services.
This would also require restructuring the console philosophy, how processes are managed, and so on; a big restructuring.
I mean, people are already duplicating space with containers trying to isolate processes; emphasis on trying.
I know this is unrealistic given the deep changes it would require, so consider this just thinking out loud.
PS: If your answer is that this already exists with AppArmor, SELinux, etc., then you have not understood the root of the issue with those modules.
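For concreteness, here is a rough sketch of what such a permissions-request file might declare, expressed as a C structure; every name in it is hypothetical and corresponds to no existing Linux facility:

// Hypothetical per-binary permissions request, as mused above.
struct perm_request {
    const char *binary;         // e.g. "/apps/foo/bin/foo"
    const char **own_dirs;      // directories the process may touch
    const char **services;      // local system services it may call
    int needs_user_approval;    // 1 = prompt the user, 0 = follow distro defaults
};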
For me this was an eye-opener. I kept trying to wrap my head around all these different paths and "standards" because I thought the layout was correct and deliberately designed. Looking back through the history, this doesn't seem to be the case; I feel much better now about having been confused by all the different PATH conventions and strict hierarchies.
Additional info: many rules from many different sources are now in force that keep the historical structure in place.