I delayed upgrading to 15.0 after it was released, but last weekend I finally did it, and it left me wondering why I hadn't done it sooner, because it went quickly and smoothly.
Is there anything FreeBSD can do that, say, Debian cannot? Probably not (at least I cannot think of anything). When I set up the server, ZFS was a huge selling point, though I hear it works quite well on Linux these days. But I appreciate the reliability, the good documentation, and the community (when I need help).
But there is always pressure for more features, more bloat. On Linux, on the plus side, I can plug in some random gadget and in most cases it just works. And on any laptop that's a few years old, you can just install Fedora from its bootable live image and it will work: secure boot, suspend, Wi-Fi, the special buttons on the keyboard, and so on. The downside is enormous bloat and, yes, often the kind of tinkering you really don't want to do any more, such as the Brother laser printer drivers still being shipped as 32-bit binaries and the installer silently failing because one particular 32-bit dependency wasn't autoinstalled. Or having to get an Ubuntu-only installer (DisplayLink!) to run on Fedora.
But here you have the "mainstream" Unix-ish OS absorbing all the bleeding-edge stuff, all the bloat, giving FreeBSD free rein to stay pure, with a higher average quality of user, which sets the tone of the whole scene. An echo of the old days, like Usenet before "Eternal September" and before Canter & Siegel - for those old enough to remember how it all felt back then.
I'm in the process of converting and consolidating all my home infra into a mono-compose, for the simple reason that I don't want to fiddle with shit; I just want to set-and-forget. The joy of technology was in communications and experiences, not in diving through abstraction layers to figure out why something was being fiddly. Containers promised to remove the fiddliness (as every virtualization advancement inevitably promises), and now I'm forced to either fiddle with Docker and its root security issues, fiddle with Podman and reconfigure the OS for lower security so containers don't stop (or worse, convert compose files to systemd units to make them services), or fiddle with Kubernetes and its myriad of ancillary services and CRDs built for enterprises, not homelabs.
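For what it's worth, the compose-to-systemd conversion has gotten less painful with Podman's quadlets: you drop a `.container` unit into `~/.config/containers/systemd/` and systemd generates the service for you. A rough sketch, using a hypothetical Jellyfin container (image, port, and paths are placeholders):

```ini
# ~/.config/containers/systemd/jellyfin.container
# Hypothetical example; adjust image, port, and volume to taste.
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=%h/jellyfin/config:/config:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the container runs as an ordinary user service, which sidesteps the hand-written unit files the old `podman generate systemd` route produced.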
For two years now, there's been a pretty consistent stream of love-letters to the BSDs that keeps tugging at what I love about technology: the whole point was to enable you to spend more time living, rather than wrangling what a computer does and how it does it. The concept of jails, where I can just run software again, no abstractions needed, and trust it not to misbehave? Amazing, I want to learn more.
So yeah, in lieu of setting up the second NUC as a Debian HA node for Docker/QEMU failover, I think I'm going to slap FreeBSD on it and try porting my workloads to it via Jails. Worst case scenario, I learn something new; best case scenario, I finally get what I want and can finally catch up on my books, movies, shows, and music instead of constantly fiddling with why Plex or Jellyfin or my RSS Aggregator stopped functioning, again.
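For anyone curious what that jail route roughly looks like: a minimal classic jail.conf sketch (the jail name, path, and address below are made up; the jail filesystem itself is usually populated from a base.txz or with a manager like Bastille):

```
# /etc/jail.conf -- minimal sketch; name, path, and IP are placeholders
plex {
    path = "/usr/local/jails/plex";
    host.hostname = "plex.home.lan";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With `jail_enable="YES"` in rc.conf, `service jail start plex` brings it up; the jailed processes share the host kernel with no hypervisor layer in between.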
I am just not sure it is worth leaving the Linux ecosystem. What if I want to run a Docker container? Do I have to trust random people for ports of software that runs natively on Linux, or port it myself?
FreeBSD seems good so far, but community and ecosystem are important.
Anyway, I'd had enough of the random downtime, so I just switched to Linux, which didn't have these issues.
I'd say the best part of FreeBSD, though, is freebsd-update, which was a game changer after the previous make world shenanigans.
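For context, the freebsd-update workflow being praised here is roughly the following (FreeBSD-only commands, run as root; the 15.0-RELEASE target is just an example):

```
# patch the currently installed release
freebsd-update fetch
freebsd-update install

# major version upgrade, e.g. to 15.0
freebsd-update -r 15.0-RELEASE upgrade
freebsd-update install    # installs the new kernel
shutdown -r now
freebsd-update install    # installs the new userland after reboot
```

Compared to rebuilding world from source, this pulls prebuilt binary updates, which is most of why it felt like a game changer.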
Not my idea of love. Maybe that hardware was supported on Linux. "Switch from Linux to FreeBSD so that you can later switch to a Mac when you get frustrated with unsupported hardware" is not a good pitch.
Immich assumes you're running Docker and I can't seem to get Linux running in a bhyve VM with Intel Quick Sync acceleration.
I want to have a bunch (5-10) of FreeBSD cattle-style servers to run a service on.
What’s the preferred infrastructure as code style approach to setting them up?
Some will be bare metal with kvm console access. Some will be VMs. They will be heterogeneous and not in one DC.
I probably don’t need ZFS for this application (raw IOPS matter more than snapshots, etc).
I have previous experience with kubernetes, and am not interested in using it again.
Monitoring, logging, deterministic “zero to working” install and updates are probably the main requirements.
My personal issue is that I do not believe FreeBSD will give me a smooth experience getting my GPU and whatnot running. On Mint Cinnamon, I only had to install the latest supported kernel to get my RTX and WiFi 6 card recognised.
Debian: the reason I run Debian as a server everywhere, even in my 3D printer, is that it just works. And not just that.
I only run the Debian netinst version, which means installing only the standard system and SSH; text mode is the way.
We are talking about 300MB of memory used by Pi-hole + an Unbound recursive DNS, out of 512MB, on Debian 13.
Disk space?? 1GB or so I guess.
These are my blockers to even trying raw FreeBSD: lack of proper hardware support, and, as a server, even stripping out everything I can, I cannot imagine FreeBSD running within 300MB of memory and 1GB of disk.
Not to mention, if you work in IT in any way, the last thing you want is to fight the system you use to solve another problem. That is why I left Ubuntu after 13 years or so; it is a Windows within the Linux world now.
A Linux distro must just run: no dramas, no issues, a major system release goes by like nothing happened. That matters!!!
How do FreeBSD users get around the inconveniences associated with the "the rest of the world" running on Linux?
Ubuntu could have been the one, but they reversed course, dropping support for Zsys in 2022[1].
If there are others, then please let me know, but as far as I can tell, the closest approximations in Linux are:
- Btrfs with Snapper in openSUSE Tumbleweed/MicroOS
- Snapshot Manager/Boom in RHEL
- OStree in Fedora Atomic, CarbonOS, EndlessOS
- Bootable container implementations in Fedora CoreOS, RHEL10, CarbonOS, Bazzite, BlendOS, etc.
- Snaps in Ubuntu Core
- Generations in NixOS and Guix
- A/B update mechanism in ChromeOS, IncusOS
- OverlayFS in Nitrux Linux
- Ad-hoc implementations with Arch, Alpine, etc.
Excluding the ad-hoc implementations, only the openSUSE and Red Hat approaches allow you to treat your system image and system data the same way. They're great, but fundamentally incompatible with each other, and neither has caught on with other distributions. The capabilities of both approaches are limited compared to ZFS.
The strangest part of the Linux situation, IMHO, is that every time ZFS on Linux is discussed, someone will invariably bring up XFS. For the past decade, XFS on Linux has supported copy-on-write (CoW) and snapshot-style copies via reflinks. If this is the preferred path on Linux (for users who don't want the checksumming of ZFS/Btrfs/Bcachefs), then how come no major distro besides Red Hat has embraced it[2] to provide update rollback functionality?
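As a concrete illustration of what reflinks give you (this uses GNU cp; `--reflink=auto` falls back to a plain copy on filesystems without reflink support, so it is safe to try anywhere):

```shell
# Make a reflink copy: on XFS (with reflink enabled) or Btrfs the two
# files share extents on disk, and data is only duplicated when one of
# them is later modified (copy-on-write).
echo "state before upgrade" > rootfile.txt
cp --reflink=auto rootfile.txt rootfile.snapshot.txt

# The copy is byte-identical but independent: rewriting the original
# leaves the reflink copy untouched, which is the snapshot property.
echo "state after upgrade" > rootfile.txt
cat rootfile.snapshot.txt
```

This is the file-level primitive; a distro would still need tooling on top (as Red Hat's approach does) to turn it into whole-system rollback.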
I concede that most of the other approaches do provide a higher level of determinism for what your root system looks like after an upgrade. It's powerful when you can test that system as an OCI container (or as a VM with Nix/Guix). FWIW, FreeBSD can approximate this with the ability to use its boot environments as a jail[3].
[0] https://daemonforums.org/showthread.php?t=7099
[1] https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1968...
[2] https://docs.redhat.com/en/documentation/red_hat_enterprise_...
[3] https://man.freebsd.org/cgi/man.cgi?query=bectl&sektion=8&ma...
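The boot-environment-as-a-jail trick from [3] goes roughly like this (FreeBSD-only bectl commands; the BE name is just an example):

```
# snapshot the current root into a new boot environment
bectl create pre-upgrade

# test-drive a boot environment inside a throwaway jail
bectl jail pre-upgrade     # mounts the BE and drops you into a jail
bectl ujail pre-upgrade    # tear the jail down when done

# roll back by activating the old BE and rebooting
bectl activate pre-upgrade
shutdown -r now
```

Because boot environments are ZFS clones, the "snapshot" is cheap, and the jail lets you inspect a root filesystem without rebooting into it.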
How do you get hired if you do happen to have proper FreeBSD skills? It's notably absent from all the job listings.
Not clear!
This is indeed a problem now that Google search is next to useless, and AI is further degrading the quality.
I work around it to some extent by keeping my local knowledge base up to date, as much as that is possible, and by using a ton of scripts that help me do things. That works. I am also efficient. But some projects are simply underdocumented. A random example, in the Ruby ecosystem, is rack. Have a look here:
Now find the documentation ... try it.
You may find it:
Linked from the github page.
Well, have a look at it.
Remain patient.
Now as you have looked at it ... tell me if someone is troll-roflcopter-joking you.
https://rack.github.io/rack/main/index.html
Yes, you can jump to the individual documentation of the classes, but does that really explain anything? It tells you next to nothing about rack as a whole.
If you were new to Ruby, would you waste any time on such a project? Yes, rack is useful; yes, many people don't use it directly but via Sinatra, Rails and so forth - I get it. But that is not the point. The point is whether the documentation is good or bad. And that is not the only example: see ruby-webassembly, ruby-opal, and numerous other projects (I won't even mention the abandoned gems, though that is a problem every language faces - some code becomes outdated as maintainers disappear).
So this is really nothing unique to Linux. I bet on the BSDs you will also find ... a lack of documentation. Probably even more, as so few people blog about BSD. OpenBSD claims it has great documentation. Well, if I look at what they have, and then at the Arch or Gentoo wikis, then sorry, but the BSDs don't understand the problem domain.
It really is a general problem. Documentation is simply too crap in general, with a few exceptions.
> if the team behind this OS puts this much care into its documentation, imagine how solid the system itself must be.
Meh. FreeBSD documentation can hardly be called the stand-out role model here either. Not sure what the BSD folks think about that.
> I realized almost immediately that GNU/Linux and FreeBSD were so similar they were completely different.
Not really.
There are some differences, but I found they are very similar in their respective niches.
Unfortunately, my findings convinced me that Linux is the better choice for my use cases. This ranges from e.g. LFS/BLFS to 500 out of the top 500 supercomputers running Linux. Sure, I don't have a supercomputer, but the point is about quality. Linux is chaotic quality. Messy. But it works. The New Jersey model versus [insert any high quality here]. https://www.jwz.org/doc/worse-is-better.html
> Not only that: Linux would overheat and produce unpredictable results - errors, sudden shutdowns, fans screaming even after compilation finished.
Well, hardware plays a big factor, I get it. I have issues with some Nvidia cards, but other cards worked fine on the same computer. But this apocalypse scenario he writes about ... that's rubbish nonsense. Linux works. For the most part - depending on the hardware - but mostly it really works.
> I could read my email in mutt while compiling, something that was practically impossible on Linux
Ok, sorry, I stopped reading there. My current computer was fairly cheap; I deliberately put in 64GB RAM (before the insane AI-driven cost increases) and it is super-fast. I compile almost everything from source. I have no real issue with anything being too slow; admittedly a few things take quite a bit of compile power, e.g. LLVM or Qt - compiling those from source takes a while, yes, even on a fast computer. But no, the claim that FreeBSD is so much faster than Linux is simply not factual. It is rubbish nonsense. Note that the OpenBSD and NetBSD folks never write such strangeness. What's wrong with the FreeBSD guys?