If you ask these people, you'd think you need to buy expensive hardware and build your own datacenter at home.
I have been hosting all my services on a single Intel NUC from 10 years ago, with an RPi 5 as backup for critical services like DNS.
That's it.
You'll truly be amazed at how much stuff you can actually run on very little hardware when you only have two to five users, like in a family.
Also, MinIO was always an enterprise option. It was never meant for home use. Just use SeaweedFS, Garage, or the like if you really want S3.
Sidenote: You do not need S3 in your house. Just use the filesystem.
When choosing software that I run in my “homelab” I lean towards community-developed projects first. They may not always be of as high quality as the ones offered by commercial entities, but they’re safer for the long term and have no artificial limits (Plex). I used to be a happy Plex customer (I have Plex Pass), but several years ago I had enough of their bullshit, switched to Jellyfin, and couldn’t be happier!
1. A peer-to-peer model of decentralization, like BitTorrent, instead of the client-server model. Local web UIs (like Transmission's web UI) may be served locally (either host-only or LAN-only) as frontends for these apps. Consider this the 'last-mile connectivity', if you will.
2. Applications are resistant to outages. Obviously, home servers can't be expected to be always online; they may even be running on your regular desktop. But you shouldn't lose the utility of the service just because it goes offline. A great example of this is email: servers can wait up to two days for the destination server to show up before declaring a delivery failure, and even rejections are handled with retries minutes later.
3. The applications should be able to deal with dynamic IPs and NATs. We will probably need a cryptographic identity mechanism and a way to translate that into a connection to the correct end node. But most of these technologies exist today.
4. E2E encrypted and redundant storage and distribution servers for data that must absolutely be online all the time. Nostr relays seem like a good example.
The Solid and Nostr projects embody many of these ideas already; they just need a bit more polish to feel natural and intuitive. One way to do it is to have a local daemon that acts as a gateway, cache, and web UI to external data.
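To make the local-daemon idea a bit more concrete, here is a minimal sketch in Python, assuming a plain-HTTP upstream rather than the actual Solid or Nostr protocols; the endpoint, port, and cache path are all made up for illustration. It binds to localhost only, fetches remote data on request, and keeps a disk cache so already-fetched data stays readable when the upstream is unreachable.

    # Hypothetical localhost-only gateway: fetch, cache, and serve external data
    # to local frontends. A real gateway for Solid or Nostr would speak those
    # protocols instead of plain HTTP GETs.
    import hashlib
    import pathlib
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import unquote

    CACHE_DIR = pathlib.Path.home() / ".local-gateway-cache"   # placeholder location
    CACHE_DIR.mkdir(exist_ok=True)

    class GatewayHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Clients ask for /fetch?url=<percent-encoded remote URL>.
            if not self.path.startswith("/fetch?url="):
                self.send_error(404, "use /fetch?url=<remote url>")
                return
            url = unquote(self.path[len("/fetch?url="):])
            cached = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()

            if cached.exists():
                body = cached.read_bytes()          # serve from cache, even offline
            else:
                try:
                    with urllib.request.urlopen(url, timeout=10) as resp:
                        body = resp.read()
                    cached.write_bytes(body)        # cache for later, outage-tolerant reads
                except (OSError, ValueError):
                    self.send_error(502, "upstream unreachable and nothing cached")
                    return

            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Bind to localhost only: this is the last-mile UI/API for the local user,
        # not something exposed to the internet.
        HTTPServer(("127.0.0.1", 8399), GatewayHandler).serve_forever()

A local web UI (point 1) would sit in front of something like this, and the disk cache is what buys you the outage tolerance from point 2.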
Enshittification also usually implies that switching to an alternative is difficult (typically because creating a competing service is near impossible, since you'd have to get users onto it). That flaw doesn't really apply to self-hosting the way it does to centralized social media. You can just switch to Jellyfin or Garage or Zulip. Migration might be a pain, but it's doable.
You can't as easily stop using LinkedIn or GitHub or Facebook, etc.
I suspect you don’t. I suspect a couple of Beelinks could run your whole business (minus the GPU needs).
Yes, no more DynDNS free accounts... but you can still use afraid.org or Cloudflare Tunnels, maybe?
And in some cases nowadays you can get away with
docker-compose up
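For a typical app that really can be one small file plus that command. As a purely illustrative example (Jellyfin, since it comes up elsewhere in this thread), a minimal docker-compose.yml might look roughly like this; the host paths are placeholders:

    # hypothetical docker-compose.yml; adjust the volume paths for your machine
    services:
      jellyfin:
        image: jellyfin/jellyfin        # official image
        ports:
          - "8096:8096"                 # Jellyfin's default web UI port
        volumes:
          - ./config:/config            # persistent app configuration
          - ./media:/media:ro           # your media library, mounted read-only
        restart: unless-stopped

After that, docker-compose up -d starts it in the background.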
And for some of those things, like MinIO and Mattermost: are the complaints about the free tier or about self-hosting? I can't tell.
Indeed, the easiest "self hosting" ever was when ngrok happened... you could get your port listening on the internet without a sign-up, just by running a single binary without a flag...
Oh yes it is. I was already self-hosting stuff back in 2000, and it was very hard. Then came Docker, and it is very simple now.
Sure "very simple" mean different things to different people, but if you self host you need to know a lot already.
This is somehow similar to amateur electronics. You used to do 100% yourself, from scratch. Now you have boards and you can start in a much simpler way.
But the biggest thing I am worried about is hardware prices.
So I want to ask: is there any hardware (RAM, usually) whose price isn't increasing insanely? Perhaps refurbished or auctioned servers?
What is the best way to get bang-for-the-buck hardware right now? Should we even buy hardware at the moment, or wait 3-4 years for factory production to rise and the AI bubble to crash? I definitely think RAM prices will fall off very steeply (it's almost a cycle in the RAM business).
I am not sure, but buying up small amounts of compute feels like a decent idea if you are doing anything computationally expensive. And of course, if you have something like Plex, then I suppose you have to expand on storage rather than RAM (perhaps some encoding/decoding could be RAM-intensive, but I don't know).
I had fallen for the rumour that ASUS is ramping up chip production or something to save hardware prices, but it turned out to be fake, so I am not sure how to respond. Please, some hardware company should definitely see this opportunity.
But it does almost seem like there is a squeeze on general-purpose computing from all sides, including homelabs. DRAM and SSD prices are just the latest addition to that. There's also Win 11 requiring a TPM, which is not a bad thing by itself, but which will almost certainly take away the ability to run arbitrary OSes on PCs 5-10 years down the line. Or you'd still be able to boot them, but nothing will run on them without a fully trusted chain from TPM -> secure boot -> browser.
Unless you have a heavy-duty pipe to your premises you're just risking all kinds of headaches, and you're going to have to put your stuff behind Cloudflare anyway. And if you're doing that, why not use a VPS?
It's just not practical for someone to run a little blog or app that way.
> Even old hardware isn't safe: DDR4 prices are also affected, so that tiny ThinkCentre M720 won't save us.
Most of my home infrastructure is DDR2 or DDR3. It’s plenty fast for quite a lot of things. I really don’t care whether some background operation takes five minutes or an hour. I rather care how little energy and heat that machine produces.
Also, forking is an option; you can always use AI to keep it current.
If the source code is available for you to fork, modify, and maintain as you see fit, what's the complaining really about?
"Plex added a paid license for remote streaming, a feature that was previously free. And then Plex decided to also sell personal data — I sure love self-hosted software spying on me."
How is it "self-hosted" if it's "remote streaming?" And if you're hosting it, you can throttle any outgoing traffic you want. Right?
The only other examples are Mattermost and MinIO... which I don't know much about, but again: Aren't you in control of your own host?
This article is lame. How about focusing on back-ends that pretend to support self-hosting but make it difficult by perpetuating massive gaps in their documentation (looking at you, Supabase)?