from Hacker News

Why I run FreeBSD for my home servers (2024)

by psxuaw on 3/31/25, 12:59 PM with 243 comments

  • by nine_k on 3/31/25, 4:28 PM

    If systemd is the reason, there are several good distros without systemd (I run Void Linux in particular).

    If "kubesomething" is the reason, there's no requirement to use it. I think most people don't run it on their home servers.

    If containers are the reason, then again, they are not a requirement. But they are pretty similar to BSD's jails. I don't think they are particularly complex.

    FreeBSD has a number of strong suits: ZFS, a different kernel and network stack, a cohesive system from a small(ish) team of authors, the handbook, etc. But the usual Linux hobgoblins listed above are a red herring here, to my mind.

  • by solid_fuel on 4/1/25, 12:20 AM

    I use FreeBSD for my home server and I have for years. For me, the biggest reason is just the stability from a user perspective. I don't mean system stability, although it has been rock solid, I mean the stability in terms of administration - the tools don't change frequently.

    `ifconfig` just works, like it has worked for 20 years. On my Linux servers, it's all swallowed into `ip addr` now. I don't mind that, and I certainly understand why these things change, but when I update an Ubuntu server I always worry that the next time I log in, a tool I'm used to will be broken or removed.

    I simply don't have those concerns on FreeBSD.

  • by csdvrx on 3/31/25, 4:29 PM

    The main complaint of the author seems to be that Linux uses systemd.

    In my experience, systemd is far better and more reliable than anything else, especially if you need complex logic (ex: when this and that happen, start doing this, except when such and such are present)
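
    As a sketch of that kind of conditional logic, here is what a unit file can look like (the unit, mount, lock-file, and script names below are all invented for illustration):

```ini
# /etc/systemd/system/sync-job.service (hypothetical example)
[Unit]
Description=Sync job that waits for network and a mount, and skips if a lock file exists
Requires=network-online.target
After=network-online.target srv-backup.mount
ConditionPathExists=!/run/sync-job.lock

[Service]
Type=oneshot
ExecStart=/usr/local/bin/sync-job
```

    Expressing the same "start only when X and Y, unless Z" rules in a hand-rolled rc script means writing and testing that logic yourself.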

    Most of the problems I've seen come from trying to duplicate systemd functions: in the author example, why bother with rsyslog or network-manager?

    I have also seen many people refusing to learn modern tools, instead trying to make it work with the tools they know, by disabling what works better, often with poor results.

    It's like trying to keep using ifconfig and route instead of ip: you can make it work, but, say, managing multiple IPs on the same interface forces you to go with eth0:0, eth0:1, etc. (and let's not even talk about network namespaces).
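
    For concreteness, the contrast looks roughly like this (the interface name and addresses are made up, and these commands need root, so treat them as illustration rather than a runnable script):

```shell
# net-tools style: one address per alias pseudo-interface
ifconfig eth0:0 192.0.2.10 netmask 255.255.255.0 up
ifconfig eth0:1 192.0.2.11 netmask 255.255.255.0 up

# iproute2 style: any number of addresses on the real interface
ip addr add 192.0.2.10/24 dev eth0
ip addr add 192.0.2.11/24 dev eth0
ip -o addr show dev eth0   # lists all of them
```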

    I like the various BSDs and distributions like postmarketOS, but I wish they had access to modern tools instead of having to "roll my own" with scripts or make do with what's available.

  • by whalesalad on 3/31/25, 4:09 PM

    I notice FreeBSD admins tend to follow a 'pets not cattle' approach, carefully nurturing individual systems. Linux admins like myself typically prefer the 'cattle not pets' mindset—using infrastructure-as-code where if a server dies, no problem, just spin up another one. Leverage containers. Statelessness.

    I don't want to spend time meticulously configuring things beyond the core infrastructure my services run on. I should probably explore FreeBSD more, but honestly, with containers being everywhere now, I'm not seeing a compelling reason to bother. I realize jails are a valid analogue, but broadly speaking the UX is not the same.

    All this being said, I have this romantic draw to FreeBSD and want to play around with it more. But every time I set up a basic box I feel teleported back to 2007.

    Are there any fun lab projects, posts, educational series targeted at FreeBSD?

  • by asveikau on 3/31/25, 4:22 PM

    ZFS is probably the biggest reason for me. I have a machine with a zfs pool running samba and nfsd.

    Philosophically I tend to prefer *BSDs over Linux. I have a few FreeBSD machines, one OpenBSD, and one Linux.

  • by aeblyve on 3/31/25, 5:17 PM

    Cheap Complexity.

    https://www.schneier.com/blog/archives/2022/08/security-and-...

    The article is directly talking about mass-produced electronic commodities. The same holds even more for bits, where the cost of copying is not merely "low", as with microcontrollers, but essentially free.

    In my opinion, systemd does solve a lot of problems, at a cost of somewhat more complexity and resource utilization. But it is the nature of material culture to complexify over time as more physical resources become available, i.e., "progress". More advanced commodities don't come out of thin air via "better processes", but from processes that interweave more intimately with other parts of the economy, given the previously produced commodities. Something similar can be true inside the computer.

  • by carlhjerpe on 3/31/25, 5:20 PM

    I started reading but stopped as soon as it turned into a systemd rant. systemd, while not for everyone, is good for most people.

  • by Torpenn on 4/1/25, 1:09 PM

    Wow, OP, I didn't think anyone would come across my random blog and post the link here... I'm surprised at the number of comments for a post that was originally there just to give my blog a first page and a bit of content, especially for an article that I feel is barely finished and needs a lot of updating.

  • by efortis on 3/31/25, 8:54 PM

    If you are curious about a two-server infra with FreeBSD VNET jails:

    https://blog.uxtly.com/freebsd-jails-network-setup

  • by jcgrillo on 3/31/25, 10:44 PM

    Just this past weekend I gave up trying to install openSUSE on a new laptop. I couldn't figure out which magic combination of overlapping Xorg, Wayland, and who-knows-what-else settings was required to make ctrl:nocaps work both in the console and in KDE. I had FreeBSD running with an X11 MATE desktop, with all my files and software, ready to rock in less than an hour. The only things remaining are to figure out suspend/hibernate and to make the brightness keys work. What a breath of fresh air.

  • by rollcat on 3/31/25, 4:53 PM

    I have mixed feelings about FreeBSD. Some stuff is genuinely good: major/minor release branches, the best ZFS experience you can get OOB, actual man pages, overall a lot "cleaner" than most Linux distros.

    OTOH when you compare it to e.g. OpenBSD (or in many instances, even Linux), it's an actual mess. The default install leaves you browsing through the handbook to get simple things to work; it has three (three!) distinct firewalls; the split between /usr/local/etc and /etc constantly leaves you guessing where to find a particular config file; even tiny things, such as some default sysctl value being an XML snippet - actually, WTF?

    The desktop story is also pretty bad. OpenBSD asks you during installation, whether you'd like to use X11 - and that's it. You boot to XDM, you get a basic window manager, things like volume buttons just work, all in the base system - no packages, no config files. You can install Gnome or XFCE from there, and rest assured you'll always have a working fallback. FreeBSD still feels like 90's Linux in that area. Regarding usability, both are behind Linux in things like connecting to Wifi networks, but in OpenBSD's case you just save a list of SSIDs/passwords in a text file, and the kernel does the rest for you.
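
    For the curious, that text file on OpenBSD is /etc/hostname.<driver>. A sketch of what it can look like (the driver name, SSIDs, and key are examples; recent releases use "inet autoconf" where older ones used "dhcp"):

```
# /etc/hostname.iwm0
join homenet wpakey correct-horse-battery
join cafe-guest
inet autoconf
```

    The kernel then joins whichever of the listed networks is in range.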

    The author is praising jails. I think it's nice that you can trace the lineage all the way back to 6.x, it sings a song of stability. You can also put each jail on a separate ZFS dataset to get snapshot/restore, cloning, etc. But I think it's still a poor middle ground between OpenBSD and OCI. OpenBSD keeps making steps (privsep, pledge, unveil) to provide isolation, while remaining conceptually simple for the developer and imposing no extra maintenance burden on the operator. Containers by design are declarative, separate the system image from state, etc - it's a wholly different concept for someone used to e.g. managing stateful jails or VMs, but it reinforces what already were good design principles.

  • by steeleduncan on 3/31/25, 10:50 PM

    > Complicated stuff = high probably of failure

    This is a myth. The 787 has about 60 miles of wiring in it. It is vastly more complicated than an airliner from the 1940s, and it is also much, much safer. Poorly engineered technology fails, not necessarily complex technology.

    > secondary problem is the stacking of abstraction layers docker / kubersomething

    Then don't use Kubernetes or Docker? They aren't mandatory

  • by lunarlull on 3/31/25, 5:10 PM

    Alpine, Void, Devuan, or Artix would all have allowed the author to use Linux while addressing his points of concern. I don't think the BSDs have real advantages anymore, since so much core performance work lands in Linux first. When most of the software is available on all these platforms, it mostly comes down to user preference.

  • by giantrobot on 4/1/25, 12:21 AM

    I have tried several times to run FreeBSD on my home servers, and every single time it fails to boot, let alone run. The same machines happily run Linux (with ZFS) all day long with no issues.

    I know the project is smaller in scope and has less funding than Linux (distros and subprojects included) but it's kind of ridiculous. It's incredibly frustrating to set aside the time to set up a machine only to have the kernel panic half way through booting the install media.

    From there it's an annoying and exhausting yak-shaving exercise just trying to get the machine to start. Eventually I just give up and install the latest Ubuntu LTS, which boots and installs with no problems. I import my ZFS pool and everything just hums along.

    I cut my Unix teeth on FreeBSD 3 and 4. I want to use modern FreeBSD but it never seems to run on hardware I actually own. That's why I don't run FreeBSD on my home servers.

  • by znpy on 3/31/25, 4:57 PM

    > ZFS is more efficient on FreeBSD (Insert Source)

    FreeBSD and Linux share the same ZFS codebase, openzfs.

    FreeBSD had its own ZFS implementation, but they had to drop it because they couldn't keep up with OpenZFS.

  • by tracker1 on 3/31/25, 7:51 PM

    For me, it's about friction vs total understanding. I accept that I don't know and won't know/understand everything.

    I can install a relatively minimal Linux server (usually Ubuntu Server), disable snaps, install Docker community, copy my app directories (with docker-compose.yaml files in each) and `docker compose up -d` in each directory and be (back) up in moments. When I was trying a couple different hosts for mail delivery, the DNS changes took longer than server setup and copy/migration. It was pretty great.

    It's also led me to a point where I'm pretty happy or unhappy with a given application based on how hard or easy a compose file for the app and its dependencies is. Even if, like my mail server, the whole host is effectively for a single stack.

    No, I'm not running more complex setups like Kubernetes or even Swarm... I'm just running apps mostly at home and on my hosted server. It's been pretty great for personal use.

    For work, yeah, apps will be deployed via k8s. The main projects I'm on are slated for migration from deployed windows apps, mostly under IIS or Windows Services, to Linux/Docker.

  • by npodbielski on 3/31/25, 5:22 PM

    Though I agree with the author's point that it is wasteful to run ten SQL servers for ten applications, I am not a sysadmin and I do not want to spend a few hours every week upgrading my software. With Docker you do `docker compose pull; docker compose up -d` and you're done. You can even run that via cron in every directory with a compose file.

    In fact I think even that is still too complicated. We need one-click deploys and automatic updaters for Linux or FreeBSD or similar, so that regular people are able to self-host and own their data.

    Having the local pizzeria host its menu on Facebook is not a good thing. Having an online-only calendar app as the only way to schedule a haircut locally is not a good thing. Having all your files stored on OneDrive or Google Drive is not a good thing.

    If the author thinks FreeBSD is better - cool. Then work on a solution for ordinary people to host a file storage server using FreeBSD in a simple way.

    Create a simple wizard to install Nextcloud or ownCloud or a mail server on FreeBSD.

    This post is true, but it is just a rant that does not solve any real problems. One of them is that people do not want to manage servers; whether that is for better or worse is beside the point.

  • by waynesonfire on 4/2/25, 8:43 AM

    Switched to FreeBSD at home as well and it was such a wonderful decision.

    As a long-time Linux user, what was unexpected and eye-opening was seeing how FreeBSD does things. Some things are better and some things are worse, but the better things are WAAYY better. And for many of the things that felt better, it's really the culmination of the entire ecosystem. The whole package is just better to me. The cohesive system is really the killer app. Oh, and pf is great too.

    I haven't given up on Linux; it's just running as a VM in bhyve now.

    I also run FreeBSD on a ThinkPad T490 laptop. Works great.

  • by mycall on 3/31/25, 9:44 PM

    > Overall system reliability is therefore the product of the individual reliability of each component.

    Is that true? All you need is one bad MOSFET, with all the other components fine, and you get zero reliability. Doesn't an M x N matrix with only one extreme value average out over many samples?
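
    For what it's worth, the quoted sentence is the standard series-system model, where reliabilities multiply, so one weak component dominates the product rather than averaging out. A quick numeric check (the numbers are invented):

```shell
# series-system reliability: the product of per-component reliabilities
awk 'BEGIN {
  all_good = 0.999 ^ 50          # 50 components, each 99.9% reliable
  one_bad  = 0.999 ^ 49 * 0.5    # the same system with one 50% part
  printf "%.3f %.3f\n", all_good, one_bad
}'
```

    This prints `0.951 0.476`: the single bad part roughly halves the whole system's reliability, no matter how good the other 49 are.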

  • by johnea on 3/31/25, 10:06 PM

    > TLDR : the main problem is SYSTEMD

    I couldn't agree more.

    It's a testament that this s/w is _still_ NOT LIKED by so many people.

    I've been a Linux-on-the-desktop, FreeBSD-on-the-server user/admin for over 20 years.

    It's a great combination...

  • by hyperbrainer on 3/31/25, 7:32 PM

    For some reason the silverbullet link on the website is broken whether I copy it or just click it, but typing the exact same thing works.

  • by exiguus on 3/31/25, 10:24 PM

    Docker is an excellent tool, especially when used with SELinux enabled. It offers process isolation, resource restrictions, and reproducibility. While similar isolation can be achieved with chroot or jails, those methods lack reproducibility. Additionally, managing updates in chroot and jails can be quite challenging compared to Docker or Portainer. Jails and chroot are a big no-no for CI/CD, which in my opinion is also the reason no one uses them.

  • by briandear on 3/31/25, 4:08 PM

    Curious what “home servers” are really for. I’ve gone decades without needing a home server — what am I missing out on?

  • by flas9sd on 4/1/25, 12:28 AM

    Come for the reminder to try FreeBSD again some time, read the light systemd bashing (can't be harmed at this point), stay for managing jail backups through... NocoDB :) Stunning travel photography, btw!

  • by vermaden on 3/31/25, 8:40 PM

    ... and if someone looks for more reasons 'why' FreeBSD then here they are:

    - https://vermaden.wordpress.com/2020/09/07/quare-freebsd/

  • by DrNosferatu on 4/1/25, 7:28 PM

    Does it support GPU computing?

  • by dangus on 3/31/25, 4:19 PM

    Just another “I don’t like systemd and refuse to understand it” rant.

    I can’t think of any change that has improved my Linux sysadmin experience more than the move to systemd.

    Is it complicated? Perhaps it is. But this FUD about it being resource intensive or unreliable or difficult to use is complete nonsense.

    And on top of that systemd isn’t even “Linux.” Plenty of popular production-ready distros like Alpine Linux don’t even use it.

    And of course I’m not saying FreeBSD is bad, but I’m not the one writing and publishing an article bashing a system I don’t understand.

  • by caycep on 3/31/25, 8:05 PM

    granted, I'm sort of doing it via TrueNAS, I suppose.

  • by okanat on 4/1/25, 12:26 AM

    TL;DR: "the major component that is primarily made to make managing servers and services easier and to provide good journalling on Linux, systemd, makes me uncomfortable." Which is quite the litmus test for a stubborn sysadmin who hasn't deployed anything remotely complex.

    I'm in the opposite camp. If FreeBSD provided systemd-like service and device management software, I would switch to it.

  • by KronisLV on 4/1/25, 2:03 PM

    > The biggest issue with FreeBSD : The very bad habbit of developper to deploy OpenSource software only with Docker

    Holy smokes, that's quite biased!

    Good on the author for having strong beliefs, but in my eyes containers avoid entire categories of problems, like this one: https://blog.kronis.dev/user/pages/blog/oracle-jdk-and-openj... (request processing times for a random system at work, ages ago, under load tests: it ran passably on OpenJDK, whereas running on the Oracle JDK resulted in an order of magnitude worse performance; this was before containers were introduced in the project - guess which vendor's version was installed in prod without telling us).

    It was an old post, but sometimes there are configurations that break in mysterious ways when the dependencies and runtime environments aren't matched exactly to what was developed and tested against; see the whole reproducible-software movement. You can have good install scripts and other tools to help even without containers, but good luck managing the support matrix of a bunch of separate *nix distros and OSes, as well as maintaining testing environments for all of them. And that's not even much of an enterprise-only concern.

    We shouldn't need containers for everything, but there aren't that many other good options for packaging software.

    > There is a new fashion on the OpenSource world. Developpers think Docker is the new de-facto standard and only propose to install the tools with Docker.

    It shouldn't be the only way, but I'll take Docker over random shell scripts and the ability to easily launch software and then later tear it down without messing up my host OS or having to mess around with VMs. Tbh, I do kind of wish that Vagrant had caught on more.

    > Most of the time they do no provide any DOCUMENTATION to install their software in a baremetal way.

    > At best, they offer an .rpm or .deb package for installation on a non-docker OS. But most of the time it's a docker file.

    The Dockerfile is more or less living documentation, which is better than a source code repo with nothing, but obviously worse than someone taking time out of their day to write some guides or docs. Then again, if it's FOSS devs, I'll take anything over nothing, given how much time they (don't) have.

    > Even worse, modern applications often deploy the code and database needed to run the code directly in the docker compose file.

    Deploying DBs or even multiple separate apps within a single container is pretty much an anti-pattern (anyone who has needed to decipher what happens inside of a GitLab Omnibus image, or update the Sonatype Nexus image from the embedded OrientDB to PostgreSQL probably understands the pain) and should be avoided, maybe save for rare cases like a web server with PHP-FPM (maybe with supervisord) if you really need to, or for the lazy/testing setups that will ultimately be a mess to maintain.

    Edit: however if the complaint is literally just about having a Docker Compose file which contains a self-contained DB container, then it's up to you to decide whether you want or don't want to use it, or reference another DB running on the host or elsewhere (e.g. host.docker.internal or equivalent). It doesn't take much work to comment out a block.

    > At what point did these people think it was relevant to run 10 different databases when I'm hosting 10 applications ?

    You don't have to. You could have a single PostgreSQL or MariaDB/MySQL instance that just has multiple users and DBs/schemas. I honestly prefer the more distributed setup, because I can now move software across servers trivially and update versions as I please, and when something misbehaves, the impact is limited (you can set CPU/RAM limits more easily than with raw cgroups).
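
    The shared-instance setup amounts to a few statements on one PostgreSQL server (the role and database names and passwords here are placeholders):

```sql
-- one PostgreSQL server, one role and one database per application
CREATE ROLE app_one LOGIN PASSWORD 'changeme1';
CREATE DATABASE app_one OWNER app_one;

CREATE ROLE app_two LOGIN PASSWORD 'changeme2';
CREATE DATABASE app_two OWNER app_two;
```

    Each application then connects with its own role to its own database, while you run, monitor, and back up a single server.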

    Good for the author for enjoying FreeBSD, it's a pretty cool OS and feels more like it's "designed" instead of the more organic feel of how Linux distros work.

    But I reject the emotionally charged language and stance.

  • by horsawlarway on 3/31/25, 7:44 PM

    Look, as someone running a mix of bsd and linux machines...

    The only salient point in this entire article is that BSD typically is less convoluted as a system (and as a consequence... usually less capable and less supported).

    I find absolutely all of the other points to be "easy cop outs". They're there to provide him a mental justification for doing the thing he wants to do anyways, without actually justifying his logic or challenging any assumptions.

    ---

    Case in point - I used to point all (most of) my hosted services at a single database. It genuinely sucked. It's a larger backup, it's a larger restore, if it goes down everything is down, and you better hope all the software you're hosting supports your preferred DB (hah - they won't, half will use postgres, half will use mysql, and half of the mysql half will actually be using mariadb, and I'm ignoring the annoying group that won't properly support a networked db at all and don't understand why I'm frustrated they only support sqlite).

    You know the only thing it was actually doing for me? Marginally simplifying deployment, usually at first time setup.

    You know what else the author of this post is trashing? Some pretty good tools for simplifying deployments.

    Turns out... if spinning up a database is 3-10 lines in a config file, and automatic backups are super simple to configure with your deployment tool (see - all those k8s things he's bashing)... You don't even feel this pain at all.
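
    As a reference point for that "3-10 lines" claim, this is roughly what a small PostgreSQL instance looks like under the CloudNativePG operator (assuming the operator is already installed in the cluster; the name and storage size are examples):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 2
  storage:
    size: 5Gi
```

    `kubectl apply` that, and the operator handles provisioning and failover, and with a bit more configuration, backups.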

    ---

    Basically - This is a lazy argument.

    Perfectly fine personal preference (I also sometimes enjoy the simplicity of my freeBSD machines, and I run opnsense for a reason).

    But a trash argument against the things he's railing against.

    Switching to k3s and running Kubernetes was a pretty giant time sink to get online (think maybe 25 hours) - but since it's come online... I've never had an easier time managing my home services.

    Deployment is SO fucking simple, no one machine going down takes any service down, I get automatic backups, easy storage monitoring (longhorn and NAS), I can configure easy policies to reboot services, or manage their lifecycles, I can provision another machine for the cluster in under 10 minutes, and then it just works (including GPU accelerated workloads).

    These days... It's been so long since I've ssh'd into some of my machines that I occasionally have to think for a minute before I remember the hostname.

    I don't think about most of them AT ALL - they just fucking work (tm).

    I remember the before times - personally, I don't want to go back. It's never been easier to run your own cloud - I currently have 112 online pods across 37 services. I don't restart jack shit on my own - the system runs itself.

    Everything from video streaming to LLM inference to simple wikis and bookstack.

  • by doublerabbit on 3/31/25, 5:26 PM

    Linus is too in bed with Microsoft.

    Red Hat, the main powerhouse behind Linux, is now owned by IBM. And Ubuntu is just corporate Debian, pushing its own proprietary-backend (Snap) packaging, which is cobbled together and just generally sucks.

    systemd is bloated, wanting to do everything at once. I have never had a Linux systemd distribution that just shuts down without prompting me "waiting x/2 minutes - x/y retries".
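
    For what it's worth, that particular wait is tunable; the stop timeout defaults to 90 seconds and can be lowered system-wide:

```ini
# /etc/systemd/system.conf
[Manager]
DefaultTimeoutStopSec=15s
```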

    FreeBSD is my daily driver and will always be my primary. Once you get over the "eww it's bsd" linux snobbery you start to realise how solid it actually is.

    Wifi works, graphics work. Wine and Proton work. Ports is fantastic and kernel compiling is easy. It even works on my MSI 2024 laptop. [1]

    Linux is lost in a communistic maze of leap frog.

    [1] https://bsd-hardware.info/?probe=b7f27b9528