FreeBSD Server Hardware
A big part of my rationale for using FreeBSD on my servers is the server hardware itself. My servers are headless 2U rack servers that sit in my garage, running 24/7. I'll highlight some of the "server" features below.
Major Components
Main Server
- Supermicro 826BEC1C-R920LPB 2U Chassis
- Supermicro X12STH-F motherboard
- Intel Xeon E-2388G
- 4 x 8 GB of Samsung DDR4 UDIMM ECC RAM
- Nvidia Mellanox ConnectX-4 Lx
- Supermicro AOC-S3008L-L8e HBA connected to a Supermicro BPN-SAS3-826EL1 backplane
- SATA HDDs: 4 x 20 TB, 2 x 22 TB WD Red Plus
- SATA SSDs: 1 x 1 TB, 2 x 2 TB Samsung 870 EVO
- NVMe SSD: 1 x 250 GB Samsung 970 EVO Plus
Backup Server
- Supermicro 825TQC-R740LPB 2U Chassis
- Supermicro X12STH-F motherboard
- Intel Xeon E-2324G
- 4 x 8 GB of Samsung DDR4 UDIMM ECC RAM
- Intel X710-DA2 Fiber Network Adapter
- Data Storage Devices Cabled Directly to Motherboard
- SATA HDDs: 2 x 10 TB, 4 x 14 TB WD Red Plus
- SATA SSDs: 2 x 2 TB Samsung 870 EVO
- NVMe SSD: 1 x 250 GB Samsung 970 EVO Plus
FreeBSD Hardware Compatibility
FreeBSD is a much smaller project than Linux. Because of that, hardware driver support can lag behind, or never materialize at all. But what it does support, it supports very well. All of the hardware above is fully supported.
Obviously there are thousands of parts out there and I've only listed a few. If you're going to try FreeBSD, I'd first check out the FreeBSD Hardware Notes for the release you plan to use. For example, here are the FreeBSD 14.0-RELEASE Hardware Notes.
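If you already have the hardware in hand, another quick check is to boot a FreeBSD installer image and see what the kernel attaches drivers to. This is just a sketch of that kind of check; the driver and device names in the comments are examples from my own boxes:

```sh
# List PCI devices with vendor/device info; entries that start with a
# driver name (mlx5_core0, ixl0, mpr0, ...) have a kernel driver attached,
# while "none" entries were not claimed by any driver.
pciconf -lv

# Disks and HBAs also show up in the boot messages.
dmesg | grep -E 'mpr|ada|da[0-9]|nvd|nvme'
```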
Look For Server Hardware Features
FreeBSD excels as a server. If you have server hardware with server features, you'll get even more out of your FreeBSD experience. In particular, if you're building a server, look for some of these:
- IPMI / Redfish - offers all sorts of ways to interact with a machine when you're not physically near it. It lets you access things like the BIOS, boot loader, and console as if you had physical access. Massive utility (see the console sketch after this list).
- SR-IOV Capable Network Cards - these let you give things like virtual machines and containers/jails their own PCI device to configure, which results in much nicer logical separation (and possibly higher performance); a sample configuration is sketched below.
- SES-2 Enclosure Management - lets you issue a command that returns the physical location of your storage devices, among other things; see the sesutil example below.
- Intel Quick Sync Video (QSV) - I see a lot of people buying 300 Watt processors and plugging 300+ Watt GPUs into their servers just to do things like hardware video transcoding in Plex. For far less money, you can get something like a Xeon E-2388G, which usually doesn't go much over 100 Watts and can easily transcode multiple video streams with its built-in QSV GPU. The last sketch below shows the basic setup.
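To make the IPMI point concrete, this is roughly how I get to a server's console from my desk. It's a sketch using ipmitool (sysutils/ipmitool from ports) with a made-up BMC address and user; Supermicro's web interface and Redfish API expose the same functionality:

```sh
# Open a Serial-over-LAN session to the server's BMC
# (address and user are placeholders; you'll be prompted for the password).
ipmitool -I lanplus -H 192.0.2.10 -U admin sol activate

# Check power state and sensor readings without touching the machine.
ipmitool -I lanplus -H 192.0.2.10 -U admin chassis status
ipmitool -I lanplus -H 192.0.2.10 -U admin sdr list
```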
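For SR-IOV, FreeBSD's iovctl(8) splits a physical function into virtual functions that can be handed to jails or bhyve guests. A minimal sketch, assuming the Intel X710 attaches as ixl0 and that four VFs reserved for passthrough are what you want:

```sh
# Describe how to split the X710 physical function (ixl0) into 4 VFs.
cat > /etc/iovctl.conf <<'EOF'
PF {
        device : "ixl0";
        num_vfs : 4;
}

DEFAULT {
        passthrough : true;
}
EOF

# Create the VFs now, and have rc(8) recreate them at every boot.
iovctl -C -f /etc/iovctl.conf
sysrc iovctl_files="/etc/iovctl.conf"
```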
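The enclosure management piece is handled by sesutil(8) in the base system, which talks to the SES device on the backplane. The disk name below is just an example:

```sh
# Show which physical slot in the enclosure each disk occupies.
sesutil map

# Blink the locate LED on the slot holding da3 so it's easy to find in the rack.
sesutil locate da3 on
sesutil locate da3 off
```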
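And for Quick Sync, the rough FreeBSD setup is to load the Intel DRM driver, install a VA-API driver, and point whatever does the transcoding (Plex, Jellyfin, ffmpeg) at /dev/dri. A sketch, assuming a recent Intel iGPU like the one in the E-2388G and current drm-kmod packages:

```sh
# Install the graphics kernel modules, Intel's media driver, and vainfo.
pkg install drm-kmod libva-intel-media-driver libva-utils

# Load the i915 driver now and at every boot.
sysrc kld_list+=" i915kms"
kldload i915kms

# Confirm VA-API sees the GPU and its decode/encode entrypoints.
vainfo
```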