Hardware Options Analysis: Every Path I Evaluated

A deep dive into the six hardware options I considered for my homelab upgrade — from the Intel N100 to the Mac Mini M4 to a full custom build — with power consumption, AI capability, and scalability compared.


In the previous post, I laid out the philosophy behind my hardware choice: expandability over compactness, separation of concerns, power efficiency, and buying used. Now it’s time to get into the details — every option I seriously considered, why each one fell short, and which one won.

This isn’t a spec sheet comparison. It’s a real evaluation filtered through the specific needs of a homelab that runs 20+ Docker containers, needs NAS storage with redundancy, and wants a credible path to local AI inference.

The Evaluation Criteria

Every option was measured against four axes:

  1. Power consumption — target ~30W idle; the lower the better for 24/7 operation (see the quick cost math after this list)
  2. Local AI capability — can it run Ollama with meaningful models (7B+), and how?
  3. RAM scalability — can I start at 32 GB and grow to 64-128 GB without replacing the machine?
  4. Noise and form factor — this lives in a home, not a data center
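
To make criterion 1 concrete: idle draw is a fixed yearly cost. A quick back-of-the-envelope sketch, where the €0.25/kWh tariff is my assumption rather than a universal figure:

```python
# Annual electricity cost of a machine idling 24/7.
# The 0.25 EUR/kWh tariff is an assumption; adjust for your own contract.
def annual_idle_cost(idle_watts: float, eur_per_kwh: float = 0.25) -> float:
    kwh_per_year = idle_watts * 24 * 365 / 1000  # W -> kWh/year
    return kwh_per_year * eur_per_kwh

for watts in (8, 15, 30, 60):
    print(f"{watts:>2} W idle = {annual_idle_cost(watts):6.2f} EUR/year")
# 30 W works out to ~263 kWh/year, roughly 66 EUR at this tariff.
# Every extra 10 W of idle draw adds about 22 EUR/year, indefinitely.
```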

Let’s go through them.


Option 1: Intel N100 / N150 / N305 Mini PC

The ultra-efficient class. These processors are everywhere in the mini PC market — Beelink, MinisForum, Trigkey, GMKtec — all offering small fanless or near-silent boxes at €100-200.

What’s good:

  • Phenomenal idle power: 6-10W for the entire system
  • Completely silent (many are fanless)
  • Perfect for a dedicated Home Assistant or Pi-hole box

What kills it:

  • 4 E-cores, no P-cores — the N100 is an efficiency-first design. Fine for lightweight tasks, but running 20+ Docker containers with Home Assistant, Zigbee2MQTT, Pi-hole, Authentik, monitoring, and more will push it to its limits
  • RAM: 16 GB max, usually soldered — most N100 boxes come with 8 or 16 GB soldered. The N305 sometimes has a DIMM slot, but maxes at 16 GB. That’s not enough for my current workload, let alone growth
  • No PCIe slot — zero GPU expansion
  • AI capability: essentially none — CPU-only inference on 4 efficiency cores is glacially slow. Even a 3B model like Phi-3 Mini takes 10+ seconds per token. Open WebUI is unusable
  • Limited storage — typically one M.2 slot, no SATA. Some have a second M.2, but no 3.5” drive bays for NAS use
  • Not cheap for what you get — a decent configuration like the Beelink EQ14 with 16 GB and 500 GB SSD runs around €400. At that price point, you’re paying serious money for a machine that is, functionally, a lateral move from the NiPoGi — newer silicon, but the same sealed, non-expandable architecture

Idle power: ~8W · AI verdict: Not viable · RAM ceiling: 16 GB (fixed) · NAS capable: No

The Intel N100 class is fine as a dedicated single-role appliance — a Home Assistant box, a Pi-hole server, a VPN endpoint. But as a do-everything homelab server, it’s just the NiPoGi with a newer sticker. Same soldered RAM, same single SSD slot, same dead end. And at ~€400, it’s not even a cheap dead end — it’s a straight replacement that solves none of the problems that pushed me to upgrade in the first place.


Option 2: AMD Ryzen 7 7730U Mini PC

The step up. The Ryzen 7 7730U (Zen 3, 8 cores/16 threads) appears in mid-range mini PCs from Beelink, MinisForum, and others, typically at €300-500 configured.

What’s good:

  • 8 real cores with SMT — genuinely capable multi-threaded performance
  • Vega 8 iGPU with hardware video encoding
  • 15W TDP — very efficient for the performance
  • Some models have DIMM slots (up to 64 GB DDR4 in dual-channel)

What kills it:

  • DDR4, not DDR5 — the 7730U is a Cezanne refresh on Zen 3, which means last-gen memory and no ECC support
  • Still a sealed box — no PCIe slot, limited storage expansion (typically 1-2 M.2 slots, no SATA)
  • AI capability: marginal — the 8 CPU cores handle 7B models at ~2-3 tokens/second (see the quick math after this list). Functional for testing, but painful for daily use. The iGPU doesn’t meaningfully help with LLM inference
  • No upgrade path — you get what you buy. When 64 GB isn’t enough or you want a GPU, you’re back to buying a new machine
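
For a feel of what 2-3 tokens/second means in practice, here is a rough conversion from generation speed to waiting time. The 400-token reply length and the ~50 tokens/second GPU figure are illustrative assumptions, not benchmarks:

```python
# How long a chat reply takes at a given generation speed.
# Token counts and speeds here are illustrative assumptions.
def reply_seconds(reply_tokens: int, tokens_per_second: float) -> float:
    return reply_tokens / tokens_per_second

for label, tps in [("Ryzen 7 7730U, CPU, 7B model", 2.5),
                   ("mid-range GPU, 7B model", 50.0)]:
    print(f"{label}: 400-token reply in ~{reply_seconds(400, tps):.0f} s")
# ~160 s on CPU vs ~8 s on a GPU: the difference between
# "fire off a test and come back later" and "use it like a chat app".
```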

Idle power: ~12-15W · AI verdict: Functional but slow (7B only, CPU inference) · RAM ceiling: 64 GB (DDR4, if DIMM slots available) · NAS capable: No

This is the strongest mini PC option — it can actually handle the current container workload. But it doesn’t solve the three core problems: no NAS storage, no GPU slot, and a platform that can’t grow beyond specs you buy today. It pushes the dead end further out but doesn’t eliminate it.


Option 3: AMD Ryzen AI 9 HX 370/375 Mini PC

The bleeding edge. MinisForum, GMKtec, and Geekom have started offering mini PCs with AMD’s latest mobile processors featuring NPUs (Neural Processing Units) for AI workloads. The Ryzen AI 9 HX 375 has 12 Zen 5 cores, an RDNA 3.5 iGPU, and a 50 TOPS NPU.

What’s good:

  • Flagship mobile performance — Zen 5 cores are a massive generational leap
  • RDNA 3.5 iGPU with hardware ray tracing
  • 50 TOPS NPU for hardware-accelerated AI
  • DDR5 support, up to 64-96 GB
  • Some models offer expandability (couple of SSD slots)

What kills it:

  • Extremely expensive — the MinisForum UM790 Pro or similar Ryzen AI 9 boxes start at €1,100-1,300 for the barebone. Add RAM and storage and you’re at €1,500-1,800. That’s custom build territory
  • The NPU is niche — as of today, the XDNA NPU is supported in very few Linux workloads. Ollama doesn’t use it. Open WebUI doesn’t use it. The software ecosystem for NPU inference on Linux is early-stage at best
  • Still sealed — impressive specs, but the same expandability limits: no PCIe slot, no 3.5” drive bays, no real NAS capability
  • AI capability: decent on paper, limited in practice — the iGPU with ROCm could theoretically accelerate LLM inference, but AMD ROCm support in Ollama for integrated GPUs is spotty and version-dependent. In practice, you often fall back to CPU inference

Idle power: ~15-20W · AI verdict: Promising but immature (NPU/iGPU not well supported in container workloads) · RAM ceiling: 64-96 GB (DDR5, if DIMM slots) · NAS capable: No

If price were no object and you didn’t need NAS storage or a discrete GPU, this would be tempting. But at €1,300+ for the barebone — more than an entire custom AM5 build with a proper GPU — the value proposition collapses. You’re paying flagship prices for a platform that still can’t hold 3.5” drives or a full-size GPU.


Option 4: Apple Mac Mini M4

The wild card. Apple Silicon is genuinely impressive for AI. The M4 chip’s unified memory architecture means the GPU and CPU share the same memory pool — a 16 GB M4 can run a 7B model entirely in “GPU memory” without a discrete card. The M4 Pro with 48 GB unified memory could handle 30B+ models via Metal.
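
The rule of thumb behind those model-size claims: a quantized model needs roughly parameters × bits-per-weight ÷ 8 in memory, plus headroom for the KV cache and runtime. A sketch, where the 20% overhead factor is my own loose assumption:

```python
# Rough memory footprint of a quantized LLM: weights plus ~20% overhead
# for KV cache and runtime (the overhead factor is a loose assumption).
def model_gb(params_billion: float, bits_per_weight: float = 4.0,
             overhead: float = 1.2) -> float:
    weight_gb = params_billion * bits_per_weight / 8  # 1B params @ 4-bit = 0.5 GB
    return weight_gb * overhead

for params in (7, 13, 30, 70):
    print(f"{params:>2}B @ 4-bit = ~{model_gb(params):4.1f} GB")
# 7B lands around 4 GB, which fits a 16 GB M4 with room to spare;
# 30B lands around 18 GB, which is why 30B+ wants the 48 GB M4 Pro
# once macOS takes its own share of the unified memory.
```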

What’s good:

  • Unified memory = no VRAM bottleneck for the model sizes the memory can hold
  • Metal acceleration for Ollama — genuinely fast inference, native on macOS
  • Incredibly efficient: ~5-7W idle for the base M4
  • Excellent build quality, tiny form factor, dead silent
  • macOS ecosystem — Homebrew, native apps, great developer experience

What kills it — the Docker blocker:

This is the critical issue. Docker Desktop for Mac runs containers inside a Linux virtual machine. Apple’s Virtualization framework provides CPU and memory to that VM, but does not expose the Metal GPU or Neural Engine to it. This means:

  • Ollama running inside a Docker container on macOS = CPU-only inference. No Metal. No GPU acceleration. You lose the single biggest advantage of Apple Silicon.
  • To get GPU-accelerated inference, Ollama must run natively on macOS, outside Docker. But the entire homelab architecture is container-based — Home Assistant, Pi-hole, Nginx Proxy Manager, Authentik, monitoring — all in Docker. Running one critical service outside Docker breaks the management model (no Dockge, no unified compose files, separate update process).

For reference, GPU passthrough to Docker containers works on Linux with NVIDIA GPUs via the NVIDIA Container Toolkit, and on Windows via WSL2. Docker on macOS has no comparable mechanism: neither Metal nor the Neural Engine can be exposed to a container.
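
This is easy to verify empirically: recent Ollama builds expose a /api/ps endpoint that reports how much of a loaded model sits in GPU memory. A minimal check, assuming Ollama on its default port with a model already loaded:

```python
# Check whether a loaded Ollama model is actually on the GPU.
# Assumes Ollama on its default port (11434) and a model already
# loaded, e.g. after `ollama run llama3`.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    data = json.load(resp)

for m in data.get("models", []):
    vram_fraction = m["size_vram"] / m["size"] if m["size"] else 0
    where = "GPU" if vram_fraction > 0.9 else "CPU (or split)"
    print(f"{m['name']}: {vram_fraction:.0%} in VRAM -> {where}")
# Native on macOS: ~100% in VRAM via Metal. Inside a Docker
# container on the same Mac: 0%, i.e. pure CPU inference.
```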

Other limitations:

  • No 3.5” drive bays, no SATA ports — zero NAS capability
  • RAM is soldered — what you buy is what you get (16/24/32/48 GB depending on model)
  • No ECC memory
  • macOS licensing technically restricts virtualization options
  • Starting at €700 (M4 base, 16 GB) to €2,100+ (M4 Pro, 48 GB)

Idle power: ~5-7W · AI verdict: Excellent native, useless in Docker (no GPU passthrough on macOS) · RAM ceiling: Fixed (16-48 GB depending on model, soldered) · NAS capable: No

The Mac Mini M4 is a fantastic personal computer and a great local AI workstation if you run Ollama natively. But as a Docker-based homelab server — which is what I need — the inability to pass GPU to containers defeats its primary advantage. At €700+ for the base model with no expandability, no NAS, and a Docker GPU limitation that has no workaround, it’s the wrong tool for this job.


Option 5: Minisforum MS-A2

The closest miss. The Minisforum MS-A2 is the most interesting compact machine I found. It uses a standard AMD AM5 socket (a real desktop socket, not soldered mobile silicon), has DDR5 SO-DIMM slots, dual M.2 NVMe, two 2.5” SATA bays, and a PCIe expansion slot — all in a case smaller than a shoebox.

What’s good:

  • AM5 socket — you choose your own CPU (Ryzen 5/7/9), and can upgrade later
  • DDR5 SO-DIMM — up to 96 GB, swappable
  • PCIe expansion — there’s actually a slot for a GPU
  • Dual NVMe + 2x 2.5” SATA — reasonable storage
  • Compact: smaller than most ITX builds
  • 2.5GbE networking

What kills it:

Storage limitation: Only two 2.5” SATA bays. For a NAS with redundancy, this is insufficient. A ZFS mirror with two 2.5” drives gives you maybe 2-4 TB of redundant storage using laptop drives (expensive per TB). You can’t fit 3.5” NAS drives (€20-25/TB) which are 3-5x more cost-effective. For any serious NAS use, you’d need an external enclosure via USB — which negates the whole point.
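
To put numbers on that, a quick sketch of the redundancy math. The ~€22/TB figure for 3.5” drives comes from above; the ~€66/TB for 2.5” drives is my assumption based on the 3x cost-effectiveness gap:

```python
# Usable capacity and cost of redundant storage: 2x 2.5" ZFS mirror
# (MS-A2) vs 4x 3.5" RAID-Z1 (custom build). Prices per TB are
# assumptions: ~22 EUR/TB for 3.5" per the text, ~3x that for 2.5".
def mirror(drive_tb: float) -> float:
    return drive_tb                 # a mirror stores one drive's worth

def raidz1(drive_tb: float, n: int) -> float:
    return drive_tb * (n - 1)       # one drive's worth lost to parity

ms_a2_usable = mirror(4)            # 2x 4 TB 2.5" drives -> 4 TB usable
custom_usable = raidz1(8, 4)        # 4x 8 TB 3.5" drives -> 24 TB usable
ms_a2_cost = 2 * 4 * 66             # ~66 EUR/TB assumed for 2.5"
custom_cost = 4 * 8 * 22            # ~22 EUR/TB for 3.5"
print(f"MS-A2:  {ms_a2_usable} TB usable, ~{ms_a2_cost} EUR "
      f"({ms_a2_cost / ms_a2_usable:.0f} EUR per usable TB)")
print(f"Custom: {custom_usable} TB usable, ~{custom_cost} EUR "
      f"({custom_cost / custom_usable:.0f} EUR per usable TB)")
# ~132 EUR vs ~29 EUR per usable TB: the 3.5" bays pay for themselves.
```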

GPU limitation: The PCIe slot is low-profile only. Low-profile GPUs are a niche market:

  • Fewer models available
  • Worse cooling (smaller heatsinks, often single-fan)
  • Significantly more expensive — a low-profile RTX 4060 costs €100-150 more than the standard version, with fewer options to choose from
  • Used market is tiny — the “buy used” strategy barely works for low-profile cards

Starting price: The barebone MS-A2 is ~€400-500. Add a Ryzen 7 CPU (€200-300), 32 GB DDR5 (€80-100), and an NVMe drive (€60-80), and you’re at €750-980 before any GPU. A custom mATX or ITX build hits similar prices with zero compromises on GPU, storage, or expandability.

Idle power: ~20-30W (depending on CPU choice) · AI verdict: Possible with low-profile GPU, but limited and expensive · RAM ceiling: 96 GB (DDR5 SO-DIMM) · NAS capable: Barely (2x 2.5” only)

The MS-A2 is the only compact option that came close to meeting my requirements. If you don’t need NAS storage and can live with a low-profile GPU, it’s excellent. But those are my two most important needs after expandability — and it falls short on both.


Option 6: Full Custom Desktop Build

The one that works. A compact mATX or ITX build with off-the-shelf components, assembled to your exact needs.

The fundamental advantage of a custom build isn’t any single component — it’s versatility at every level. You choose the case based on how many 3.5” drive bays you need and how much airflow you want. You choose the motherboard based on SATA ports, M.2 slots, and whether you need ECC support. You choose the CPU based on your workload and power budget. You choose how much RAM to start with, knowing you can double or quadruple it later without replacing anything.

Every single component is a decision you make based on your specific requirements — not a compromise imposed by a pre-built enclosure.

Why it wins on every axis:

  • Power consumption: A well-tuned desktop system can idle at ~25-35W. Compared to the current setup (NiPoGi ~10W + WD MyCloud ~15W = ~25W), a single machine at ~30W idle that replaces both and adds dramatically more capability is barely an increase. The performance per watt improvement is massive.

  • Local AI: A full-size PCIe x16 slot means any GPU works — no low-profile premium, best cooling, widest used market. Docker GPU passthrough works perfectly on Linux with the NVIDIA Container Toolkit. Ollama in Docker with full GPU acceleration = the exact workflow I want (see the smoke test after this list). Even without a GPU on day one, a modern multi-core CPU is 3-5x faster at inference than the Celeron J4125.

  • RAM scalability: Standard DIMM slots mean you start with what you need and grow to 128 GB over time. ECC support is available on the right platform. No soldering, no fixed ceiling, no platform replacement.

  • NAS storage: Multiple SATA ports and 3.5” drive bays in the case mean real NAS capability with proper redundancy — ZFS mirror, RAID-Z1, whatever fits the need. No laptop drives, no USB enclosures, no compromises.

  • Noise: The one genuine trade-off. A tower is bigger than a mini PC and needs a dedicated spot. But it doesn’t have to be loud — modern NAS drives are designed for quiet operation, efficient CPUs barely need cooling at idle, and a quality case with sound-dampening panels makes it effectively silent in normal use. Not fanless-silent, but living-room-silent.
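
Once the toolkit is installed, verifying passthrough takes a single container run. A minimal smoke test, wrapped in Python for scripting; the CUDA image tag is just an example, any recent tag works:

```python
# Smoke-test NVIDIA GPU passthrough into Docker on Linux.
# Requires the NVIDIA Container Toolkit; the image tag is an example.
import subprocess

result = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:12.4.1-base-ubuntu22.04", "nvidia-smi"],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("GPU visible inside the container:")
    print(result.stdout)
else:
    print("Passthrough not working:", result.stderr.strip())
```

If nvidia-smi prints its table from inside the container, an Ollama container started with the same --gpus all flag (or the equivalent deploy.resources reservation in a Compose file) will see the card too.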

And then there’s the used market. This is where the custom build truly shines economically. Standard mATX/ITX components are the most competitive and liquid segment of the PC market. CPUs, RAM, GPUs, cases, power supplies — everything is available used at significant discounts. The “buy used” philosophy from the previous post works best when every component is a commodity standard, not a proprietary form factor.


The Comparison Matrix

| Criteria | N100 | Ryzen 7 7730U | Ryzen AI 9 | Mac Mini M4 | MS-A2 | Custom Build |
| --- | --- | --- | --- | --- | --- | --- |
| Idle Power | ~8W | ~12-15W | ~15-20W | ~5-7W | ~20-30W | ~25-35W |
| AI (Day 1) | None | Slow CPU | iGPU (spotty) | Great native, none in Docker | Low-profile GPU possible | CPU (usable) |
| AI (Future) | None | None | Same | Same Docker limit | Limited by LP slot | Any NVIDIA GPU |
| Max RAM | 16 GB | 64 GB | 64-96 GB | 16-48 GB (fixed) | 96 GB | 128 GB |
| NAS Storage | No | No | No | No | 2x 2.5” only | 4+ x 3.5” |
| GPU Expansion | No | No | No | No (Docker limit) | Low-profile only | Full-size PCIe x16 |
| ECC Support | No | No | No | No | Possible (AM5) | Yes (AM5) |
| Used Market | Limited | Moderate | Minimal | No | Minimal | Excellent |
| Price Range | ~€400 | €300-500 | €1,100-1,800 | €700-2,100 | €750-1,000 | €500-900 |

The Verdict

The custom build wins because nothing else can do all three things I need:

  1. Real NAS storage with 3.5” drives and ZFS redundancy
  2. Full-size GPU slot for local AI with Docker passthrough
  3. Genuine scalability — RAM to 128 GB, CPU upgradeable on AM5, storage expandable

Every mini PC, no matter how powerful, hits a wall on at least one of these. The MS-A2 came closest, but its low-profile GPU restriction and lack of 3.5” drive bays are compromises I don’t want to make for a platform I’ll run for 3-5+ years.

What’s Next

The next post covers choosing the case — the component that constrains everything else in a homelab build. How many drives, what GPU, what PSU — it all starts with the box.

The build is taking shape. Time to pick the parts.