Show off your homelab!

Working on replacing my entire network at home.

New network consists of:

1. 1x CWWK N100 mini PC with OPNsense for routing/firewall
2. 3x CWWK N305 mini PCs with Proxmox installed as a 3-node cluster for virtualization
3. 4x MikroTik CRS310-8G+2S+IN managed switches with eight 2.5GbE ports and two 10Gb SFP+ ports
4. 1x MikroTik CRS305-1G-4S+IN managed switch with one Ethernet port for management and four 10Gb SFP+ ports
5. 1x ASUS RT-AX88U Pro WiFi in access point mode, with support for multiple VLANs when connected to a VLAN trunk port on the AP's 2.5GbE LAN1
6. QNAP TS-673A 6-bay NAS filled with 18TB Seagate IronWolf drives, two 1TB NVMe SSDs, and 64GB of RAM, plus a PCIe expansion card with two 10Gb SFP+ ports (media storage)
7. QNAP TS-253Be with two 14TB Seagate IronWolf drives and a PCIe expansion card with two 10Gb SFP+ ports (camera storage)
8. 3x Hubitat C-7 hubs and one C-4 hub

Loving this new setup so far. Hope to start setting up the Proxmox cluster this weekend. Most of my home automation outside of Hubitat runs off my old QNAP 4-bay NAS in Docker containers. I wanted to not only beef up my network but also remove single points of failure in all the home automation, which is why I'm moving to the Proxmox cluster.

Picture is just a temp setup for configuring all the switches and the VLAN setup with the router.


What have you decided on for storage with the N305 mini PCs? I have one of them with their 4x M.2 expansion board. Because of that board I am running 3x 4TB Crucial P3 Plus drives, a 1TB WD SN770, and a 2TB WD SN560E. Unraid is running on it, and the three 4TB drives are in a ZFS pool.

The P3 Plus drives are awesome in this setup because they seem to produce less heat and still perform well considering the PCIe lanes available. Both of the WD drives run hot.

My lab stuff really just consists of the aforementioned CWWK mini PC and a custom-built home server based on a Ryzen 5950X. I have thought a few times about building something out like you have shown.

I added a single 2TB SSD in each of the machines. When I was initially researching, I believe one of the NVMe ports was x4 while the other was x1. I'm told they made revisions over the years, and while I have revision C with the long straight fins, I'm also told the x4 SSD slot isn't x4 anymore (although I haven't been able to confirm that yet). I didn't get the expansion board because I read on one of the forums (I'd have to look it up) that the board caused issues, or at least only ran the SSD slots at x1. I plan to use part of the 6-bay NAS for data storage and keep the VMs and boot on the SSD in the mini PC. I won't have enough ports to LACP-bond three ports plus a management port, but I will have enough to bond two of them together plus the management port.

That is true. Basically all the board does is split the one PCIe 3.0 x4 slot into four PCIe 3.0 x1 ports. The other M.2 slot is keyed for a WiFi card, but I believe the N305 version comes with an adapter to convert that slot to another NVMe, SATA, or mSATA slot. At least mine came with that adapter. So with both of those it can handle five NVMe drives.

Even with the bandwidth reduced to one lane, I was seeing speeds from each of the drives around 800-900MB/s, which is still close to enough to saturate a 10Gbps link. With ZFS running over the three P3 Plus drives I was seeing speeds around 2.6GB/s. I would imagine whether this would work or not depends on your use case. It is running Unraid and several Docker containers like a champ, though.
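For reference, the CLI equivalent of a three-drive ZFS pool is roughly this (raidz1 shown just as an example; mine was actually built through the Unraid GUI, and the device names here are placeholders):

```bash
# Create a 3-drive raidz1 pool named "tank" (ashift=12 for 4K-sector NVMe drives).
# Device names are placeholders - check yours with: ls /dev/nvme*n1
zpool create -o ashift=12 tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# Verify the layout and health of the new pool
zpool status tank
```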

The main reason I did this was to have multiple drives to experiment with vSAN, since it needs multiple physical drives. All the stupid crap with the Broadcom/VMware merger has thrown a wrench in that, though.

@inetjnky See this thread: Show us your rack! (No not that one!)

All switches throughout the house are Cisco Catalyst 3750X-48PF-S. Since I took this pic I changed over to a QNAP TS-832PXU-RP-4G RAID instead of the Drobo...

WAPs are UniFi AP AC Pros (one being the WiFi 6 version).

Also added two blades since the original pic.


Yeah, I don't have room for a full rack like that. I'm using wall space in a closet under my stairs to mount all my gear. How much power do those Catalyst switches draw? I wanted newer gear that is more power efficient and quieter. I know the Catalyst switches used to be pretty noisy.

Pretty quiet... I run the blades at low fan speed. Besides, it's in its own room.


I'm just planning on using this for a Scrypted install, Plex, InfluxDB and Grafana hosting, and a virtualized Win11 machine, plus whatever comes down the road in the future.

Love using Scrypted to get my IP cameras and video doorbell into the Apple Home app :slightly_smiling_face:.

Agree. Have it running in Docker on my NAS currently, but want to change it to an LXC in Proxmox.
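Something like this is roughly what I have in mind for creating the container (the VMID, template name, and storage are just placeholders; Scrypted itself would still get installed inside it afterwards):

```bash
# Create an unprivileged Debian container for Scrypted.
# "110", the template filename, and "local-lvm" are placeholders for my environment.
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname scrypted \
  --cores 2 --memory 2048 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 --features nesting=1

# Start it and get a shell to run the Scrypted install inside
pct start 110
pct enter 110
```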


The N305 will handle this easily.

On mine with Unraid it is running Docker containers for BOINC, DuckDNS, Grafana, HandBrake, InfluxDB, MediaInfo, Node-RED, Plex Media Server, Telegraf, and an Unraid API container. I also have VMs with Home Assistant and Windows 10. I have had multiple VMs for Proxmox and VMware for testing as well when experimenting. It really is a nifty little box.

Not sure how you would set this up in Proxmox, but the Xe graphics on the N305 are also pretty good for Plex. I had multiple transcodes going to devices in my house to test it and it didn't skip a beat. It was using the built-in graphics and handled it well considering everything I have is H.265.
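From what I've read, one common way to do it in Proxmox is to pass the host's /dev/dri render nodes into the Plex LXC so it can use Quick Sync / the Xe iGPU; roughly like this (container ID is a placeholder, and the device minor numbers can differ per host, so I haven't verified this on the N305 specifically):

```bash
# Append iGPU passthrough entries to a hypothetical container 105.
# 226:0 = card0, 226:128 = renderD128 on most hosts - confirm with: ls -l /dev/dri
cat >> /etc/pve/lxc/105.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF
```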

Are you planning on using Ceph storage in Proxmox to create redundancy between the CWWK N305 units?

I haven't gotten that far. I see some people say run Ceph and others say use ZFS; I still have to figure out which one would be better for my application. Others I've seen say to only use Ceph if you have a 10Gb fiber connection. I'm planning on using two, possibly three, 2.5Gb ports on each PC in an LACP bond for the VLAN trunk, with one port as the management port (roughly like the sketch below). Do I need a management port for each node? I'm assuming the management node is the node that handles the management of the VMs in the cluster?
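Roughly what I have in mind per node (interface names, addresses, and the VLAN range are just placeholders, not my final config):

```
# /etc/network/interfaces on one Proxmox node (ifupdown2 syntax)

# dedicated management port
auto eno1
iface eno1 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1

# members of the LACP bond
auto enp2s0
iface enp2s0 inet manual

auto enp3s0
iface enp3s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp2s0 enp3s0
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-miimon 100

# VLAN-aware bridge carrying the trunk for VMs/containers
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```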

My setup is fairly basic.

I have 3 Dell Ultra Small Form Factor PCs, TP-Link Omada gear, and 1x C-7.

1. 1x always-on PC for BlueIris etc.
2. 1x PC for front room/lounge (general use)
3. 1x PC set up in my workshop with my soldering iron etc. (don't know the specs, moderate but fine)

Handful of standard 1Gb switches and PoE gear.

Bog standard router.

WiFi provided by:

1. OC200 | Omada Hardware Controller - TP-Link
2. EAP653 | AX3000 Ceiling Mount WiFi 6 Access Point - TP-Link (x2)

I think Ceph and ZFS fill two different kinds of disk management roles.

I don't think ZFS can provide the kind of storage backend between nodes that Ceph can in a virtualized, clustered environment. On a single host ZFS is nice because of the extra data protection and stability it provides, but ZFS is also getting outshone now that NVMe drives are becoming more popular, since the ZFS file system just wasn't designed for the speed of NVMe. Even on my Unraid setup with ZFS, it's kind of the best of a not-great set of options there.

If I were going to run multiple hosts with virtual environments I would use something like vSAN or Ceph for the file system. I think GlusterFS is a potential option as well. They let you set up data protection not only on the host itself, but also between the nodes included in the file system environment. I don't believe that is something ZFS can do.

Networking is certainly a concern with any storage system that runs across multiple nodes, so 10Gbps is probably the minimum for a decent prod environment; 25 or 100Gbps would be better. But for a home lab 2.5Gbps may be an option as long as the cluster isn't too big. Since you have 10Gbps, though, you are in good shape.
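If you do go the Ceph route, the Proxmox tooling handles most of the bring-up; from what I've seen it's roughly along these lines (I haven't done it on these exact boxes, and the cluster network and device name below are placeholders):

```bash
pveceph install                          # run on every node
pveceph init --network 10.10.10.0/24     # once, on the first node (dedicated storage network)
pveceph mon create                       # on each node that should run a monitor
pveceph mgr create                       # at least one manager for the cluster
pveceph osd create /dev/nvme0n1          # on each node, for each data disk
pveceph pool create vmstore              # RBD pool to use as VM/LXC storage
```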


Yeah, my limitation is the 2.5GbE on the mini PCs.