Docker and Home Automation

Continuing the discussion from Node-RED nodes for hubitat:

Total noob when it comes to Docker, but I have been interested in giving it a try. I am currently using an RPi 3B with DietPi to host NR, PiHole, PiVPN, and a NUT server, and I have been thinking of transitioning over to Docker containers. Could you give me a rundown of why someone would use Docker containers over a normal Linux install for this type of application, and could my setup be adapted to Docker 1:1?

Also where would you recommend starting? I have been thinking about upgrading to a Pi 4, so what are your thoughts on buying one of those and experimenting and cutting my teeth before doing a full transfer over?

1 Like

I have been using Docker for a long time and used to install things "natively" on the RPi until I started running into many conflicting needs for web servers, ports, etc. Once you get the hang of configuring containers with shell commands or using docker-compose, downloading and trying new things out becomes very easy, with no "residue" if you decide to remove the application.

Here is a good article on setting up and testing Docker on the RPi:

How to install Docker and Docker Compose on Raspberry Pi (devdojo.com)

I also suggest you read up on the documentation or watch some of the very good YouTube videos on Docker. Don't get caught up in Portainer or Rancher right away until you have gotten your feet wet with the core product.

I have set up a really simple install for my mom's house that uses an RPi 4 with DNSCrypt-Proxy, PiHole, and the Emby media server. I built the same configuration in my lab a few months beforehand for testing and trial-and-error on the configuration. I run DNSCrypt-Proxy and PiHole in a single docker-compose configuration since they both need to work together for DNS to function, and this took some "advanced" networking.
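In case it helps, here is a minimal docker-compose sketch of that kind of PiHole + DNSCrypt-Proxy pairing - not my actual file, just an illustration. The dnscrypt-proxy image, listen port, subnet, addresses, and environment variable names are assumptions you would need to check against the image docs (the PiHole variable names in particular have changed between image versions):

version: "3"

services:
  dnscrypt-proxy:
    # Assumed image; check its docs for the actual listen address/port (5053 is common).
    image: klutchell/dnscrypt-proxy:latest
    restart: unless-stopped
    networks:
      dns_net:
        ipv4_address: 172.28.0.2

  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    depends_on:
      - dnscrypt-proxy
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"        # PiHole admin UI on host port 8080
    environment:
      TZ: "America/New_York"
      # Point PiHole at the dnscrypt-proxy container as its only upstream resolver.
      PIHOLE_DNS_: "172.28.0.2#5053"
    volumes:
      - ./etc-pihole:/etc/pihole            # persistent config, including gravity.db
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    networks:
      dns_net:
        ipv4_address: 172.28.0.3

networks:
  dns_net:
    ipam:
      config:
        - subnet: 172.28.0.0/24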

If you want to see a good inventory of popular applications that have public containers you can look here:

LinuxServer

fleet (linuxserver.io)

3 Likes
  • Application Prerequisite/Dependency Independence - one app (or its prerequisites, like a specific version of node.js) can't interfere with another app
  • Super easy upgrades. Pull the new Docker image and it is patched, upgraded, and ready to go.
    • Can be made even easier if you use Portainer to manage your Docker instance.
  • Hardware independence. Can take a docker container and move it to another machine no problem (as long as the hardware has the same processor type - can't move a container as-is from ARM to x86 for example).
  • Easy to control crash/restart behavior. Can also do this with PM2 and linux services, but is super easy in docker.
  • Easy to control memory and CPU usage. If you put smart limits on memory and CPU, you can comfortably put more apps/load on the hardware without fear of a rogue app taking down everything else via resource exhaustion (see the example command after this list).
    • For example, I know my main node-red instance uses ~200MB of memory. So I can set that container to something like 300MB max and it will run fine, but if it develops a memory leak, etc., it will never use more than 300MB and so can't take down the host OS or other containers. And if I set it to 1 core it can't hog all the CPU either.
    • Another example: my main MQTT broker uses ~60MB of memory. So I can set that container at 200MB max and walk away from it. It can't take down the other apps on my server if it loses its mind. Same comment on CPU usage; I assign it 1 core.
  • Security. One app, if compromised, can't impact another app or the host.
    • Unless Docker itself is compromised, or the container breaks out of its isolation - which has happened before with certain vulnerabilities. But even in those cases it makes an attack more difficult.
  • Resource Usage (versus VMs - not standalone apps). Uses DRAMATICALLY fewer resources than running an app in a dedicated VM (which I would postulate is the traditional IT way of doing app segregation).
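To make the memory/CPU point concrete, here is roughly what those limits look like on the command line. The numbers and names are illustrative, not my exact settings:

# Hard caps: the container gets OOM-killed before it can starve the host,
# and it can use at most one CPU core. --restart handles crash/reboot recovery.
docker run -d --name mynodered \
  --memory=300m \
  --cpus=1 \
  --restart=unless-stopped \
  -p 1880:1880 \
  -v node_red_data:/data \
  nodered/node-red:latest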

Can you do all your stuff in docker? Probably. Depends on what hardware access the container needs and what ports it uses.

Too big a question for me to answer in a forum. There are zillions of guides on the net though. @ronv42 linked to some above.

Luckily, you can test at any time. Install docker, shut down whatever app you want to move to docker, install the container and try it. If it doesn't work then shut down the container and go back to your monolithic app.

Make no mistake, though, using Docker initially creates MORE complexity and learning. If you don't want to dig in and learn Docker, probably best not to start. In particular people often have issues getting their head around persistent storage, port exposure/access to the container, and network config.

3 Likes

That is what scares so many away from Docker or causes folks to stop using it. They have to do their research and learn about ports, storage, and networking from a viewpoint that doesn't exist in the native operating system. In addition, the Docker training that I have taken started with building your own images vs. managing Docker and using published images. If you are a hobbyist, it's more than likely that you will never build your own images.

1 Like

Using something like Portainer to install/manage Docker containers "can" make it easier for some. Still need to know the fundamentals though - like volumes/persistent storage.
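For reference, Portainer CE itself is deployed as just another container, along these lines (this mirrors the command in the Portainer docs; newer releases serve the UI on 9443, older ones on 9000):

docker volume create portainer_data
docker run -d --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest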

(The second link also walks through the Docker install. It is Ubuntu specific, but may apply to the RPi - not sure, as I've never used DietPi.)

Agreed. If I were starting out I would ignore learning about building custom images completely. One can learn that if/when they have need and have already mastered Docker fundamentals.

Too bad that Portainer split into their Pro version and the CE version. Also, their "compose" support is limited to version 2 when you define a container or a stack of services. Portainer isn't perfect, but it does provide a visual of what you have installed and what is in use.

One more word of advice: make sure you know who built the images. There has already been a plethora of "hacks" out there building questionable services into images, like bitcoin miners, etc.

1 Like

True. Try to get containers from the app vendor when possible, from "trusted" individuals next, and from some random person next to never (without due diligence).

Thank you all so much for your in-depth replies. One small question before I start: as I understand it, with Docker, given architecture compatibility, switching from hardware to hardware or OS install to OS install is just a matter of saving the container's image, updating/switching the hardware or OS, then redeploying that same image? And then I would be up and running, just as simple as that?

There is a little more to it than that. Once you create a container from an image, you need to make sure the application's configuration for that container is in a "persistent" location by creating a volume. If you don't do that, any update to the container version will wipe out the configuration and you will have to re-configure. Most image creators document the method for persisting the configuration and other data elements.

1 Like

Example for node-red:
Running under Docker : Node-RED

It would be common to run it something like this:

docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red:latest

the "-v node_red_data:/data" tells Docker to use the host directory "node_red_data" to store the container directory "data" (which is where the container puts all node-red configuration files).

So to move it to a new machine you would copy the contents of that volume (or host directory) from the old host to the new host, and then make sure you reference that same location in the docker run command on the new host.
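If you used a named volume as in the command above, one common way to move it is to tar it up through a throwaway container - a sketch, assuming the volume is called node_red_data:

# On the old host: archive the volume contents into the current directory.
docker run --rm -v node_red_data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/node_red_data.tgz -C /data .

# On the new host: create the volume and unpack the archive into it.
docker volume create node_red_data
docker run --rm -v node_red_data:/data -v "$(pwd)":/backup alpine \
  tar xzf /backup/node_red_data.tgz -C /data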

Further, as you may guess "-p" tells Docker to expose that port in the container to the host so that external things can access the service in the container. Just like the "-v" flag, the order is host:container. The example above uses the standard port 1880 for the node-red instance. But you can also remap the ports. So to use port "7777" instead of the standard "1880" for node-red (let's say you want to run more than one node-red on the same host, like I do) it would be "-p 7777:1880".

On my testing system I do something like this for my 3 instances:
-p 1880:1880
-p 18880:1880
-p 18888:1880
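Putting that together, a second instance would be started with its own name, its own volume, and a remapped port - something like this (names are just illustrative):

docker run -d --name mynodered-test \
  -p 18880:1880 \
  -v node_red_test_data:/data \
  nodered/node-red:latest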

1 Like

Oh, and I looked. Once upon a time I used this link for my original RPi installs. They were running Raspbian, but it may still be useful:

(note you don't need to do step 4 unless you want to create your own container images - unlikely)

https://howchoo.com/g/nmrlzmq1ymn/how-to-install-docker-on-your-raspberry-pi

1 Like

Yeah, right now I am in a good place as far as automation goes, and I am simply trying to figure out how to make my setup more robust and resilient to an RPi failure.

Currently, for god knows what reason, my RPi does not like to boot on its own without user intervention. Basically the boot device (USB SATA SSD) is apparently not waking up, but the moment I force the boot to continue after it drops into Emergency Mode (via CTRL+D), it boots fine, which is very odd.

But I am trying to find a way to reinstall everything without the headache or downtime, and I figured Docker may be a good avenue for that, along with the other benefits.

Sometimes you are so far down the rabbit hole that it's best to build a new one from scratch. I have two RPis here at home that are production capable and mirror the configuration of the Docker apps on my Synology. I test on one of the RPis, then migrate that test to the second RPi, and then finally to the Synology. WAF demands that I have redundancy after an RPi had a kernel panic 8 months ago and it was my only DNS on the network. The only single points of failure I really have are the gateway from my provider and my router and switches, and for those devices I do have contingency, except for the provider gateway.

I know this feeling all too well :upside_down_face:. How do you have redundancy for this? Just an alternate DNS on your router? Are you using PiHole, and if so, do you find that traffic actually adheres to the primary/secondary/tertiary DNS order, or does it just do what it wants? All of the PiHole guides I find online say to use solely the RPi as the DNS provider, leaving the other entries blank.

Also, is there any way to have redundancy for the DHCP server, since I am using PiHole as DHCP for device-by-device statistics? Thank Netgear and Orbi for that one.

I have two instances of PiHole running that are "live". I copied the "gravity.db" file from the one I call primary to the secondary PiHole. This preserves the DNS rules but doesn't propagate the configuration of the PiHole server settings. Along with that, I have two instances of DNScrypt-Proxy running that each PiHole uses as its DNS endpoint(s).

PiHole 1--->>  DNScrypt-Proxy 1  --->>>Gateway and DHCP
        \  /                     /
         \/                     /
        /  \                   /
PiHole 2--->>  DNScrypt-Proxy 2
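For anyone wanting to do the same, the gravity.db copy can be as simple as this sketch - the container names are hypothetical, and it assumes the standard /etc/pihole path inside the containers:

# Pull the blocklist database out of the primary and push it into the secondary,
# then restart the secondary so FTL reloads it.
docker cp pihole1:/etc/pihole/gravity.db ./gravity.db
docker cp ./gravity.db pihole2:/etc/pihole/gravity.db
docker restart pihole2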

As for DHCP, I haven't found a good way of mirroring that. I don't use PiHole for DHCP; I use my security gateway and have configured both PiHoles in the DHCP settings as the DNS servers for clients. I take a snapshot of my gateway's configuration weekly, so if that thing dies I just build a new one and restore, but I can temporarily set up an old Asus router or a Netgear router that I have kept around for contingency.

For critical network infrastructure like the NAS, switches, APs, the Pis, etc., static addresses are configured on the devices themselves, so if I swap a router the core network is in place, ready to go with a replacement. Then I can recover in layers:

  • Tier 0 - Network / infrastructure
  • Tier 1 - Automation and personal devices
  • Tier 2 - Entertainment
  • Tier 3 - Everything Else
1 Like

Seems like a fault-tolerant or high-availability VM, or a small Docker Swarm, would have been a lot less work. :slight_smile:

But cool solution!

Both the PiHole and its associated DNScrypt-Proxy are in the same docker-compose stack, using macvlan networking to get around the host networking issues with native layer 2 access. It took a good amount of time to build this out earlier this year after I finished a bunch of other projects. I used my free time during the holidays to update the yaml file to the new PiHole variables, and just did an up with zero impact on the network as I updated the containers.
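For anyone attempting something similar, the macvlan part of a compose file looks roughly like this. It's only a sketch - the parent interface, subnet, and addresses are assumptions for a typical home LAN, not my actual values:

version: "3"

services:
  pihole:
    image: pihole/pihole:latest
    networks:
      lan_macvlan:
        ipv4_address: 192.168.1.53    # the container gets its own IP/MAC on the LAN

networks:
  lan_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0                    # physical NIC the containers should appear on
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

One macvlan quirk worth knowing up front: by default the Docker host itself can't reach containers on the macvlan network, which is part of what makes this kind of setup fiddly.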

Ah, that does make sense then. :slight_smile: And yes, the layer 2 issues in a swarm stack can be a real pain in the arse (so much so that I usually just kick out to a VM instead of fighting with the swarm).

Cool!

I use Docker with a pre-built image on my QNAP to allow the Amazon Echo app to refresh the cookie automatically.

1 Like