Docker and Home Automation

Too bad that Portainer split into a Pro version and the CE version. Also, its "compose" support is limited to version 2 of the compose file format when you define a container or a stack of services. Portainer isn't perfect, but it does provide a visual of what you have installed and what is in use.

One more word of advice: make sure you know who built the images. There is already a plethora of "hacked" images out there that build questionable services, like bitcoin miners, into otherwise normal-looking containers.

True. Try to get containers from the app vendor when possible, from "trusted" individuals next, and from some random person next to never (not without due diligence, anyway).

Thank you all so much for your in-depth replies. One small question before I start: as I understand it, with Docker (given architecture compatibility), switching from hardware to hardware, or from one OS install to another, is a matter of saving the container's image, updating or switching the hardware/OS, then redeploying that same image? And then I would be up and running, just as simple as that?

There is a little more to it than that. Once you create a container from an image, you need to make sure the application's configuration for that container is in a "persistent" location by creating a volume. If you don't do that, any update to the container version will wipe out the configuration location and you will have to re-configure. Most image creators document the method for persisting the configuration and other data elements.

Example for node-red:
Running under Docker : Node-RED

It would be common to run it something like this:

docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red:latest

the "-v node_red_data:/data" tells Docker to use the host directory "node_red_data" to store the container directory "data" (which is where the container puts all node-red configuration files).

So to move it to a new machine, you would copy the contents of that volume (or host directory) from the old host to the new host, and then use the same -v mapping in the docker run command on the new host.
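If it helps, here is a rough sketch of one way to do that copy for a named volume, assuming the setup above (the alpine image and the stop step are just this example's choices, not an official node-red procedure):

# on the old host: stop the container, then archive the volume contents
docker stop mynodered
docker run --rm -v node_red_data:/data -v $(pwd):/backup alpine tar czf /backup/node_red_data.tar.gz -C /data .

# move node_red_data.tar.gz to the new host however you like, then:
docker volume create node_red_data
docker run --rm -v node_red_data:/data -v $(pwd):/backup alpine tar xzf /backup/node_red_data.tar.gz -C /data

# the same docker run command from above will now see the migrated data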

Further, as you may guess, "-p" tells Docker to publish that container port on the host so that external things can reach the service inside the container. Just like the "-v" flag, the order is host:container. The example above uses the standard port 1880 for the node-red instance, but you can also remap ports. So to use port "7777" instead of the standard "1880" for node-red (say you want to run more than one node-red on the same host, like I do) it would be "-p 7777:1880".

On my testing system I do something like this for my 3 instances (full commands sketched below the list):
-p 1880:1880
-p 18880:1880
-p 18888:1880
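As a hedged sketch of what those could look like as complete commands (the container and volume names here are made up for the example; each instance needs its own name and its own data volume):

docker run -d -p 1880:1880 -v nr_prod:/data --name nodered-prod nodered/node-red:latest
docker run -d -p 18880:1880 -v nr_test1:/data --name nodered-test1 nodered/node-red:latest
docker run -d -p 18888:1880 -v nr_test2:/data --name nodered-test2 nodered/node-red:latest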

Oh, and I looked: once upon a time I used this link for my original RPi installs. Those Pis were running Raspbian, but it may still be useful:

(note you don't need to do step 4 unless you want to create your own container images - unlikely)

https://howchoo.com/g/nmrlzmq1ymn/how-to-install-docker-on-your-raspberry-pi

Yeah, right now I am in a good place as far as automation goes, and I am simply trying to figure out how to make my setup more robust and resilient to an RPi failure.

Currently, for God knows what reason, my RPi does not like to boot on its own without user intervention. Basically the boot device (a USB SATA SSD) is apparently not waking up, but the moment I force the boot to continue after it drops into Emergency Mode (via CTRL+D), it boots fine, which is very odd.

But I am trying to find a way to reinstall everything without the headache or downtime, and I figured Docker may be a good avenue for that, along with its other benefits.

Sometimes you are so far down the rabbit hole that it's best to build a new one from scratch. I have two RPis here at home that are production-capable and mirror the configuration of the Docker apps on my Synology. I test on one of the RPis, then migrate that test to the second RPi, and finally to the Synology. The WAF demands that I have redundancy ever since an RPi had a kernel panic 8 months ago while it was my only DNS server on the network. The only real single points of failure I have left are the gateway from my provider and my router and switches, and for those devices I do have contingency plans, except for the provider gateway.

I know this feeling all too well :upside_down_face:. How do you get redundancy for this? Just an alternate DNS entry on your router? Are you using PiHole, and if so, do you find that traffic actually adheres to the primary/secondary/tertiary DNS order, or does it just do what it wants? All of the PiHole guides I find online say to use solely the RPi as the DNS provider, leaving the other entries blank.

Also, is there any way to have redundancy for the DHCP server? I am using PiHole as DHCP to get device-by-device statistics; thank Netgear and Orbi for that one.

I have two instances of PiHole running that are "live". I copied the "gravity.db" file from the one I call primary to the secondary PiHole; this preserves the DNS rules but doesn't propagate the PiHole server settings. Alongside that, I have two instances of DNScrypt-Proxy running, and each PiHole uses both of them as its DNS endpoints:

PiHole 1--->>  DNScrypt-Proxy 1  --->>>Gateway and DHCP
        \  /                     /
         \/                     /
        /  \                   /
PiHole 2--->>  DNScrypt-Proxy 2
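For the curious, the gravity.db copy itself can be as simple as the sketch below, assuming the official pihole image with /etc/pihole bind-mounted from the host (the hostname, user, host path, and container name are placeholders to adapt):

# copy the rules database from the primary host to the secondary
scp /srv/pihole/etc/gravity.db pi@pihole2:/srv/pihole/etc/gravity.db

# then have the secondary PiHole reload its DNS lists
ssh pi@pihole2 'docker exec pihole pihole restartdns'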

As for DHCP, I haven't found a good way of mirroring that. I don't use PiHole for DHCP; I use my security gateway for it, and have configured both PiHoles as the DNS servers in the DHCP settings for clients. I take a snapshot of the gateway's configuration weekly, so if that thing dies I just build a new one and restore, and in the meantime I can temporarily set up an old Asus or Netgear router that I have kept around for contingency.

For critical network infrastructure (the NAS, switches, APs, the Pis, etc.) everything has a static address configured locally, so if I swap a router the core network is in place, ready to go with the replacement. Then I can recover in layers:

  • Tier 0 - Network / infrastructure
  • Tier 1 - Automation and personal devices
  • Tier 2 - Entertainment
  • Tier 3 - Everything Else

Seems like a fault-tolerant or high-availability VM, or a small Docker Swarm, would have been a lot less work. :slight_smile:

But cool solution!

Both the PiHoles and their associated DNScrypt proxies are in the same docker-compose stack, using macvlan networking to get around the host-networking issues with native layer 2 access. It took a good amount of time to build this out earlier this year, after I finished a bunch of other projects. I used my free time during the holidays to update the yaml file to the new PiHole variables, and just did an up, with zero impact on the network as I updated the containers.
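For anyone wanting to copy the approach, the macvlan piece boils down to something like this minimal sketch (the subnet, gateway, parent interface, network name, and addresses are assumptions; substitute your own LAN values):

# create a macvlan network so each container gets its own MAC/IP on the LAN
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  dns_net

# attach a container with a fixed address, e.g.:
docker run -d --network dns_net --ip 192.168.1.53 --name pihole pihole/pihole:latest

# caveat: with macvlan, the Docker host itself cannot reach the container by default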

Ah, that does make sense then. :slight_smile: And yes, the layer 2 issues in a swarm stack can be a real pain in the arse (so much so that I usually just kick out to a VM instead of fighting with the swarm).

Cool!

I use Docker with a pre-built image on my QNAP to let the Amazon Echo app refresh the cookie automatically.

Hi guys!

Last October I proposed the development of something that I called:

The idea began as something like the Hubitat Package Manager, where users could add "packages", e.g. NodeRed and so on. Then @dman2306 came along with a comment that what I was trying to develop would be something like Docker containers. Bingo! That's it!!!!

Since I have never worked with or used Docker, it has been very slooooowwwww development, and I must confess that between my regular work - 12-hour shifts that are killing me - and preparing to - hopefully - move from Brazil to Portugal, I have little to no time to work on the idea.

What do you guys think about it?

Docker is the king right now for getting multiple applications running on one OS where there are dependency conflicts between software stacks and networking. I have a new motto: "if they don't offer it in a Docker container, I don't use it".

Looking at what you are trying to approach here, I would say a list of known-stable implementations of "companion" services that work in Docker on the Pi would be great. One thing is for sure: all the Docker images need to be open, with their sources on GitHub and published to Docker Hub. Additionally, they should support both classic docker CLI and Docker-Compose deployments.

Certain core services such as SAMBA, WSDD, time, etc. belong in the core OS, but everything else goes in Docker.

I think so - it could be very useful, couldn't it?

See, I'm not a "Linux" guy, and I think a lot of HE users are not either, so having an easy way of installing "companion" services on an RPi would be a really nice addition to HE itself.

One of my worries was how to manage Docker images - install, update, remove, configure and so on - so that it would be easy for a Linux newbie (like me!) to work with them. But I saw in one of the posts here that there is already something available to do that: Portainer! It seems to be perfect for that.

One of the feasible, and I believe easy to implement, Docker images would be:

It would solve a problem that so many HE users talk about: getting HE backups off of the hub itself, periodically and automatically.
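Not to presume the implementation, but as an illustration of how small such a container's job could be: community backup scripts typically pull from the hub's backup endpoint on a schedule, along these lines (the endpoint path, hub IP, output location, and schedule are all assumptions to verify against your own hub):

# fetch the latest backup from the hub and timestamp it
curl -s -o /backups/hubitat-$(date +%Y%m%d).lzf \
  "http://192.168.1.10/hub/backupDB?fileName=latest"

# run nightly, e.g. from cron:
# 0 3 * * * /opt/he-backup/backup.sh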

What I'd like to do is get the attention of someone versed in Docker who could take the lead on this. I hope I will.

Even a newbie to Docker needs to get their feet wet with the command line. Portainer only covers a small part of managing Docker applications and their components: yes, you can download an image and create volumes and networks, but you still need to provide the "configuration" that glues them together. Portainer is great for seeing what is going on inside Docker, but as a tool for actually deploying containers I still go for Docker-Compose and yaml files.
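To make that concrete, here is a minimal, hedged sketch of the compose route for the node-red example earlier in the thread (service, volume, and file names are illustrative; the heredoc is just to keep the example self-contained):

cat > docker-compose.yml <<'EOF'
services:
  nodered:
    image: nodered/node-red:latest
    ports:
      - "1880:1880"
    volumes:
      - node_red_data:/data
    restart: unless-stopped
volumes:
  node_red_data:
EOF

# one command brings the whole stack up (and the same yaml redeploys it anywhere)
docker compose up -d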

If you look at many Docker images and their documentation, they provide both command-line and compose deployment examples. They also document the use of persistent storage and environment configuration, which will be unique to the device/OS being deployed on.

The learning curve isn't too steep, since anyone running Docker on a Pi has already invested somewhat in Linux configuration just by setting the Pi up.

But if we had a wiki of "companion" images and sample configurations, that would be a start. Someone would have to take on curating such a list.

Well, the idea would be to provide developers a "roadmap" for creating Docker images, and they, for their part, would develop such images and make them available for everyone.

I've created a basic Pi configuration with Docker installed - as a trained monkey, to tell the truth ... just typing what the instructions told me to do!

Well, that I could do. I could propose and run a process for Docker "companion" image submission, "peer evaluation", and management.

We just need a Docker-versed guy ...

Seems like a chicken-vs-egg problem if no one is stepping up to create and maintain the tailored containers (I'm definitely not).

I really don't see much value in this initiative, honestly. For example, a node-red container that comes 'pre-loaded' with the Hubitat nodes has very limited benefit over a generic node-red container plus a one-liner telling people how and where to add the nodes and configure them. The end user still needs to understand how to use node-red, so this doesn't lower the technical bar for anyone. :man_shrugging:

But at the same time, if you get enough people interested and willing to push this forward, kudos.
