I was getting this too. I restarted the Docker container and hard-refreshed my HE page. I also closed any and all references to the local echo-speaks web page. Finally, through the HE Echo Speaks app, I clicked the Amazon Auth link, which opened a new browser tab where I authenticated and it grabbed the cookie. I hit save and left that window open, then hit Next in the Echo Speaks HE app. My authentication section changed from the red X to a green check mark and I was off and running again.
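For anyone following along, the restart step is just a plain container restart; a minimal sketch, assuming your container is named echo-speaks-server (check `docker ps` for the actual name on your host):

```bash
# List running containers to find the cookie server's name
docker ps

# Restart it, then hard-refresh the HE page and re-run the auth step
docker restart echo-speaks-server
```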
Huge kudos to @vmsman for putting this little how-to together. On my Photon docker machine, all I had to do was install docker-compose, and in less than 60 seconds I had this image running. My only concern is how, and by whom, this image will be maintained, and how I get updates. Unless this truly was a stop-gap until the official image is released. Thanks again. I am now running locally and can kill my Heroku account.
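For anyone else on Photon OS, the whole sequence is short; a minimal sketch, assuming docker-compose is available through Photon's tdnf package manager (if not, the standalone binary from the docker/compose GitHub releases works too) and that you're in the directory holding the how-to's docker-compose.yml:

```bash
# Photon OS uses tdnf as its package manager
tdnf install -y docker-compose

# Bring the cookie server up in the background
docker-compose up -d

# Updating later: pull (or rebuild) and recreate the container
# (use `docker-compose up -d --build` instead if your compose file
# builds from the GitHub repo rather than pulling an image)
docker-compose pull && docker-compose up -d
```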
So, Anthony (tonesto7), the author of Echo Speaks, maintains this. The code (GitHub - tonesto7/echo-speaks-server) hasn't changed on his GitHub in four years. That's because all this code does is accept a login and store a cookie that Alexa Voice Services needs to work. Don't expect the v2.7.2 code to change for this; notably, my Heroku version was v2.7.0. Now that so many are expected to move in this direction, I expect that if there were major changes, Anthony would make updates.
On a side note, I have seen so many fellow Hubitat Elevation users here talk about using a Raspberry Pi for things like Pi-hole and this Echo Speaks cookie server. As you do more and more automation, you will discover that having ancillary, locally hosted server instances is a common requirement. In my case, I use several. Of special note is the Monocle Camera Gateway, which lets third-party cameras with an RTSP stream be voice-controlled by Alexa to Echo Shows and Fire TV Sticks. I also have three Logitech Harmony hubs, and my Hubitat drives advanced control of these devices through a local Docker container for the Harmony-API. I use a lot of webCoRE, so I have a local webCoRE server that is not in the cloud. The list goes on.
On my YouTube channel, which is devoted to self-hosting and networking, I made a video about an inexpensive self-hosting server, of which I now own two. This server is the Minisforum UM350; it comes with 16GB of memory and a 500GB NVMe drive, and this configuration can be purchased for as little as $269US. More expensive than a Pi 4, but it is also expandable to 64GB of memory, whereas the Pi 4 is not expandable and comes with no storage. The processor is a nice mobile Ryzen 5, and the whole machine fits in the palm of my hand. I have one of these running 17 LXD container instances, some with nested Docker, and it runs great. Check out my video: Cheap Self Hosting Server - YouTube. The price has dropped lately compared with what I state in the video.
The other option is a cheap second-hand HP MicroServer - I have two (an N36L and an N40L) and also built one as a NAS for my father. These are only dual-core AMD Athlon II Neos, however they support all the needed virtualisation features, and the icing on the cake is that they take cheap unbuffered ECC RAM. Oh, and they use stuff-all power.
Running OpenMediaVault on these + Docker + Portainer is a great solution.
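For anyone wanting to replicate that stack, Portainer itself is a one-liner once Docker is running; this follows Portainer CE's documented install, with their default volume and port choices:

```bash
# Persistent volume for Portainer's own data
docker volume create portainer_data

# Portainer CE on its default HTTPS port 9443
docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```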
Got up to this point with the green check. I save, and it reverts back to the red X. Anything I can do to troubleshoot this? I tried Chrome, Firefox, Waterfox, and Edge, all of which show "Authentication Good!" on the login page after grabbing the cookie. I deployed the container using Docker for Windows.
Can I manually add the cookie from the docker logs when it captures it successfully?
EDIT: It was my ESET firewall blocking the calls. Running smoothly with the Windows Docker container now. Thank you for this step-by-step, @vmsman, and @jtp10181 for recommending Docker for Windows. I didn't want to invest in a Pi or set up a VM specifically for this container, and was about to give up on ESS after the 28th.
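For anyone else stuck on the red X, watching the container's logs while you re-run the Amazon Auth step is a quick way to confirm whether the cookie is actually being captured (and so whether the problem is on the Hubitat side or, as here, a firewall). A minimal sketch, assuming the container is named echo-speaks-server:

```bash
# Tail the cookie server's logs; a successful capture shows up here
docker logs -f echo-speaks-server
```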
I have a QNAP NAS, and I already had an Ubuntu VM spun up as a Docker host that runs other images. There was a minor hiccup installing docker-compose (I needed to uninstall the apt version and install the version from the official GitHub repo), and then I ran into the red X issue. I cleared cookies, re-auth'ed many times, followed the troubleshooting everyone above mentioned, and nothing worked.
Restarting the Docker container fixed it, as others have said above.
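For anyone who hits the same docker-compose hiccup, the swap described above looks roughly like this (the version number is only an example; grab the current release tag from the docker/compose GitHub page):

```bash
# Remove the distro-packaged docker-compose
sudo apt remove docker-compose

# Install the standalone binary from the official GitHub releases
# (v2.24.6 is an example; substitute the latest release)
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version
```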
So, if you have a QNAP, why not create an LXD container and then install the Docker application in it? It's much leaner than the VM approach and won't conflict with the other apps on the VM.
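In case it helps later, the LXD route is only a couple of commands; a minimal sketch using an Ubuntu image (the container name here is arbitrary):

```bash
# Launch a lightweight Ubuntu container with nesting enabled,
# which is what lets Docker run inside it
lxc launch ubuntu:22.04 echo-speaks -c security.nesting=true

# Install Docker inside the container
lxc exec echo-speaks -- bash -c "apt update && apt install -y docker.io docker-compose"
```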
I already have other docker images running in the VM. I don't want to split management planes at this time because I'm going to be migrating to another NAS soon. Having all my containers in one place will make that transition easier. Resources aren't an issue or concern for me at this moment.
@ihatetheohiostatebuc So, you may not realize that you are sacrificing both portability and security by stacking multiple applications inside one VM. Security here means both logical security and isolation.

If you created extremely lightweight LXD containers, each bridged to your LAN on your NAS, and then nested one Docker app per container, you would have a dedicated, isolated address for each LXD container. That increases the flexibility for your Docker containers to each expose whatever ports they require, without the typical port remapping on a Docker host running many containers, and it assures that the Docker NAT network for each LXD Docker host is isolated from the rest. It also adds the ability to take LXD snapshots of each container, as well as LXD exports, which let you pick up a container and move it to any other LXD host, including your new NAS.

The best security for access control, as well as backup, comes from creating dedicated, isolated enclaves. I actually have a host with 18 LXD containers, most with nested Docker, each running on dedicated addresses on various VLANs. The host runs all 18 LXD containers in about 13GB of memory. So, as you can see, you gain great security isolation, tremendous efficiency, dead-simple backups, and absolute portability. I use this technique to move selected Docker apps to and from my two NASes and two other LXD servers, and the redundancy and integrity have never had me look back to the VM approach that you are using and that I once used.
Oh, and just to clarify: if you use LXD Dashboard to manage your LXD containers and Portainer with the Edge agent, you will have a single management plane. Also, nesting Docker inside LXD puts you in control of the Docker version per app, which might be critical for some applications.
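The snapshot and export workflow mentioned above is likewise just a few commands; a sketch, using a hypothetical container named c1:

```bash
# Point-in-time snapshot of the container
lxc snapshot c1 before-migration

# Export the whole container (config + filesystem) to a tarball...
lxc export c1 c1-backup.tar.gz

# ...and import it on any other LXD host, such as the new NAS
lxc import c1-backup.tar.gz
```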
I realize everything you said, but in your quest to prove your point you missed mine. I plan to migrate, but I can't right now. I don't have the time; I have a major medical issue coming up, so this was quicker to implement right NOW, with the goal of moving to LXD once I have the time.
Also, it's the holidays and I have 13 family members in town.
I appreciate the concern, but I've already planned for this.
@ihatetheohiostatebuc Happy Thanksgiving to you and yours. If you have any questions later on when you embark on your project, tag up with me on my Discourse server at https://chat.scottibyte.com/.
@vmsman Good info. Thanks! I also checked out your YouTube videos, which I am going to peruse this weekend.
In addition to an Echo Speaks server, I also want to host a UniFi controller. I didn't have any experience with LXD or Docker, but I have a feeling they are inevitably in my future.
My always-on device choices are a Synology DS220+ and an rPi3b. The rPi was serving as my PiVPN and Pi-hole, but I have moved those functions to a pfSense firewall, so it is available (although I am considering using it to host a full Bitcoin node, which I think would make it unshareable). I also have a Windows system running Blue Iris, but I'm very hesitant to add any functions to it because I want to keep it dedicated to cameras.
Any ideas on a hosting strategy with these assets?
@CAL.hub consider watching my video entitled "Cheap Self-Hosting Server". In it, I show how I used a Minisforum UM350 (Ryzen 5 3550H) machine with 16GB of memory and a 512GB NVMe drive to host 18 LXD Ubuntu containers for my various infrastructure apps under Ubuntu 22.04 Server, in about 13GB of memory. My channel is devoted to containerization, both LXD and Docker, and I also present a lot of material on networking, including bridging and macvlan. It turns out that home automation often requires self-hosted ancillary server functions to provide key functionality. A Pi 4 is great, but its current cost, fixed memory size, and slow I/O make it less than the best choice. I encourage folks to host things like a Pi-hole, a Cast All the Things server, a Harmony-API, a Monocle Camera Gateway, and many others on a little server form factor like the UM350.
I've installed Docker for Windows, but I'm a little confused. Did you have to build a Linux machine in Docker to run the container, or am I supposed to be able to just run the Echo Speaks container by itself in Docker? Basically, I've got Docker installed on my Windows 10 machine but can't figure out what to do next. The only instructions I can find for installing the Echo Speaks container seem to be for Linux.
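(For later readers: no separate Linux machine is needed; Docker Desktop on Windows supplies a Linux backend via WSL 2, so the same compose file from the Linux instructions works as-is. A minimal sketch, assuming you've saved the how-to's docker-compose.yml to a folder; the port below assumes the server's usual default, so check your own compose file:)

```bash
# Docker Desktop already provides the Linux side (WSL 2 backend),
# so the container runs directly - no VM to build yourself.
# From the folder containing the how-to's docker-compose.yml:
docker-compose up -d

# Then open the cookie server's login page in a browser,
# e.g. http://localhost:8091 (8091 assumes the default port;
# check the port mapping in your compose file)
```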