[SOLVED] Downloading Latest Backup File

Turns out I wasn't patient enough to wait for a reply, so I kept experimenting and got it working. Here is my version, updated from yours, that is working for me:

#!/bin/bash
#
he_login=hubitat_login
he_passwd=hubitat_password
#
he_ipaddr=hubitat_ip_address
backupdir='dir'
cookiefilename='cookiefile'
cookiefile=$backupdir/$cookiefilename
#
curl -k -c $cookiefile -d username=$he_login -d password=$he_passwd https://$he_ipaddr/login
curl -k -sb $cookiefile https://$he_ipaddr/hub/backup | grep data-fileName | grep download | sed 's/<td class="mdl-data-table__cell--non-numeric"><a class="download mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect" href="#" data-fileName="/\ /' | sed 's/">Download<\/a><\/td>/\ /' | sed 's/ //g' | tail -1 | xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/@

Here is the complete (I think) list of changes and the reasons:

  1. Removed "cd [DIRECTORY_OF_YOUR_BACKUPS]" because I put the directory into the commands themselves.
  2. Updated "cookiefile=`/bin/mktemp`" because the wrong single-quote was used.
  3. Added "backupdir='dir'" to avoid having to retype it repeatedly.
  4. Added "cookiefilename='cookiefile'" so I could combine it with the backupdir to leave cookiefile as is in later commands.
  5. Removed " -X POST " because this seemed to be returning a 405 error from the Hubitat.
  6. Replaced "xargs -I @ curl -o @ http://$he_ipaddr/hub/backupDB?fileName=@" with "xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/@". The -k is needed to avoid an error from the Hubitat's SSL certificate, -sb $cookiefile is needed to present the login cookie, and the directory is added to the output path rather than relying on the cd that was removed. (A quick way to sanity-check that the cookie login worked is sketched just after this list.)
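A quick, optional sanity check, shown only as a sketch using the same variable names as above (the cookie-file test is illustrative, not part of the script in this thread): log in first and bail out if curl fails or no cookie gets written.

# sketch: log in and confirm a cookie file was actually written before scraping
# (whether a failed login still writes a cookie is an assumption worth checking on your own hub)
curl -k -s -c $cookiefile -d username=$he_login -d password=$he_passwd https://$he_ipaddr/login || exit 1
if [ ! -s $cookiefile ]; then
  echo "Login to $he_ipaddr did not produce a cookie file" >&2
  exit 1
fi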

Net-net, here is my new version with actual (but made-up) details:

#!/bin/bash
#
he_login=foo
he_passwd=bar
#
he_ipaddr=192.168.0.2
backupdir='/mnt/backup'
cookiefilename='cookiefile.txt'
cookiefile=$backupdir/$cookiefilename
#
curl -k -c $cookiefile -d username=$he_login -d password=$he_passwd https://$he_ipaddr/login
curl -k -sb $cookiefile https://$he_ipaddr/hub/backup | grep data-fileName | grep download | sed 's/<td class="mdl-data-table__cell--non-numeric"><a class="download mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect" href="#" data-fileName="/\ /' | sed 's/">Download<\/a><\/td>/\ /' | sed 's/ //g' | tail -1 | xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/@

Finally, I am trying to decide whether I want the backup to keep the name it had on the HE (which reflects the date the data was actually gathered) or to use the date and time I backed it up to my computer. To do the latter, I can change the last line of the script to:

curl -k -sb $cookiefile https://$he_ipaddr/hub/backup | grep data-fileName | grep download | sed 's/<td class="mdl-data-table__cell--non-numeric"><a class="download mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect" href="#" data-fileName="/\ /' | sed 's/">Download<\/a><\/td>/\ /' | sed 's/ //g' | tail -1 | xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/MainHubitat_$(date +%Y-%m-%d-%H%M).lzf
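An untested variant (just a sketch of the idea, with everything before the xargs left unchanged) would keep the hub's original file name but prefix the local date, by changing only the final output flag:

... | xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/$(date +%Y-%m-%d-%H%M)_@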

Wow, great!

Stupid question: where do you place this script? Do you use crontab? I'm very much a novice at this.

Thanks!

Unfortunately your next question is beyond my knowledge. I am just planning to run this manually whenever I want to. The reason I want to do it via script is that I have 4 different HEs in 2 different locations. My plan is to expand the script to cover all 4, then run it before I go into each of them to make changes like updating firmware or key drivers, etc.

Good luck in finding a way to use crontab or another method to trigger the script on the schedule that works for you.
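For reference, a crontab entry along these lines would run such a script nightly (just a sketch; the script path, log path, and schedule are made-up examples):

# hypothetical crontab entry: run the backup script every night at 03:30
30 3 * * * /usr/local/bin/hubitat_backup.sh >> /var/log/hubitat_backup.log 2>&1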

Thanks, I think I found out how. I posted it here just in case anyone else needs it.

Another stupid question: I'm getting this error

sed: -e expression #1, char 19: unknown option to `s'

Do you know why?

Thanks

@vjv

I have a simple one; replace your last command with the following:

curl -k -sb $cookiefile https://$he_ipaddr/hub/backup | grep download | awk -F"=" '{ print $5}' | awk -F'"' '{print $2}' | sed '/^$/d' | tail -1 | xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/MainHubitat_$(date +%Y-%m-%d-%H%M).lzf
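As an aside, that "unknown option to `s'" error is what sed reports when the search or replacement text contains unescaped / characters (as ">Download</a></td>" does), because everything after the third / gets parsed as flags. If you would rather keep the sed approach, one option (a sketch, not tested against the current hub page) is to escape the slashes or switch to a different delimiter for that expression:

sed 's|">Download</a></td>| |'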

Many thanks! This worked, and now I finally have the backup on my computer!

This no longer works... I think Hubitat "upgrades" have been made in an attempt to monetize by forcing people to subscribe.

Can anyone verify that this doesn't work anymore?

Hubitat recently added optional subscription services, one of which includes the ability to save hub backups to the cloud.

This thread has been discussing local, LAN backups (previously the only kind of backup that existed).

It’s entirely possible something changed that makes some of the user scripts that retrieve backups from the hub over the LAN no longer work as written. I don’t actually use one at the moment, so I’m not sure.

But I don’t see why one would jump to the conclusion that they are surreptitiously trying to disable previously functional features in an effort to “force” users to sign up for new, optional, cloud-based services.

All while they’ve publicly stated repeatedly that they have no intention of removing access to features that previously existed, just adding on and improving new features for those users that want to pay for them.

Nonsense. Local backup still works. And will continue to work. Here are my backups from this morning:

-rw-r--r-- 1 root root  417952 Mar 27 04:45 HubitatM-20210327-0445.lzf
-rw-r--r-- 1 root root  564960 Mar 27 04:45 HubitatS-20210327-0445.lzf

I can verify that it continues to work as it always has!

It's not working for me either...

FWIW, this is the script that I have used for the last two years:

#
he_login="replace_with_your_login"
he_passwd="replace_with_your_password"
he_ipaddr="your.hubitat.ip.address"
cookiefile=`/bin/mktemp`
backupdir=/mnt/backups/hubitat
backupfile=$backupdir/$(date +%Y%m%d-%H%M).lzf
#
find $backupdir/*.lzf -mtime +5 -exec rm {} \;
curl -k -c $cookiefile -d username=$he_login -d password=$he_passwd https://$he_ipaddr/login
curl -k -sb $cookiefile https://$he_ipaddr/hub/backupDB?fileName=latest -o $backupfile
rm $cookiefile
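Since backing up several hubs came up earlier in the thread, the same approach extends to a simple loop (purely a sketch; the IP addresses and file naming are made up, not from anyone's actual setup):

#!/bin/bash
# illustrative multi-hub variant of the script above
he_login="replace_with_your_login"
he_passwd="replace_with_your_password"
backupdir=/mnt/backups/hubitat
for he_ipaddr in 192.168.0.2 192.168.0.3; do
  cookiefile=`/bin/mktemp`
  curl -k -c $cookiefile -d username=$he_login -d password=$he_passwd https://$he_ipaddr/login
  curl -k -sb $cookiefile https://$he_ipaddr/hub/backupDB?fileName=latest -o $backupdir/${he_ipaddr}_$(date +%Y%m%d-%H%M).lzf
  rm $cookiefile
done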

Thank you so much for posting your script - that works for me whereas the above did not.

Sean

Did you see the post by @aaiyar? Using his script (slightly modified) worked great for me.

Sean

Thanks

So no one has been forced to subscribe to the cloud backup service after all? :slightly_smiling_face:

To fix this, change the $5 to a $4, as in:
curl -k -sb $cookiefile https://$he_ipaddr/hub/backup | grep download | awk -F"=" '{ print $4}' | awk -F'"' '{print $2}' | sed '/^$/d' | tail -1 | xargs -I @ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/MainHubitat_$(date +%Y-%m-%d-%H%M).lzf

although I'd simplify the middle and just use

curl -k -sb $cookiefile http://$he_ipaddr/hub/backup | awk -F'["=]' '/class=.download/ { file=$9 } END { print file }' | xargs -I@ curl -k -sb $cookiefile https://$he_ipaddr/hub//backupDB?fileName=@ -o $backupdir/MainHubitat_$(date +%Y-%m-%d-%H%M).lzf

Mine (happily stolen from someone else smarter than me, but can't remember who) is still working on my Synology NAS:

On Synology NAS scheduler:

Task 1, Daily: Copy backup file from Hub to NAS:

cd "/path where you want to store backups"; /usr/bin/wget http:///hub/backupDB?fileName=latest --backups=14 --quiet

Set "--backups=14" to the number of backups you want to keep before removing older ones. I save two weeks worth before starting to drop off older ones.

Task 2, Weekly: Creates a copy of the current backup once a week and saves it - these are not automatically removed - they are kept until I manually delete them.

cd "/path where you are storing backups"; /bin/cp backupDB?fileName=latest hubitat_$(date +%Y%m%d).lzf

Thanks @aaiyar.

Would I be right in thinking that one of the differences in your script is that it is downloading the latest backup file already captured by the HE hub automatically, whereas the other methods listed are requesting the HE hub to perform a fresh backup at the time the HTTP calls are made, then downloading that file?

Simon

It’s actually the other way about. The other one is better.