[SOLVED] Downloading Latest Backup File

I'm using a bash script to download the latest backup of my Hubitat database. I am using the following link in my wget command:

http://[hubip]/hub/backup?fileName=latest

What's happening though is that the hub is creating a new backup and downloading that, rather than the latest backup that is available. Am I using the wrong link or is this the only method available? I can't just download the last file the hub created?

This is the way it has always worked. With that command you are essentially creating and saving a snapshot at that point in time.
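For example, a minimal one-liner (the output filename here is arbitrary; hub backups are .lzf files):

wget "http://[hubip]/hub/backup?fileName=latest" -O snapshot.lzf

Every run of that asks the hub to generate a fresh backup and streams it down to you.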

I've never tried but you could probably download the other backups by name. Of course you would need to know the full name in advance so it could not be automated.

I only needed a snapshot so I'm good as is but hopefully someone knows a method to download previous backups without knowing the actual filename.

Yeah... either that, or disable the function where Hubitat does its own backup if one was created manually in, say, the last 2 hours. That way the hub isn't trying to make another backup when one was just created. The backup process seems to be pretty labor-intensive for the hub (at least from anecdotal observation).

Yep... that's why I do my automatic backups at 4am every morning. The hub maintenance will have already completed, so the file size will be at its minimum.

@Ryan780,

here is a bash command line that downloads the most recently created backup and doesn't create a new one.

EDIT: moved to github to prevent formatting issues: https://raw.githubusercontent.com/danTapps/Hubitat/master/latest_backup.sh

No need to know the filename; it is parsed out of the HTML.
Let's hope that the HTML on that particular page never changes :slight_smile:
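For anyone who can't fetch the link, the general shape is below. This is a sketch, not the exact script: the hub IP is a placeholder, the data-fileName markup is taken from the error output further down, and it assumes that requesting an existing backup by name returns that file rather than triggering a new one.

#!/bin/bash
# Sketch only -- grab the real script from the GitHub link above.
HUB=192.168.7.111                          # placeholder: your hub's IP
# Scrape the backup page for data-fileName=<name>.lzf entries; the names
# begin with a date, so a plain sort puts the newest last.
LATEST=$(curl -s "http://$HUB/hub/backup" \
    | grep -oE 'data-fileName="?[^" >]*\.lzf' \
    | sed -E 's/^data-fileName="?//' \
    | sort | tail -n 1)
# Assumption: asking for an existing file by name downloads that file
# instead of creating a new backup.
curl -s "http://$HUB/hub/backupDB?fileName=$LATEST" -o "$LATEST"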

Sorry, that doesn't work.

Warning: Failed to create the file
Warning: <tdclass=mdl-data-table__cell--non-numeric><aclass=downloadmdl-buttonm
Warning: dl-js-buttonmdl-button--raisedmdl-js-ripple-effecthref=#data-fileName=
Warning: 2019-06-29~2.1.1.122.lzf>Download</a></td>: No such file or directory
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (23) Failed writing body (0 != 1170)

Interesting...... let me take a look....

My string is getting formatted differently by Discourse....

Uploaded it to github, go get it from here:

https://raw.githubusercontent.com/danTapps/Hubitat/master/latest_backup.sh

That worked! Thanks!!

@dan.t, I don't speak curl. Any way you could translate that into something I could use with NR?
This is the relevant part of the flow I currently use.

[
    {
        "id": "4e1c7a5c.6f5574",
        "type": "file",
        "z": "a5638902.63e4a8",
        "name": "Save File",
        "filename": "",
        "appendNewline": false,
        "createDir": true,
        "overwriteFile": "true",
        "encoding": "none",
        "x": 760,
        "y": 740,
        "wires": [
            []
        ]
    },
    {
        "id": "358aebb5.0850d4",
        "type": "inject",
        "z": "a5638902.63e4a8",
        "name": "Dev Hub Daily 4:00AM",
        "topic": "",
        "payload": "",
        "payloadType": "date",
        "repeat": "",
        "crontab": "00 04 * * *",
        "once": false,
        "onceDelay": 0.1,
        "x": 250,
        "y": 740,
        "wires": [
            [
                "6c262e2f.f4ddd"
            ]
        ]
    },
    {
        "id": "6c262e2f.f4ddd",
        "type": "http request",
        "z": "a5638902.63e4a8",
        "name": "Get backup",
        "method": "GET",
        "ret": "bin",
        "paytoqs": false,
        "url": "http://192.168.7.111/hub/backupDB?fileName=latest",
        "tls": "",
        "proxy": "",
        "authType": "basic",
        "x": 450,
        "y": 740,
        "wires": [
            [
                "271d56fe.3c61fa"
            ]
        ]
    },
    {
        "id": "271d56fe.3c61fa",
        "type": "string",
        "z": "a5638902.63e4a8",
        "name": "Get filename",
        "methods": [
            {
                "name": "strip",
                "params": [
                    {
                        "type": "str",
                        "value": "attachment; filename="
                    }
                ]
            },
            {
                "name": "prepend",
                "params": [
                    {
                        "type": "str",
                        "value": "\\\\mylocation"
                    }
                ]
            }
        ],
        "prop": "headers.content-disposition",
        "propout": "filename",
        "object": "msg",
        "objectout": "msg",
        "x": 610,
        "y": 740,
        "wires": [
            [
                "4e1c7a5c.6f5574"
            ]
        ]
    }
]
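(In words: the inject node fires at 4:00 AM, the http request node GETs /hub/backupDB?fileName=latest, the string node strips "attachment; filename=" off the Content-Disposition header to recover the filename, and the file node writes the payload there.) A rough bash equivalent of those steps, with the IP and save path as placeholders:

# Download the backup and capture the response headers.
curl -s -D /tmp/he_headers \
    "http://192.168.7.111/hub/backupDB?fileName=latest" -o /tmp/he_backup.lzf
# The hub names the file via Content-Disposition, e.g.
# "attachment; filename=2019-06-29~2.1.1.122.lzf" -- strip the prefix.
NAME=$(grep -i '^content-disposition' /tmp/he_headers \
    | sed 's/.*filename=//' | tr -d '\r"')
mv /tmp/he_backup.lzf "/path/to/backups/$NAME"   # placeholder path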

Yes, you can. Your flow downloads a new backup based on the HTTP call you make. The curl command above downloads the latest backup that was already made and doesn't create a new one. Both ways work. I am sure you could create a flow in NR that does the same thing. All I do in that curl/sed command is parse the HTML of the backup page to find the last backup that was taken, then download it.

Again, nothing wrong with your solution, I was just trying to give Ryan a solution to his particular question.

I am old school and like the command line. Anything I can do with curl/grep/sed will be done that way instead of taking on the overhead of a different solution like NR... I still prefer the DOS box on Windows over Windows Explorer :grimacing:

I think you may have misunderstood my post. I understand the difference between my method and yours and would like to use your way instead. However, I don't know how to translate your curl command into an NR flow. I could probably figure it out through Google/trial and error, but I was hoping you could walk my lazy butt through doing the same thing in a flow :wink:

For anyone wondering... I'm running a simple bash script via cron on the RPi that runs my Assistant-Relay and Cast-web-API node servers. The script first deletes all but the newest 3 files, then downloads the newest (see the sketch below). That way I only have 4 files at any one time. Save it to any filename you like, do a quick chmod +x, and you're ready to add your cron job.
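The rotation step is roughly this (a sketch, not my exact script; the directory and script name are placeholders, and the download itself is dan.t's curl/sed approach linked above):

#!/bin/bash
# Keep only the 3 newest backups...
cd /home/pi/hubitat-backups || exit 1      # placeholder path
ls -t *.lzf 2>/dev/null | tail -n +4 | xargs -r rm --
# ...then download the newest one (dan.t's script goes here).

Then make it executable and schedule it, e.g. for the 4am run mentioned earlier:

chmod +x /home/pi/hubitat-backup.sh        # placeholder script name
# crontab -e, then add:
0 4 * * * /home/pi/hubitat-backup.sh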

Yes, I didn’t get that :wink:

Let me take a look at it, shouldn’t be too hard...

@stephack,

here you go. The flow looks like this:

[flow screenshot]

You need to install two plugins:
node-red-contrib-string
node-red-contrib-cheerio-function

You need to edit the two "http request" nodes to change the IP address for your hub
You need to adjust the "string" node to change the path. Right now it is set to /var/log/
You also need to add your "timer" to start the flow, right now I am using a manual inject node called "get data"
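(In the flow JSON, that timer is just the inject node's "crontab" field -- stephack's flow above uses "00 04 * * *" for a daily 4:00 AM run.)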

I have uploaded the flow here: https://github.com/danTapps/node-red/blob/master/HEdownloadLastBackup.json

Let me know if you have any problems.

Thanks Dan!
I'll check it out soon.

Hey Ryan,

Could you post this again on GitHub? I copied what is in your post, but I'm not sure if that's all. I think this is the easiest way to use my RPi.
Thanks.

Take a look here: https://raw.githubusercontent.com/danTapps/Hubitat/master/latest_backup.sh

Ryan took that and just edited it to delete all but the last 3 backups. The script I linked here is the base for this but keeps all of the files

Thanks. Unfortunately, I notice this doesn't log in to the hub, and I have login enabled :pensive:

I know that @aaiyar has a script for rebooting with login enabled; you could see if you can take his and modify the curl command to download the backup.
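The login portion would look something like the below (a sketch; the /login form field names are an assumption based on other community scripts, so verify them against @aaiyar's version):

# Sketch: authenticate once, keep the session cookie, reuse it.
HUB=192.168.7.111                          # placeholder: your hub's IP
curl -s -c /tmp/he_cookie \
    -d username='YOUR_USER' -d password='YOUR_PASS' \
    "http://$HUB/login"
# Pass the cookie jar to the backup request.
curl -s -b /tmp/he_cookie \
    "http://$HUB/hub/backupDB?fileName=latest" -o latest.lzf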
