Hub error 500

Hey all,

Woke up this morning to nothing working and an error 500 page. Soft reboot didn't work, had to pull the plug. Then I had to restore from backup since the hub thought it was brand new. OK now I'm back up.

Here is what the logs show. It happened at 3:05am, when nobody was up and no automations were running.

Things look pretty normal until the DB error at 3:05am, as the hub was attempting to call my Echo device.

I rebooted and restored at approx 5:40am

FW 2.1.1.116

Thoughts?
Rick

The exact same thing happened to me!
Firmware Version: 2.1.1.114

I didn't bother looking at the logs
Thank god I had a current backup!

Andy

Hmm, there seem to be several reports of this happening just recently.

Yep...I have had no issues updating my Dev hub but I have not installed the new firmware on my main hub because of these reports.

Happened to me as well last Friday. I've seen a number of other reports in the forums of a similar issue. I always have the latest firmware installed.

Ended up doing a soft reset to clean up the database and was back up in 15 minutes.

I have a script that downloads a backup of my hub every 4 hours so I have a lot of backups to minimize any losses.

:point_up: this, exactly!

My two production hubs are still running 2.1.0. My dev hub was upgraded to 2.1.1 last night, and made it through the night without any issues.

Only one of my 4 hubs was affected.
I’m still one update away from the latest on all of them.
I’ll be updating to the latest on my dev hub tomorrow

@gavincampbell, like you I have scripted backups running for all hubs.

Andy

@Cobra, how are you doing backups? rPi script?

Same here. I take mine daily at ~4am, after the hub maintenance, so the size is minimal.

But I REALLY don't want to test the process.

I also haven't created a single rule with RM 3.0 because of speculation that it might be part of the issue. I doubt that is the case, but my paranoia won't let me try it yet. Still waiting for the dust to settle on these past few updates.

I do mine via Node Red.

I do mine via an rPi (cron job); here is the curl command:

curl -JLO http://192.168.10.153/hub/backupDB?fileName=latest
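
For anyone who wants to build on that one-liner, here is a sketch of a fuller cron-friendly version. The hub IP is the one from the command above; the backup directory, retention count, and `.lzf` filename pattern are my assumptions, so adjust them for your own setup.

```shell
#!/bin/sh
# Scheduled Hubitat backup -- a sketch expanding on the curl one-liner above.
# HUB_IP, BACKUP_DIR, and KEEP are assumptions; adjust for your environment.
HUB_IP="${HUB_IP:-192.168.10.153}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/hubitat-backups}"
KEEP="${KEEP:-15}"

mkdir -p "$BACKUP_DIR"

# Use a timestamped name instead of -JLO so repeated runs never overwrite
# each other.
stamp=$(date +%Y%m%d-%H%M%S)
curl -sS --connect-timeout 5 --max-time 60 \
     -o "$BACKUP_DIR/hub-$stamp.lzf" \
     "http://$HUB_IP/hub/backupDB?fileName=latest" || echo "download failed" >&2

# Prune: keep only the newest $KEEP backups (the same idea as the 15-file
# limit in the Node-RED flow posted later in this thread).
ls -1t "$BACKUP_DIR"/hub-*.lzf 2>/dev/null | tail -n +"$((KEEP + 1))" |
    while IFS= read -r old; do rm -f -- "$old"; done
```

Drop it into crontab with something like `0 4 * * * /path/to/hub-backup.sh` to match the daily-at-4am schedule people describe above.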

Can you share your flow?

My Windows servers are always on, so I just used a good old-fashioned DOS batch file with wget.exe.

Andy

Funnily enough, I don’t use RM at all.

Probably 95% of apps on my hubs are custom.
Before now I never had a lockup on any of my hubs unless I was messing with code.

Andy

Have any of these instances of error 500 followed by hub "amnesia" been reported to Hubitat Support, and if so, what was the response?

Sure. Keep in mind that this is exported from my Windows instance of NR, so any nodes that reference a file location would need to follow Linux path formatting (assuming you are running NR on a Linux distro). Basically, flip the slashes as necessary and put in the actual folders where you plan to store the backups.

[
    {
        "id": "31f80583.137ffa",
        "type": "file",
        "z": "1a25485a.160c68",
        "name": "Save File",
        "filename": "",
        "appendNewline": false,
        "createDir": true,
        "overwriteFile": "true",
        "encoding": "none",
        "x": 660,
        "y": 40,
        "wires": [
            [
                "fcae276c.cfdc28"
            ]
        ]
    },
    {
        "id": "c916945e.2a3d68",
        "type": "inject",
        "z": "1a25485a.160c68",
        "name": "Dev Hub Daily 4:00AM",
        "topic": "",
        "payload": "",
        "payloadType": "date",
        "repeat": "",
        "crontab": "00 04 * * *",
        "once": false,
        "onceDelay": 0.1,
        "x": 150,
        "y": 40,
        "wires": [
            [
                "67731afb.2979e4"
            ]
        ]
    },
    {
        "id": "67731afb.2979e4",
        "type": "http request",
        "z": "1a25485a.160c68",
        "name": "Get backup",
        "method": "GET",
        "ret": "bin",
        "paytoqs": false,
        "url": "http://192.168.7.1.111/hub/backupDB?fileName=latest",
        "tls": "",
        "proxy": "",
        "authType": "basic",
        "x": 350,
        "y": 40,
        "wires": [
            [
                "8eadb6f5.01e168"
            ]
        ]
    },
    {
        "id": "8eadb6f5.01e168",
        "type": "string",
        "z": "1a25485a.160c68",
        "name": "Get filename",
        "methods": [
            {
                "name": "strip",
                "params": [
                    {
                        "type": "str",
                        "value": "attachment; filename="
                    }
                ]
            },
            {
                "name": "prepend",
                "params": [
                    {
                        "type": "str",
                        "value": "\\\\nodered\\\\backups\\\\dev\\\\"
                    }
                ]
            }
        ],
        "prop": "headers.content-disposition",
        "propout": "filename",
        "object": "msg",
        "objectout": "msg",
        "x": 510,
        "y": 40,
        "wires": [
            [
                "31f80583.137ffa"
            ]
        ]
    },
    {
        "id": "fcae276c.cfdc28",
        "type": "fs-ops-dir",
        "z": "1a25485a.160c68",
        "name": "# of Backups",
        "path": "\\nodered\\backups\\dev\\",
        "pathType": "str",
        "filter": "*",
        "filterType": "str",
        "dir": "files",
        "dirType": "msg",
        "x": 130,
        "y": 120,
        "wires": [
            [
                "6a93c3f4.90cd5c"
            ]
        ]
    },
    {
        "id": "6fc01da3.081d04",
        "type": "fs-ops-delete",
        "z": "1a25485a.160c68",
        "name": "Del Oldest BckUp",
        "path": "\\nodered\\backups\\dev\\",
        "pathType": "str",
        "filename": "files[0]",
        "filenameType": "msg",
        "x": 470,
        "y": 120,
        "wires": [
            []
        ]
    },
    {
        "id": "6a93c3f4.90cd5c",
        "type": "switch",
        "z": "1a25485a.160c68",
        "name": "15 File Limit",
        "property": "files.length",
        "propertyType": "msg",
        "rules": [
            {
                "t": "gte",
                "v": "15",
                "vt": "num"
            }
        ],
        "checkall": "true",
        "repair": false,
        "outputs": 1,
        "x": 290,
        "y": 120,
        "wires": [
            [
                "6fc01da3.081d04"
            ]
        ]
    }
]

@cuboy29, I forgot to add that you would need these nodes installed as well:
node-red-contrib-fs-ops
node-red-contrib-string

I think I might just create an HTTP device to check from one hub to another.
That way, if one goes down, the other can send me a message.
I can't use ping, because most of the time you can still ping a hub in this state.
This would only work if you have more than one hub, though.
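
For what it's worth, the check itself can be very simple. Here is a minimal sketch of that watchdog idea as a shell script running on a separate box; the hub IP and the notification step are placeholders, not part of the original post.

```shell
#!/bin/sh
# Cross-hub watchdog sketch. Ping still succeeds when the hub is serving an
# error 500 page, so check the HTTP status instead of pinging.

check_hub() {
    # Prints the HTTP status code, or 000 if the hub is unreachable.
    # "$1" is the hub's IP address, e.g. 192.168.10.153 (placeholder).
    curl -s -o /dev/null --connect-timeout 5 -w '%{http_code}' "http://$1/"
}

classify() {
    # A hub that answers ping but serves a 500 page is exactly the failure
    # mode in this thread, so alert on anything other than 200.
    case "$1" in
        200) echo "OK" ;;
        *)   echo "ALERT" ;;
    esac
}

# Demonstrate the classification with hardcoded status codes:
classify 200   # prints OK
classify 500   # prints ALERT -- this is where you'd send yourself a message
```

In real use you would feed `check_hub`'s output into `classify` from a cron job (or an RM/custom app on the other hub doing the HTTP GET) and hook the ALERT branch up to whatever notification service you already use.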

Andy

Here is a nice way of monitoring hub performance with Node-RED, just in case you feel adventurous enough to dive into Node-RED.

It can send you alerts via Pushover. I do something very similar, and it has proven helpful in tracking things down; my flow is just not that elaborate.

Yup. Bobby took a look at my DB and said it was in bad shape and that I should rebuild it. That is done with a soft reset followed by a restore of the last backup.

Also, all of my apps had a little (gc) in the title (using ASCII characters), which I used to easily identify anything I wrote. It was pointed out that these characters in the title could be a contributing factor, so I have removed them all, cleaned things up, and am monitoring now.

98% of my apps/drivers are custom, written or rewritten by myself. I don't get any hub lockups/slowdowns except in the morning when it's doing its maintenance, and I only notice that when I happen to be letting my dog out.

The 500 error was a db issue.

I actually do this procedure on a quarterly basis on all of my hubs. It is pretty quick, and I figured it might make a good maintenance cycle. No, the Hubitat team has not recommended anything like this, but my experience with databases is that some type of maintenance is usually required...