C-4 / C-5 / C-7 Hubs - free memory decline over time

Because every time I reboot, my Zigbee buttons (Lightify) require a double press to turn the lights on.

Interesting data.

One observation is that you have Maker on all 3 hubs. When I tried Maker, I had extreme slowdowns and lots of lockups. It all went away immediately when I removed Maker. I wonder if there is something going on with it? This was way back in probably the 2.1.9 to 2.2.2 timeframe, and I haven't re-installed Maker since.

I haven't really had trouble running Maker so far :crossed_fingers: but anything is possible. Also, I run Node-RED exclusively as my rules engine, so I kind of have to keep it for now... :grin:

@brianwilson

If you don't mind me asking, how did you get that into Grafana? What is its data source?

I saw the same recently on one of my hubs. I think the culprit was the Shelly driver; that is the only change I made recently, and the hub was extremely slow today. I removed it and rebooted.


Sure, it's from a Node-RED performance flow from here: Node-RED Flow - Hubitat Performance Monitor. I took his and tweaked it a bit so that it checks memory and response time and will issue a restart if performance drops below a certain threshold, vs. just restarting it every day.
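For anyone who just wants the rough shape of that logic without importing the flow, here is a minimal Python sketch of the same idea. The hub address, the thresholds, and the endpoint paths (`/hub/advanced/freeOSMemory` for free memory and `/hub/reboot` for the restart) are my own assumptions, not taken from the linked flow; check them against the flow and your own hub before relying on this.

```python
# Sketch: poll the hub, check free memory and response time, reboot only when
# performance degrades (instead of on a fixed daily schedule).
# Assumptions: HUB_IP, both endpoint paths, and the threshold values.
import time
import requests

HUB_IP = "192.168.1.50"          # hypothetical hub address
MIN_FREE_MEM_KB = 150_000        # hypothetical free-memory floor
MAX_RESPONSE_SEC = 2.0           # hypothetical response-time ceiling


def check_hub():
    """Return (free_mem_kb, response_sec) from the hub."""
    start = time.monotonic()
    resp = requests.get(f"http://{HUB_IP}/hub/advanced/freeOSMemory", timeout=10)
    elapsed = time.monotonic() - start
    resp.raise_for_status()
    return int(resp.text.strip()), elapsed


def maybe_reboot():
    free_kb, elapsed = check_hub()
    print(f"free memory: {free_kb} KB, response time: {elapsed:.2f} s")
    if free_kb < MIN_FREE_MEM_KB or elapsed > MAX_RESPONSE_SEC:
        # Restart only when the hub is actually struggling.
        requests.post(f"http://{HUB_IP}/hub/reboot", timeout=10)


if __name__ == "__main__":
    while True:
        maybe_reboot()
        time.sleep(300)  # poll every 5 minutes
```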


Which Shelly driver, the Hubitat version?

I'm still on 2.2.3.148 and was using the Shelly driver from them, not the built-in one.

After making sure the sluggishness doesn't return for a few days, I will try the built-in one again and see. When I tried it, status was not being updated without polling.


Here's my memory chart. A hub reboot was needed at each of the low points on the chart, due to either an unresponsive hub or Zigbee going offline.


Have you ever run hub stats?

I have not - just starting to get into it.

Post screenshots here once you do the stats, and maybe we can spot something...

And @cuboy29, I would be really interested to see your hub stats; that is a very unusual-looking graph.


Does anyone have an unused "empty" hub they can let run with only the OS? Would be interesting to see.

I do - 2 of them. The memory slowly goes down, then settles in at a flat line over time.

Unfortunately I just rebooted Node-RED a few days ago, and I don't retain history on these graphs.

C-4 hub running since 12/11/20:

C-5 hub. This one I rebooted today... for 'reasons'. Before the obvious reboot it had been running since 12/11/20.

I got my hub a little over a month ago as well and decided to make my own graph to see my results. Very interesting.

Haven't had any indication that I'd need to restart yet. Low memory isn't necessarily a bad thing either. If the memory is there and the hub is using it for a good reason, then I have no complaints.

I'll keep a close eye on this and see how this goes into the future.

I would watch the Free JVM number. That is the only one I have seen correlate very strongly to hub crashes/lockups/slow response.

For me anyway - it may not be the best indicator for everyone. When mine goes much below 100k for anything other than a small dip, it is usually lights-out time.
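If you want to turn that rule of thumb into something automatable, one option is to only flag the hub when Free JVM stays low for several consecutive samples, so a brief dip doesn't trigger anything. A hedged sketch, assuming you already have some way of reading the Free JVM value; the floor and sample count are arbitrary choices of mine:

```python
from collections import deque

FREE_JVM_FLOOR_KB = 100_000   # "much below 100k" rule of thumb from above
CONSECUTIVE_SAMPLES = 6       # hypothetical: ~30 min of low readings at 5-min polls

recent = deque(maxlen=CONSECUTIVE_SAMPLES)


def record_sample(free_jvm_kb: int) -> bool:
    """Record one Free JVM reading; return True only when every recent sample
    is low, i.e. a sustained decline rather than a small dip."""
    recent.append(free_jvm_kb)
    return len(recent) == CONSECUTIVE_SAMPLES and all(
        v < FREE_JVM_FLOOR_KB for v in recent
    )
```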



I posted the stats and @gopher.ny already looked into it but didn't see anything interesting. He's recommending that I power cycle my hub twice a week for now since it's a C-4 hub.

I might jump to a C-7 soon, so I'm waiting to see how things are for you guys with the C-7 first :slight_smile:


I see your "total JVM" increased. Up to now I assumed total JVM was hardware dependent, kinda like total hard disk capacity.

Actually, as I am reading this thread, I have it stuck in my mind that the Hubitat folks are working on this, along with some major modifications to how the 700 series chip is handled.


@JasonJoel - which endpoint do you poll to get the JVM memory stats?