Getting Severe CPU Load on Hub with only a few devices and apps

This is the hub I have in my RV. It controls very few devices and has only a few apps. Total time spent in apps and devices seems relatively small: 3.8% for devices and 1.6% for apps.

[Screenshot: RV hub device/app stats, 2021-03-08]

-Jeremy

Those are really big numbers for a day or less of stat collection time.
Perhaps look into reducing the polling/reporting intervals you have set, particularly for the battery and solar devices.

I'm not sure I understand what you mean by "a day or less of stat collection time"? The hub has been up for 27 days, so wouldn't that be 27 days of stat collection time?

I believe what this is saying is that for the past 27 days, about 24 hours of time was spent on one of these devices (3.8% of 27 days is roughly 24.6 hours). 3.8% doesn't seem like much to me?

The battery, solar, and inverters are really just one network device (MQTT). There's a Victron hub that controls those devices and it reports status updates via MQTT. So those devices in Hubitat are just subscribing to various MQTT feeds from that single Victron device. They aren't being polled: I just subscribe and let the MQTT feed deliver updates as they come in.
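For illustration, here's a minimal sketch of that subscribe-and-wait pattern using the Eclipse Paho Java client rather than actual Hubitat driver code; the broker address and topic filter below are placeholders, not my real Victron topics:

```java
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class VictronSubscriber {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and client ID -- the real Victron broker will differ.
        MqttClient client = new MqttClient("tcp://victron.local:1883", "rv-hub-demo");

        client.setCallback(new MqttCallback() {
            @Override
            public void messageArrived(String topic, MqttMessage message) {
                // Every published update lands here, whether or not the value changed.
                System.out.println(topic + " -> " + new String(message.getPayload()));
            }

            @Override
            public void connectionLost(Throwable cause) {
                // Reconnect logic would go here.
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
                // Not used; this client only subscribes.
            }
        });

        client.connect();
        // Subscribe once; the broker pushes updates, so there is no polling loop.
        client.subscribe("victron/battery/#"); // placeholder topic filter
    }
}
```

The point is that nothing on my side asks for data; the Victron hub decides how often to publish.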

Let's just take the LFP battery: the driver fired 1,949,417 times in approximately 27 days.
That's about 72,200 times per day, 3,008 per hour, and 50 per minute. Whether or not each of those device parse calls yielded an event, that's a large amount of busy work.
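Here's the rough math behind those figures, assuming the 27-day uptime mentioned above:

```java
public class RateCheck {
    public static void main(String[] args) {
        long parseCalls = 1_949_417L; // driver invocations from the stats page
        double days = 27.0;           // hub uptime

        double perDay    = parseCalls / days;  // ~72,200
        double perHour   = perDay / 24;        // ~3,000
        double perMinute = perHour / 60;       // ~50

        System.out.printf("per day: %.0f, per hour: %.0f, per minute: %.0f%n",
                perDay, perHour, perMinute);
    }
}
```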

You need to figure out a way of reducing the number of MQTT events being generated by the Victron hub. That's where all your resources are going, and very little of it is actually firing off your automations.
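One generic way to cut that down, sketched in plain Java rather than Hubitat driver code (the class and method names here are made up for illustration), is to drop any update whose value hasn't actually changed, so only genuine state changes turn into events:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Forwards an MQTT payload only when it differs from the last one seen for that topic. */
public class ChangeOnlyFilter {
    private final Map<String, String> lastSeen = new ConcurrentHashMap<>();

    /** Returns true only if the payload for this topic changed since the previous message. */
    public boolean shouldForward(String topic, String payload) {
        String previous = lastSeen.put(topic, payload);
        return previous == null || !previous.equals(payload);
    }
}
```

The same idea applies whether the filtering happens in the driver or, better, at the Victron side by lengthening its publish interval: if the incoming value is identical to what's already stored, skip the event entirely.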


This seems like trying to shift blame and distract from the main issue.

If the hub could handle this number of parse calls per minute on day 1, day 2, day 3, and so on, why is it on day 27 that it suddenly cannot? It's only spending 11 ms per call. Up until today this hub was running just fine with no noticeable slowness.

I have three different hubs, each carrying a completely different type of workload, and I have this same issue on every hub I own. People here have been complaining loudly, in large numbers, about hubs getting slower over time for a solid year or more now. The auto-rebooter app is one of the most popular apps for Hubitat hubs. There is clearly a memory leak of some kind in the Java code for the hub, because after the hub slows to a crawl you eventually start getting this screen:

As a Java developer myself, I would bet money that you have a memory leak (yes, you can get those in Java too). As memory pressure develops from referenceable objects that cannot be garbage collected, you end up having to run more and more GC (garbage collection) cycles to try to free up enough memory for normal threads to run. This causes the CPU load: the garbage collector has to search through larger and larger numbers of objects trying to find some that are no longer referenced and can be freed, which produces a huge CPU spike as available memory dwindles to almost nothing.
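The classic shape of such a leak, as a made-up illustration rather than anything from the hub's actual code, is a long-lived collection that keeps every object reachable so the collector can never reclaim them:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustration of a Java "memory leak": objects that stay reachable can never be collected. */
public class LeakDemo {
    // A static, ever-growing collection keeps every payload strongly reachable.
    private static final List<byte[]> EVENT_HISTORY = new ArrayList<>();

    static void handleEvent(byte[] payload) {
        EVENT_HISTORY.add(payload); // nothing is ever removed
    }

    public static void main(String[] args) {
        // Run with a small heap (e.g. -Xmx64m) and GC logging (-verbose:gc) to watch
        // collections become more frequent and longer until OutOfMemoryError hits.
        while (true) {
            handleEvent(new byte[10_240]); // ~10 KB per "event"
        }
    }
}
```

Long before the OutOfMemoryError actually appears, the JVM spends most of its time in back-to-back GC cycles, which looks exactly like a mysterious CPU load spike.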

On my other hubs I have it set to auto reboot nightly to avoid this issue. I left this hub alone because it takes a very long time for this hub to slow to a crawl and I figured it would be helpful for me to report these issues to you guys rather than just working around it with a nightly reboot, especially since the new usage stats were added.

But if you're more interested in victim blaming than actually finding the root cause of your memory leaks I can just set up the auto-rebooter on this hub as well and stop trying to offer feedback.

-Jeremy


Well, I can't provide accurate advice if you only give me a limited subset of the data, in particular that you have other hubs and that you need to reboot them constantly to keep things going...

Having said that, memory leak or not, 11 ms or not, your system is way out of balance with regard to the number of event calls the driver sees versus the number of events that are actually making it to your apps.

I'm sure you can appreciate the limited resources available, so why make matters worse than they need to be?