So I am just going to put this out there.
Below are the spikes on my C4 when I was creating a flow for Node-RED to read UPS data and send it to a virtual driver. Those spikes occurred because the Node-RED palette used to access the APC UPS daemon required sub-13-second refreshes. So every 12 seconds the flow was processing and sending about 10 updates. I had to update the flow to not send updates if the value didn't change, and to consolidate some of the updates into one update. Ultimately it is stable now, but when doing that with 2 devices it killed the dev hub. It is very easy to create something in Node-RED that works fine and should work, but can overwhelm the hub with too many updates at once.
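The dedupe step was the part that made the biggest difference. Here's a rough TypeScript sketch of the idea (made-up names, not my actual flow; in Node-RED this logic would sit in a function node keeping the last values in flow context):

```typescript
// Sketch: only pass along UPS attribute updates whose value actually changed
// since the last send, so the hub doesn't get hammered every poll.
type Update = { attribute: string; value: string | number };

const lastSent = new Map<string, string | number>();

// Returns only the updates that changed; everything else is dropped.
function changedOnly(updates: Update[]): Update[] {
  return updates.filter((u) => {
    if (lastSent.get(u.attribute) === u.value) return false; // unchanged, drop it
    lastSent.set(u.attribute, u.value);
    return true;
  });
}

// First poll: everything is "new", so both go through.
console.log(changedOnly([
  { attribute: "batteryPercent", value: 100 },
  { attribute: "loadWatts", value: 120 },
]));
// Second poll: only the value that actually moved gets through.
console.log(changedOnly([
  { attribute: "batteryPercent", value: 100 },
  { attribute: "loadWatts", value: 130 },
]));
```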
I have never installed Homebridge, so I don't know what their directions say. There may not be any, considering you probably send events from HE back to those respective devices.
If you click on Logs on the left side of the hub UI and then click on App Stats across the top, what does it show there for load? You may also want to look at the Device Stats tab as well.
That can possibly provide hints as to what is creating the load.
Nope, you are correct - this is the way to do it.
I agree with @mavrrick58 - it's very easy to spam HE depending on how many devices you have and what you are doing. I've ended up adding small delays between HE Command Nodes and generally try to avoid making direct requests that bypass the HE nodes. Checking the app logs to see if the usage is high is definitely a good start.
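To illustrate the spacing idea, here's a rough TypeScript sketch (made-up names, not my actual flow; in Node-RED the built-in Delay node set to rate-limit mode does the same job without any code):

```typescript
// Sketch: send commands one at a time with a small gap, so the hub sees a
// trickle instead of a burst of simultaneous requests.
async function sendSpacedOut(
  commands: string[],
  send: (cmd: string) => Promise<void>,
  gapMs = 250, // small gap between commands
): Promise<void> {
  for (const cmd of commands) {
    await send(cmd);
    await new Promise((resolve) => setTimeout(resolve, gapMs));
  }
}

// Stand-in sender that just logs; a real flow would hand off to the HE Command node here.
sendSpacedOut(
  ["light1/on", "light2/on", "fan/off"],
  async (cmd) => console.log("sending", cmd),
);
```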
I have about 60 Z-Wave devices and 5 group devices exposed to Maker API on my C7. Have not had to reboot or anything. Note: my C7 is Z-Wave only; I also have 2 C5s - one for Zigbee, the other for Cloud/Network stuff.
Can you PM me your hub's UID? I can take a look at the engineering logs on the hub.
I think I PM’d you.
Exactly - anything LAN I put on my C4 caused it to drag. It seems the hub is easily overwhelmed. Since removing all but 1 LAN device it's better, but it still needs reboots 3-4 times per day. The DB swells from 35 MB at boot to over 120 MB after a few hours. Soft reset and shutdown are no help; the DB continues to grow.
What apps do you have loaded on that thing? Also, you should tag @support_team so they can look at your engineering logs.
The only time I have seen a DB grow and grow uncontrollably was when I had a corrupt database. I would continue to try soft resets until that stops happening. I have had a few occasions where it took 2 or 3 to correct the issue.
Have done multiple soft resets, no change. I rebooted 3 hours ago and the DB is at 93 MB now, with another timed reboot in 2 hours. It's been like this for over a year, and support has no idea. No exotic apps loaded, just the usual Rule Machine, Maker API, and Chromecast (beta for 87 years). HubConnect is the only 3rd-party app, and it has always worked well.
Can you PM me your hub id? I can take a look at the engineering logs on the hub...
What do you have using Maker API?
Adding in here. My hub has been very stable since January when I was able to clear up some processor spikes.
Since being on 2.3.2.231, I have had almost daily lockups. I'm not saying it's that, but I have installed no new drivers or apps. The only other change I made was switching a Z-Wave sensor from hardwired power to battery power.
I have rolled my hub back to 2.3.1 to check if the lockups stop. I also pinged @bobbyD and @gopher.ny in hopes they can look at my engineering logs.
Please try 2.3.2.132 when you get a chance... there was a sneaky (as in no errors or other indications of failure) bug in earlier 2.3.2s that prevented CPU limiting from kicking in. That's been fixed.
Which sensor is this? And do you know if it functioned as a repeater when it was hardwired? If it did, this change could actually be a very significant change to your Z-Wave mesh.
Tagging @bcopeland.
At least one device, the Aeotec MS6, has been notoriously bad at routing due to a limited cache. I used to have powered MS6s all over our basement, basically surrounding the main hub, and had no end of difficulties. I'm sure the MS6s were not completely to blame, but they certainly contributed to the instability. After replacing them with powered Inovelli Z-Wave and old Iris Zigbee (w/hacked power) sensors, everything seemed to settle down. I still have two left (soon to be one) that are at the edge of my mesh and working fine.
This should have required an exclude and re-include, which would have at least notified the hub of the change, but @aaiyar is right that, depending on how thin your mesh is at that location, it may have caused a fair amount of re-routing.
Unless the switch was made without an exclude/re-include, in which case the device would've kept functioning as a repeater until the batteries ran out, and then all of a sudden you lose a repeating node and all hell breaks loose in the mesh.
I’ve known others to have done this.
HE dashboards and Joe Page's Hubitat Dashboard
Anything on @jpage4500's dash that is constantly polling Maker API? If you disable Maker API, does the database issue continue?