NodeRed generates excessive hub load

Well, I've never seen this one before... app:613 is my Node-RED, and I suppose it can misbehave - I just wish I could find out what is happening

When looking at this, it does produce A LOT of events... Is there a way to tell what it was calling so that I can start figuring out which nodes are misbehaving?

It appears you have one or more badly constructed flow(s) or sequence(s).

well duh... thanks cap :slight_smile: that's the basic premise.

I guess it isn't clear to me what you're trying to get help with, since you haven't shared your flows or sequences, and the Node-RED integration isn't a built-in app.

FWIW, I've used that integration, almost exclusively for automation, for the last 3+ years. And this is what the Node-Red integration stats look like on my two hubs:

Interesting, I presume it's inherently chatty. Well, I guess I didn't pose the question correctly, and kind of answered it already - I needed to turn on debug for the app itself, and holy cow, is it chatty...

Nope.

Although it is possible to create flows and sequences that will create race-like conditions.

It most certainly is.
I mistakenly created a timer loop which seriously slowed everything down.
The only way I found it was by going through every flow that I had recently changed and managed to find the offending flow.

When I first started using NR, I made a similar error.

The solution (for me) was to make extensive use of variables to keep track of device state; and to synchronize variables after a hub reboot or after restarting NR.

In the 3 years since, I've never run into this issue, nor had a mismatch between device state and variable value.
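For anyone curious what this looks like in practice, here is a minimal sketch of the "track device state in variables" idea described above: cache each device's last known state (in Node-RED you'd typically use flow/global context) and only send a command when the desired state differs from the cache. All names here are illustrative, not part of any particular integration.

```javascript
// Sketch: cache device state and skip redundant commands.
// In Node-RED the cache would live in flow/global context; here it's a closure.
function makeStateTracker(sendCommand) {
  const state = {}; // deviceId -> last known state
  return {
    // Call on every device event (and after a hub reboot / NR restart resync)
    // to keep the cache in concordance with actual device state.
    update(deviceId, value) {
      state[deviceId] = value;
    },
    // Only issue a command if it would actually change anything.
    set(deviceId, desired) {
      if (state[deviceId] === desired) return false; // redundant: skip
      state[deviceId] = desired;
      sendCommand(deviceId, desired);
      return true;
    },
  };
}

const sent = [];
const tracker = makeStateTracker((id, v) => sent.push(`${id}:${v}`));
tracker.update("porchLight", "off"); // event from the hub
tracker.set("porchLight", "on");     // state differs: command sent
tracker.set("porchLight", "on");     // already on: suppressed
console.log(sent); // ["porchLight:on"]
```

The key is the resync step after a reboot or restart: if the cache drifts from reality, the tracker will suppress commands it should send.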

I've gone down the same route. It also makes it easier if a device is replaced for whatever reason. Only a few flows need to be amended should I have to do this.

Sorry. Hijacked the thread. :zipper_mouth_face:

Like @aaiyar, I have Node-RED connected to all 4 of my Hubitat hubs, and although it's the #1 app on 3 of the 4 hubs:



As you can see, the #1 app on each of my hubs is using less than 0.5% of the total, so I would not say it's inherently chatty. All those hubs have 45 days of uptime and have used only a couple of hours each on the tasks it performs. I have more fear of growing icicles than anything else :slight_smile:

Well, I am wondering if this "excessive load" message is something new. I haven't changed NR (or at least the offending routine) in at least a year. Basically, I have 13 SensorPush sensors around the house, and when either temperature or humidity changes on any of them, I update a combined omni sensor. I just changed that to run in webCoRE on a 5-minute interval - quite enough for me to see all the info I need.

No. It's not new. Been around for at least 1 year.

I have some sequences exactly like that. Thirteen temp sensors and eleven humidity sensors. When any of them change in value, I have NR recalculate an average (for three different conditions), and then store the data using MQTT as well as populate a virtual temp/RH sensor.

I've been running this sequence since October 2020. The virtual sensor gets an update sometimes as frequently as every 30 seconds. Yet, I've never seen that message arise from this sequence.

Well, you seem to update 1-2 omni sensors :slight_smile: I was updating 13. But again, that's not really the point - I have never seen this kind of message pop up before.

??

Actually - no. I update 15 sensors. Those device nodes you see are all virtual. They get updated from sensors paired to z2m.

Well, you READ the data from 15 sensors, but then on the right - on the update side - only 1-2. I basically had a 1-to-1 update, with the exception that both humidity and temperature were updating one omni sensor.

Those are the sequences I showed. Here's the rest of them - as I indicated, the sensors in those flows on the left get their data from 15 sensors paired to z2m. Here's a partial snapshot of those sequences.

These sensors update every 30s to 3 minutes.

There's no reason why your 13 sensors should give that message. I suspect something else is wrong in your NR setup.

Do you do any kind of msg processing to control the size/frequency of changes? I use change nodes to round values with more than 1 decimal point down to 1 decimal point, followed by filter nodes. So a change from xx.101 to xx.110 will not cause an update via Maker API, but a change from xx.101 to xx.151 will.
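The round-then-filter approach above is normally built from a Change node feeding a Filter (RBE) node, but the logic is easier to see written out as a plain function. This is just a sketch of the idea, not any specific node's implementation:

```javascript
// Sketch: round a reading to 1 decimal place, then suppress the message
// if the rounded value hasn't changed since the last one that passed.
function makeRoundAndFilter() {
  let last; // mimics per-node context storage in Node-RED
  return function (value) {
    const rounded = Math.round(value * 10) / 10; // xx.101 -> xx.1
    if (rounded === last) return null;           // unchanged: drop the msg
    last = rounded;
    return rounded;                               // changed: pass it through
  };
}

const filter = makeRoundAndFilter();
console.log(filter(21.101)); // 21.1 (first value always passes)
console.log(filter(21.110)); // null (still rounds to 21.1, suppressed)
console.log(filter(21.151)); // 21.2 (rounded value changed, passes)
```

The effect is exactly what the post describes: jitter in the second and third decimal places never reaches Maker API, so the hub only sees meaningful changes.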

The moral of the story is that if you are getting excessive load messages on the Hubitat side from a Node-RED integration, the issue is always on the Node-RED side, not the hub side.

Maker API is well proven at this point and has been thoroughly tested (by many) in terms of its throughput and behavior in edge loading cases.

To troubleshoot, turn on debug logging on the Maker API instance. Then look at the Node-RED logs as well as the Maker API logs to see what is going on, specifically looking for high frequency reads/writes. A misconfigured flow can read/write data a hundred+ times a second, if that's the issue it should be VERY easy to spot.

Looking at the graphic in the first post, with a little bit of math you have an update coming in about every 1.5 seconds. That is pretty frequent, depending on the complexity of the flow.

I have seen one device take down my dev hub. It was a virtual device I was creating to monitor my UPS. It has 10 metrics, and the way the node was written in Node-RED, it polled the computer connected to the UPS every 12 seconds. Originally the flow in Node-RED was set up to send all of the values every 12 seconds, with all 10 metrics joined together into one command in the flow. I never understood why, but that killed the hub. The way I ended up solving it was to break out the metrics that have infrequent updates and put them in their own path, submitting the updates through an RBE node to filter them. Ironically, it makes more calls this way, but they are less complex.
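The split-per-metric fix described above can be sketched as follows: instead of sending all UPS metrics as one combined command every poll, route each metric separately and only forward values that actually changed (report-by-exception, what the RBE node does). The function and metric names are illustrative, not from the actual flow:

```javascript
// Sketch: per-metric RBE dispatch. Each metric is forwarded independently,
// and only when its value differs from the last one forwarded.
function makeMetricDispatcher(send) {
  const last = {}; // last forwarded value per metric name
  return function (metrics) {
    for (const [name, value] of Object.entries(metrics)) {
      if (last[name] !== value) { // report-by-exception
        last[name] = value;
        send(name, value);        // e.g. one Maker API call per changed metric
      }
    }
  };
}

const calls = [];
const dispatch = makeMetricDispatcher((n, v) => calls.push(`${n}=${v}`));
dispatch({ load: 25, batteryVoltage: 13.6 }); // both new: 2 calls
dispatch({ load: 25, batteryVoltage: 13.5 }); // only voltage changed: 1 call
console.log(calls); // ["load=25", "batteryVoltage=13.6", "batteryVoltage=13.5"]
```

This matches the poster's observation: there can be more calls overall, but each one is a simple single-attribute update instead of one large combined command.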

Can @xbohdpukc post the flow, like others have done, along with any values that may determine the polling rate for the devices?

Side note: the excessive load messages are coming in at like 50-100 ms intervals, too... That can't be helping hub load at all, @gopher.ny... Shouldn't that be throttled somehow (or does it not add enough load/write/db access to worry about)?
