What could be wrong with this RM rule?

One of my RM rules creates a very high load on the C8 hub.
This rule had been running OK on the C7 hub for more than a year
(I assume so, because that hub never issued any warnings).

Here are the reported app statistics:

And here is the rule in question:

This rule monitors 7 weather parameters from a local Ecowitt gateway, updates the
corresponding Local Variables, and sends the data to an 8x64-pixel LED display.
The LED display is driven by a Pixelblaze and uses a custom driver.
I even added a 5 sec blocking timeout to prevent the rule from running too often.
Any ideas what could be wrong and how to improve this rule?
Is the problem with the custom driver this rule uses?
I am using about 7 different LED projects based on the same SW/HW, but
only this one requires periodic updates to the external Pixelblaze device.

Instead of using the variables and a trigger on every one of them, you could use the attributes already built into the driver and the HTML templates, with much, much less overhead. You could also lower the overhead by reducing the number of triggers: the rule is called for each of them, and I believe they are all (or mostly) set at the same time.

Idea: write one trigger and put in a 5 or 10 sec delay (to give the hub time to update all the attributes from that Ecowitt report), then set your variables from the custom attributes in the driver. This way the rule should only be called once per update.

You probably don't need the Private Boolean or any of that with this approach, as the rule should only be called once every x minutes.
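In Rule Machine terms, the suggestion above might be sketched roughly like this (pseudocode only; the attribute and variable names here are placeholders, not the actual ones from the rule):

```
Trigger: Ecowitt Gateway -> <one attribute> *changed*   (single trigger instead of seven)

Actions:
    Delay 0:00:05    (give the hub time to finish updating the other attributes)
    Set Local Variable temp to Ecowitt Gateway : temperature
    Set Local Variable hum  to Ecowitt Gateway : humidity
    (repeat for the remaining parameters)
    Send the variables to the Pixelblaze display
```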

All the rule triggers are attributes, not variables.
But I need the variables updated with the attribute values in order to pass them to the
Pixelblaze driver.

Are you suggesting to use a single Periodic trigger?
I already tried this with a 12 sec period. It did not reduce the hub load, and according
to @bertabcd1234's comment it was a very undesirable approach. So I went back to triggering
on attributes, but also added a sort of debouncing by using a Private Boolean and a 5 sec delay
before the next potential trigger event. It looks like this still did not reduce the hub load.

I didn't necessarily mean that you shouldn't do it, just that an event-based approach is almost always better. If you aren't having problems, either way could work.

Have you tried a longer delay? Can you tell from the Logs (enable them if you haven't, at least just to look for now) or just from device events what might be waking the rule so frequently? It's hard to say more with only the stats.

No, a single *changed* trigger. You're misunderstanding Ecowitt: all attributes come in at the same time from the gateway, so you only need one trigger. The other attributes will be updated during the same time period. That is what the delay is for: just to give Hubitat time to update all the attributes before you pull them into the variables.

OK, I must be missing something. And thank you for the tip.

So which attribute should I use as the single trigger:

"lastUpdate" or "status"?

Excellent suggestion. Thank you very much. I did miss this option.
I changed the rule to trigger on the "lastUpdate" attribute, and this works as expected.
Now the rule will trigger every 3 min (this is a driver built-in function) and the hub load
should be significantly reduced.


It is also not built in: you can change the interval in your Ecowitt app on your phone when you set it up. The default is every 3 minutes, but you can change it to 5, 10, etc. if you want it to report less often and lower the hub load even more. Most people leave it at 3 minutes, but I have a couple in our condos in FL checking in only every 10 minutes.

Could you please remind me where this setting is?
I can't find it in the app.

Upload interval

Which app is it?
I have WSView and WSView Plus on an Android tablet.
Neither one has this setting.
Also, my gateway is the older GW1000 version.

WSView on Android. It's there, poke around.

Menu, then Device List, then pick your MAC address,
then More ..., then Weather Services, then Next, Next, Next, etc.


Thank you very much.
I was looking everywhere but not under Weather Services.
The initial setup was done about 2 years ago and I had forgotten all the related details.
