I read so many posts about folks having problems with their systems. I'm sure those problems are real, but I've had a different experience.
My system has been 100% reliable for over 3 months now. No downtime. No slowdowns. The only time my hub gets rebooted is for a firmware update. All my automations, controls, etc. have worked perfectly. I have about 100 devices, a mixture of Z-Wave and Zigbee. I don't use RM; I have my own apps that take care of everything.
The only exception was a door switch that acted up. But that is not an HE issue.
In the same period of time, I have received quite a few emails from ST about the system being down in one way or another.
I have 12 RM rules (mainly timers and button controllers) and one custom app (CoCoHue). Since I stopped using a lot of custom apps, things have been amazing!
I decided, while writing up this post, to log in to Hubitat. I almost forgot how to! I don't check it often anymore; things just work. I have also done a lot to make my Zigbee network reliable: I keep the networks on separate channels and split ZHA and ZLL devices onto different networks. I have Iris v1 contact sensors, and they aren't dropping off like they used to.
I have offloaded everything that was done in a custom app to Home Assistant: Sense energy monitor, weather, allergy, cold/flu tracking, TP-Link outlets, MyQ, Roku (x3), and Wi-Fi phone presence monitoring.
Now I spend all my time in Home Assistant making my dashboard pretty.
[Edit] OK, what I mean is: same here, in that mine has also been super reliable, although I achieve that performance in a different manner.
I meant "same here" as in I no longer have slowdowns or need to restart.
I don't know if I would say "heavy lifting." It's just that Hubitat's main focus is not on supporting Wi-Fi devices. It does an amazing job supporting the devices it was designed for (Zigbee and Z-Wave).
No, you didn't. Don't worry, it's not about your post or anything. It just makes for an interesting read and highlights some things that many have thought about and experienced themselves.
I presume you've seen posts in the NR thread describing similar experiences. From my experience, I don't think RM, per se, is the issue. I can trace my slowdowns that required regular reboots to a single event: a couple of months after RM4 was released, I coalesced a large number of simpler rules (~100) into ~25-30 much more complex rules. By complex, I mean rules with multiple "changed" triggers and multiple conditional statements testing for multiple conditions.
I did that thinking fewer rules were better. In hindsight, it is better to have more, simpler rules than fewer, more complex rules. I think the latter have a propensity to cause database gridlock.
Edit - I've read this elsewhere: rules are free, so there's no need to economize by creating fewer, more complex rules.
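The "more, simpler rules" idea can be sketched outside of RM. Below is a toy example in Python (not Hubitat/Groovy code; the `Dispatcher` class and handler names are invented for illustration) contrasting one monolithic rule, which must re-evaluate its whole conditional chain on every trigger, with several single-purpose rules that each run only for the one event they care about.

```python
# A toy event dispatcher illustrating the design point; not Hubitat code.
from collections import defaultdict

class Dispatcher:
    """Minimal publish/subscribe hub: events fan out to subscribed handlers."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self.handlers[event].append(handler)

    def fire(self, event, value):
        for handler in self.handlers[event]:
            handler(value)

log = []

# Style 1: one complex "rule" triggered by every event, with a
# conditional chain evaluated on each firing.
def big_rule(event, value):
    if event == "motion":
        log.append(f"monolith: light on ({value})")
    elif event == "door":
        log.append(f"monolith: alert door {value}")

# Style 2: many simple single-purpose "rules" - each is invoked only
# when the one event it subscribed to fires.
simple = Dispatcher()
simple.subscribe("motion", lambda v: log.append(f"simple: light on ({v})"))
simple.subscribe("door", lambda v: log.append(f"simple: alert door {v}"))

big_rule("motion", "hallway")     # monolith inspects every condition itself
simple.fire("motion", "hallway")  # dispatcher routes straight to one handler
print(log)
```

Both styles do the same work in this tiny case, but as the conditional chain grows, the monolithic rule accumulates shared state and evaluates every branch on every trigger, while the small handlers stay short and independent. That is one plausible reading of the "database gridlock" observation above, though the hub's internals may differ.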
However, NR is only one approach for which many have reported these results. Users offloading automation by other means report the same thing, whether through custom micro apps they write themselves or through an external system: improved performance and reliability.
It’s not... but it does give the end user plenty of rope to hang themselves with overactive rules and other insanity. It’s not a fault with RM; it’s how you use it.
The problem still exists, though, and the common denominator behind the improved performance and reliability is the significant reduction or elimination of local apps. Not specifically RM, although it is the most common.
By that logic, then, breaking a big rule into several smaller rules should eliminate the issues, right? This coincides with other reports from those who write their own micro apps instead of big rules, correct?
Do micro-rules yield the same results as custom micro-apps?