If you absolutely want to try reducing event subscriptions, there is a method you could try:
Make a backup onto your PC (the backup is JUST your DB). Then do a restore, to verify your backup is good. Then just start deleting Rules. Once you're done with the test and have discovered whether event subscriptions matter, you simply restore the same backup.
DB backup seems to take only a minute or two. Restore is pretty much only a minute longer than a reboot, assuming you click reasonably quickly.
Good points. I was more just pondering the performance potential of a device like Hubitat compared to people running full-blown server hardware, and how often you can improve things by changes in approach.
I've done a bit of game development on the Unreal Engine, and it is very much an event-subscription-based model as well, with heavy constraints on processing (per game frame). Mostly, performance is maintained by controlling when things subscribe to and unsubscribe from events to prevent over-evaluation, so I was curious what we could do, if the need arose, to limit the impact on hub performance down the line (without just buying extra hubs lol).
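The subscribe/unsubscribe discipline from game engines translates directly: at dispatch time, only currently-registered handlers cost anything. Here's a minimal sketch of that pattern in plain Python (hypothetical names, not Hubitat's or Unreal's actual API):

```python
class EventBus:
    """Tiny event bus: dispatch cost scales with *current* subscribers only."""

    def __init__(self):
        self._subs = {}  # event name -> list of handler callables

    def subscribe(self, event, handler):
        self._subs.setdefault(event, []).append(handler)

    def unsubscribe(self, event, handler):
        self._subs.get(event, []).remove(handler)

    def publish(self, event, payload):
        # Only currently-subscribed handlers run; everything else pays nothing.
        for handler in list(self._subs.get(event, [])):
            handler(payload)


bus = EventBus()
calls = []
handler = calls.append

bus.subscribe("motion", handler)
bus.publish("motion", "active")     # handler runs
bus.unsubscribe("motion", handler)  # e.g. while the rule is irrelevant
bus.publish("motion", "active")     # no work done for this handler
print(calls)  # ['active']
```

The point of the sketch: an app that unsubscribes while its events don't matter (say, while the house is in a mode where the rule can't fire) removes itself from the dispatch loop entirely, rather than waking up just to decide it has nothing to do.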
Back to basics: What are the events that each rule subscribes to? How frequently do those events occur? What sort of load does an app that executes in response to an event put on the system? Is the system compute bound? What are the causes of load on the CPU? What is the marginal cost of an event subscription?
Not too useful to chase after things that don't make a real difference.
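One way to get at the marginal-cost question empirically is a micro-benchmark: register N no-op handlers and time a dispatch as N grows. This is a generic sketch in Python, not a measurement of Hubitat itself, and the absolute numbers are machine-dependent; the shape of the curve is what matters.

```python
import time

def dispatch(handlers, payload):
    """Invoke every handler once with the payload."""
    for h in handlers:
        h(payload)

# Time 1000 dispatches at each subscriber count and report the average.
for n in (1, 10, 100, 1000):
    handlers = [(lambda p: None) for _ in range(n)]
    start = time.perf_counter()
    for _ in range(1000):
        dispatch(handlers, "event")
    elapsed = time.perf_counter() - start
    print(f"{n:5d} handlers: {elapsed * 1e6 / 1000:.1f} us per dispatch")
```

If the per-dispatch time grows roughly linearly with subscriber count and the constant is tiny, chasing subscription counts is (as noted above) probably not where the real wins are.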
Fair points, Bruce. I'm just curious because, in the example of game development, I've seen new devs code without any care for the constraints, resulting in poor performance. Our hubs are certainly not enterprise servers by any stretch, so there is a limit to the resources available, of course, not that I can see myself having any issue anytime soon. Event-based systems spread the load much better than state-based monitoring systems that are constantly polling and updating. Rule evaluation doesn't cost much, by what you're indicating, so there wouldn't be much to save there anyway.
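The event-vs-polling contrast can be made concrete with a toy simulation (Python, with made-up numbers purely for illustration): a polling monitor evaluates on every tick whether or not anything changed, while an event-driven rule only runs when a device actually reports.

```python
TICKS = 1000                   # simulated polling intervals
EVENTS_AT = {100, 500, 900}    # ticks where a device actually changes state

polling_evals = 0
event_evals = 0
for tick in range(TICKS):
    polling_evals += 1         # poller checks state every tick, change or not
    if tick in EVENTS_AT:
        event_evals += 1       # event handler runs only on a real change

print(polling_evals, event_evals)  # 1000 3
```

With only a handful of real state changes, the event-driven side does three evaluations where the poller does a thousand, which is the sense in which an event architecture spreads (and shrinks) the load.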
The main thing, in my view, is that a home automation system isn't doing very much most of the time. There are only so many devices and so many event sources in a house. There is a fast quad-core CPU. For the most part, the CPU is not challenged; rather, it is (as usual) i/o limited.
Digging into that a bit, there are two levels of i/o limiting to consider: First is device i/o. For Z-Wave and Zigbee devices, their respective networks are crawling along at a relative snail's pace. LAN-based devices are a bit faster, but the rate of events is relatively low. Second is database i/o. This is an area where a bottleneck could occur. Apps need to load state from the database, and so do drivers. Both may be writing state as well. So there can be a lot of i/o to the database. Of course, all of this is mitigated to some extent by the design of the underlying system, with things like caching to speed up database reads. Still, the overall system load is spread out in time (mostly). What we would expect is small bursts of heavier load from time to time, but mostly not much going on. How well the system handles these peaks is certainly something that could be explored.
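The caching point above can be sketched as a read-through cache in front of the database: the first state read pays the DB cost, repeats are served from memory, and writes go to both. This is only the general pattern in plain Python with hypothetical names; Hubitat's internals are not public, so don't read it as the actual implementation.

```python
class StateCache:
    """Read-through cache in front of a backing store (a dict stands in for the DB)."""

    def __init__(self, db):
        self.db = db
        self.cache = {}
        self.db_reads = 0  # instrumentation: how many reads hit the "DB"

    def get(self, key):
        if key not in self.cache:          # miss: one real DB read
            self.db_reads += 1
            self.cache[key] = self.db.get(key)
        return self.cache[key]             # hit: no DB i/o at all

    def put(self, key, value):
        self.cache[key] = value            # write-through (simplified)
        self.db[key] = value


db = {"app42.lastMotion": "inactive"}
state = StateCache(db)
state.get("app42.lastMotion")
state.get("app42.lastMotion")   # second read served from cache
print(state.db_reads)  # 1
```

Two reads, one DB hit: that's why repeated state loads by apps and drivers don't necessarily translate into repeated database i/o.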
Finally: All of this is happening on a piece of hardware you can purchase for $75. How many hubs do you think we'd sell at $750 (a far cry less than an enterprise server)?
I was pondering what guidance might help devs new to the platform keep their apps performant. Looking at, say, the development that occurred over at SmartThings, the nature of their cloud infrastructure meant less pressure (at least at the dev end) to keep code efficient. After all, it's Samsung's servers we'd be melting, not our own lol.
I/O is always an issue though, even in enterprise server land.
Because you selected Mode Is Away as a trigger, it auto-populates that condition into the Conditions section.
The idea is that you can use it as a condition in your actions, should you wish to.
It doesn't actually do anything unless selected in your actions.
The orange FALSE just tells you the current state.
I have two similar rules, and both have the same issue: it looks like "Cancel Delayed Actions" and "Stop Repeating Actions" are not working. The repeating action repeats forever until I press "Update Rule"; then the rule is somehow re-evaluated and correctly stops.
I have a very simple RM4 rule that had been working fine since converting it from RM3, but starting about 10 days ago or so, the switch no longer turns off after 5 minutes.
Has something changed with recent updates that would require a change to the rule?
Does a rule with lots of ELSE-IF conditions with multiple different joiners (ANDs/ORs) really slow down Rule Machine (even editing the actions got slow), or did something go bad with the rule I built? I recreated Hue's color wheel roll-through (where it slowly changes over time through the color wheel) and wanted it to pick up from the color it started on, like Hue does, but the rule takes seconds to run through the conditions instead of the milliseconds it normally does.
I can probably just hard-code the wheel, but then it will always start at a certain color.
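One alternative to a long ELSE-IF chain (one branch per color) is collapsing the wheel roll into modular arithmetic on the hue value: read the current hue, add a step, wrap past the top of the range. A sketch in Python, assuming a Hubitat-style 0-100 hue scale; the step size is an arbitrary choice, and the actual rule would read/write the bulb's hue rather than a local variable:

```python
HUE_MAX = 100   # hue range 0..100, as Hubitat uses
STEP = 5        # arbitrary increment per cycle

def next_hue(current):
    """Advance around the color wheel, wrapping back past the top."""
    return (current + STEP) % HUE_MAX

# Starting from whatever hue the bulb currently shows (here 93),
# the rule picks up from that color instead of a hard-coded start.
hue = 93
path = []
for _ in range(4):
    hue = next_hue(hue)
    path.append(hue)
print(path)  # [98, 3, 8, 13]
```

Because the next color is computed rather than matched, the rule needs only one action per cycle instead of one ELSE-IF branch per color, and it naturally starts from wherever the bulb was.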