RM Rule response optimization with multiple triggers

This is a theoretical question.
Assume a rule has multiple triggers that can fire within a few milliseconds of each other,
for instance multiple motion sensors in the same area.
If I want to execute actions as fast as possible, is it wise to wrap the time-critical
actions with Private Boolean, setting it to False before them and back to True after?

Here is an example rule:

Triggers:
Trigger-1 or Trigger-2 or ... Trigger-N

Actions:
Turn Switch(es) ON
Do something else
Wait for expression ()
Do something else
...

Will the switches be turned on faster if I prevent the rule from retriggering with
Private Boolean?

Here is a potentially faster modified rule:

Required Expression:
Private Boolean = True

Triggers:
Trigger-1 or Trigger-2 or ... Trigger-N

Actions:
Set Private Boolean to False
Turn Switch(es) ON
Do something else
Set Private Boolean to True
Wait for expression ()
Do something else
...

So, the question is:
Will the modified rule execute faster, or not really?

I would recommend creating a couple of rules in the manner you describe, having the actions log something (the current time, maybe?), and seeing which is faster based on the timestamps.

I would suspect that the difference will be measured in milliseconds. I also suspect that the bottleneck will be sending messages to the device rather than anything on the hub. I will be curious to find out what comes from your testing!

Well, I didn't know how to create a trusted simulation for this case in RM.
It might be possible with a custom app, but I am not a software guy and can't write it.
That is why I said this is a theoretical question.
I think preventing multiple retriggers within, say, a second should in theory
lead to faster execution of the actions. But I cannot prove this.
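Lacking a way to test this in RM itself, the two variants can at least be sketched as a deterministic toy model in Python. Everything in it is an assumption for illustration only: triggers arriving at fixed times, a flat 1 ms cost per action, and the Private Boolean gate modelled as a simple time window. It is not how the hub actually schedules work.

```python
def simulate(trigger_times_ms, gated, action_ms=1.0):
    """Toy model: each trigger starts an instance; with gating, a trigger is
    ignored while Private Boolean is False. action_ms and the gate window
    are invented numbers, not measured hub behavior."""
    pb_true_again_at = -1.0          # when Private Boolean returns to True
    on_times = []                    # when each "Turn Switch(es) ON" is sent
    for t in trigger_times_ms:
        if gated and t < pb_true_again_at:
            continue                 # Required Expression "PB = True" fails
        start = t
        if gated:
            start += action_ms       # extra action: Set Private Boolean False
        on_times.append(start + action_ms)     # Turn Switch(es) ON
        if gated:
            # PB is set True again two actions after the ON in the example rule
            pb_true_again_at = start + 3 * action_ms
    return on_times

triggers = [0, 3, 6, 9, 12]          # five triggers, 3 ms apart
print(simulate(triggers, gated=False))   # [1.0, 4.0, 7.0, 10.0, 13.0]
print(simulate(triggers, gated=True))    # [2.0, 8.0, 14.0]
```

Under these made-up numbers the gated rule sends fewer redundant ON commands, but its first ON goes out one action later, matching the intuition that the extra Set Private Boolean action can only delay the first ON, never speed it up.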

I can't imagine why the modification would run faster; if anything, it seems like it would be slower, given that you have an additional action before the "on" (though that action should finish nearly instantaneously and there'd certainly be no human-discernible difference). What you'd have in the first case is multiple instances of the rule running at the same time, but the first one will probably hit its "On" action before any of the others start running, so that seems unlikely to matter.

Well, you have the outline of an actual rule you could write to test. That's one half. The other half sounds like just getting all the triggers to happen, which you could do with virtual sensors and another rule if you want (e.g., manually running its actions, which make the sensors active), or however that needs to work for the way your actual rule is set up.

But again, this is all probably unlikely to matter in the real world. Is there a particular problem you are trying to solve?

Are you sure each sequential trigger creates a separate running instance?
I remember seeing Bruce's explanation somewhere that there is only one
running instance, but all timers are restarted.

Commands sent to devices are not instantaneous. For a single ON action/command
the difference most likely is not humanly noticeable. But what if you have a few
staggered ON commands and a second trigger hits in between? I am sure the rule will
restart, and the following ON commands will be delayed by re-executing everything
in front of them, unless the platform is smart enough not to resend an ON command
to a device which is already ON. So, how will the above scenario run?

I don't have any problem, but I am trying to understand RM behavior in deeper detail.

Yes, this is what each trigger does. And the relationship in time between the different instances is not determinative.

So, is it always better to prevent unnecessary rule retriggering with Private Boolean?
I can see a case where multiple dependent rules may lock each other if multiple
instances of the same rule are running with undetermined timing.
Actually, I may have already stepped into this problem.

Usually not. You really have to show a specific case. Your questions in the abstract are, well, too abstract.

Yes, it is. At the very beginning I mentioned this is a theoretical question.
But taking into account that each trigger creates a running instance, it could become very practical.
Here is a theoretical example of a rule which could potentially kill a hub.

Trigger:
Every 10 sec.

Actions:
Do something
Wait for 1 min
Do something

Now 5 new running instances will be created every minute (6 triggers, one wait expires).
In this example it is a never-ending story until eventually all memory is eaten
and the hub dies. Isn't it? Please correct me if I am wrong.

And now a related question:
Why not kill all previously created and still-running instances?
After every new trigger, the previously created running instance is practically useless
but potentially a resource killer.

PS.
I have many dependent rules. Periodically, with no apparent reason, a rule may not finish,
leaving some control variables in a wrong state. The next time, the dependent rule will not
run correctly. Of course, at that point there are no logs (normally all logging is off), and I had
no way to figure out what went wrong. Knowing now that more than one rule
instance could be active may explain this occasional abnormal behavior.

Not exactly:

This is wrong. When you reach the "Wait for event: elapsed time 0:01:00," the rule will create a scheduled job, then go to sleep. It will wake when it is time to do the scheduled job. In between, it is no longer "running." (It will almost certainly be in this state by the time your next trigger is reached in 10 seconds.) Further, as mentioned above, a re-trigger cancels any waits, so with a "Wait for event" as in your example, you'd never even get multiple scheduled jobs from this to re-wake the rule--just one, whichever was created most recently.

"Delay" is different, but the point that should still be noted is that they will not truly be "running" while they are waiting for the delay. However, this also addresses your second point -- it mostly already works as you suggest (except that cancellation is up to you if you use "Delay"; sometimes, this amount of control can be helpful, but for most people I'd suggest sticking to "Wait for event: elapsed time"). But the underlying concern isn't really there in the first place, either.

This behavior is all documented, by the way:

I am sorry about my wording.
I meant not practical for usage, but practical for understanding what is going on.

OK, I knew this, but it does not explain what exactly happens with an already-useless
but still somewhat active instance.

But what happens to the memory? At what point will memory be freed from a useless
sleeping instance?

As I mentioned, occasionally some rules do not finish to the end, leaving some control
variables in a wrong state. Usually the variables are reset at the very end of the rule.
The logic above usually has IF and WAIT statements. By the naked eye it is impossible to
figure out why a rule did not finish, and of course there are no related logs.
I am trying to understand the details and figure out what I am doing wrong.
Real logical mistakes are easy to debug and fix. But if something happens very seldom,
it makes debugging extremely difficult.

I do not understand what you mean with this phrase. If it is the same question as what's below, perhaps that answer will help.

A "sleeping" app (probably a term I made up, though I think I've seen it before...) is the same as any app you have "installed" that is not actively "running." Again, a delay, wait, or similar just creates a scheduled job or subscription, which will wake the app whenever that happens -- the same as a rule wakes on a trigger event (which is also a schedule or subscription), for example. Apps are not always "running." For more on this, I would consult the "App Lifecycle" section in the app developer docs, which is applicable to this situation (even though you are not writing a custom app*; a Rule Machine rule is just a regular app that works the same way as any app): App Overview | Hubitat Documentation

If a rule is not working as you expect and you can't tell from your actions alone, your only hope is enabling all logging and seeing what "Logs" says (or hoping someone else will want to help you do the same). There should be some clues there. In no case should there be no logs from a rule with all logging enabled that actually ran any actions. It may be a matter of figuring out what the logs mean (e.g., you now know that a "Trigger" log, indicating a rule has (re)triggered, will cancel all waits).

Smaller, simpler rules will, of course, be easier to troubleshoot than longer, complicated rules -- one of many reasons I'd recommend that approach instead.

*even though I think this might be a good option for you :slight_smile:

The problem is that all logging is usually off.
The rules run 99.99% reliably, failing about once per 1000 successful runs.
Since I have no idea which rule may fail next time, or when, I would have to enable logging
for all the rules in question (too many). And then I would have to find the log entries related
to the failing condition. Filtering logs per rule is easy enough, but finding the portion related
to the failure is not easy, even after narrowing to an approximate timeframe. This is a very
ineffective and time-consuming debugging technique for very seldom failures.
That is why I am trying to understand all the theoretically potential bottlenecks by brainstorming.
Unfortunately there are many implementation unknowns in HE/RM for effective brainstorming.

I guess at this point this discussion is better stopped.