Performance Numbers (Just Watched the Apr. 28th Live YouTube)

I just watched the Apr. 28th live YouTube event, and I really liked the information provided. However, I keep reading about hub slowdowns (I don't have this issue yet; I'm just starting my automation), and it's been said before that the CPU is not the main issue, which is why we don't have a top-like report available in the web interface. According to the YouTube event, the advice was about KISS: the hub has finite resources, meaning there are computational limits that can realistically be hit with poorly thought out or poorly optimised rules. If that's true, why not expose those performance numbers so we can better diagnose whether a specific rule is causing issues? It seems so simple, and it would greatly help people understand where the problems are.

As well, and I realise there would be issues implementing this due to waits and such: how about a KISS rating per app? You would get the rating by taking the historical event information as the numerator (reports per day for the events your rule is triggered by; so, in your example of a power meter putting out a ton of information, you would only count the "power over 1000 W" events), and some kind of complexity rating as the denominator (I know this is the harder part). For the complexity rating, there must be some typical cost for each type of decision. So an IF might be 0.1 per decision, sending a command 0.5, and so on, summed together. It wouldn't be perfect, since a rule whose first line is "IF time = noon THEN" would only fire at noon, so the 200 other lines of the rule would never run.
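To make the idea concrete, here is a minimal sketch of what I mean, assuming made-up action names and weights (the 0.1 / 0.5 figures are just placeholders, not anything the hub actually reports):

```python
# Hypothetical "KISS rating": triggering events per day divided by a static
# complexity score built from per-action weights. All names and weights here
# are invented for illustration.

# Assumed cost (arbitrary units) for each kind of rule action.
ACTION_WEIGHTS = {
    "condition": 0.1,   # e.g. an IF comparison
    "command": 0.5,     # e.g. sending a command to a device
    "delay": 0.2,       # scheduling a wait
}

def complexity(actions):
    """Sum the weights of every action that appears in the rule."""
    return sum(ACTION_WEIGHTS.get(a, 0.1) for a in actions)

def kiss_rating(triggering_events_per_day, actions):
    """Events per day over the rule's static complexity score."""
    return triggering_events_per_day / complexity(actions)

# A rule triggered every minute with a two-step body, versus a rule triggered
# twice a day with a fifteen-step body:
print(kiss_rating(1440, ["condition", "command"]))           # 2400.0
print(kiss_rating(2, ["condition"] * 10 + ["command"] * 5))  # ~0.57
```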

Otherwise, how about the denominator being a measured "time to execute" for each app? Essentially, whenever the rule executes, the hub marks the start and end time of that execution, writes the elapsed time to the log, and averages those measurements over the same period.
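A rough sketch of that measured approach, again with hypothetical app names and a stand-in rule body (the real hub would presumably do this inside its own scheduler rather than a wrapper like this):

```python
# Measure each rule execution's wall-clock time and keep a running average
# per app. Everything here is illustrative, not an actual hub API.
import time
from collections import defaultdict

_durations = defaultdict(list)  # app name -> list of execution times (seconds)

def timed_execution(app_name, rule_body):
    """Run a rule body and record how long it took."""
    start = time.monotonic()
    try:
        rule_body()
    finally:
        _durations[app_name].append(time.monotonic() - start)

def average_execution_time(app_name):
    """Average execution time over the logged period, or 0 if it never ran."""
    samples = _durations[app_name]
    return sum(samples) / len(samples) if samples else 0.0

# Example usage with a dummy rule body that just sleeps briefly:
timed_execution("Porch Light Rule", lambda: time.sleep(0.01))
print(f"{average_execution_time('Porch Light Rule'):.4f} s")
```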

Just an idea.