Am I missing out on Node-RED?

I still have quite a few rules and custom apps I wrote, but I set up Node-RED as a Docker container on my NAS to automate things I couldn't easily do in HE and to support my HE hubs. @stephack helped by sharing a flow to download the HE backups every evening; sure, there are curl examples by @aaiyar, but this was my first flow to learn from. From there I added flows that connect to the event and logging web sockets of my 4 HE hubs (again, thank you @stephack for an example flow) and drop the data into InfluxDB and a MariaDB running on my NAS, so I could look back on things and graph data via Grafana. I have since added a few more flows, but just wanted to provide other examples of how NR can complement your HE setup. It's worth the effort to learn, especially if you already own hardware that can run NR, to minimize the expenses of the HA addiction.
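For anyone curious what such a flow boils down to, here's a minimal sketch of the transform a function node could do between the hub's event web socket and an InfluxDB output node. The field names (name, displayName, value) match the JSON the hub emits on its eventsocket endpoint; the "hub_events" measurement name is just my own choice, not anything Hubitat defines.

```javascript
// Sketch of a Node-RED function-node transform: Hubitat eventsocket
// message in, InfluxDB-style point out. Field names follow the hub's
// eventsocket JSON; "hub_events" is an invented measurement name.
function eventToPoint(raw) {
  const evt = JSON.parse(raw);
  return {
    measurement: "hub_events",
    tags: { device: evt.displayName, attribute: evt.name },
    fields: { value: evt.value },
    timestamp: Date.now(),
  };
}

// Example payload, shaped like an eventsocket message
const sample = '{"name":"temperature","displayName":"Office Sensor","value":"72.5"}';
console.log(eventToPoint(sample).tags.device); // "Office Sensor"
```

In an actual flow, a websocket-in node would feed this function node, and its output would go straight to an influxdb-out node.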

3 Likes

I do this directly from HE which goes to show yet another way to skin the cat.

This topic is quite interesting to me because it's revealed a wide variety of use cases and solutions as well as the variety of preferences implementers have AND the variety of backgrounds/skill sets.

I may have to set up NR again just to find the fastest response time for simple motion lighting! I'm like @april.brandt and love to tinker just because I can. I've also got my share of OCD, so I got a good chuckle when aligning nodes was mentioned.

I'm curious about the inefficiency of HE processing. Since ANYTHING external requires network traffic, I thought that HE HAS to be faster. Granted, network latency is less than 20ms in most cases, and my HE simple motion lighting response times are at least 400ms, so there is certainly room for faster response from an external process. Beyond the obvious fact that HE is not as efficient is the question: why not? It would seem to have all the advantages, and my sense is that the HE developers are fully competent, so that leaves the platform itself, no? And the developers chose that as well, so I just don't understand. :man_shrugging:

4 Likes

Oh man... this is at least a 3000 word response to do it justice. I'll try to oversimplify....

You've touched on one element: network latency. Which is faster, a 100 megabit Ethernet wire or a 40 kilobit radio (250 kilobit for Zigbee)? The answer is: overwhelmingly the network. 100 megabits / 40 kilobits = 2500 times faster. Even Zigbee is beaten by 400x.
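The arithmetic checks out, and it's worth translating into per-packet time. The 40-byte frame below is an illustrative guess, not a measured Z-Wave packet length:

```javascript
// Sanity-checking the ratios above, and what they mean for one small
// frame. The 40-byte frame size is illustrative, not a measured
// Z-Wave packet length.
const ETHERNET_BPS = 100e6; // 100 megabit wire
const ZWAVE_BPS = 40e3;     // 40 kilobit radio
const ZIGBEE_BPS = 250e3;   // 250 kilobit radio

console.log(ETHERNET_BPS / ZWAVE_BPS);  // 2500 (x faster)
console.log(ETHERNET_BPS / ZIGBEE_BPS); // 400

// Time on the air/wire for a 40-byte (320-bit) frame:
const bits = 40 * 8;
console.log((bits / ZWAVE_BPS) * 1000);    // 8 ms per Z-Wave hop
console.log((bits / ETHERNET_BPS) * 1000); // 0.0032 ms on Ethernet
```

Eight milliseconds of airtime per hop, before any repeater processing, is why those "tiny packets" still add up on a multi-hop mesh.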

That alone is the dominant reason why offloading the processing of logic (Rules) is beneficial to so many of us here in this community. It doesn't particularly seem to matter what the offloading is done to... some use multiple Hubitat hubs, some use Node-RED or HASS, or combinations of all. The common case is offloading, NOT the OS that's running the offloading, as the #1 choice. 'To do or not to do, that is...'

The Hubitat platform is I/O bound.. meaning the Z-Radios. They are a huge burden. They are half duplex, low speed, and demanding of instant response. The platform has finite resources (gigahertz and gigabytes), and for thousands and thousands of users there's no problem.

Want to turn a light ON? If the Z-Device is a couple hops away, you'll see 6 packets on the mesh. They are tiny packets, so they don't eat much time on the mesh network, but the physical devices are not quad core gigahertz CPUs with a gig of RAM. They are tiny single chips running a single processor with memory measured in kilobytes. There's latency, in other words. Your repeaters take a measurable unit of time to receive, process and, if necessary, forward on the packets. Remember, every Z-Device in radio range MUST process each packet, if only to find out "not for me" and discard it. They can't be doing anything else at the time. This is the basis of the "have a strong mesh" advice you see everywhere in this community.

If you have a Hub, then ONE repeater, then all of your devices, you are going to have competition through that repeater. We all have sensors, and they just suddenly burst to life and spit out a message: "Door Open", "Motion Detected", "25 watts", "77 degrees", "Motion Inactive". The hub can handle a lot, but each of us has a unique mesh.. it might be the mesh that is contributing to latency. The winner of each competition for repeater resources gets their packet sent along; all the losers have to retry. More traffic on the mesh.

By turning over almost all of the Hub's processing power to managing the most crucial action, Z-Radio queue management, we give the radios the best chance of being responsive. Again, it's not that the hub is underpowered; it's simply that some of us want it focused on the radio more than others in this community do. IF you are fetching 60k of weather data every minute, that processing CAN cause the processor to not work the radio queue for a couple of milliseconds. That's a gap in the airwaves that multiplies out because of the repeaters.
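As a sketch of that offloading idea: fetch the bulky weather payload on an external box, and hand the hub only the one value it needs via the built-in Maker API app. The URL follows Maker API's /apps/api/&lt;appId&gt;/devices/&lt;deviceId&gt;/&lt;command&gt;/&lt;value&gt; pattern; the hub address, IDs, token, and the virtual device's setTemperature command are all placeholders for your own setup.

```javascript
// Build a Maker API command URL. Everything passed in below is a
// placeholder: substitute your hub's IP, the Maker API app ID, the
// virtual device's ID and command, and your access token.
function makerApiUrl(hub, appId, deviceId, command, value, token) {
  return `${hub}/apps/api/${appId}/devices/${deviceId}/${command}/${value}` +
         `?access_token=${token}`;
}

const url = makerApiUrl("http://192.168.1.10", "123", "45",
                        "setTemperature", "72.5", "your-token-here");
console.log(url);
// The external box does the 60k fetch and parse, then issues one GET
// to this URL; the hub's radio queue never sees the heavy lifting.
```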

Anyway.. I'm going to stop now because these are already 500 too many words. :smiley:

9 Likes

So, load up Node-RED. Jump in. Then one of you create a new topic for help, tag us, and also use the tag nodered in your topic. If any of us is available, we'll help you get your feet wet. Take a look at a post I found that was built by seasoned users. It helped me. Although I would also add bool-gate to the list if it wasn't already added. It's easy as pie to use.

I'm just going to add that once something is solved, change the topic a bit to reflect a better description so that others will benefit from the thread and mark as solved.

2 Likes

This is an interesting response and does bring out a follow-up question. In your opinion, would the hub benefit from a better processor? Whether it be multi-core, or more power, or dual processors, or whatever. Something that could handle fetching that weather data and sending commands to the radios at the same time.

I've often found the hub struggled with any complex logic, which slowed things to a crawl, and that could be in part because of the overhead in RM. But would a better processor not speed that up?

1 Like

The hub has a quad core processor. But, Hubitat is a Java Machine fundamentally. It's a Java Machine running a Java Database all on a Linux OS running on a quad core ARM processor. If I had to speculate, I'd be all 'squinty eye' on Java. :smiley:

But I'm not a Hubitat employee.. so my 'eye' doesn't contribute much, if anything. I think, again, it's the reason I've had multiple hubs for my entire Home Automation interest. And for this, I'm saying Node-Red is just another hub, as is HASS, et al.

3 Likes

Thanks, @csteele, for your insight on the internals of HE.

However, there is a little confusion (for me anyway) that I'd like to separate out. Let's use a simple motion lighting process for the example:

motion detected -> detector to HE via mesh -> HE processing -> HE to switch via mesh -> light on

If the processing is done external to HE then the block "HE processing" gets replaced with:

-> comms to external system (via Maker, MQTT, ???) -> processing -> comms from external system (via Maker, MQTT, ???)

Since everything outside of "HE processing" in the first case is a constant, or nearly so, for any given system, the external processing time plus the comms time (twice) would be expected, by most people, to be longer than the "HE processing" time. But it's not, at least for most users.*

IMO, this points to inefficiency in HE's processing exclusively. Whatever other loads HE has (getting weather data, whatever) would only make it worse. I'll point out that I have only 30 devices, about 50/50 Z-Wave and Zigbee, and my processing on HE is all with the canned apps, no RM, so it's a pretty clean comparison against NR with the equivalent simple flows.

I'm so interested in this timing thing (curse you, @april.brandt! :wink:) that I'll probably set up NR on an RPi (using NR's bundled install) and test again...but not today. I regret that I did not save the Ethernet packet captures that showed the external-to-HE timing from my previous experiment, but I'll do so next time.

*It seems I picked the worst possible "external system", because my response times with HE alone are less than half of what I got using NR. What I set up was NR in a Docker container (don't ask) on a box that I'll just call Server. Also on Server was the MQTT broker. So my external path consisted of HE -> MQTT broker -> NR (in Docker on the same box) -> MQTT broker -> HE. You'll have to take my word for it that Server was more than up to the task, and much beefier than an RPi.
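For next time, one way to isolate just the external leg: timestamp the message on the way out and read the stamp when the echo comes back. Since the publish and the receive happen on the same box, there's no clock-sync problem. The payload shape and the echo-flow idea here are my own, for illustration; wire them to whatever transport is under test (MQTT, Maker API, etc.).

```javascript
// Measure only the external round trip: stamp outbound, compare on
// return. Both timestamps come from the same machine's clock, so no
// synchronization is needed. Payload shape is made up for this sketch.
function stampedPayload() {
  return JSON.stringify({ sentAt: Date.now() });
}

function roundTripMs(echoed) {
  return Date.now() - JSON.parse(echoed).sentAt;
}

// In a real test: publish stampedPayload() to a command topic, have the
// NR flow echo it back on a response topic, and feed the echoed message
// into roundTripMs(). The stand-in below just shows the mechanics.
const echoed = stampedPayload();
console.log(roundTripMs(echoed) >= 0); // true
```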

1 Like

Dunno. I have always run Node-RED in Docker, and it has always been faster than when I had the rules in RM. :man_shrugging:

I can make very fast groovy apps, and am more than capable of programming in groovy... But I really don't want to manage a bunch of homemade apps long term, so that wasn't attractive to me.

3 Likes

I didn't expect this to blow up so much; I'm struggling to keep up. I'll enjoy catching up on everyone's views on NR.

1 Like

If the Hubitat Hub had that ONE item to process then Offloading wouldn't bring much to the picnic. But the point of Offloading, to me, is to get more consistency, with the least possible long term effort.

As @JasonJoel indicates, maintaining a Groovy app next year, with zero minutes looking at it between, is an error waiting to happen. It might be true of Node-Red too.. upgrades over the next year might break a flow. But that's what backups are for, right? :smiley:

If I assume that the Z-Radio queue management is reasonably efficient, then its best times, its most responsive moments, are what I want to see all of the time. If there's variability in response, I want to find a way around that. The answer, for me, is to use more hubs. To offload the things I can, so that my Hubitat hubs are a) consistent and b) at the 'best' end of responsiveness.

Said yet another way... I've always thrown multiple solutions at my Home Automation problem set. Hubitat is at the core of my most recent iteration. I have multiple Hubitat hubs, as well as a set of NodeJS 'helper hubs' thrown at the current problem set. This is NOT final for me. New hubs, new solutions will present themselves. I'll keep whacking away at this until someone does come up with a full and complete solution to my problem set.

3 Likes

I think it's important to understand where speed is critical and desirable, in the context of saving a couple hundred ms, and compared to other benefits.

You have a scheduled event for 6pm .. not important
You log or send a push message when an event happens .. not important
You walk into a room and the PIR brings the lights on .. important
You approach your house and the door unlocks .. probably not important
Temperature drops and you bring the heating on early .. not important
You approach your house and the lighting and heating come on .. not important

All my automations slow down from 500ms to 5 secs .. problematic
All my automations slow down from 500ms to 50 secs .. disastrous
My automations fail .. disastrous

Really, the main speed need is event-driven/immediate reaction, and typically these are just motion > lights, or a switch/button press for lighting. These can be sped up many ways, most effectively by direct wireless association (sensor > light) or by using direct hardware. Agreed, logic can then not be applied as easily.

I venture to say that most people disappearing down the Node-RED rabbit hole want reliability and repeatability*, not necessarily speed, but that's a bonus. Things must complete in a predictable sequence order too. If this is the reason for 'burrowing', and it does appear substantiated, then Hubitat should take note, as this shouldn't be the case, and it indirectly provides a migration path elsewhere.

*Actually, Node-RED offers many diverse and useful nodes and features that RM/HE don't have, which is a big draw too.

7 Likes

Someone emailed me with a valid comment that Node-RED is a solution for, and of interest to, only a minor group of users, the techies; that's true. Hubitat's market focus and volume is the other 90%.

Even more reason for Hubitat to ensure RM is the engine of choice: the fastest, the most reliable, the easiest to use, and with no installation hurdle.

1 Like

Well, I would point out that home automation as a whole is mostly in the realm of techie users... :man_shrugging:

4 Likes

I meant really within the existing users category (owners).

What % of existing purchasers could (or would want to) set up a separate Node-RED server and learn to make flows? Every manufacturer of hubs is trying to widen the base of the pyramid, making HA accessible to more and more users.

2 Likes

True / good point.

1 Like

What % of users know about Node-RED prior to purchasing their hubs? I find that there is a lot of ignorance floating around the forums until one gets curious and asks, or sees a post that sparks interest. I'm that person. I discovered only by happenstance that someone mentioned it, and it sparked my interest. There is a large number of users that follow the "norm" until they're led to a rabbit hole.

3 Likes

HA in the last couple of years has really evolved, moving from 'I wish I could do this' or 'I wish I could buy that product' to so many things I can do now, in so many different ways.. 'I wish I had more time'!

The confluence of things like IP-capable controllers, Node-RED, IFTTT, Alexa, Google and cloud services, local APIs... all separately very powerful, but now combined... fantastic times (for techies).

One can't but admire what IBM did with Node-RED.

3 Likes

How would, say, an inbuilt Blockly version of RM serve the needs of people who think they are 'missing out' on Node-RED? It would help a large volume of users, I feel.

@bravenel . Any merit at all ?

2 Likes

You supped that 'Drink Me' bottle.. :cup_with_straw:

1 Like

I think @bravenel hit the nail on the head years ago when he said WebCoRE should produce Groovy, not interpretive code. WebCoRE already has an online editor, producing exchangeable code. It runs in the cloud or via a local instance. If only it created a 'child app' that could be pasted into Hubitat, there'd be a great RM alternative for people who don't like the RM UI.

2 Likes