It's definitely worth it. My Good Night routine covers over 20 devices, but usually only 4-5 are on, so I query all of the Z-Wave Plus devices to see which are on, and it made the routine noticeably quicker to complete. For my non-Plus Z-Wave devices, I just send the OFF command in case the cached state is wrong.
Automatically, whenever they change in Hubitat. In this case they happen to be Life360.
Just pay attention to the differences between the two...

websocket (ws):
- gives you events for all devices, not just the ones selected in Maker API
- does NO deduplication of events (i.e. you could get multiple present/on/etc. events in a row when using the ws connector), so things could trigger that wouldn't via webhook; whether this matters depends on your logic and devices
- is unsupported by Hubitat and could go away at any time (although it hasn't in the year-plus it has been in there)
- events have slightly different data than webhook (no type field, 'physical'/'digital', for one)
- is a little faster than webhook
- requires less configuration: just the IP of the hub

webhook:
- officially supported by Hubitat
- events have more details in some cases (type, and a few other items)
- only sends events for devices specified in Maker API
- sends events post internal deduplication, so the events sent should match the events in a device's history log
- requires a little more config (you have to configure the POST URL in Maker API)
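The lack of deduplication on the ws side can be worked around in a function node. A minimal sketch in plain JavaScript, with a Map standing in for Node-RED flow context; the event shape (`deviceId`/`name`/`value`) is an assumption, not the exact connector payload:

```javascript
// Suppress consecutive duplicate events per device+attribute.
// In a real Node-RED function node you'd keep `seen` in context
// (context.get/context.set); a plain Map is used here so the logic
// is self-contained.
function makeDeduper() {
  const seen = new Map(); // key: "deviceId/attribute" -> last value
  return function dedupe(event) {
    const key = `${event.deviceId}/${event.name}`;
    if (seen.get(key) === event.value) {
      return null; // duplicate: drop it (returning null ends the flow)
    }
    seen.set(key, event.value);
    return event; // new value: pass it through
  };
}

const dedupe = makeDeduper();
dedupe({ deviceId: 34, name: "presence", value: "present" });     // passes
dedupe({ deviceId: 34, name: "presence", value: "present" });     // dropped
dedupe({ deviceId: 34, name: "presence", value: "not present" }); // passes
```

With this in front of your triggers, repeated identical ws events behave more like the deduplicated webhook stream.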
Any pointers on how to "fix" whatever was preventing it from updating when I was using webhook?
If it isn't updating via webhook, it is almost always a POST URL config issue or a host-firewall issue on the machine running Node-RED.
99% of the time it is the POST URL. Users either put in the wrong IP, forget the port, or enter other wrong info. Clicking "Configure Webhook" only checks that the URL was successfully written to Hubitat, not that it is actually correct.
Example from one of mine.
192.168.2.5 is my Node-RED IP (not Hubitat's).
1880 is my Node-RED port (the default port).
Also make sure you actually type in the IP/port info! The config node will show what it thinks should go in there in grey, but it still needs to be typed in and the config node updated.
Synology NAS. I had the firewall turned off; I turned it back on, made an exception for the NR port, and now it is working. Something to keep an eye on. Thank you for the explanation of the differences.
This may make you want to flip a table... lol
From what I see, you check all your doors and windows to make sure they're all closed. I do something similar, except in 1 line for 9 doors/windows.
HA has this awesome little node called "get entities" which will scan a group and output every device in that group that matches the condition. I had been doing it the same way till I found this gem.
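The idea behind "get entities" can be sketched in a few lines of plain JavaScript. The device shape here (`name`/`attribute`/`value`) is assumed for illustration, not the exact payload either palette emits:

```javascript
// Sketch of what "get entities" does: given a list of devices and a
// condition, return only the devices that match.
function getEntities(devices, predicate) {
  return devices.filter(predicate);
}

const contacts = [
  { name: "Front Door",     attribute: "contact", value: "open" },
  { name: "Back Door",      attribute: "contact", value: "closed" },
  { name: "Kitchen Window", attribute: "contact", value: "open" },
];

// Every door/window that is NOT closed, in one call:
const stillOpen = getEntities(contacts, d => d.value !== "closed");
// -> Front Door, Kitchen Window
```

One call replaces a per-device chain of state checks, which is why it collapses 9 doors/windows into 1 line.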
Fyi mine has an underscore in the path
Thanks for the help. I have been trying to do this in RM for months and think I finally got it in NR.
If you cannot remove the underscore from the input, it's because you have other config nodes that already bind that path.
You can view all your config nodes in the Configuration nodes menu.
@TechMedX My 2nd hub has the underscore added to it. Do you have 2 hubs connected to Node-RED?
Yes, both my 'Devices' and 'Apps' hubs (the Apps hub has some zigbee lights on it too)
To be honest, the path name doesn't matter. But if you want to be clear in your config, you can change the path to
/hubitat/apps/webhook (or whatever you want, to distinguish your two hubs) or leave the underscore.
I don't care what you call me, just don't call me late for dinner!!
Sadly I don't have that. It looks messy, but all it's doing is outputting 1, and once we have 10, they are all closed. If not, something isn't closed: everything stops, and I set the mode back so I can fix the issue and run it again.
I do kind of miss my webCoRE for that, as that would tell me which had the issue.
But I want to try and keep everything in one place. My first run was last night, and it was rapid! So, long-winded, but it works.
Ya, it's a node in the HA palette, and once I learned about it I literally use it everywhere now. It could be a really useful addition to the HE palette in the future.
My goodnight routine was always slow because it would send an off command to 30-40 lights, close commands to all the garage doors, lock commands to 5 locks, alarm commands, LED change commands, etc. There'd be so much traffic on the Z-Wave mesh that it would take a solid minute to complete. With this node it checks everything against the state it should be in, builds a list of the ones that don't match, and only sends commands to those devices. My goodnight routine now completes in 5-10s.
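That check-then-command idea can be sketched as one pass over a desired-state map. The map and device shapes here are assumptions for illustration, not a palette API:

```javascript
// Sketch: compare each device's current (cached) state to the state
// it should be in at night, and emit commands only for mismatches,
// so the mesh only carries the commands that actually change something.
function commandsNeeded(devices, desiredState) {
  const cmds = [];
  for (const d of devices) {
    const want = desiredState[d.attribute]; // e.g. switch -> "off"
    if (want !== undefined && d.value !== want) {
      cmds.push({ deviceId: d.id, command: want });
    }
  }
  return cmds;
}

const desiredState = { switch: "off", lock: "locked", door: "closed" };
const devices = [
  { id: 1, attribute: "switch", value: "on" },     // needs "off"
  { id: 2, attribute: "switch", value: "off" },    // already correct
  { id: 3, attribute: "lock",   value: "locked" }, // already correct
  { id: 4, attribute: "door",   value: "open" },   // needs "closed"
];

commandsNeeded(devices, desiredState);
// -> [{ deviceId: 1, command: "off" }, { deviceId: 4, command: "closed" }]
```

With 30-40 lights mostly off already, the command list shrinks to a handful, which is where the minute-to-seconds speedup comes from.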
Well, it would have been even faster than that if you had checked the current state with the Hubitat device nodes and then sent commands only to the devices that needed them. Since the device node always checks the cached value in Node-RED, the only thing that would go to the hub is the actual needed off commands. Yes, it is a lot of nodes to put on the sheet, though.
I actually find that vastly superior to the way the HA node does it. But I did use that node a lot when I was using HA more, and it is pretty convenient.
The same type of node, but just checking against the hubitat cache in node-red, might be interesting depending on how it worked / was designed.
I've been trying to think through suggestions to fblackburn on more access to the device/attribute cache... Still ruminating on it though.
Maybe something like a multi-device node that can accept an array of deviceId+attribute pairs as input, and output an array of deviceId, attribute name, and attribute value that could then be used in other nodes via a split or function node.
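A rough sketch of that input/output contract. The `cache` object stands in for whatever device/attribute cache the palette keeps internally; none of these names are real palette APIs:

```javascript
// Hypothetical multi-device cache lookup: takes [{deviceId, attribute}]
// and returns [{deviceId, attribute, value}] from a local cache,
// without touching the hub.
function queryCache(cache, requests) {
  return requests.map(({ deviceId, attribute }) => ({
    deviceId,
    attribute,
    value: cache[deviceId] ? cache[deviceId][attribute] : undefined,
  }));
}

const cache = {
  10: { switch: "on", level: 80 },
  11: { contact: "closed" },
};

queryCache(cache, [
  { deviceId: 10, attribute: "switch" },
  { deviceId: 11, attribute: "contact" },
]);
// -> [{ deviceId: 10, attribute: "switch", value: "on" },
//     { deviceId: 11, attribute: "contact", value: "closed" }]
```

Downstream, the output array could feed a split node or be filtered in a function node, as suggested above.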
Yeah, I would have gone that way if I hadn't moved all my Zigbee devices off HE.
I had too many unsupported devices and was sick of them dropping constantly.
I'm now running all Zigbee through deCONZ, and everything is surprisingly stable.
If the trade-off is large flows for stability, I'll take it.
Curious why you find that superior to one node that does the same thing without having to place 100 nodes and a plate of spaghetti on your sheet lol
I did it that way initially for smaller automations, like a double tap to turn certain lights on/off. I would check the state first, then send only to those devices. But I hadn't done it for whole-house automations like goodnight and away.
For unknown reasons my simple flow stopped working, with a lot of error messages (I can post them later if needed, but I don't want to bother everybody before trying to solve the problem myself).
I noticed that my NR is full of tests I made before: MQTT (local Mosquitto, cloud MQTT), plus several Modbus PLCs I connected (I have 4 of them and harvested data from them).
I tried to remove all those servers without success.
I exported my latest "Hubitat OK" flow, which has a few sensors on my 4th Modbus PLC. I exported the current flow, and also all flows (a bigger file).
I tried several tests after reading the Node-RED forum about similar errors: created a new empty flow, imported the current flow, etc. Whatever I do, this damned flow always contains all the useless servers.
I even exported the flow as text, erased all the useless servers, created a new flow, and imported the cleaned text: no luck, the servers are still there.
When I tried to configure my Modbus server (not needed, the parameters are correct), the list was full of duplicates...
I still don't know how to get rid of those useless servers, and I've reached my limit here.
Any ideas? I'm ready to erase everything if necessary, but for now I doubt that would even solve the problem...
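Those leftover servers are just config-node entries inside the exported flows JSON, so one way to clean an export before re-importing is to filter them out by type. A hedged sketch: the type strings ("mqtt-broker", "modbus-client") are the common ones for those palettes, but verify them against your own export, and note that if any remaining node still references a removed config node's id, Node-RED will flag it as missing:

```javascript
// Sketch: strip leftover config nodes from an exported flows JSON.
// Node-RED stores config nodes (MQTT brokers, Modbus clients, etc.)
// as ordinary entries in the flows array, so they ride along with
// every export/import until removed.
function stripConfigNodes(flows, unwantedTypes) {
  const unwanted = new Set(unwantedTypes);
  return flows.filter(node => !unwanted.has(node.type));
}

// Example with an inline mini-export instead of a real flows file:
const flows = [
  { id: "a1", type: "mqtt-broker",   name: "mosquitto" },
  { id: "b2", type: "modbus-client", name: "plc-4" },
  { id: "c3", type: "inject",        z: "flow1" },
];

const cleaned = stripConfigNodes(flows, ["mqtt-broker", "modbus-client"]);
// cleaned keeps only the inject node
```

The Configuration nodes menu mentioned earlier in the thread can also delete unused config nodes directly from the editor, which is the less drastic first step.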
*Update: I installed a brand-new Node-RED latest stable release (12.18.2) + Modbus TCP + the Node-RED Hubitat nodes on an HP tablet running W8 (what I found on my desk for a quick restart), and installed my little flow. Managed to change the IP address in Maker API. Managed to add the token in Node-RED.
All seems OK after 5 minutes.
Will see if everything is OK in a few hours...