MakerApi question

Looking for some guidance from people who have gone down this road.

Is MakerApi the best way to read and write to a hub?
Looking at using Labview to do all the logic and data review/analysis.
Fundamentally what is driving this is the inability to graph some of the variables in the hub.
I would also like to email some notifications in some instances. Been using LV for 25 years that side won’t be the problem.
Thanks

Then yes, you can use the MakerAPI along with its new system of sending events out as POST messages, but you are going to hate the timing. It's going to take forever for everything to process and then send a command back to the hub. Any motion lighting rules would take FOREVER to process in this manner.
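For reference, the receiving end of that POST-event mechanism is just an HTTP listener on the external machine. A minimal Python sketch (the `content` envelope and field names are assumptions based on commonly observed payloads; inspect a real event before relying on them):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(body: bytes) -> dict:
    """Parse one Maker API POST body into a dict.
    Assumed shape: {"content": {"name": ..., "value": ..., "deviceId": ...}}"""
    return json.loads(body)

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = handle_event(self.rfile.read(length))
        print("event:", event)          # replace with your own logic
        self.send_response(200)
        self.end_headers()

# To run the listener (point Maker API's "POST URL" at this host/port):
# HTTPServer(("0.0.0.0", 8080), EventHandler).serve_forever()
```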

What variables? If you're talking about extracting sensor data for graphing purposes, that's not what the hub is designed to do. It's designed as a Home AUTOMATION platform. The things that you mentioned would be too resource heavy for the hub.

Sorry, what I meant was that the inability of HE to graph data was driving me to LabVIEW. I disagree; graphing outside air temperature vs. a room's temperature over time is extremely important.
When you say forever, are you saying the latency is 1 sec, 5 sec, or more?

I'm confused.....you say,

Do you mean for everything or just for sensor data? If sensor data, then why would you have to send anything back to the hub?
If you're looking at collecting the sensor data out of the hub, take a look at this:

Might give you some ideas on how to get the sensor data into LabVIEW.

We disagree...but I think we can agree that asking the hub to do that would be putting too high a burden on the hub, correct?

Look I’m not asking for an HE to do anything the developers don’t want it to do.
I just want to pull data off and update a device output from an outside computer.

The link Ryan gave you is a good reference. If you are just pulling data, you should use either the websocket connection method or MakerAPI POST events.

Many of us pull the websocket connection into node-red (but it could be into whatever you want) and then parse the messages and shove the data into influx or any other database.

Now that MakerAPI supports sending events by POST, you could set that up and parse POST events similar to the websocket connection events too.

The websocket connection isn't officially supported (although it is widely used by many). The MakerAPI POST events are fully supported/official.

So you have a few options.
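To make the websocket option above concrete, the work is mostly just parsing each JSON message off the (unofficial) `/eventsocket` feed. A Python sketch; the field names (`deviceId`, `name`, `value`, `displayName`) match what that feed is commonly observed to emit, but there is no official contract:

```python
import json

def parse_event(raw: str) -> dict:
    """Turn one raw eventsocket JSON message into a small record
    ready to shove into InfluxDB or any other database."""
    msg = json.loads(raw)
    return {
        "device": msg.get("displayName"),
        "device_id": msg.get("deviceId"),
        "attribute": msg.get("name"),   # e.g. "temperature"
        "value": msg.get("value"),
    }

# To actually stream events you would connect with a websocket client,
# e.g. the third-party `websocket-client` package (hypothetical usage):
#
#   import websocket
#   ws = websocket.WebSocketApp(
#       "ws://HUB_IP/eventsocket",
#       on_message=lambda ws, raw: print(parse_event(raw)))
#   ws.run_forever()
```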

You're ignoring my whole other post.....

Ryan, sorry.
Really want to read and write to my hub. Graphing is the catalyst to get started but I also will do a GUI, programming (rules), notifications, device watchdog, etc.
I'm not dissing HE, it's a good platform and does HA well. It's extremely good at communicating with my Z-Wave devices. Its I/O costs are low and it has a Zigbee radio.
A while ago my hub crashed, and it was recommended that I stick to official HE apps and drivers. OK, offload the funky stuff.
Do I need to use LabVIEW? No, but it will do anything I want it to. I'd be stupid not to let HE do what it does well. The main reason I switched from Vera was the latency of that platform. I have some virtual 3-way switches and will keep those on my controller.

Matt,

I think you should be able to use the Hubitat MakerAPI app to do exactly what you're trying to accomplish with LabVIEW. This really is no different than those who use NodeRed, HomeBridge, or HubConnect to augment the functionality and capabilities of their Hubitat Elevation hubs. As long as LV supports fairly standard HTTP calls to send and receive data, especially receiving unsolicited data from the hub when a device's value changes, you should have very decent performance.
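Those "standard HTTP calls" follow a fixed URL shape that any external tool (LabVIEW, Python, curl) can build. A sketch of that shape in Python; `HUB_IP`, `APP_ID`, and `TOKEN` are placeholders for the values shown on the Maker API app's page on your hub:

```python
from urllib.parse import quote

def maker_url(hub_ip: str, app_id: int, token: str, *path_parts) -> str:
    """Build a Maker API URL of the form
    http://<hub>/apps/api/<appId>/<path...>?access_token=<token>"""
    path = "/".join(quote(str(p)) for p in path_parts)
    return f"http://{hub_ip}/apps/api/{app_id}/{path}?access_token={token}"

# Read a device's current attributes (GET):
status_url = maker_url("192.168.1.10", 42, "TOKEN", "devices", "7")

# Send a command, e.g. set a dimmer to 50 (command and argument in the path):
cmd_url = maker_url("192.168.1.10", 42, "TOKEN", "devices", "7", "setLevel", "50")

# e.g. urllib.request.urlopen(status_url).read() returns the JSON payload.
```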

Keep us updated on your progress. Always interesting to hear about these types of integrations. At work, some engineers use LabVIEW extensively for prototyping projects. We usually industrialize the solution on our standard controls/SCADA platforms before going into production. I can see the allure of using LabVIEW for Home Automation, especially if it's something you're familiar with.

Dan


Yes, as I said, you can do all of that. However, if you are going to try to respond to sensor events within LabVIEW, waiting for LabVIEW to query Hubitat for that change would mean a significant delay between the sensor event and the action by LabVIEW. You can use other methods, such as the eventstream, to have LabVIEW receive those values. My point is that the MakerAPI is only part of the solution you are looking for.

But no matter what your method of implementation, you are going to see a lag between the sensor event and the response from the device simply because the data has to travel to another system and the action has to return to the hub. That is going to take time, even if very small.

I doubt the delay would be significant, as the MakerAPI will issue the POST as soon as the device status changes. LabVIEW running on modern PC hardware will process this data nearly instantly and can send a response back to the hub. Total round trip would probably be under 50ms.
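Rather than debating the number, the round trip can be measured: timestamp before sending the command, poll the device state until it changes, timestamp after. A Python sketch; `send_command` and `read_state` are hypothetical stand-ins for your actual Maker API HTTP calls:

```python
import time

def measure_round_trip(send_command, read_state, target, timeout=5.0):
    """Return seconds from issuing a command until read_state() == target,
    or None if the state never changes within the timeout."""
    start = time.perf_counter()
    send_command()
    while time.perf_counter() - start < timeout:
        if read_state() == target:
            return time.perf_counter() - start
    return None  # timed out
```

In practice `send_command` would hit the Maker API command URL for a virtual switch and `read_state` would GET that device's status, so the figure includes both network hops and the hub's own processing.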

There are many HubConnect users that use one hub for Zigbee and Z-Wave devices only, and then run their rules and other automations on a second hub. They all report near-instantaneous performance. So, I don't know why using LabVIEW on a blazing fast desktop computer would be any slower.


Really? I find that most of them report significant hub slowdowns actually. I would hardly call that near instantaneous.

I don't understand how you can add 2 network transmissions and 3 additional processes to Hubitat and say it will not take longer. That simply is not logical. You can say the increase is not significant, in your opinion, but you can't say it won't take longer.

Yes sending the data to another system introduces lag/latency/delays. That is a true and fair point.

Whether it is significant or not is for him/her to determine.

Funny story... for quite some time now I've had some test RM rules duplicated in node-red. When my Hubitat hub is loaded and running slowly, my off-node node-red logic actually finishes before my RM rule. LOL.


And if that were the last step in the process, it would be faster. But in order for the rule to take some action it has to be sent back to the hub where it will be affected by the same slowdown that is making the rules run slowly in Hubitat.

Also, if you are talking about extremely complex rules/apps, then yes, that might be faster on another system that can process that complex automation more quickly. But I would argue that is true of any system and that your automation is causing the slow-down. I was referring to simple automations...motion lights, timing lights, etc.

Many HubConnect users split the workload of their home automation across multiple HE hubs to IMPROVE performance.

As @JasonJoel mentioned above, I have seen posts by users saying they exported automation logic to another platform and it was actually quicker than Rule Machine or even Motion Lighting. The fact that all of the Hubitat Apps are Groovy, running in a JVM on a somewhat small hardware platform (versus compiled binary code on beefy desktop PC hardware) may explain why things are not instantaneous.

Let’s let @matt1 give it a try and report back his findings.


No, I was talking round trip to see the final virtual switch move. When the hub is bogged down maker API read/write seems to still get through pretty quickly. Aka it seems to be less affected by the slowdown than the RM rule execution and completion.

Now that is to a virtual driver. If it was going to a radio device maybe it would be a wash/still slow. Don't know - I don't have measured data on that.


Well, the overhead on RM is very high. But that is to be expected when you have an app that is trying to do everything.

I would suspect it would get in line behind all the other radio messages to be processed. Since the radio is a bottleneck on the system, I suspect you'd see that affect this as well.

My point is that when you take processing the actual automation out of the mix, staying within the hub is always going to be faster. So, for things where speed is important, it's better to make them simple and keep them in the hub. Then the complex, "do this only after checking these 20 other conditions" type of automations, yeah, those would definitely be done somewhere else.

Which makes me just re-iterate an idea I had a long time ago...SmartThings+Hubitat. Call it SmartThings Local Edition. All the speed and local control when you need it (Motion Lighting, HSM/STHM) and the power of the cloud when speed isn't a factor (processing logs, controlling schedules, updating your weather). Add a premium cloud component to Hubitat that would give you access to cloud resources similar to those you get on ST and I think you'd have a game-changer. Now, if I was just a little less lazy... :wink:

Yes, I agree with that. So yes, up to the point where the hub decides to slow down, it would be faster to do it in the hub.

That still isn't to say that it isn't fast ENOUGH to do it externally. Only the end user can decide that. Or that it might be EASIER to just do all logic in one place, externally for this discussion, as opposed to doing things in 2 different places (even if that makes the simple automations somewhat slower).


Spot on.. There are a lot of factors to consider... It's reasonable to conclude that connecting two hubs will result in marginal, almost imperceptibly slower performance due to the network overhead and the need to raise an event on the remote hub. But that's a very, very low overhead, as in milliseconds. And when using websockets, there's no extra load on the sending hub since it's already writing those events to the websocket.

Having multiple hubs allows for a degree of parallelism that cannot be achieved with a single hub. It enables the flexibility to decide where a time-critical automation like motion lighting can be localized, while other more complex or compute-intensive automations can be farmed out to a remote hub.

It also builds a more resilient home automation environment.. In my case I have three worker hubs, each have Zigbee & ZWave stacks.. Hub 1 has 247 devices, Hub 2 has 198 devices, and Hub 3 has 96. A single hub could never run all of that... A fact that led me to create HubConnect in the first place.

This topology allows my home to weather the occasional hub slowdown or even crash, affecting only part of the system. It's certainly not for everyone, but there comes a point where a system scales so large that a single hub just won't work.
