I've been running a script that exports data from the Maker API, and after I updated my C-7 hub on 9/28 the Maker API has started to get slower with every request. Eventually my hub will lock up and I'll get warnings about CPU usage. The only fix I've found is to call the API less and reboot the hub.
Here is a graph of the last 10 months of Maker API response times, in seconds.
What exactly are you doing with this call into Maker API? Clearly that is the busiest process on the hub based on your app stats.
I would be really surprised if memory turns out to be the problem. Your free memory looks pretty high, and though it is going down a little, it isn't too bad.
One thing I would ask is: why 10 seconds? It looks like the default for that project is 30 seconds. I have had bad experiences trying to push too many requests through Maker API. Generally it has presented similarly to this, where it gradually can't keep up and eventually the hub takes a dump. It was simply about the number of calls being pushed through.
One big difference in the scenario I am talking about is I was pushing updates through Maker API every 12 seconds. I would think it would be easier to do what you are asking though.
I don't have a good reason for 10s; that's just what I use for all my Prometheus jobs. I do like the resolution I get from that sample rate, though.
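For context, that interval is just the per-job `scrape_interval` in `prometheus.yml`. Something like this, though the job name and target here are made up for illustration:

```yaml
scrape_configs:
  - job_name: hubitat2prom
    scrape_interval: 10s   # my habit; the project default would be 30s
    static_configs:
      - targets: ["exporter-host:8000"]   # hypothetical exporter address
```

Bumping that one value back to 30s would cut the Maker API call volume by two thirds, which is the easy experiment here.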
From some reading, that seems like a common issue with sending too many requests. I have plans to cache the all-devices endpoint to help reduce the calls, but haven't had the time to update the code.
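The cache I have in mind is just a time-based one in the exporter, so repeated scrapes inside a short window reuse the last response instead of hitting the hub again. A minimal sketch of the idea, where `fetch_all_devices` is a hypothetical stand-in for whatever function actually makes the Maker API HTTP call:

```python
import time

# Module-level cache for the all-devices response.
# fetch_all_devices is a hypothetical callable wrapping the Maker API call.
_cache = {"data": None, "fetched_at": 0.0}
CACHE_TTL = 30  # seconds; trades freshness for fewer hub calls

def get_all_devices(fetch_all_devices):
    """Return the cached device JSON, refetching only after CACHE_TTL expires."""
    now = time.time()
    if _cache["data"] is None or now - _cache["fetched_at"] > CACHE_TTL:
        _cache["data"] = fetch_all_devices()
        _cache["fetched_at"] = now
    return _cache["data"]
```

With a 10s Prometheus scrape and a 30s TTL, roughly two out of every three scrapes would be served from the cache without touching the hub.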
Yes, I could see how pushing updates could cause issues, but this is only reading data, which has normally been super fast.
Here are some recent graphs; keep in mind I have reduced the device count to a few temperature sensors to help keep the hub alive while this issue is being sorted out.
When Prometheus scrapes the exporter script (which converts the JSON from Maker API into a format Prometheus reads), it creates a meta metric called scrape_duration_seconds.
scrape_duration_seconds{job="hubitat2prom"} then shows how long Prometheus took to complete the call.
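For anyone following along, the JSON-to-Prometheus conversion itself is nothing fancy. A rough sketch of the shape of it, assuming the attribute layout the full-detail Maker API responses use; the `hubitat_` metric prefix and field names here are illustrative, not my actual code:

```python
def devices_to_prom(devices):
    """Convert Maker API device JSON into Prometheus text exposition lines.

    `devices` is assumed to be the list of device objects with an
    "attributes" list of {"name": ..., "currentValue": ...} entries.
    Only numeric attributes are exported; metric naming is illustrative.
    """
    lines = []
    for dev in devices:
        labels = f'device="{dev["label"]}"'
        for attr in dev.get("attributes", []):
            try:
                value = float(attr["currentValue"])
            except (TypeError, ValueError):
                continue  # skip non-numeric attributes like "on"/"off"
            lines.append(f'hubitat_{attr["name"]}{{{labels}}} {value}')
    return "\n".join(lines) + "\n"
```

Prometheus then scrapes that text output over HTTP, and the scrape_duration_seconds meta metric wraps the whole round trip, Maker API call included, which is why it works as a latency probe here.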
I plan on adding some other metrics too, based on each individual call the script makes; once I do that I will probably release the code on GitHub.
That's pretty interesting. I'm not seeing the same thing - I do 100% of my control and monitoring via Maker API (to Home Assistant and Node-Red - two separate Maker API instances).
I forget, have you tried a soft reset to make sure you don't have database issues on the hubitat side?
Here is what I'm seeing (Maker API get devices call in Node-Red every 5m, then I store the response time in HA for trending):
The big spike is what it reported when I rebooted the hub. The flatline is where I turned off the collection while I was troubleshooting something else hub-related and wanted to minimize variables.
EDIT: I was doing http://ip.addr/apps/api/8/devices?access_token=<token>
I'll change it to the following and see if it makes a difference. Side note: I never do devices/all, as in the past it did bad things versus using "*" (no idea if that is still true; I haven't tested it in years): http://ip.addr/apps/api/8/devices/*?access_token=<token>
For reference, my devices/* call which gets every attribute on every device added to Maker API (90+ devices on mine) only takes about 450-500ms.
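If you want to compare the two endpoints yourself, a quick timing sketch; the hub IP, app id, and token below are the same placeholders as above, so substitute your own:

```python
import time
import urllib.request

def time_call(fn):
    """Return (elapsed_seconds, result) for a single call of `fn`."""
    start = time.monotonic()
    result = fn()
    return time.monotonic() - start, result

def get(url):
    """GET `url` and return the raw response body."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

if __name__ == "__main__":
    # Placeholders: substitute your hub IP, Maker API app id, and token.
    base = "http://ip.addr/apps/api/8/devices"
    token = "<token>"
    for path in ("", "/*"):
        elapsed, body = time_call(lambda: get(f"{base}{path}?access_token={token}"))
        print(f"devices{path}: {elapsed:.3f}s, {len(body)} bytes")
```

Running that periodically from the client side would also help separate hub slowdown from network or client-side effects, since it times the whole round trip the same way the exporter does.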
It may be more efficient for your code to do that instead of individual calls... Dunno - you know your code better than I would.
Or maybe my hardware/network is more performant and the speed difference is on the client side, not the hub? Just thinking out loud. My node-red client is in a VM running on an AMD 5600X CPU.
You may be right about the /* call. I think I had to use the per-device call because not all the attributes I wanted were listed in the all-devices response, but they were on the per-device call. I remember being annoyed about that, haha.
Hardware/network could be an issue somewhere; I can't fully rule it out. Though I would think a hub reboot shouldn't impact latency in that case.
OKAY, I finally got some time to test the /* endpoint, and it works a good bit better. It takes about the same amount of time as a single-device call. Going to wait and see if the response time continues to grow over time with this new endpoint.