Z-wave command queue - how to view/reset?

I'm working on improving the https://community.hubitat.com/t/release-zipato-mini-rfid-keypad driver that @syepes kindly made a few days ago. Unfortunately I'm frequently running into states where the driver is completely flooded with incoming Z-Wave response commands. Sometimes it's a legitimate bug where a wakeup response triggers even more requests and responses; sometimes it seems completely random and undeserved. At that point the log shows 20+ responses per second received for that single device, and I can see the hub UI slowing down.

When this happens I can spend minutes trying to flush all the queued Z-Wave commands and responses out of the system. Resetting the battery-powered device by pulling its battery doesn't help. Rebooting the Hubitat hub works, but takes a long time.

  1. Is there a way to see the outstanding z-wave commands queued in the hub? I'm suspecting there are a ton waiting somewhere because of my driver bugs.
  2. Can I reset the pending command queue without rebooting the hub, much like a soft-reset?


I can't help you with the above, but I'm using this driver and it's been absolutely fine for me :+1:


I tried it, but there is so much SmartThings junk code in it. Anyway, let's not dwell on that, since the question is about Z-Wave commands, queues, and debugging.

I don't think we can clear commands that are already queued on the Z-Wave stick. Based on the PC Controller software, I'm guessing the stick/chip has its own queue, because it appears to be visible from that software. I don't think we can see or control that queue from Hubitat.

That being said, some drivers queue up commands in their state and hold them until a sleepy device wakes up. If this driver does that, you can hunt around for where those commands are stored and reset that data structure.
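For what it's worth, the usual pattern looks something like the sketch below. All names here are illustrative, not taken from that driver:

```groovy
// Sketch of the common "queue until wakeup" pattern for sleepy devices.
List<String> queueForWakeup(List<String> cmds) {
    state.pendingCmds = (state.pendingCmds ?: []) + cmds
    return []  // send nothing now; the device is asleep
}

def zwaveEvent(hubitat.zwave.commands.wakeupv2.WakeUpNotification cmd) {
    List<String> cmds = (state.pendingCmds ?: []) as List
    state.pendingCmds = []  // clearing this is effectively your "queue reset"
    cmds << zwave.wakeUpV2.wakeUpNoMoreInformation().format()
    response(delayBetween(cmds, 300))
}
```

If a driver uses this pattern, clearing `state.pendingCmds` in driver code (e.g. from a `configure()` or `initialize()` handler) empties that particular queue without a reboot.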


Ah thanks! So I'm looking at how all the drivers' zwave handlers return a results array in response to incoming commands. I'm assuming those results are put in some queue in the hub? Or are the commands handed to the hub during the response(cmds) call?

In any case, is it possible to examine any previous cmds or results as returned by previous calls?

Groovy implicitly returns the last evaluated expression of a method, so even a collection of commands built at the end of a method, but not explicitly returned with a "return" statement or passed to a "response" method, is still queued. I'm guessing all of the handler-type methods in the driver you are looking at queue commands this way.
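A tiny illustration of the implicit-return behavior (a made-up handler, not from any particular driver):

```groovy
def zwaveEvent(hubitat.zwave.commands.batteryv1.BatteryReport cmd) {
    sendEvent(name: "battery", value: cmd.batteryLevel, unit: "%")
    // No "return" keyword, but this list is the method's return value,
    // so the platform still picks it up and sends the command:
    [zwave.wakeUpV2.wakeUpNoMoreInformation().format()]
}
```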

And no, there is no way to examine them unless you are logging them yourself or saving them to a giant collection of commands in the sky...
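One workaround along those lines: funnel every outgoing batch through a small helper so you at least leave yourself a trail. Something illustrative like this, where `logEnable` is an assumed debug-logging preference:

```groovy
// Illustrative helper: log and remember the last batch of commands queued.
List<String> trace(List<String> cmds) {
    if (logEnable) log.debug "queueing ${cmds.size()} cmds: ${cmds}"
    state.lastCmds = cmds  // most recent batch, visible under State Variables
    return cmds
}
```

Then have every handler return `trace(cmds)` instead of `cmds`.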

I would have thought that if there were too many messages waiting to be processed it would trigger the below? That's assuming you're on v2.1.9.

A change was made to the event processor in the hub. It was found that some hubs can send so many events that the processor is overloaded and the hub will stop processing events. This processor has been reworked and there is now a limit to the number of events that can be sent to the processor at one time (1024), if the event queue goes over the limit an error message will be thrown and the system will log an error “Limit Exceeded Exception, Event Queue is Full”.

I've been thinking of using this method in drivers I write for sleepy devices. It would give you a place to look for diagnostic purposes.

That limit is related to event processing, not to the protocol message queues.

Ah ok, bugger :expressionless:

Yeah, I'm running the latest hub firmware (v2.1.9), but 1024 is huge: I only seem to be receiving about 20-30 messages a second, so blasting through a full queue would take over 30 seconds. Especially if my code is buggy and keeps refilling the queue with new commands because of some error in my handler methods.

I'm also curious how the hub handles/orders command queues with "delay X" entries in them. Let's say one handler returns commands with minor 300 ms delays between get/set commands and a few larger 5 s delays. Is that the right way to spread out commands with longer delays? I'm assuming a single returned list with interleaved delays is treated as a serialized stream, but what if new processing produces new commands a second or two later? Are they appended at the end of the existing queue or interleaved with the previous commands?
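To make the question concrete, here's the kind of return I mean (a sketch with made-up parameter numbers; whether a later batch gets appended or interleaved is exactly the open question):

```groovy
// Sketch: one returned list mixing 300 ms gaps with a longer 5 s pause.
// "delay N" entries act as pauses between the surrounding commands.
def configure() {
    delayBetween([
        zwave.configurationV1.configurationSet(parameterNumber: 1, size: 1,
                                               scaledConfigurationValue: 1).format(),
        zwave.configurationV1.configurationGet(parameterNumber: 1).format(),
    ], 300) +
    ["delay 5000",
     zwave.batteryV1.batteryGet().format()]
}
```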