Bond integration throwing queue full errors

Does anyone know why the built-in Bond integration will start throwing queue full errors? It doesn't seem to affect the operation of my fans. They work as they are supposed to even with this error being logged every minute.

I tried hitting refresh, config, refresh & config, as well as disable & enable & config. None of this stops the errors. Rebooting the hub does stop the queue full errors, but only for what seems like a random amount of time.

The Bond integration has done this for years. I have asked before but no one was able to figure out why. I thought I would ask again.

Checked and can't replicate w/my fans on Bond. :man_shrugging:

Same here - I just have one fan, but there's (literally) nothing in my logs for the Bond integration (which is working fine).

Did you by chance use the community Bond app at some point? If so, I wonder if it's some kind of fragment left behind from that...


Ages ago, but they were set up completely separately from each other. They shared no resources that I could see.

When I set up the built-in integration (again) back in September of 2025 there were no errors, and I hadn't had any until recently. This is the pattern I've experienced: the integration works well for a while, then the errors start. Since everything works, I'll just ignore it and reboot when I notice them. It doesn't seem to put any stress on the hub.

The community integration never threw errors, but I like the way the built-in integration works better.

Just thought I would see if anyone else had seen it. Maybe it's a C5 thing....

My Hubitat-developed Bond integration was throwing the same errors, so I went with the community-developed Bond app. Working like a charm!


I recall (perhaps incorrectly :sweat_smile:) that the native and community integrations each tap into a different way to pull the Bond info in... Perhaps there are some edge cases where the one way works better than the other :person_shrugging:


Hey, there is at least one other person.


I went back through and looked up my most recent post about this in 2023, and there was another one of us, though he fixed his. It was caused by having groups (or using rooms) in the Bond app. The built-in integration didn't seem to like that. Unfortunately, that's not what's causing the errors for me.

I'll switch back to the community version this week. The author is no longer using Hubitat, but if it were to stop working, there is probably someone here, or maybe even AI, that can help bring it back to life.

Thanks everyone for the input.

Just FYI, the community one uses the old v1 API, so you do not get instant status updates; it has to rely on polling. Although usually you are controlling the fans via Hubitat, so you probably would not notice the difference.
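If it helps picture the difference, here is a minimal polling sketch in Python against Bond's local HTTP API. The endpoint path and BOND-Token header follow Bond's published (v2) local API; the v1 paths the community app actually uses may differ, and the IP, token, and device ID below are placeholders:

```python
import time

import requests  # assumption: the third-party `requests` package is installed

BOND_IP = "192.168.1.50"         # placeholder: your Bond bridge's LAN address
BOND_TOKEN = "your-local-token"  # placeholder: the bridge's local API token
DEVICE_ID = "abc123"             # placeholder: one fan's device ID

def get_fan_state():
    """Poll the fan's current state over the local HTTP API."""
    resp = requests.get(
        f"http://{BOND_IP}/v2/devices/{DEVICE_ID}/state",
        headers={"BOND-Token": BOND_TOKEN},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"power": 1, "speed": 3, ...}

# Polling loop: a state change made at the fan itself only shows up on
# the next cycle, which is where the lag mentioned above comes from.
while True:
    print(get_fan_state())
    time.sleep(30)  # a 30-second interval means up to 30 s of staleness
```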


Thanks. I reinstalled the community integration this morning and noticed (remembered) the polling. That, and the author of the community integration leaving the platform, is why I have kept trying the HE version. The author being gone more so than the polling.

Nothing is triggered in HE if the fans are turned on externally, so a possible 30-second lag between turning on the fan and the dashboard showing it isn't an issue. I am going to work on moving my rules back over tonight while sitting in front of the TV. Unless the community integration fails and can't be revived, I'm done with the HE version.

You might be looking in the wrong direction. Unless you have substantial logging coming from Bond before the queue full errors appear, those errors you see on Bond could very well be caused by other integrations that are blowing up your hub.

That may not be a symptom of the Bond integration. It is purely the effect of something blowing up your hub. It could be the Bond integration, or any other integration. The best approach is to monitor your logs after a power cycle, while the queue is no longer full. You need to narrow down what is really causing the queue full before it is full.

@bobbyD

Thanks for the reply. Device 3103 is the Bond integration so I am confused as to how the queue full could be coming from somewhere else. The errors are being generated by the Bond integration.

Now, something else in conjunction with the Bond integration creating some type of edge case is definitely a possibility. When the errors start showing up is completely random, though. I have been running since September, error free, during this attempt. I was on vacation and the Bond wasn't even being sent any commands when they started appearing in the logs.

There is nothing else in the logs other than the queue full errors, or the 404 errors when you send a command (which still works), so troubleshooting is near impossible and really the equivalent of finding a needle in a haystack. There just are not any clues in the logs. I have tried to look for them.

I think this is my third post about this over the years, and the fact that only two other people have seen it definitely makes it an outlier. One of the other two claimed to cure it by removing groups from the Bond app itself. He stated the built-in integration was not able to process them. That is not my issue, unfortunately. I have two fans and zero groups. If there were a way to kill the errors being logged, I would have ignored the issue. It doesn't affect the function at all. It just floods the logs and makes them difficult to use.

Of all the things I make this poor C5 hub do, and it's a lot with 331 devices and 468 apps, having one flakey thing is still a great track record. The bonus of having another direction to make it still work is even better. I am in no way disappointed with the $100 hub I bought in 2020.

Again, thanks for the reply, this is all moot now as I did switch back to the community integration last night and everything is working.

Those are after-the-incident logs, and would be encountered by all integrations alike. The absence of "queue full" logs from others doesn't mean they are not impacted; it just means those integrations don't post the logs.

You need to dig into the logs before the queue is full. Look for excessive logging/usage from all other integrations; I am sure you can find the culprit. You just need to eliminate the bloat.
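If you want to quantify that, one approach is to tail the hub's live log stream and count entries per source for a few minutes. This is only a sketch: it assumes the hub exposes the same websocket feed the Logs page uses at ws://&lt;hub-ip&gt;/logsocket (an undocumented endpoint), that each JSON entry carries a name field identifying its source, and that the third-party `websockets` package is installed:

```python
import asyncio
import json
from collections import Counter

import websockets  # assumption: `pip install websockets`

HUB_IP = "192.168.1.10"  # placeholder: your hub's LAN address

async def tally_log_sources(duration_s: int = 300):
    """Count live log entries per app/device for a few minutes."""
    counts = Counter()
    async with websockets.connect(f"ws://{HUB_IP}/logsocket") as ws:
        loop = asyncio.get_running_loop()
        deadline = loop.time() + duration_s
        while (remaining := deadline - loop.time()) > 0:
            try:
                raw = await asyncio.wait_for(ws.recv(), timeout=remaining)
            except asyncio.TimeoutError:
                break
            entry = json.loads(raw)
            # Each entry names the app/device that produced it (assumption).
            counts[entry.get("name", "unknown")] += 1
    # The chattiest sources before a "queue full" episode are the suspects.
    for name, n in counts.most_common(10):
        print(f"{n:6d}  {name}")

asyncio.run(tally_log_sources())
```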

I was thinking the same thing, but I figured you had already made up your mind about switching.

I was considering pointing out that a simple reboot makes it go away for a while, which suggests that it is the hub having an issue and not specifically that integration.

Once the hub gets into a bad place lots of different apps may toss errors but may not be the root cause of the issue.

A power cycle is needed to clear the queue. Once it gets full, nothing using a websocket works. The best course of action is to identify what caused the hub to fill up. I've seen this happen with integrations that close the websocket, after which devices panic because the connection is not available.
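For anyone following along, the failure mode being described is the standard bounded producer/consumer pattern. A toy sketch (not Hubitat's actual internals) shows why the errors flood the log while the producer itself keeps running:

```python
import queue

# A tiny bounded buffer standing in for the hub's event queue.
events = queue.Queue(maxsize=3)

def publish(event):
    """Producer side: once the consumer stalls, puts start failing."""
    try:
        events.put_nowait(event)
        print(f"queued: {event}")
    except queue.Full:
        # The shape of a "queue full" error: this event is rejected and
        # logged, but the producer keeps running; only consumers fed by
        # this queue actually stop seeing updates.
        print(f"queue full, dropped: {event}")

# Nothing is draining the queue, so the fourth event onward is rejected.
for i in range(5):
    publish(f"state-update-{i}")
```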

I don't understand this statement because everything still seems to work, including the Bond integration while the logs are being flooded.

I know I have other integrations that use websockets so I'll plug along here and wait to see if these integrations stop working in the next 6 months.

Which Bond? If you switched to the community one, it is not using a websocket, so it would not be impacted.

Your Bond integration. At the time this started, again, I only had the integrated version installed. That's why the statement that queue full errors will cause everything to stop doesn't jibe with what I saw.