Sonoff Leak Detector issue

(Posting as its own topic, since the other thread was very old)
Having problems with a Sonoff SNZB leak detector. It got into a "wet" state (legitimately) and is not reverting to "dry" at all. It still seems to be communicating with the hub fine, because the periodic battery checks are coming through. Here are the last few log entries:

dev:544 2025-10-13 12:13:17.959 AM info Moisture-MBR Right Sink battery is 100%
dev:544 2025-10-12 12:08:28.818 PM info Moisture-MBR Right Sink battery is 100%
dev:544 2025-10-11 06:40:12.386 PM info Moisture-MBR Right Sink is wet
dev:544 2025-10-11 02:10:36.074 PM info Moisture-MBR Right Sink is dry
dev:544 2025-10-11 02:09:36.802 PM info Moisture-MBR Right Sink is wet
dev:544 2025-10-11 02:09:36.680 PM info Moisture-MBR Right Sink is dry

Here are the things I have tried:

  1. The unit has the "wick" extension. To try to force a "dry" condition, I removed the extension, leaving the base unit bare. It is dry as a bone, yet nothing is reported back.
  2. I brought the puck close to the hub to see if it was a communication problem. Even sitting right next to the hub, nothing is reported.
  3. I removed the back and pressed the button a couple of times. Each press makes the LED flash as if it is sending a message, but no message appears in the logs.

What else can I try, short of deleting and re-adding the sensor?


C7 hub 2.4.3.133

Enable Debug logging, then pair the device again close to the hub (without removing it from the hub first).
Anything in the Live Logs?
If you remove the battery for 30 seconds and insert it again, what shows up in the logs? (The Debug preference must be switched on.)

OK, with Debug logging enabled, I pushed the button (I believe that's the pairing button, but I'm not sure). These entries appeared in the Live Logs:

dev:544 2025-10-14 02:40:58.928 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0600000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[06, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:40:58.924 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0600000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[06, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:40:58.918 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0600000401000157FC
dev:544 2025-10-14 02:40:57.440 PM debug descMap: [raw:catchall: 0104 0500 01 01 0040 00 A1E7 00 00 0000 04 01 00, profileId:0104, clusterId:0500, clusterInt:1280, sourceEndpoint:01, destinationEndpoint:01, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:04, direction:01, data:[00]]
dev:544 2025-10-14 02:40:57.436 PM debug parse: catchall: 0104 0500 01 01 0040 00 A1E7 00 00 0000 04 01 00
dev:544 2025-10-14 02:40:57.150 PM debug parse: enroll request endpoint 0x01 : data 0x002A
dev:544 2025-10-14 02:40:54.744 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0500000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[05, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:40:54.740 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0500000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[05, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:40:54.736 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0500000401000157FC
dev:544 2025-10-14 02:40:50.060 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0300000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[03, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:40:50.055 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0300000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[03, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:40:50.051 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0300000401000157FC
dev:544 2025-10-14 02:40:47.177 PM debug skipped: [raw:catchall: 0000 0013 00 00 0040 00 A1E7 00 00 0000 00 00 81E7A1064175FEFFA7DB2880, profileId:0000, clusterId:0013, clusterInt:19, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[81, E7, A1, 06, 41, 75, FE, FF, A7, DB, 28, 80]]
dev:544 2025-10-14 02:40:47.173 PM debug descMap: [raw:catchall: 0000 0013 00 00 0040 00 A1E7 00 00 0000 00 00 81E7A1064175FEFFA7DB2880, profileId:0000, clusterId:0013, clusterInt:19, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[81, E7, A1, 06, 41, 75, FE, FF, A7, DB, 28, 80]]
dev:544 2025-10-14 02:40:47.168 PM debug parse: catchall: 0000 0013 00 00 0040 00 A1E7 00 00 0000 00 00 81E7A1064175FEFFA7DB2880
dev:544 2025-10-14 02:40:43.291 PM debug skipped: [raw:catchall: 0104 0003 01 01 0040 00 A1E7 01 00 0000 00 00 0100, profileId:0104, clusterId:0003, clusterInt:3, sourceEndpoint:01, destinationEndpoint:01, options:0040, messageType:00, dni:A1E7, isClusterSpecific:true, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[01, 00]]
dev:544 2025-10-14 02:40:43.287 PM debug descMap: [raw:catchall: 0104 0003 01 01 0040 00 A1E7 01 00 0000 00 00 0100, profileId:0104, clusterId:0003, clusterInt:3, sourceEndpoint:01, destinationEndpoint:01, options:0040, messageType:00, dni:A1E7, isClusterSpecific:true, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[01, 00]]
dev:544 2025-10-14 02:40:43.274 PM debug parse: catchall: 0104 0003 01 01 0040 00 A1E7 01 00 0000 00 00 0100

Does this provide useful information?


Removing and replacing the battery after 30 seconds produces something similar:

dev:544 2025-10-14 02:49:59.693 PM debug descMap: [raw:catchall: 0104 0500 01 01 0040 00 A1E7 00 00 0000 04 01 00, profileId:0104, clusterId:0500, clusterInt:1280, sourceEndpoint:01, destinationEndpoint:01, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:04, direction:01, data:[00]]
dev:544 2025-10-14 02:49:59.691 PM debug parse: catchall: 0104 0500 01 01 0040 00 A1E7 00 00 0000 04 01 00
dev:544 2025-10-14 02:49:59.406 PM debug parse: enroll request endpoint 0x01 : data 0x002A
dev:544 2025-10-14 02:49:57.231 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0700000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[07, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:49:57.228 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0700000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[07, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:49:57.225 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0700000401000157FC
dev:544 2025-10-14 02:49:55.904 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0600000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[06, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:49:55.901 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0600000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[06, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:49:55.899 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0600000401000157FC
dev:544 2025-10-14 02:49:52.315 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0400000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[04, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:49:52.312 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0400000401000157FC, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[04, 00, 00, 04, 01, 00, 01, 57, FC]]
dev:544 2025-10-14 02:49:52.309 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 0400000401000157FC
dev:544 2025-10-14 02:49:50.951 PM debug skipped: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 03FDFF040101190000, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[03, FD, FF, 04, 01, 01, 19, 00, 00]]
dev:544 2025-10-14 02:49:50.948 PM debug descMap: [raw:catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 03FDFF040101190000, profileId:0000, clusterId:0006, clusterInt:6, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[03, FD, FF, 04, 01, 01, 19, 00, 00]]
dev:544 2025-10-14 02:49:50.946 PM debug parse: catchall: 0000 0006 00 00 0040 00 A1E7 00 00 0000 00 00 03FDFF040101190000
dev:544 2025-10-14 02:49:49.436 PM debug skipped: [raw:catchall: 0000 0013 00 00 0040 00 A1E7 00 00 0000 00 00 81E7A1064175FEFFA7DB2880, profileId:0000, clusterId:0013, clusterInt:19, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[81, E7, A1, 06, 41, 75, FE, FF, A7, DB, 28, 80]]
dev:544 2025-10-14 02:49:49.432 PM debug descMap: [raw:catchall: 0000 0013 00 00 0040 00 A1E7 00 00 0000 00 00 81E7A1064175FEFFA7DB2880, profileId:0000, clusterId:0013, clusterInt:19, sourceEndpoint:00, destinationEndpoint:00, options:0040, messageType:00, dni:A1E7, isClusterSpecific:false, isManufacturerSpecific:false, manufacturerId:0000, command:00, direction:00, data:[81, E7, A1, 06, 41, 75, FE, FF, A7, DB, 28, 80]]
dev:544 2025-10-14 02:49:49.430 PM debug parse: catchall: 0000 0013 00 00 0040 00 A1E7 00 00 0000 00 00 81E7A1064175FEFFA7DB2880

Again, I'm not able to make sense of this. The moisture sensor is still showing "wet" even though it is dry.

Did you try resetting it? With battery-powered Zigbee devices, start Zigbee pairing on the hub and then just reset the device (probably by holding the button for 5 seconds or something similar). The hub usually finds it and reconnects it to the same device entry.

I reset it. It re-paired as the same device, and still showed the condition as "wet".

As an act of desperation, I made it good and wet first, then dried it off with a towel. It reported "dry" and is now showing as "dry".

So it seems that the way this device works, it sends ONE message when going from wet to dry (or vice versa), and if the hub misses it, the device is stuck in that state forever until it transitions the other way? That seems incredibly lame. Could that be right?

Yes. Binary sensors only change between two states; if the hub does not receive the update, it does not generate an event, and the sensor does not keep re-sending its current status. The same is true for contact sensors, motion sensors, switches, etc. If you use the event as a trigger, the hub won't send the event, because the driver never saw a change of status from the device. If the sensor has a refresh command, that will sometimes force it to send its state again; the driver then updates and sends the event if the received status is a change of state.
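For illustration, here is a minimal sketch of what such a refresh command typically looks like in a Hubitat Zigbee driver. This is not the stock driver's actual code; it just reads the standard IAS Zone "ZoneStatus" attribute, which is how a wet/dry state can be re-queried:

```groovy
// Minimal sketch of a refresh() command for a Zigbee water leak sensor
// (illustrative only, not the stock Hubitat driver). Cluster 0x0500 is the
// standard IAS Zone cluster; attribute 0x0002 is ZoneStatus (bit 0 = "wet").
def refresh() {
    log.debug "requesting current ZoneStatus..."
    // Returns the read-attribute command(s) for the hub to send. A sleepy
    // battery device will only answer if its radio happens to be awake.
    return zigbee.readAttribute(0x0500, 0x0002)
}
```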

The question you should be asking is why you are missing state changes from the sensor. That sounds like a mesh issue: the sensor does not have a solid path back to the hub. The fact that it completely lost connection points to that as well. Maybe add some repeaters?

OK, got it. The strange thing is that it is quite close to the hub (maybe 25’ max). I did try several “refresh” commands, to no avail. Also, it does send a nightly battery-level update; it would have been nearly free to include the current status with that, but apparently not. By the way, it is not correct to say it had completely lost connection: subsequent battery updates arrived just fine. Just that one message got missed.

Not sure where to go from here.

Hi Krishkal,

Thank you for the logs. I see that this device behaves similarly to some other Sonoff devices (it requests a reply from the hub that the HE platform does not send), but this is probably not the problem.

As chrisbvt mentioned, all Zigbee water, contact, and motion sensors operate in the same way: they send a state-change notification only once. If that message does not reach the Zigbee coordinator (the hub) within a window of about 6 to 10 seconds, the state change is lost.

I have a workaround implemented in a custom driver for Zigbee contact sensors: the current state of the contact is refreshed automatically after every battery report. There is a window of 1-2 seconds after a battery report during which the sleepy device will still respond to a refresh (attribute read) command. The same approach should work in a custom water leak driver, but it is not implemented at this time.
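For anyone curious, the idea looks roughly like this in driver code (a minimal sketch under my own assumptions, not kkossev's actual implementation; the cluster/attribute IDs are the standard Power Configuration and IAS Zone ones):

```groovy
// Sketch of the "refresh after battery report" workaround (hypothetical code,
// not the actual custom driver). A BatteryPercentageRemaining report
// (cluster 0x0001, attribute 0x0021) means the sleepy device's radio is
// awake for a second or two, so we immediately ask for the IAS ZoneStatus
// (cluster 0x0500, attribute 0x0002) while it can still hear us.
def parse(String description) {
    Map descMap = zigbee.parseDescriptionAsMap(description)
    if (descMap.clusterInt == 0x0001 && descMap.attrInt == 0x0021) {
        Integer pct = Integer.parseInt(descMap.value, 16).intdiv(2)  // reported in half-percent units
        sendEvent(name: "battery", value: pct, unit: "%")
        // Use the brief awake window to re-query the wet/dry state.
        sendHubCommand(new hubitat.device.HubMultiAction(
            zigbee.readAttribute(0x0500, 0x0002),
            hubitat.device.Protocol.ZIGBEE))
    }
    // ... normal handling of zone status reports, enroll requests, etc.
}
```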


Hey, I have a 3rd Reality window sensor that I have trouble with sometimes; I should try your custom driver!

So, is there any hope that the leak sensor will be covered by the custom driver any time soon?

Any other suggestions for me?

As a workaround, make an RM5 rule that forces all of your leak sensors back to a ‘dry’ state. Being stuck in a ‘wet’ state shouldn’t happen often... it means actual flooding (or testing).
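In RM5 terms, such a rule might look something like the outline below. Note this assumes your leak sensor driver exposes a command that can override the reported state (the setDry() name here is hypothetical); if yours doesn't, mirroring the physical sensor onto a virtual leak sensor and resetting that instead achieves a similar effect.

```
Trigger:  Certain Time - 3:00 AM (daily)
Actions:
  IF (Moisture-MBR Right Sink water is wet) THEN
    Run Custom Action: setDry() on Moisture-MBR Right Sink   (hypothetical command)
    Notify phone: "Leak sensor auto-reset to dry - verify there is no actual leak"
  END-IF
```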

Ah, of course! Thanks.

Well, thinking some more… I am worried about the case where I’m away on vacation and there is a flood. I wouldn’t want to just sweep that under the rug. I guess there are really only two ways to handle this:

  1. Use “repeaters” or whatever to get the communication 100% solid. What would be the best device to use for this?
  2. Piggyback on battery status, as you have suggested, when you get around to it. Pretty please?

The most reliable remedy is to strengthen your Zigbee mesh by adding at least one (preferably more) mains-powered Zigbee router between (or around) your hub and the sensor. That could be a smart plug, an in-wall switch, or a dedicated repeater.

Once you have more repeaters in place, you’ll likely see those state transitions reliably delivered, and the “stuck wet” syndrome should go away.

Refreshing the leak sensor on each battery report won’t really help — those reports usually come every 4–12 hours, so you could get a flood alert way too late. The real fix is improving the Zigbee mesh with more routers.

This makes sense. @chrisbvt said the same thing. I realize now that all my smart plugs are Z-Wave, and the only Zigbee devices I have are battery-powered. That is a bad situation for mesh strength. I do have a Tuya USB repeater, but I don’t think it’s enough. I am just going to buy a 4-pack of Zigbee wall plugs (which I don’t really need for anything else) and distribute them around the house. Make sense?

I have 8 of these.

And 16 Zigbee leak sensors.


https://www.amazon.com/EIGHTREE-Eightree-Smart-Plug/dp/B0DN1LT337
A bit cheaper and seems to do the same thing…

These are energy monitors. I’d check with @kkossev to see whether they are spammy or OK.

Most Tuya plugs do not report power or energy automatically; instead, they require periodic polling. This process is managed in the custom driver, and polling can be disabled in the Preferences if it's not necessary.
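For reference, driver-side polling of this sort typically looks like the sketch below. This is illustrative, not kkossev's actual driver; the pollingEnabled preference name and the one-minute interval are assumptions, while the cluster/attribute IDs are the standard Electrical Measurement and Simple Metering ones:

```groovy
// Sketch of periodic power/energy polling for a plug that does not report
// on its own (hypothetical, not the actual custom driver).
def updated() {
    unschedule('pollPower')
    if (settings.pollingEnabled) {            // assumed preference toggle
        schedule('0 * * ? * *', 'pollPower')  // every minute (interval is illustrative)
    }
}

def pollPower() {
    List cmds = []
    cmds += zigbee.readAttribute(0x0B04, 0x050B)  // ActivePower (Electrical Measurement)
    cmds += zigbee.readAttribute(0x0702, 0x0000)  // CurrentSummationDelivered (energy)
    sendHubCommand(new hubitat.device.HubMultiAction(cmds, hubitat.device.Protocol.ZIGBEE))
}
```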


I am simply going to be using them for their repeater function, so energy monitoring is not a consideration. Or am I missing something?