[RELEASE] Roborock Robot Vacuum

Once the room command is issued, the robot runs on its own. Does it do the same thing from the mobile app?

And there is no code in this driver to build or modify maps. I guess technically you could use the ‘execute’ command to do something with maps, but it doesn’t sound like that is the case here.

Suggest you clear the new map, get the robot working from the mobile app, and then refresh the device driver.

I already cleared the new map and now everything is working fine, but I don't understand why it started creating a new map in the first place.

Glad you got it working. Nothing is certain, but THIS driver does not have any capability to modify or initiate maps, so I'm not sure which direction to point you in.

Released the 1.1.5 update to hopefully address an odd de-authorization from the MQTT broker that requires a fresh login. It would show in the logs as:

There is now a preference that should attempt to log in again after # time when this happens; it is configured here:

The rate for new logins is limited (not sure exactly what the limit is), so the minimum time allowed is 15 minutes to prevent problems.
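For anyone curious, the retry works roughly along these lines. This is only a sketch: the preference name, handler names, and re-login body below are illustrative, not the exact driver internals.

// Rough sketch of the re-login retry; names are illustrative only.
void handleMqttDeauthorization() {
    // settings.reLoginMinutes is an assumed preference name; clamp to the 15-minute minimum.
    Integer minutes = Math.max(((settings.reLoginMinutes ?: 15) as Integer), 15)
    log.warn "MQTT de-authorization detected; retrying login in ${minutes} minutes"
    runIn(minutes * 60, "reLogin")   // runIn takes seconds
}

void reLogin() {
    // Placeholder for the driver's actual fresh-login and MQTT reconnect sequence.
    log.info "Attempting fresh Roborock login"
}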


Anyone else having connection issues today?

I had this yesterday and it took a Hubitat reboot to solve.

Running 2.3.9.177. Same?

Thanks. 2.3.9.180 was released today; updating to it now, which will also reboot the hub, so hopefully that resolves it.

After updating and rebooting, the connection is successful. Thanks!

Same here since 10:30 this morning; I have not rebooted yet.

What Hubitat version are you running, @nclark?

@gopher.ny any changes to the IP stack that could affect MQTT with binary data? There have been three cases in the last couple of days that took a reboot to fix the MQTT connection, and this code hasn't changed other than the upgrade to 2.3.9.177. I know there has been discussion of mDNS changes, but I didn't know if something else had changed. The connection is made with:

interfaces.mqtt.connect(rriot.r.m, "${device.deviceNetworkId}", mqttUser, mqttPassword, byteInterface:true)
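For reference, broker drops from that binary session surface through the driver's mqttClientStatus callback. A minimal sketch of how that could be handled is below; the "reconnectMqtt" handler name is just for illustration, not necessarily what this driver does.

// Sketch only: Hubitat delivers broker status and errors to this callback; exact message strings vary.
void mqttClientStatus(String message) {
    if (message.toLowerCase().contains("error")) {
        log.warn "MQTT status: ${message}"
        try { interfaces.mqtt.disconnect() } catch (ignored) { }
        runIn(60, "reconnectMqtt")   // "reconnectMqtt" is a hypothetical handler that re-runs the connect above
    } else {
        log.info "MQTT status: ${message}"
    }
}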


I was on 2.3.9.178 Beta, upgraded last night to the latest .180

Please update to .180; there were some related DNS fallback changes between 177 and 180.


Have two hubs failing again on 2.3.9.180. Had to reboot them to get the driver working. Both were throwing a 408 Request Timeout error from an asyncHttpGet call. My network is stable and this driver hasn't been changed in some time. Something has changed (I think since .177, but it might be a version or two sooner).

What device is it?

Roborock S8 on two different C7s running .180.

I got this error this morning as well when my Roborock was supposed to clean a few rooms.

Same here. When everyone leaves the house, the vacuum starts automatically if it has not already vacuumed that day. I had to come back because I forgot something and noticed the vacuum was not running; I checked the logs and saw this error. A reboot of the hub and all is OK...

That is four different hub failures now on 180. This driver uses the MQTT interface, which might still be the problem and exhausting the TCP stack. LMK if I can give you some type of debug, or hold a hub in its broken state (I have a dev one) for inspection next time it happens.


@gopher.ny failure again (408 Request Timeout from the asyncHttpGet method) after less than 24 hours on two different C7 hubs running 180. Again, this driver hasn't changed in some time; something happened between 166 and 180 that broke it. Again, this is mostly an MQTT binary solution that has some asynchttp methods for other metadata. The only thing that resolves it is a reboot of the hub.

The MQTT socket fails first and will not reconnect (by design) until the HTTP stack returns good responses. Then the HTTP stack starts returning 408s until the hub is rebooted.
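Roughly, that gate looks like the sketch below. The "httpHealthy" flag and "reconnectMqtt" name are hypothetical, used only to illustrate the design; the connect line is the same one quoted earlier.

// Sketch of the reconnect gate described above: don't touch MQTT until HTTP looks healthy.
void reconnectMqtt() {
    if (!state.httpHealthy) {
        log.warn "HTTP stack still unhealthy (408s); deferring MQTT reconnect"
        runIn(300, "reconnectMqtt")   // try again in 5 minutes
        return
    }
    // Safe to re-establish the binary MQTT session at this point.
    interfaces.mqtt.connect(rriot.r.m, "${device.deviceNetworkId}", mqttUser, mqttPassword, byteInterface:true)
}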

This is the param map sent to asyncHttpGet:
[uri:https://api-us.roborock.com, path:/v2/user/homes/191152, headers:[Authorization:Hawk id="4PP....Pmj", s="4...T", ts="1726313338", nonce="9254a23e", mac="U/Qz6....ik="]]
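For illustration, this is roughly how a map like that gets handed to Hubitat's asynchttpGet and how the 408 shows up in the callback. The callback name, the hawkAuthHeader() helper, and the httpHealthy flag are hypothetical names, not the driver's actual ones.

// Illustration only: send the request above and record whether the HTTP stack is healthy.
Map params = [
    uri : "https://api-us.roborock.com",
    path: "/v2/user/homes/191152",
    headers: [Authorization: hawkAuthHeader()]   // hypothetical helper building the Hawk header shown above
]
asynchttpGet("homeDataCallback", params)

void homeDataCallback(response, data) {
    if (response.status == 408) {
        log.warn "408 Request Timeout from api-us.roborock.com; leaving MQTT disconnected"
        state.httpHealthy = false
    } else if (response.status == 200) {
        state.httpHealthy = true   // clears the gate so the MQTT reconnect above can proceed
    }
}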

It 'appears' that DNS to that https site is in a failure mode for this driver? (just guessing).

A little more information... that URI is hitting an AWS ELB load balancer, so I'm not sure if the hub is holding on to what it thinks is a good IPv4 address when it should be updating it. Here is an nslookup:

nslookup api-us.roborock.com
Server: 10.0.0.1
Address: 10.0.0.1#53

Non-authoritative answer:
api-us.roborock.com canonical name = api-slb-1106974124.us-east-1.elb.amazonaws.com.

Name: api-slb-1106974124.us-east-1.elb.amazonaws.com
Address: 50.17.160.102

Name: api-slb-1106974124.us-east-1.elb.amazonaws.com
Address: 34.204.253.25

Name: api-slb-1106974124.us-east-1.elb.amazonaws.com
Address: 54.86.58.42

Name: api-slb-1106974124.us-east-1.elb.amazonaws.com
Address: 52.21.43.208

Name: api-slb-1106974124.us-east-1.elb.amazonaws.com
Address: 34.224.123.113

Name: api-slb-1106974124.us-east-1.elb.amazonaws.com
Address: 107.20.240.14
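If it would help, the hub itself could log what it currently resolves for that host. A small sketch, assuming the Groovy sandbox allows java.net.InetAddress (if it doesn't, the lookup has to be done externally as above):

// Hypothetical diagnostic: log the addresses the hub resolves for the API host right now.
void checkApiDns() {
    try {
        java.net.InetAddress.getAllByName("api-us.roborock.com").each {
            log.info "api-us.roborock.com -> ${it.hostAddress}"
        }
    } catch (Exception e) {
        log.error "DNS lookup failed: ${e.message}"
    }
}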

** 24-hour update: No failures since yesterday, and I now have a Wireshark filter ready to capture if/when the failure happens again on one of the hubs.
