InfluxDB Logger Support

Yeah, and that's where I'm confused as to what is going on.

:crossed_fingers:

Completely new. I'll worry about migration of v1.8 data after I get all the kinks worked out.

Can you try running this older code and see if the problem still happens?

No difference as far as I can see.

I just checked my database and the data is there, and I am getting new data, so my instance isn't dropping it.


Thanks for checking.

EDIT: Now the data is in the database! I wasn't even here to change anything so :man_shrugging:

Sorry for the false alarm.

I'm guessing the "no content" means that the 204 response has nothing to add. Earlier I thought it was Influx saying that the write request had no content. :man_shrugging:
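For what it's worth, a quick manual write against the v2 API seems to confirm it (the URL, org, bucket, and token below are just placeholders for my setup): a successful write comes back as 204 with an empty body, while a rejected one comes back as a 4xx with a JSON error message.

import requests

# Manual write against the InfluxDB v2 write API.
# The URL, org, bucket, and token are placeholders for a local test setup.
INFLUX_URL = "http://localhost:8086"
ORG = "home"
BUCKET = "hubitat"
TOKEN = "my-token"

# A single line-protocol record (note the escaped space in the device name).
record = 'temperature,deviceName=Test\\ sensor,deviceId=1,hubName="dev" value=62.5'

resp = requests.post(
    f"{INFLUX_URL}/api/v2/write",
    params={"org": ORG, "bucket": BUCKET, "precision": "ns"},
    headers={"Authorization": f"Token {TOKEN}"},
    data=record.encode("utf-8"),
)

# 204 means the write was accepted and the body is intentionally empty;
# errors such as 422 come back with a JSON body describing the problem.
print(resp.status_code, resp.text or "<no content>")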

Hi, I was running an older version of the logger (the Jan 12 2023 version) against my Influx database (v2.6.1). I tried upgrading the logger to the current v3, but now I get 422 errors when it tries to post the data to Influx:

Post of 150 events failed - elapsed time 0.068 seconds - Status: 422, Error: Unprocessable Entity, Headers: [X-Influxdb-Build:OSS, X-Platform-Error-Code:unprocessable entity, X-Influxdb-Version:v2.6.1, Content-Length:218, Date:Mon, 13 Mar 2023 00:16:34 GMT, Content-Type:application/json; charset=utf-8], Data: null

I'm not quite sure what this error means, and whether it's on the Hubitat or InfluxDB side... sounds like it might be something to do with the formatting of the data being posted to Influx?

For now I switched back to the old version and everything is working again. Figured I'd ask here to see what might have changed in the logger.

Thanks!

Clearly a problem. Can you set the batch size to a small value like 1 or 10, and then once the error happens again, go into the app status (gear icon on the app page) and capture the state variable "loggerQueue"?

That will give some indication what is happening. Thanks!

Btw, just to clarify: by "Jan 12 2023 version", do you mean that "2023-01-12 Denny Page Automatic migration of Influx 1.x settings." is the last modification listed in the source file?

Apologies if this is a silly question, but I can't seem to find a way to send hub variables to InfluxDB from within the community app. Is this indeed not supported, or have I overlooked a section in the 'Devices to Monitor' selector somehow? I have a few daily counters (INTs) for appliance cycles that I'd like to trend. Thanks in advance.

Edit: Okay, so it seems I have to check "Get Access to All Attributes" to see my variables. However, I assume this means that for my previously selected devices I'm forced to send all device attributes, probably tripling my dataset? (For example, instead of strictly sending a motion sensor's lux, I'm now sending the motion state, battery life, etc.) If this is the case, could we possibly add a 'Variable' sub-section that is available when 'Get Access to All Attributes' is unchecked?

Hi Denny,
Yes, that's the version. I think it was v2.00? It is possible I might have updated to one of the minor versions after that via HPM, but I'd held out a while. So when I upgraded via HPM today, I didn't note what old version I was on. I did save a copy of the 2023-01-12 code locally, so that's what I manually rolled back to.

OK, looks like I have 800+ items built up in loggerQueue from earlier, based on the timestamps. It just added the new entries to the end. I won't post it here at this time given the length (or maybe I can PM it to you). I'm presuming the issue is likely in the older/existing queued items that it is trying to re-post. Is there an easy way to clear out the loggerQueue variable?

I guess the harder approach would be to completely uninstall and reinstall, to clear everything out, then slowly add devices back for logging. Will probably have to wait until next weekend before trying that. Oh, actually maybe I can try and create a new instance of v300 and run it against a new test InfluxDB instance so I can leave the old one running.

Given that the error message mentions "Data: null" at the end, I did look through the loggerQueue and noticed that I have a couple of devices where "unit=null", but this includes system variables like sunrise and sunset.

Thanks!
PS: Forgot to mention, I'm on a C7 hub running 2.3.4.132 right now. But I don't think I'm far enough behind for that to be an issue.

Yes, it will be the old items. If you could bear with me a bit so that I can try and figure out what the issue is to help address it for everyone...

Please set the Batch size limit to 1, and let it run until each post shows a warning like so:

warn Post of 1 events failed

Once you have reached that point, the post record causing the problem should be the topmost item in the loggerQueue. You can find this by going into the app state (gear icon on the app menu) and looking at the state variable "loggerQueue". You only need to post the first few lines. It will look something like this:

temperature,deviceName=Temp\ sensor,deviceId=182,hubName="dev",unit=°F value=62.5 1678684432046000000, switch,deviceName=vr1,deviceId=169,hubName="dev",unit=switch value="off",valueBinary=0i 1678684432046000000, cpu15Min,deviceName=Hub\ Information\ V3,deviceId=173,hubName="dev",unit=null value=0.02 1678684432046000000, cpu15Pct,deviceName=Hub\ Information\ V3,deviceId=173,hubName="dev",unit=% value=0.42 , 1678684432046000000, _hubInfo,hubName="dev" localIP="192.168.230.223",firmwareVersion="2.3.5.109",upTime="93292",mode="Day",sunriseTime="07:03",sunsetTime="18:54" 1678684432169000000
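If the raw queue is hard to read, here's a rough local helper (not part of the app; just a sketch, and it doesn't handle spaces inside quoted string values) that splits one record into measurement, tags, fields, and timestamp:

import re

def split_record(record):
    # Split a single line-protocol record on unescaped spaces.
    parts = re.split(r'(?<!\\) ', record.strip())
    measurement_and_tags = parts[0].split(',')
    measurement, tags = measurement_and_tags[0], measurement_and_tags[1:]
    fields = parts[1].split(',') if len(parts) > 1 else []
    timestamp = parts[2] if len(parts) > 2 else None
    return measurement, tags, fields, timestamp

example = ('temperature,deviceName=Temp\\ sensor,deviceId=182,hubName="dev",unit=°F '
           'value=62.5 1678684432046000000')
print(split_record(example))
# ('temperature', ['deviceName=Temp\\ sensor', 'deviceId=182', 'hubName="dev"', 'unit=°F'],
#  ['value=62.5'], '1678684432046000000')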

Thanks.

When you enable "Get Access to All Attributes", you then go into each device and select the individual attributes that you want to post to InfluxDB. If you don't select any attributes, then nothing from that device will be posted to InfluxDB.

Thanks for the clarification - I obviously hadn't clicked through to see this. Assuming I select the same sensors/attributes, will this impact the "format" in which said attributes are reported to InfluxDB?

Okay, here's a much simpler approach. Get a pre-release copy from my repo here and install it.

Following the instructions here, set the Batch size limit to 1. The entries that InfluxDB rejects will show as errors in the log. Please post an example of a rejected entry. Thanks.

For standard attributes, choosing via attribute->device (non-Advanced attribute selection) or device->attribute (Advanced attribute selection) will have the same result.

Note that Advanced attribute selection gives you access to all attributes of the device. You can select non-standard attributes that InfluxDB will reject, so be careful what you choose.


OK, all the errors came from one device event, when the button on a Ring doorbell is pushed (using the Unofficial Ring integration):

Failed record was: pushed,deviceName=Ring\ Video\ Doorbell\ Pro\ -\ Front\ Door,deviceId=663,hubName="Home",unit=button value="1",valueBinary=1i 1678665856039000000

I'm going to guess that it's the "valueBinary=1i" part that's messing things up, since "1i" isn't a binary value.
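If it would help, one thing I could try (the URL, org, bucket, and token below are placeholders for my setup) is replaying just that record by hand against a test bucket and capturing the full error message InfluxDB returns:

import requests

# Replay only the rejected record so InfluxDB's error body is visible.
# The URL, org, bucket, and token are placeholders for a local test setup.
INFLUX_URL = "http://localhost:8086"
ORG = "home"
BUCKET = "hubitat-test"
TOKEN = "my-token"

failed_record = (
    'pushed,deviceName=Ring\\ Video\\ Doorbell\\ Pro\\ -\\ Front\\ Door,'
    'deviceId=663,hubName="Home",unit=button '
    'value="1",valueBinary=1i 1678665856039000000'
)

resp = requests.post(
    f"{INFLUX_URL}/api/v2/write",
    params={"org": ORG, "bucket": BUCKET, "precision": "ns"},
    headers={"Authorization": f"Token {TOKEN}"},
    data=failed_record.encode("utf-8"),
)

# A 422 comes back with a JSON body whose "message" field says which part
# of the record was rejected.
print(resp.status_code)
print(resp.text)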

I took that device out of the Logger; I'll have to look into what's going on there.

But everything seems to be working fine now, thanks Denny!!!
Would you recommend rolling back to the 3/12/23 version, or just stick with this one?

Okay, I will have to set up the same name here to test with. The button processing is new. FWIW, I think the valueBinary=1i should be fine--the 'i' marks an integer, and historically '0i' and '1i' are used throughout the logger for binary status.
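For reference, line protocol infers the field type from how the value is written; here's a quick sketch (not from the logger, just an illustration) of the markers:

# Sketch: classify a line-protocol field value by its type marker.
def field_type(value):
    if value.startswith('"') and value.endswith('"'):
        return 'string'
    if value.endswith('i'):
        return 'integer'   # the trailing 'i' is the integer marker
    if value.lower() in ('t', 'true', 'f', 'false'):
        return 'boolean'
    return 'float'

for v in ['62.5', '1i', '"1"', 'true']:
    print(v, '->', field_type(v))
# 62.5 -> float
# 1i -> integer
# "1" -> string
# true -> boolean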

I do recommend staying with the 2023-03-13 pre-release version. The only change over the 2023-03-12 version is the added logging of the failed record when the batch size is one, and that will be included in the next release regardless.

I'll be back to you as soon as I can set up the test. Thank you for the info--much appreciated.

Could it be the - in the name of the device? Or maybe the unit value of button?

Interestingly, my InfluxDB server (v1.8) accepts that exact record without complaint.

What version of InfluxDB are you running against? Is it something that you are managing? Or someone's cloud version?

Running InfluxDB v2.6.1 locally in a Docker container on a Synology NAS (DS920+, current version of DSM).
I was running InfluxDB v1.8 before, but upgraded earlier this year.
IIRC I set up v2.6.1 as a new/clean install, and then transferred the data over (export/import).