Help getting Syslog Device to automatically initialize (to send logging messages) after hub reboot

I think this is a pretty basic question, but I have created a "syslog" device for my C-8 running 2.3.9.174 to send the hub's logs to a syslog server. It works great, but whenever the hub is rebooted (for updates, a manual reboot, a power hit, etc.) I have to log into the hub, go to the syslog device, and click Initialize. I am not that familiar with changing code/scripts, so is there an easy way to make my syslog device automatically start every time the hub boots up?

Here's the device:

I have my own modified version of that driver (I think it's the same one). Let me gather some stuff and post it on a Gist for you. I know the issue with it connecting at boot was something I worked on and fixed.

OK, so I am trying to break down exactly what I did. I pieced together the original driver, which logs to an external syslog server, with the one from @mavrrick58, which logs directly to InfluxDB. My goal was to get my logs into InfluxDB, but I was also setting up syslog capture for my router, so I wanted it all to funnel through that. I ended up with a driver that sends to syslog, includes some of the upgrades from @mavrrick58, plus a bunch of extra work on top of that.

So... which direction are you trying to go and which one are you using currently?
@mavrrick58 version Hubitat-by-Mavrrick/InfluxDB_Logger_V2/InfluxDB_Hubitat_Live_Logger at main · Mavrrick/Hubitat-by-Mavrrick · GitHub

Or original from @staylorx hubitatCode/drivers/Syslog.groovy at master · staylorx/hubitatCode · GitHub

The preferences in the screenshot are nothing like my driver. So at the very least it is not mine.

Looking at both drivers' code, one thing that stands out to me is that the original one seems to only call connect as part of the initialize routine. That runs 5 seconds after the driver is started, and at times I have found my hub is very busy on reboot. It could be a timing problem between when the driver starts and when the websocket used to pull in the live logging becomes available.
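To illustrate the pattern I mean, this is roughly what that connect-on-initialize code looks like (a hedged sketch; the method names and endpoint here are my assumptions, not copied from @staylorx's actual driver):

```groovy
// Sketch of the connect-on-initialize pattern described above.
// Names and the endpoint URL are assumptions, not the actual driver code.
def initialize() {
    // Only one chance to connect: a single delayed call shortly after boot
    runIn(5, connect)
}

def connect() {
    // Attach to the hub's live-logging websocket; if the hub is still busy
    // booting when this fires, the connection can fail and never retry
    interfaces.webSocket.connect("ws://localhost:8080/logsocket")
}
```

If that one attempt fails because the hub is still busy, nothing retries, which matches the "works after I click Initialize" symptom.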

My driver just schedules a job to occur every x seconds that will write the data out to influxdb after a set time window.
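The scheduled-job approach looks something like this (a rough sketch under my own naming; the actual InfluxDB driver's internals differ):

```groovy
// Sketch of a recurring flush job (hypothetical names, not the real driver).
def updated() {
    // Re-run the flush every minute; a failed run just means the next
    // scheduled run picks the data up, so boot timing matters less
    schedule("0 * * * * ?", writeQueuedData)
}

def writeQueuedData() {
    def batch = state.queue ?: []
    if (batch) {
        state.queue = []
        postToInfluxDb(batch.join("\n"))  // hypothetical helper: one HTTP call for many records
    }
}
```

Because the job recurs, a busy hub at boot only delays the first write instead of breaking the driver entirely.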

My guess is that the driver is triggering that job to connect too fast on boot-up, and that is causing it to fail.

You could try changing line 149
from

        runIn(5, connect)

to

        runIn(180, connect)

to give everything more time to start up. That should delay it by 3 minutes. The downside is that you are not capturing data during that window.

If you have debug logging turned on and reboot, do you see any messages with "attempting connection" followed by an error?

Of course, my statements above assume you are using @staylorx's driver.

Another thought just occurred to me. If you are using @staylorx's driver, you may be getting a lot of errors with it, as it has the potential to exceed a limit introduced a few firmware versions ago on generating too many HTTP connections at one time. That code will create a connection and send to the remote syslog server for every single entry in the Live Logging. If you have a few chatty devices logging debug data etc., it isn't hard to see occasions where more than 10 lines of logging are created at once. This can potentially cause errors and delay processing of live logging until the hub catches up.

It isn't a real problem as long as the hub catches up, but if too many logging entries keep occurring and the hub can't catch up, it will eventually slow all processing and make the hub unresponsive. This is why I made the last update to my driver: it effectively queues the records for a time and then sends many records at once. This is just something to consider as a potential issue.
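The difference between the two approaches boils down to something like this (a hedged illustration with made-up helper names, not code from either driver):

```groovy
// Per-entry sends: every live-log line opens its own outbound connection.
// A burst of 10+ lines means 10+ simultaneous connections.
def parsePerEntry(String logJson) {
    sendToSyslogServer(logJson)  // hypothetical helper
}

// Queued sends: parse only buffers; a separate timer drains the buffer,
// so bursts get smoothed out instead of hammering the network stack.
def parseQueued(String logJson) {
    state.queue = (state.queue ?: []) << logJson
}

def flushQueue() {
    def batch = state.queue ?: []
    state.queue = []
    // Drain at a controlled pace (or combine into one send where the
    // receiving end supports it)
    batch.each { sendToSyslogServer(it) }
}
```

The queued version trades a little latency for never exceeding the connection limit during a logging burst.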


So yeah, I'm guessing @HubiRaFan is using the original driver.

Here is my adaptation of that driver: Driver to export hubitat logs to a syslog receiver · GitHub

@mavrrick58 you might be interested in my validation of the connection code; not sure if you have something like that or not. I basically have the driver log a message that it is connected when it gets the proper status back, then the parse routine looks for that log entry to confirm the connection is really working. Then I have a recurring check to validate the connection and reconnect if needed. Might be overkill, but since I added all that it has never failed me. Before that I would randomly find that it had stopped working at some point and never recovered.
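The self-check idea can be sketched like this (assumed names throughout; see the Gist for my actual implementation):

```groovy
// Sketch of end-to-end connection validation (hypothetical names).
def webSocketStatus(String status) {
    if (status.contains("open")) {
        // This log line will itself travel through the live-log socket...
        log.info "syslog driver connected"
    }
}

def parse(String message) {
    // ...so seeing it arrive here proves the whole pipeline works,
    // not just that the socket reported "open"
    if (message.contains("syslog driver connected")) {
        state.connectionVerified = true
    }
    forwardToSyslog(message)  // hypothetical helper
}

def healthCheck() {
    // Recurring watchdog: if the last cycle never verified, reconnect
    if (!state.connectionVerified) {
        initialize()
    }
    state.connectionVerified = false
    runIn(300, healthCheck)
}
```

The key point is that the check confirms data is actually flowing through the socket rather than trusting the connection status callback alone.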


I will take a look at it. I don't think I have had much of a problem with that, but I can see how it would happen.

Yes, I was using the original.

I just updated with your adaptation and the problem is solved (with some other nice enhancements too). Thank you for the work on this!
