I've noticed that this was discussed a bit around 5 years ago, here and here.
But the limits are not clearly documented, and in my experience they are way too limiting (no pun intended).
Today I wanted to look at some past logs and I could see only about 17 hours back! When there is an issue, I can't even check how the device behaved at the same time one day before.
The past logs page would show me only 4358 log entries.
If I save the whole history as a UTF-8 text file, it amounts to only 573 KB. And that is plain text, not a storage-optimized database format.
I don't have any device logging in debug, and there's no crazy amount of logs from any device or app.
With 93 Zigbee devices, I'm getting a history of less than 47 log entries per device on average.
The logging in the hub can be CPU intensive, depending on a few things. I am sure Hubitat has their reasons for the limit.
I have a driver that listens to the logging websocket and then streams the output to an InfluxDB database. I retain something like 90 days of logs, and Grafana acts as the frontend. I almost never actually use it, but it's there if I want it.
If you want to use it, look up InfluxDB Live Logger.
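For anyone curious what that pattern looks like, here is a rough sketch of the same idea in Python (the actual Live Logger is a Groovy driver on the hub; this is just an illustration). It assumes the hub streams live log entries as JSON over a websocket at ws://<hub-ip>/logsocket, which as far as I know is the endpoint the built-in Logs page uses, plus an InfluxDB 2.x instance; the IP, token, org, and bucket values are placeholders:

```python
import asyncio
import json

import websockets  # pip install websockets
from influxdb_client import InfluxDBClient, Point  # pip install influxdb-client
from influxdb_client.client.write_api import SYNCHRONOUS

HUB_IP = "192.168.1.10"           # placeholder: your hub's address
INFLUX_URL = "http://localhost:8086"
INFLUX_TOKEN = "my-token"         # placeholder
INFLUX_ORG = "home"
INFLUX_BUCKET = "hubitat_logs"    # bucket retention (e.g. 90 days) handles trimming

async def stream_logs():
    client = InfluxDBClient(url=INFLUX_URL, token=INFLUX_TOKEN, org=INFLUX_ORG)
    write_api = client.write_api(write_options=SYNCHRONOUS)
    # The built-in Logs page streams entries from this websocket as JSON.
    async with websockets.connect(f"ws://{HUB_IP}/logsocket") as ws:
        async for raw in ws:
            entry = json.loads(raw)
            point = (
                Point("hub_log")
                .tag("source", entry.get("name", "unknown"))  # device/app name
                .tag("level", entry.get("level", "info"))
                .field("msg", entry.get("msg", "")))
            write_api.write(bucket=INFLUX_BUCKET, record=point)

if __name__ == "__main__":
    asyncio.run(stream_logs())
```

The nice side effect is that retention becomes a property of the InfluxDB bucket rather than something the hub has to manage.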
I'd love to know their reasons. As a software developer/architect with 33 years of experience, I understand that some sort of limit is reasonable, but 573 KB of logs doesn't make any sense.
My C8 Pro runs under 10% CPU usage all of the time (under 5% most of the time), normally has 1.3-1.4 GB of RAM available, and has 999.6 MB of storage space left. I'm sure it is not lacking resources.
I wouldn't like to add another piece of software and access another page to do what could be done in the hub itself.
At least give users an option for how many days of logs to keep; cap it at, let's say, 15 days.
If that starts to impact the hub's performance, it is my problem.
Don't be like Apple that thinks it knows better than their users what they want.
Well, I have a few theories, but it is all speculation.
As I recall, we heard that the log file is a flat file that is simply appended to at the end and trimmed at the beginning. It isn't in the DB, and as such it has some unique performance impacts. When activities use that file, they often have to load and scan the whole thing. I suspect that at some point performance just drops off a cliff and becomes abysmal. There is an app out there called InfluxDB Logger. It is fantastic and has a lot of good data handling to prevent data loss in queues. It works great for the first 5 to 7k records when the hub is in a problem condition, but after that the app's performance tanks, because the time to scan through the records skyrockets. POST operations that normally take less than a second climb up into the several-minute range.
Picking a value that prevents bad conditions like that from happening is likely what happened. Considering how bad that would be for the user experience, I would expect it to be enforced rather than left to the user's whim.
If that is the case, then it is bad engineering. It should be a database table with proper indices for performance; it could even be an in-memory table. Done right, the performance would be great.
If it were in memory, one could use the app you mentioned for non-volatile log storage. As for me, I don't care if the logs are gone after a reboot.
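To make that concrete, here is a minimal sketch of the kind of indexed, in-memory log table I mean, using SQLite as a stand-in. This is purely an illustration of the idea, not a claim about what the hub actually runs:

```python
import sqlite3
import time

# ":memory:" keeps the table volatile (gone after a reboot); a file path
# would make it persistent instead.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE log (
        ts     REAL NOT NULL,   -- epoch seconds
        source TEXT NOT NULL,   -- device or app name
        level  TEXT NOT NULL,
        msg    TEXT NOT NULL
    )
""")
# Indices turn time-range and per-device lookups into seeks, not full scans.
db.execute("CREATE INDEX idx_log_ts ON log (ts)")
db.execute("CREATE INDEX idx_log_source_ts ON log (source, ts)")

db.execute("INSERT INTO log VALUES (?, ?, ?, ?)",
           (time.time(), "Kitchen Sensor", "info", "motion active"))

# "How did it behave at the same time one day before?" is a cheap range query:
day_ago = time.time() - 86400
rows = db.execute(
    "SELECT ts, source, msg FROM log WHERE ts BETWEEN ? AND ? ORDER BY ts",
    (day_ago - 1800, day_ago + 1800)).fetchall()
```

With the (ts) and (source, ts) indices, "show me this device around the same time yesterday" becomes an index seek instead of a scan over the whole log.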
I wanted to point out that this is largely dependent on each person's use case. I just checked what was currently on my hub: it had 10,171 lines, a total size of 854,111 bytes, and it managed to go back 5 days. So the amount of activity your setup is sending to the log isn't insignificant.
I think it comes down to the difference between OLTP and OLAP processing. The type of DB work the hub does falls in line with OLTP, whereas the analysis-type workload of digging through logs is more like OLAP.
You can run both types of activities on a single DB platform, but in many cases they are separated, because it is very possible for one type to negatively impact the other. Generally speaking, you optimize for one type of processing and the other gets negatively impacted. Some DBs are better at handling different types of data sets, too. That potential impact is likely why the logs weren't even put in a DB, but in a flat file instead.
I would imagine that Hubitat uses a fast, lightweight database optimized for OLTP, which would likely not handle logging well. Some databases, like InfluxDB, are built for time-series data and are optimized for things like logs or event history. That is why I suggested InfluxDB and Grafana: both tools are designed for the job.
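As a concrete illustration of why that model fits logs: in InfluxDB's line protocol, every entry is a timestamped point with indexed tags for the things you filter on and unindexed fields for the payload. A hypothetical log entry (the measurement, tag, and field names here are made up for illustration) maps like this:

```python
# Hypothetical mapping of one log entry to InfluxDB line protocol.
# Tags (source, level) are indexed, so filtering by device or severity is
# cheap; the message is a field (stored, not indexed); the timestamp is the
# axis the whole storage engine is organized around.
line = (
    "hub_log,source=Kitchen\\ Sensor,level=info "  # measurement + tags
    'msg="motion active" '                         # field
    "1700000000000000000"                          # nanosecond epoch timestamp
)
```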
@eduardo, both InfluxDB and Grafana offer free-tier cloud options if you don't want to run another host for them. They can also be run locally if you choose to, and there is a lot of good info in these forums about using the combination for a variety of tasks. They can both be run on a Raspberry Pi if you want to go that route.
We have to remember that this hub, even the C8 Pro, is a small box with an oldish ARM processor and eMMC storage. CPU and disk I/O performance is not nearly as forgiving as it would be on other systems with, let's say, even an Intel Atom processor and an SSD of some flavor. eMMC memory is better than a micro SD card, but not by a huge amount.
The key point to keep in mind is that logs serve no operational purpose on the hub.
Therefore, good engineering means keeping processing on the hub to a minimum and leaving it to the browser, running on a much more capable system (including any modern phone), to process the log file into structured data (or to some other system the logs are offloaded to, as has been suggested) if that serves the user's purpose.
I've had a read through the linked posts regarding InfluxDB, Grafana, etc. before, and it always sounded way too complicated. However, I decided to take another look. Kudos to @mavrrick58 for the step-by-step instructions. I was able to follow them one step at a time to create an InfluxDB Cloud account and get InfluxDB Logger going on HE, and I can see in the cloud that events are going through. I'm a little lost after that with regard to the Grafana instructions, though. I'm not sure where Grafana has to be installed and run to view a nicely presented, filterable table of the events. Does Grafana need to be installed on an always-on device (PC, Mac, Pi)? Thanks. Oh, and while I think of it, I take it that we can only log device events here, rather than those from RM?
Edit: I'd missed the link detailing that Grafana can also be set up in the cloud or on various OSes, so I will take a look at the cloud option.
Grafana and InfluxDB need to be on a system that is always on. The easiest way to accomplish that is to use the free cloud tiers; that just comes with all of the good and bad things of putting data in the cloud. Probably the second easiest way is to use a Raspberry Pi and the image I posted in that thread. That image is fairly old now, though, so it would need to be updated as soon as it is loaded.
That is why I included the first link in the post above. It discusses a driver that connects to the Logs websocket and puts all of that logging info into the InfluxDB server. Then there is a dashboard you can use to access the data and present it in a very usable way.
With Node-RED in between, yes. Push events to NR with Maker API, then transform and filter the data, then save it to Influx (sketched below). This is how I log all my device events; I set it up before the new logger app came out, so I have kept it.
Otherwise, if you just want device event data, there is a specialized app that will save it directly to Influx.
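For anyone wondering what that flow amounts to, here is the same pipeline sketched in Python instead of Node-RED. It assumes Maker API is configured with a "URL to send device events to" pointing at this server, and that the POSTed JSON carries the event under a "content" key (check your hub's actual payload shape before relying on this); the attribute list, port, and Influx settings are all placeholders:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

from influxdb_client import InfluxDBClient, Point  # pip install influxdb-client
from influxdb_client.client.write_api import SYNCHRONOUS

KEEP_ATTRIBUTES = {"temperature", "humidity", "switch"}  # placeholder filter list

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="home")
write_api = client.write_api(write_options=SYNCHRONOUS)

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Maker API POSTs each device event as JSON; the event is assumed
        # to sit under "content" (verify against your hub's payload).
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body).get("content", {})
        if event.get("name") in KEEP_ATTRIBUTES:  # transform/filter before the write
            point = (
                Point("device_event")
                .tag("device", event.get("displayName", "unknown"))
                .tag("attribute", event.get("name", "unknown"))
                .field("value", str(event.get("value"))))
            write_api.write(bucket="hubitat_events", record=point)
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8000), EventHandler).serve_forever()
```

The KEEP_ATTRIBUTES check is the "filter" step; anything you would do in a Node-RED function node happens there, before the write.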