C7 Hub: Low Memory After 2 Days

Hi,

Hub = C7
Software Version = 2.3.5.125

I’ve noticed that I’ve been getting low memory alerts on my C7 frequently. The most recent uptime lasted only 2 days before the alert triggered. Is there any way to troubleshoot why this is happening so I can get to the bottom of it? I'm concerned about how often this is occurring.

Thanks in advance.

Look at your logs and events for chatty stuff. Look at app stats as well.

Go to Logs > Device Stats and also App Stats.

By default it sorts the heaviest usage to the top. If you are not sure what you are looking at, post screenshots of those pages.

Thanks for replying and helping me understand the troubleshooting process. I ended up having to reboot before the responses rolled into this thread. Here are my current stats as of the time of writing:

Devices:

Apps:

Memory and Current Uptime:

The only thing that jumps out at me is the Ecobee Suite Manager; however, I'm not using any of the helper applications. Additionally, all values in the preferences area are set to the recommended values, including polling.

Anything else jump out for you guys? Thanks.

Might be more useful if you turn on more of the columns on the log pages. I usually just turn them all on. It's in the display settings at the top. Also include the overall stats in your screenshot (right above what you captured last time).

1 Like

Apps:

Devices:

Didn't catch the last part of your "overall stats" comment... where do I find this?

This overall info at the top:

But nothing looks bad there. Not sure what is normal for the Ecobee Suite, but that is high compared to anything I have on my hub.

Are you sure the alert is not triggering during backups? That will cause low memory, but it should recover shortly after the backup is done. Do you have Hub Protect with cloud backup? Check the size of your backup files in the backup settings, and also check the time they are scheduled to run.

1 Like

Thanks, I'll look more into the Ecobee Suite Manager app and switch to the built-in app if need be. The alert is not triggering during the backups. I don't have Hub Protect; I have a script that pulls/dumps the backup to my Raspberry Pi at 11:00 pm. Hubitat does its DB clean and backup at 3:01 am. Backup size is 2.3 MB.

Now that I have a better idea how to troubleshoot this, I'll keep a closer eye on things. Thanks again for showing me how to do this.
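
For anyone wanting to set up something similar, the nightly pull can be as simple as the sketch below. This is only a rough example, not my exact script: it assumes the /hub/backupDB?fileName=latest download endpoint that community scripts commonly use, and the IP, paths, and retention count are placeholders.

#!/usr/bin/env python3
"""Nightly pull of the latest Hubitat backup to local storage (run from cron at 11:00 pm).

Assumptions: the hub serves its latest backup at
http://<hub_ip>/hub/backupDB?fileName=latest (the endpoint community scripts
commonly use); HUB_IP and BACKUP_DIR are placeholders for your own setup.
"""
import datetime
import pathlib
import requests

HUB_IP = "192.168.1.50"                            # placeholder: your hub's address
BACKUP_DIR = pathlib.Path("/home/pi/hubitat-backups")  # placeholder: where to store files
KEEP = 14                                          # how many nightly files to retain

def pull_backup() -> pathlib.Path:
    """Download the latest backup file and save it with a timestamped name."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    dest = BACKUP_DIR / f"hubitat-{stamp}.lzf"
    url = f"http://{HUB_IP}/hub/backupDB?fileName=latest"
    with requests.get(url, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with dest.open("wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                fh.write(chunk)
    return dest

def prune_old() -> None:
    """Drop everything beyond the newest KEEP files."""
    files = sorted(BACKUP_DIR.glob("hubitat-*.lzf"), reverse=True)
    for old in files[KEEP:]:
        old.unlink()

if __name__ == "__main__":
    print(f"Saved {pull_backup()}")
    prune_old()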

That seems pretty typical for the Ecobee Suite application. Below you can see what I have for it in my stats; it seems pretty similar. What is your polling interval set to in Ecobee Suite? The developer has suggested setting it to 1 min.

My understanding is Echo Speaks can possibly be a bit of a resource hog. How long has that been on there? Are the Echos very busy?

I have also noticed a few occasions where my memory seems to tank for a few minutes but then recovers. I raised it as a concern, and the consensus was basically that it was a backend process. The first time it happened to me was in the morning, long after my backups had completed. I was already lower on memory since the hub had been running for days, and the process dropped the hub down to about 20 MB of free memory and triggered a restart via a Node-RED flow. I see this event every once in a while, but as long as I am not below 180 MB my hub doesn't seem to flinch at it.
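
For reference, that threshold check doesn't have to live in Node-RED. Here is a rough equivalent in Python, a sketch only: it assumes /hub/advanced/freeOSMemory returns free memory in KB as plain text and that a POST to /hub/reboot restarts the hub, and the IP and threshold are placeholders. Run it from cron every few minutes.

"""Reboot-on-low-memory check, roughly the same idea as the Node-RED flow above.

Assumptions (adjust for your setup): /hub/advanced/freeOSMemory returns the
free memory in KB as plain text, and a POST to /hub/reboot restarts the hub.
HUB_IP and THRESHOLD_KB are placeholders.
"""
import requests

HUB_IP = "192.168.1.50"       # placeholder: your hub's address
THRESHOLD_KB = 180_000        # reboot if free memory falls below ~180 MB

def free_memory_kb() -> int:
    """Read the hub's current free OS memory in KB."""
    resp = requests.get(f"http://{HUB_IP}/hub/advanced/freeOSMemory", timeout=10)
    resp.raise_for_status()
    return int(resp.text.strip())

def reboot_hub() -> None:
    """Ask the hub to reboot (assumed endpoint)."""
    requests.post(f"http://{HUB_IP}/hub/reboot", timeout=10).raise_for_status()

if __name__ == "__main__":
    free = free_memory_kb()
    if free < THRESHOLD_KB:
        print(f"Free memory {free} KB is below {THRESHOLD_KB} KB -- rebooting hub")
        reboot_hub()
    else:
        print(f"Free memory OK: {free} KB")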

1 Like

Nothing adverse in my logs, but my hub actually rebooted today due to low memory on its own... it did the cloud backup last night, and then in the middle of the local backup at 2 am (the backup file was only 215 KB when normal is 4 MB) the hub spontaneously rebooted. I was worried about corruption, so I restored an earlier backup this morning.

Good to know, thanks for commenting on this. Same here, I made sure it was set to 1 minute. This was the most recent application I installed, so it was my first suspect. After reading your comments, I'm glad to know my instance is running normally.

Pretty sure I installed it a month after I bought the C7... Jan 2023? I'll keep an eye on it; I haven't really used it much yet.

Yeah, I'm crossing my fingers. I'm at 1.5 days of uptime and my memory is at ~212 MB - that still seems like a rather steep decline over the past 20 hours; I was at ~284 MB. At least I have a better idea of what to look for.

For comparison, I am at 191 MB after a week of uptime.
Since someone will ask, this is made with Grafana, using stats from the Hub Info Driver, which can be exported to a database in various ways.

2 Likes

I still need to do this. I have an RPi to set it up on but need to wrap my head around a couple of things... While I am very good at being a network engineer, I am database stupid... like really stupid... lol

1 Like

Use InfluxDB 2.x (the current version).
Use the InfluxDB Logger driver; it is being maintained again. I have not tested it, but it should work well.
I can share any of my dashboards so they can be imported; you may just need to change some minor things since I use a custom data export via NR. I sent them to another user in a PM and he was able to convert them over easily and use them.
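
If you'd rather roll your own export instead of using the logging driver, pushing a metric into InfluxDB 2.x from a script is only a few lines. This is just a sketch using the influxdb-client Python package: the URL, token, org, bucket, measurement, and tag names are placeholders to adapt, and the value would normally come from the Hub Info driver (e.g. via Maker API or a Node-RED flow) rather than being hard-coded.

"""Minimal example of writing one hub metric into InfluxDB 2.x.

Requires: pip install influxdb-client
All connection values below are placeholders.
"""
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

INFLUX_URL = "http://192.168.1.60:8086"   # placeholder: your InfluxDB server
INFLUX_TOKEN = "my-token"                 # placeholder
INFLUX_ORG = "home"                       # placeholder
INFLUX_BUCKET = "Hubitat"                 # placeholder bucket name

def write_free_memory(free_kb: int, hub_name: str = "MyHub") -> None:
    """Write a single freeMemory sample tagged with the hub's name."""
    with InfluxDBClient(url=INFLUX_URL, token=INFLUX_TOKEN, org=INFLUX_ORG) as client:
        write_api = client.write_api(write_options=SYNCHRONOUS)
        point = (
            Point("freeMemory")           # example measurement name
            .tag("hub", hub_name)         # example tag
            .field("value", free_kb)
        )
        write_api.write(bucket=INFLUX_BUCKET, record=point)

if __name__ == "__main__":
    write_free_memory(212_000)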

2 Likes

Something else you could consider is putting the data in a cloud database as well. The new version of the InfluxDB Logger app can write to an InfluxDB Cloud account. You can get a free account, so you won't have to pay for anything or worry about managing the hardware. It does have some limits, but it would work for a lot of use cases, I think.

Just under a week now and at 190ish MB of RAM. It did drop to around 90 during the backup this morning, though.

1 Like

Technically you don't even need Grafana; InfluxDB 2.x includes its own visualization tool. Here is a quick dashboard I put together for my dev hub, which outputs to the cloud. It includes CPU, DB size, temperature, and free memory.

Here is the template for that dashboard if you want to import it to get started with those stats.

[{"apiVersion":"influxdata.com/v2alpha1","kind":"Dashboard","metadata":{"name":"spectacular-lichterman-da0001"},"spec":{"charts":[{"axes":[{"base":"10","name":"x","scale":"linear"},{"base":"10","label":"Percent Used","name":"y","scale":"linear"}],"colorMapping":{"value-cpuPct-1-Hub Dev-HubitatDev-HubitatDev-%-mean-":"#31C0F6"},"colorizeRows":true,"colors":[{"id":"uMorniw-cfNfEtY1ZuWOx","name":"Nineteen Eighty Four","type":"scale","hex":"#31C0F6"},{"id":"lfJn4JAYuz_FjHsWuP0i2","name":"Nineteen Eighty Four","type":"scale","hex":"#A500A5"},{"id":"xJp_mMX-Xc5TvRkZLx6qr","name":"Nineteen Eighty Four","type":"scale","hex":"#FF7E27"}],"geom":"line","height":4,"hoverDimension":"auto","kind":"Xy","legendColorizeRows":true,"legendOpacity":1,"legendOrientationThreshold":100000000,"name":"CPU Usage","opacity":1,"orientationThreshold":100000000,"position":"overlaid","queries":[{"query":"from(bucket: \"Hubitat\")\n  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n  |> filter(fn: (r) => r[\"_measurement\"] == \"cpuPct\")\n  |> filter(fn: (r) => r[\"_field\"] == \"value\")\n  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n  |> yield(name: \"mean\")"}],"staticLegend":{"colorizeRows":true,"opacity":1,"orientationThreshold":100000000,"widthRatio":1},"width":5,"widthRatio":1,"xCol":"_time","yCol":"_value"},{"axes":[{"base":"10","name":"x","scale":"linear"},{"base":"10","label":"Memory in Kilobytes","name":"y","scale":"linear"}],"colorMapping":{"value-freeMemory-1-Hub Dev-HubitatDev-HubitatDev-KB-mean-":"#31C0F6"},"colorizeRows":true,"colors":[{"id":"uMorniw-cfNfEtY1ZuWOx","name":"Nineteen Eighty Four","type":"scale","hex":"#31C0F6"},{"id":"lfJn4JAYuz_FjHsWuP0i2","name":"Nineteen Eighty Four","type":"scale","hex":"#A500A5"},{"id":"xJp_mMX-Xc5TvRkZLx6qr","name":"Nineteen Eighty Four","type":"scale","hex":"#FF7E27"}],"geom":"line","height":4,"hoverDimension":"auto","kind":"Xy","legendColorizeRows":true,"legendOpacity":1,"legendOrientationThreshold":100000000,"name":"Free Memory","opacity":1,"orientationThreshold":100000000,"position":"overlaid","queries":[{"query":"from(bucket: \"Hubitat\")\n  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n  |> filter(fn: (r) => r[\"_measurement\"] == \"freeMemory\")\n  |> filter(fn: (r) => r[\"_field\"] == \"value\")\n  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n  |> yield(name: \"mean\")"}],"staticLegend":{"colorizeRows":true,"opacity":1,"orientationThreshold":100000000,"widthRatio":1},"width":5,"widthRatio":1,"xCol":"_time","yCol":"_value","yPos":4},{"axes":[{"base":"10","name":"x","scale":"linear"},{"base":"10","label":"Degrees","name":"y","scale":"linear"}],"colorMapping":{"value-temperature-1-Hub Dev-HubitatDev-HubitatDev-°F-mean-":"#31C0F6"},"colorizeRows":true,"colors":[{"id":"uMorniw-cfNfEtY1ZuWOx","name":"Nineteen Eighty Four","type":"scale","hex":"#31C0F6"},{"id":"lfJn4JAYuz_FjHsWuP0i2","name":"Nineteen Eighty Four","type":"scale","hex":"#A500A5"},{"id":"xJp_mMX-Xc5TvRkZLx6qr","name":"Nineteen Eighty Four","type":"scale","hex":"#FF7E27"}],"geom":"line","height":4,"hoverDimension":"auto","kind":"Xy","legendColorizeRows":true,"legendOpacity":1,"legendOrientationThreshold":100000000,"name":"Hub Temperature","opacity":1,"orientationThreshold":100000000,"position":"overlaid","queries":[{"query":"from(bucket: \"Hubitat\")\n  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n  |> filter(fn: (r) => r[\"_measurement\"] == \"temperature\")\n  |> filter(fn: (r) => r[\"_field\"] == \"value\")\n  
|> filter(fn: (r) => r[\"deviceId\"] == \"1\")\n  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n  |> yield(name: \"mean\")"}],"staticLegend":{"colorizeRows":true,"opacity":1,"orientationThreshold":100000000,"widthRatio":1},"width":5,"widthRatio":1,"xCol":"_time","xPos":5,"yCol":"_value"},{"axes":[{"base":"10","name":"x","scale":"linear"},{"base":"10","label":"Megabytes","name":"y","scale":"linear"}],"colorMapping":{"value-dbSize-1-Hub Dev-HubitatDev-HubitatDev-MB-mean-":"#31C0F6"},"colorizeRows":true,"colors":[{"id":"uMorniw-cfNfEtY1ZuWOx","name":"Nineteen Eighty Four","type":"scale","hex":"#31C0F6"},{"id":"lfJn4JAYuz_FjHsWuP0i2","name":"Nineteen Eighty Four","type":"scale","hex":"#A500A5"},{"id":"xJp_mMX-Xc5TvRkZLx6qr","name":"Nineteen Eighty Four","type":"scale","hex":"#FF7E27"}],"geom":"line","height":4,"hoverDimension":"auto","kind":"Xy","legendColorizeRows":true,"legendOpacity":1,"legendOrientationThreshold":100000000,"name":"Database Size","opacity":1,"orientationThreshold":100000000,"position":"overlaid","queries":[{"query":"from(bucket: \"Hubitat\")\n  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n  |> filter(fn: (r) => r[\"_measurement\"] == \"dbSize\")\n  |> filter(fn: (r) => r[\"_field\"] == \"value\")\n  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n  |> yield(name: \"mean\")"}],"staticLegend":{"colorizeRows":true,"opacity":1,"orientationThreshold":100000000,"widthRatio":1},"width":5,"widthRatio":1,"xCol":"_time","xPos":5,"yCol":"_value","yPos":4}],"name":"Hubitat Dev Stats"}}]

In my case the big chatty hogs are the Envisalink Integration and, to a lesser extent, Ecobee Suite and my YoLink Device Service:

However, given that, I usually still get around two weeks or more between automated reboots (set to trigger at <175,000 KB free memory):

My simplistic graph is old-school Hubigraph and plain ol' Hubitat Dashboard, fetching from Hub Information Driver v3.

@salieri, I'd suggest looking further for other leaks as well; 2 days to low memory points to the possibility of other problems.

1 Like

I took a screen capture.

Reference: http://hub_ip_address/hub/advanced/freeOSMemoryHistory

It looks like between these two times there's a steep decline:

2023-04-20 17:12:03,510192,0.01
2023-04-20 17:17:08,395972,0.13

and then again:

2023-04-20 22:27:12,352696,0.08
2023-04-20 22:32:16,294996,0.09

As you guys have stated, there's nothing out of the ordinary in my 'App' and 'Device' stats logs. I tried looking in 'Past Logs' to see if anything is going on, and there isn't anything logged between the times I noted above.

Any other ideas or thoughts....?

Screen capture of history:
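
In case it helps anyone else chasing the same thing, here is a rough Python sketch that pulls the freeOSMemoryHistory page referenced above and flags the biggest drops between consecutive samples. It assumes the page returns plain timestamp,freeKB,load lines like the ones I pasted; the IP is a placeholder.

"""Find the steepest free-memory drops in the hub's freeOSMemoryHistory page.

Assumption: http://<hub_ip>/hub/advanced/freeOSMemoryHistory returns plain
text lines of the form "2023-04-20 17:12:03,510192,0.01". HUB_IP is a placeholder.
"""
from datetime import datetime
import requests

HUB_IP = "192.168.1.50"   # placeholder: your hub's address

def fetch_history() -> list[tuple[datetime, int]]:
    """Return (timestamp, free_kb) pairs parsed from the history page."""
    resp = requests.get(f"http://{HUB_IP}/hub/advanced/freeOSMemoryHistory", timeout=15)
    resp.raise_for_status()
    samples = []
    for line in resp.text.splitlines():
        parts = line.strip().split(",")
        if len(parts) < 2:
            continue
        try:
            ts = datetime.strptime(parts[0], "%Y-%m-%d %H:%M:%S")
            samples.append((ts, int(parts[1])))
        except ValueError:
            continue  # skip any header or malformed line
    return samples

def biggest_drops(samples, top: int = 5):
    """Pair consecutive samples and sort by how much free memory was lost."""
    deltas = [
        (prev[0], cur[0], prev[1] - cur[1])
        for prev, cur in zip(samples, samples[1:])
    ]
    return sorted(deltas, key=lambda d: d[2], reverse=True)[:top]

if __name__ == "__main__":
    history = fetch_history()
    for start, end, lost_kb in biggest_drops(history):
        print(f"{start} -> {end}: dropped {lost_kb} KB")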

If it doesn't go below 130 I wouldn't worry too much about it. Mine does the same thing. After a few days it settles at around 160-170 and dips during the nightly backup, but then recovers. Though I've let mine get down to 80 with no ill effects before I rebooted.

Wanted to provide an update on this thread. I contacted Hubitat support, and they took a look at my hub through the engineering logs. They think the memory leaks may be occurring because my database has some type of corruption.

I followed this article as instructed and will continue to monitor: Soft Reset | Hubitat Documentation

Thanks again to everyone for the help so far.

3 Likes