InfluxDB Logger Support

Well, probably for good reason. I am seeing some loss of performance when having the child apps interact with the parent to load the queue. It translates to a few ms each way. I just adjusted a few things and it did improve the soft polling, but it didn't make a difference at all for the event-based tracking child app I have. It is probably a small difference overall at this point, though.

Really my only thought with using the parent app was to keep the posting to InfluxDB in one spot, to try to keep that process as fast as possible. I wasn't thinking about multiple databases, as that is what @hubitrep seems to be working on.

That is a common use case for child/parent apps, but I have used it both ways in the past on SmartThings.

In the paradigm I was working on, the parent app has all of the setup to talk to the InfluxDB database, manages writing to the DB and those respective settings, and would handle hub tasks.

The child apps would simply handle the events that either need to be generated for polling or processed from subscriptions, and allow for separate input options and settings for those devices. The soft polling could have different types of devices and different intervals with custom or non-custom attributes. The event monitoring jobs would have the ability to decide between custom or standard attributes and then select what you want to send.
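
For illustration only, here is a rough Groovy sketch of that kind of split. The app names, the namespace, and the queueEvent() method are placeholders I made up to show the shape, not the actual InfluxDB Logger code:

    // Parent app: owns the InfluxDB connection and the write queue (sketch only)
    definition(name: "InfluxDB Logger Parent", namespace: "example", author: "example",
               description: "Sketch only", category: "Utility", iconUrl: "", iconX2Url: "")

    preferences {
        page(name: "mainPage")
    }

    def mainPage() {
        dynamicPage(name: "mainPage", install: true, uninstall: true) {
            section("Database") {
                input "influxUrl", "text", title: "InfluxDB URL", required: true
            }
            section("Device groups") {
                // each child instance is one group of devices with its own polling/subscription settings
                app name: "childApps", appName: "InfluxDB Logger Child", namespace: "example",
                    title: "Add a device group", multiple: true
            }
        }
    }

    def installed() { }
    def updated() { }

    // children hand records to the parent, which batches and posts them
    def queueEvent(String record) {
        state.loggerQueue = (state.loggerQueue ?: []) + record
    }

    // Child app (separate file, declared with parent: "example:InfluxDB Logger Parent"):
    // subscriptions and polling only, handing each record up to the parent
    def temperatureHandler(evt) {
        parent.queueEvent("${evt.displayName}: ${evt.name} = ${evt.value}")
    }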

I just wish there wasn't the impact of a few ms when passing the values to the queue in the parent app.

Also, not sure why, but my initial testing with the @hubitrep code was causing even more slowdowns. Not sure what the deal was with that, but it was even more impactful to performance. I need to do further analysis on that.

I've just submitted a new PR (don't worry I think I'm done for a good while). Solves the issue from my standpoint, by adding a column in the measurement that indicates whether the event was soft-polled or not.

That closes the whole soft-polling discussion for me (at least) as I can now filter those soft-polled data points (or not) in either InfluxDB or Grafana, depending on my use-case (still need that filterEvents option exposed though :wink: ). I am okay with grouping devices together into an app instance based on polling preferences (I will have few groups) even if that means I might end up with more than one app instance per db connection. I don't think it's going to be a big problem honestly.
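
For anyone curious what that looks like in practice, here is a rough Groovy sketch of building a line-protocol record with such an indicator. The field name softPoll is my assumption; the actual column name the PR uses may differ:

    // Illustrative only: a line-protocol record with a soft-poll indicator added as a field.
    // "softPoll" is an assumed name; check the PR for the actual column.
    String buildRecord(String measurement, String deviceName, String value, boolean softPoll, long tsNanos) {
        String tags = "deviceName=${deviceName.replace(' ', '\\ ')}"   // escape spaces in tag values
        String fields = "value=${value},softPoll=${softPoll}"          // booleans are valid field values
        return "${measurement},${tags} ${fields} ${tsNanos}"
    }

    // buildRecord("temperature", "Office Sensor", "21.5", true, 1700000000000000000L)
    // => temperature,deviceName=Office\ Sensor value=21.5,softPoll=true 1700000000000000000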

Acknowledged. I have no idea other than maybe the ConcurrentHashMap synchronisation is slowing things down somehow.

I like this idea a lot.

The problem with changing to that kind of parent/child relationship now is how you do it while keeping compatibility with current installs. I don't think you can. It would require a complete restart for everyone, as they would have to install the child app once they upgrade.


Perhaps that's the reason why some apps end up getting renamed (even built-in apps like RM, or Thermostat Scheduler, for which the old version was renamed "legacy"). That allows people to decide what they want to do, but it does add a bit of work when transitioning. I wonder if it would be possible to export the app running the current version, then import it into the (new) child app? I don't have an opinion on whether this change warrants something like that.

Apparently there is an addChildApp method. Not sure how usable it would be to help with a migration to a parent/child setup though. I have never tried it, and this is the only mention I can find of it.

maybe this

addChildApp("mynamespace", "my app", "My app label", [settings:[var1:["type":"bool", "value":true]]])

Please have a look at this PR. It moves the queue into the application state and allows multiple instances to be installed. Note that this requires Hubitat 2.2.9 or above, and will be reflected in the package manifest when published.

Unfortunately, I was not able to figure out any way to migrate any pending but uncommitted updates. Fortunately it's a one-time loss.

As @hubitrep discovered, a new loggerQueue was being created each time the application source was saved, so the loss of pending data was already happening. I had never noticed. On the plus side, with the move of the queue into the application state, pending data should be preserved when doing future updates.
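
To make the discussion concrete, here is a minimal sketch of the state-based approach as I understand it from the PR description. Other than loggerQueue and writeQueuedDataToInfluxDb (both mentioned in this thread), the names are placeholders, not the actual code:

    // Sketch only: queueing into application state so pending data survives code updates.
    // Plain state access is safe without an explicit semaphore when the app is marked singleThreaded.
    def handleEvent(evt) {
        state.loggerQueue = (state.loggerQueue ?: []) + encodeEvent(evt)
        if (state.loggerQueue.size() >= (settings.queueLimit ?: 50)) {
            writeQueuedDataToInfluxDb()
        }
    }

    def writeQueuedDataToInfluxDb() {
        List batch = state.loggerQueue ?: []
        if (!batch) return
        state.loggerQueue = []                 // drain the queue
        String body = batch.join('\n')         // line protocol, one record per line
        // post 'body' to InfluxDB (e.g. via an async HTTP POST) here
    }

    private String encodeEvent(evt) {
        // placeholder encoder; the real app builds a complete line-protocol record
        return "${evt.name},deviceName=${evt.displayName?.replace(' ', '\\ ')} value=${evt.value}"
    }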

I don't know of any way for a parent app to adopt an existing application as its child. I believe moving to full parent/child would require introducing a (completely) new version of the application.

As an end user: a) polling is very useful for battery-powered measuring devices, with temperature and lux being two obvious measures. IMO it's "truer" than interpolation. b) Go for breaking changes for multiple instances.

Other than the one-shot loss of pending data when the new version is installed, breaking changes haven't actually been required yet.

That said, if there were no install base, I would switch this to parent/child in a heartbeat. :slight_smile:


@dennypage Looking at a few things quickly, it looks like this removes the speed improvement we found with the InfluxDB 2.x upgrade PR. So this change to a state value for the logger queue will cost performance. It does seem to work though.

I don't either, but I was hopeful when I saw the addChildApp method. That said, I can't for the life of me figure it out.

Yeah, I'm not too surprised. Updating the state.* map seems to be expensive. I'm balancing that against supporting multiple instances and fixing the data loss on update issue, though.

There is probably a reasonable way to get overall performance back though. Remembering that the performance degradation is beyond linear, you can scale horizontally by using multiple app instances and dividing up the device load amongst them. So instead of a single instance with a queue limit of 50, three instances with a queue limit of 15.

The other alternative is Field variable maps, as @hubitrep proposed. However, this requires global mutexes, setters, and getters, each of which has its own performance impact. Also, with globals, each app instance will compete with other instances for access to the concurrent mutex and queue maps, so you may not be able to scale out as effectively. Unless there is a significant performance difference, I think it makes sense to go with the simpler approach.
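
For reference, a rough sketch of what that globals approach tends to look like (my own placeholder names, not @hubitrep's actual code):

    import groovy.transform.Field
    import java.util.concurrent.ConcurrentHashMap
    import java.util.concurrent.ConcurrentLinkedQueue
    import java.util.concurrent.Semaphore

    // Shared across every instance of the app, so entries are keyed by app id
    @Field static ConcurrentHashMap<Long, ConcurrentLinkedQueue> loggerQueues = new ConcurrentHashMap<>()
    @Field static ConcurrentHashMap<Long, Semaphore> queueMutexes = new ConcurrentHashMap<>()

    private ConcurrentLinkedQueue getLoggerQueue() {
        // computeIfAbsent is atomic, so two threads can't create two queues for the same instance
        loggerQueues.computeIfAbsent(app.id as Long) { new ConcurrentLinkedQueue() }
    }

    private Semaphore getQueueMutex() {
        queueMutexes.computeIfAbsent(app.id as Long) { new Semaphore(1) }
    }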

Well, it was well beyond linear. I would say orders-of-magnitude improvements in the time to process the step that creates the post record. Keep in mind this is also about CPU consumption; the decrease in CPU time was also a big boost.

So I have all 3 code bases loaded on my hubs.

The current published code base seems to write the queue out for posting in 1-2 ms regardless of size. Even with 34 records, which is where my limit is, it stays at 2 ms.

With @hubitrep's code, the time to write the queue out for posting for approximately 30 records goes to 100-120 ms, or around 3-4 ms per record.

With your last change to use a state value, the time to write the queue for posting went up to 159 ms for 38 records, or around 4.2 ms per record.

Another thought is that we take advantage of the current app using the same queue. We simply document and provide directions on how to use it properly so as not to get bad data. For example: don't use soft polling in more than one app; don't select the same devices and attributes between apps; do use multiple installs if you need access to all attributes as well as the legacy method.

My concern is that I don't want to ignore system impact when some of this stuff is changed. I am pretty certain that in the early days of InfluxDB Logger on Hubitat there were some performance issues/impact to the hub. I think that is partly why folks often use Node-RED to get data into InfluxDB instead of this app.

Agree simpler is better.

If I read this right, you seem to have gotten rid of the semaphore altogether. I have read other discussions here where it is said that state.* is not considered thread-safe and the app's event handler can get called from multiple threads. The alternative I've seen suggested is to use atomicState.* variables but, for those, writes go to the db directly. I suspect the performance might get even worse.

You are correct, the state map is not multi-thread safe. The semaphore isn't really gone, it's just externalized. The magic is on line 52. In Hubitat 2.2.9, they introduced the singleThreaded attribute for both apps and drivers, which provides externally managed thread exclusion. It is faster than atomicState and designed for just this kind of situation.

[Edit: singleThreaded attribute described here]
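
For anyone who hasn't seen it, this is roughly where the attribute goes in an app's definition block (the surrounding values here are placeholders, not the app's real metadata):

    definition(
        name: "InfluxDB Logger",
        namespace: "example",                // placeholder
        author: "example",                   // placeholder
        description: "Log device events to InfluxDB",
        category: "Utility",
        iconUrl: "",
        iconX2Url: "",
        singleThreaded: true                 // Hubitat 2.2.9+; only one thread runs the app at a time
    )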


If you want to try something with my version while I sort out my branches, try commenting out the mutex calls in queueToInfluxDb() and writeQueuedDataToInfluxDb(). I'm not sure it's needed. The mutex should be necessary only to protect the reference to the ConcurrentHashMap, not the ConcurrentHashMap itself, which is supposed to be thread-safe.
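
To illustrate the reasoning (not the actual code in the branch; only the two method names come from it, the bodies below are my guesses):

    import groovy.transform.Field
    import java.util.concurrent.ConcurrentHashMap

    // One pending batch per app instance; the map itself handles its own locking.
    @Field static ConcurrentHashMap loggerQueues = new ConcurrentHashMap()

    def queueToInfluxDb(String record) {
        // mutex.acquire()                     // candidate for removal
        // compute() is atomic per key, so concurrent appends for the same instance don't race
        loggerQueues.compute(app.id) { k, q -> (q ?: []) << record }
        // mutex.release()
    }

    def writeQueuedDataToInfluxDb() {
        // mutex.acquire()                     // candidate for removal
        List batch = (List) loggerQueues.remove(app.id)   // atomically detach the pending batch
        // mutex.release()
        if (!batch) return
        // build the line-protocol body from 'batch' and post it to InfluxDB here
    }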