On Managing Third Party Apps

Rather than just a list of approved/questionable apps (which quickly becomes unmanageable -- see the "Compatible Devices List" with 500+ replies, and which not every user will consult), I'd suggest that HE provide some automated guidance before a user installs an app or device driver.

For example, HE could easily have a database installed on each Hub (i.e., no cloud connection required) containing fields such as (see the sketch after this list):

  • App or Device name
  • Version number
  • Developer
  • Checksum
  • App Rating
  • Dev Rating

When a user begins to install an app, there would be a database lookup, returning guidance to the user, such as:

App Foobar version 1.6 is known to be stable. Are you sure you want to install version 1.5?

or

Application BizzBop version 1.8 is already installed and is known to conflict with Application Foobar. Are you sure you want to continue installing application Foobar?

or

Developer Hac0ErDEADBEEF has a reputation score of -9 in the Hubitat Community. Are you sure you want to install application "Connect my Bank and my Door Locks" from this developer?

or

Application SNAFU version 1.8 from Developer JSmith should have checksum 0x123. Are you sure you want to install a version with checksum 0x666?

Note that in each of these examples, HE is not taking responsibility for certifying that the application is OK, but is providing useful information to the end user and allowing them to shoot themselves in their chosen appendage.
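Purely to illustrate the flow, here is a rough sketch of the kind of pre-install check I have in mind, reusing the AppGuidanceRecord shape from the sketch above. The function and parameter names are mine; this is not HE platform code.

```typescript
// Illustrative pre-install check: look the candidate app up in the local
// guidance database and return warnings for the user to confirm or ignore.
// This is a sketch only -- none of it is real HE platform code.
function preInstallWarnings(
  candidate: { name: string; version: string; developer: string; checksum: string },
  db: AppGuidanceRecord[],       // the on-hub guidance table sketched earlier
  installed: string[]            // names of apps already on the hub
): string[] {
  const warnings: string[] = [];
  const known = db.find(r => r.name === candidate.name);
  if (!known) return warnings;   // no guidance available; install proceeds normally

  if (known.version !== candidate.version) {
    warnings.push(`App ${candidate.name} version ${known.version} is known to be stable. ` +
                  `Are you sure you want to install version ${candidate.version}?`);
  }
  if (known.checksum !== candidate.checksum) {
    warnings.push(`App ${candidate.name} from ${known.developer} should have checksum ` +
                  `${known.checksum}. Are you sure you want to install a version with ` +
                  `checksum ${candidate.checksum}?`);
  }
  for (const other of known.knownConflicts) {
    if (installed.includes(other)) {
      warnings.push(`Application ${other} is already installed and is known to conflict ` +
                    `with ${candidate.name}. Are you sure you want to continue?`);
    }
  }
  return warnings;
}
```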

There should be a choice to update the DB from the cloud (i.e., pull the latest copy maintained by HE) independently of upgrading the Hub OS (assuming compatible DB formats, etc).

The natural extension to this would be an anonymous usage database maintained by HE, where users could upload a machine-readable list of apps & devices generated by HE, along with user inputs (i.e., checkboxes for things like "works fine", or "crashes", etc). Higher weight would be given to sets of apps that have been installed for a longer time (i.e., it is more meaningful if a user reports that the system is stable when the same set of apps has been in place for a month than for a day).
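One simple way to express that weighting -- the 90-day cap, the linear shape, and the numbers are all placeholders, not a proposal for specific values:

```typescript
// Illustrative weighting for a crowd-sourced stability report.
// daysInstalled is how long this exact set of apps has been in place.
// The 90-day cap and the linear shape are arbitrary choices for the sketch.
function reportWeight(daysInstalled: number): number {
  const capped = Math.min(daysInstalled, 90);
  return capped / 90;   // 1 day ~ 0.01, 30 days ~ 0.33, 90+ days = 1.0
}
```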

1 Like

I disagree on several levels:

The main objection is, I would rather have the HE folks spending their time advancing the system than trying to maintain other folks' code. Note that what you suggest is a very time-consuming task.

Second, what you are asking for is a near impossible task. Even if an application worked well with HE, that doesn't mean it will work well with HE and another application.

Third, what motivation does HE have to "manage" other folks' code? AFAIK Microsoft does not have such a list and Google Play can barely weed out the nefarious applications.

Just my 2¢

John

6 Likes

I disagree on several levels:

Based on your reply, I think it is more accurate to say that you misunderstand on several levels.

The main objection is, I would rather have the HE folks spending their time advancing the system than trying to maintain other folks' code. Note that what you suggest is a very time-consuming task.

I would also like to have the HE folks spending their time advancing the system rather than trying to maintain other folks' code. That is why there is absolutely nothing in my proposal about maintaining other people's code. In fact, I clearly wrote that HE would not have responsibility for other people's code.

Second, what you are asking for is a near impossible task. Even if an application worked well with HE, that doesn't mean it will work well with HE and another application.

First of all, it's not "near impossible". Regression testing could determine many conflicts.

Secondly, many users already report conflicts between particular combinations of applications.

That's exactly why I'm suggesting crowdsourcing the collection of that kind of information, and not placing the task on HE to validate 3rd-party code.

Third, what motivation does HE have to "manage" other folks' code? AFAIK Microsoft does not have such a list and Google Play can barely weed out the nefarious applications.

You clearly misunderstand. I never suggested that HE "manage" other folks' code.

Your example of Google weeding out malicious applications merely proves my point. It is only a matter of time before there are malicious apps in HE. While preventing those, or even detecting them, is outside the current scope of HE, it would be a tremendous benefit to all users if there was a mechanism for each hub to alert the user before a malicious app (or conflicting app, or outdated app, etc) was installed.

HE would serve as a repository for user-supplied reports about the quality of applications. They are already performing that function right now, through the web forums we are both using. However, the information here is disorganized, difficult to search, and not necessarily statistically significant.

HE is in a unique position, as it "owns" the Hub codebase, to voluntarily collect anonymized usage data about 3rd party applications, and then make the aggregate of those reports available to all users. The motivation for HE is that it would add to the value that HE provides (enhance the product), by giving users greater assurance that the 3rd party apps they are about to install are OK, without requiring HE to manage external code.

The other motivation is that HE would have a larger & easier-to-parse body of data about 3rd party apps, adding to the small & self-selecting set of complaints here. This information could help them direct their code efforts to reach the larger user base (the majority that @bravenel claims don't post here).

Just my 2¢

John

Perhaps I was not clear with my statement "maintaining other people's code". I understood you perfectly; IMHO, "regression testing", collecting user comments on code, updating a "database" when code changes, etc., is what I call "maintaining other people's code".

However you look at it, man-hours will be consumed. Simply being aware of when an author updates their code is an appreciable task.

Your example of Google weeding out malicious applications merely proves my point. It is only a matter of time before there are malicious apps in HE.

Here I don't agree. My comparison was with the amount of resources available to Google vs. HE. Also, your statement does not cover the whole spectrum of platforms, where external programs are used to defend against malicious apps/programs/etc.

I guess in the end we will agree to disagree.

John

Testing performed by whom?

This is not in the least reliable. Many users don't know what's going on, and in many cases have misconfigured an app or driver.

These ideas definitely fall into the area of things that aren't going to happen:

  1. We have better things to spend our resources on.
  2. We won't accept the liability of third party code in any way, including those you are suggesting.

Same problems as cited above: resources and liability. We don't collect any data about our customers' specific usage of their hubs, and don't intend to start. We take our customers' privacy seriously, and have no plan to monitor how they use their hubs.

6 Likes

I was not and am not suggesting that HE do regression (or other) testing of 3rd party code.

I was responding to the assertion that it is "near impossible" to find conflicts between applications, and pointing out that regression testing is one method to achieve that end.

Secondly, many users already report conflicts between particular combinations of applications.

In my day job, managing a software development team, I am well aware that user feedback is usually not of very high quality. My suggestion was to gather that feedback in a simple way, more structured than the free-form posts here (such as my almost-useless anecdote about the hub being slow), in order to spot trends and have a higher assurance about the quality of any conclusion.

I am not advocating basing any community rating system on free-text inputs.

I am not suggesting accepting liability for any 3rd-party code, beyond what you already hold by allowing users to install that code in the first place.

HE is in a unique position, as it "owns" the Hub codebase, to voluntarily collect anonymized usage data about 3rd party applications, and then make the aggregate of those reports available to all users.

I was never suggesting monitoring usage.

Perhaps I wasn't clear, but I was envisioning something where the user could choose to have the hub generate a local text file -- suitable for review by the end user -- that the end user could upload to HE along with highly-structured ratings (i.e., a web page that displays the applications from the uploaded installation list, with a dropdown menu for the rating of each app -- 'worked', 'crashed'). The data being uploaded would be visible to the user, and nothing would be uploaded by the hub itself.
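For example, the generated file might look something like this -- the format and every name in it are invented purely for illustration:

```json
{
  "apps": [
    { "name": "Foobar",  "version": "1.6", "developer": "JSmith" },
    { "name": "BizzBop", "version": "1.8", "developer": "JDoe" }
  ],
  "devices": [
    { "driver": "SomeCustomDimmer", "version": "1.0", "developer": "JDoe" }
  ]
}
```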

In any case, you've made it clear that the kind of guidance I envision isn't part of your business plan. No need to continue this at all.

1 Like

This still requires HE engineering staff hours, additional cloud-service billable elements, UI development, etc...

2 Likes

I never said it was free, but perhaps I'm thinking of something much simpler than what you've got in mind.

Really simple HTML. Node.js for an upload form. Reject anything other than the text file generated by the hub with a list of {appname, version, etc}. Display a page with the named apps, with a dropdown next to each with a list of pre-defined user ratings (N/A, Great, Good, Conflicts with something on my hub, Turned all my smart gadgets into cat toys, etc), and buttons labeled "Clear" and "Save".

Nothing else.

Map named user response choices to numeric values. Keep a running average for each app. If there are a significant number of responses for a given app (I dunno, 20? 3% of the number of Hubs, etc.... let me ask our statistician) then make that score available as a form of guidance as to whether an end-user should consider installing that app.
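As a sketch of what I mean by the tally -- every mapping, threshold, and name here is a placeholder, not a spec:

```typescript
// Illustrative server-side tally: map the fixed dropdown choices to numbers,
// keep a running average per app, and only publish a score once there are
// enough responses to mean anything. All values here are placeholders.

const RATING_VALUES: Record<string, number> = {
  "Great": 2,
  "Good": 1,
  "Conflicts with something on my hub": -1,
  "Turned all my smart gadgets into cat toys": -2,
  // "N/A" deliberately omitted: non-answers shouldn't move the average
};

interface AppScore { count: number; sum: number; }
const scores = new Map<string, AppScore>();

function recordResponse(appName: string, choice: string): void {
  const value = RATING_VALUES[choice];
  if (value === undefined) return;          // ignore "N/A" and unknown choices
  const s = scores.get(appName) ?? { count: 0, sum: 0 };
  s.count += 1;
  s.sum += value;
  scores.set(appName, s);
}

const MIN_RESPONSES = 20;  // "I dunno, 20?" -- purely a placeholder threshold

// Returns a score only once enough reports exist; undefined means "no guidance yet".
function publishedScore(appName: string): number | undefined {
  const s = scores.get(appName);
  if (!s || s.count < MIN_RESPONSES) return undefined;
  return s.sum / s.count;
}
```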

Put in a page of disclaimers that HE hasn't reviewed the app, the code, the user responses, etc.

If this whole thing gets popular enough that you've got to deal with sock puppets trying to influence the app ratings, that's a measure of success, and there are lots of ways to deal with that.

Well, no matter how simple you want to make it, it still requires engineering time, and honestly there isn't much in the way of payback to us no matter the effort involved...

This is the closest thing you can get here.

I guess sometime in the future Hubitat will maybe implement GitHub integration.

Any rating will be based on the community. It's already impossible for one person at Hubitat to test all apps and drivers because there are too many of them.