Hubitat's button implementation

Hubitat's button implementation differs from ST's; in fact, we do not have a "button" capability implemented in our apps.
We didn't like it, so we abandoned it altogether...

We have replaced "button" with one mandatory capability, PushableButton, and optional add-ons: HoldableButton, DoubleTapableButton, and ReleasableButton.


PushableButton (replaces button)
driver def:
	capability "PushableButton"
	push(<button number that was pushed>)

	NUMBER	numberOfButtons		//sendEvent(name:"numberOfButtons", value:<number of physical buttons on the device>)	
	NUMBER	pushed				//sendEvent(name:"pushed", value:<button number that was pushed>)
device list input type:
	subscribe(deviceList, "pushed", buttonHandler)		//all buttons
	subscribe(deviceList, "pushed.1", buttonHandler)	//button one only
	a numberOfButtons event should be sent on driver update; this allows an app to determine the button count if desired.
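As a sketch, a minimal driver implementing this capability might look like the following (the device name, namespace, and button count are placeholders, not a real driver):

```groovy
// Minimal PushableButton driver sketch (illustrative; names are placeholders)
metadata {
    definition(name: "Example Pushable Button", namespace: "example", author: "example") {
        capability "PushableButton"
    }
}

def installed() {
    updated()
}

def updated() {
    // report the button count so apps can discover it
    sendEvent(name: "numberOfButtons", value: 4)
}

// mandatory command; also lets the button be driven virtually
def push(buttonNumber) {
    sendEvent(name: "pushed", value: buttonNumber, isStateChange: true)
}
```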


HoldableButton (used in addition to PushableButton if device supports a hold function)
driver def:
	capability "HoldableButton"
	hold(<button number that was held>)

	NUMBER	held				//sendEvent(name:"held", value:<button number that was held>)
device list input type:
	subscribe(deviceList, "held", buttonHandler)	//all buttons
	subscribe(deviceList, "held.1", buttonHandler)	//button one only
	this capability is not intended to be used separately from PushableButton; all holdable buttons are expected to also implement PushableButton


DoubleTapableButton (used in addition to PushableButton if device supports this function)
driver def:
	capability "DoubleTapableButton"
	doubleTap(<button number that was double tapped>)

	NUMBER	doubleTapped		//sendEvent(name:"doubleTapped", value:<button number that was double tapped>)
device list input type:
	subscribe(deviceList, "doubleTapped", buttonHandler)	//all buttons
	subscribe(deviceList, "doubleTapped.1", buttonHandler)	//button one only
	this capability is not intended to be used separately from PushableButton; all double-tapable buttons are expected to also implement PushableButton


ReleasableButton (used in addition to PushableButton if device supports this function)
driver def:
	capability "ReleasableButton"
	release(<button number that was released>) 

	NUMBER	released //sendEvent(name:"released", value:<button number that was released>)
device list input type:
	subscribe(deviceList, "released", buttonHandler)	//all buttons
	subscribe(deviceList, "released.1", buttonHandler)	//button one only
	this capability is not intended to be used separately from PushableButton; all releasable buttons are expected to also implement PushableButton
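On the app side, subscribing to these events follows the pattern shown above; a sketch tying it together (the input name and handler name are placeholders):

```groovy
// App-side sketch: subscribing to button events (names are placeholders)
preferences {
    section("Select buttons") {
        input "deviceList", "capability.pushableButton", multiple: true
    }
}

def initialize() {
    subscribe(deviceList, "pushed", buttonHandler)   // any button pushed
    subscribe(deviceList, "held.1", buttonHandler)   // button one held only
}

def buttonHandler(evt) {
    // evt.name is "pushed", "held", etc.; evt.value is the button number
    log.debug "${evt.device} ${evt.name} button ${evt.value}"
}
```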

I realize it’s pedantic, but why the “able”? HoldButton, DoubleTapButton and PushButton seem more streamlined.

What if it supports triple tap like the Homeseer switches?


Plus “Tapable” is not an English word. :wink:


I guess we’ll have to use the good ole ST workarounds till that 3x tap is added (i.e., map those commands to button numbers instead). I’m just glad it works with the doubleTap because the Remotec could be really confusing.

In smartthings it was:
Buttons 1 through 8 (single Tap or held) - actual physical buttons
Buttons 9 through 16 (doubleTap buttons one through 8) - imaginary :wink:

Now it’s just
1 through 8 (Pushed, Held, DoubleTapped)
and can be programmed without math


I’ll skip any complaints about making up new words…

A couple of questions:

  1. Why commands? Aren’t buttons classified as just actuators, so they don’t receive commands? I haven’t gotten far enough here, but in ST, the switch capability would cover a button that can also receive commands (to control something it’s directly connected to). Maybe I’m not understanding what “commands” refers to in this context.

  2. Where is the “released” state? If a button is “holdable” as you’ve named it, or momentary, then it likely sends one message when pressed and a different message when released. Otherwise, there needs to be a time attribute that defines how long after the button is pressed it is considered “held”.

The advantage of having “pushed”, “held”, and “released” events is that you can set up rules that would, for example, toggle a dimmer switch on (if it was off) on “pushed”, start increasing the brightness of a light on “held”, and then stop the change in brightness on “released”.


Not all button devices provide release, so the least common denominator applies.
There is a whole CentralSceneNotification class in Z-Wave, supported by a few devices, that provides a steady stream of events.
We’ll get there at some point; we’re focusing on automation right now, and we can look at all this other stuff soon enough.


Lutron keypads, including Pico, do generate a release event. And in our Lutron integration we do time those for a Pico to create pushed and held. Of course, when used directly in a Lutron system these buttons do raise or lower something while pressed, and then stopped. I spent some time experimenting with doing this in Hubitat for z-wave devices, using a Pico. Unfortunately Z-Wave is just not responsive enough for this to really work as desired, so I abandoned it. It is very difficult to match what Lutron can do in their own system where they basically have zero latency in RF. Mesh networks have just enough latency for those cool things not to work well.


Well, at least one of the ZigBee Xiaomi buttons I've just started trying to get working with Hubitat sends separate pushed and released attribute values.

In the ST device handler for that button, I'm working on code for a user set countdown delay to generate the held event, and of course when they release the button, then a released event is sent. For the pushed event, that happens only if the button is released before the countdown delay. I'd really like to be able to replicate this functionality in the Hubitat device driver for the button.
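One way to sketch that countdown approach in a Hubitat driver is with runIn/unschedule (this is a sketch of the technique, not the actual Xiaomi driver code; the preference and method names are placeholders):

```groovy
// Sketch: fire "held" after a user-set delay, "pushed" only on quick release
// (preference and handler names below are hypothetical)
preferences {
    input "holdDelay", "number", title: "Seconds until a press counts as held", defaultValue: 2
}

def buttonPressed(buttonNumber) {          // called when the device reports a press
    state.heldFired = false
    runIn(holdDelay ?: 2, "fireHeld", [data: [btn: buttonNumber]])
}

def fireHeld(data) {
    state.heldFired = true
    sendEvent(name: "held", value: data.btn, isStateChange: true)
}

def buttonReleased(buttonNumber) {         // called when the device reports a release
    unschedule("fireHeld")
    if (!state.heldFired) {
        // released before the countdown elapsed, so this was a push
        sendEvent(name: "pushed", value: buttonNumber, isStateChange: true)
    }
    sendEvent(name: "released", value: buttonNumber, isStateChange: true)
}
```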

I suppose I could "fake" it by sending a pushed event on a different button number when the button is released. It's just confusing semantics-wise.


Is there any way to "hide" or get rid of the button-related commands when viewing the button device?


I haven't defined them in my device driver because I don't need to use them, and would rather they not appear at all. Shouldn't commands only appear if they are actually defined in the device driver code?

Did you add “capability Button”? That will add those commands.


No. I specifically did not use capability "Button" because Mike explains in the first post of this thread that you guys have abandoned that implementation that is used in SmartThings.

I am using capability "PushableButton" and capability "HoldableButton" and my device driver is working perfectly as best I can tell so far.

I see how having the commands available is useful, but I just don’t want them to appear in the device status page because they’re not being used in my particular driver.

Until working on this driver, I’ve only seen commands appear on the device status page when they are defined in the metadata section of the device driver. So this behavior is confusing to me, and I imagine it will be to users as well, as this device driver has no need to use them.

Hold and push are mandatory commands for HoldableButton and PushableButton, respectively. Why would you define these capabilities in your driver if you don’t intend to send events when these methods are executed?
Any commands associated with a capability are always displayed in the web UI.


In that case, with all due respect, I go back to my previous unanswered question: Why does a button need commands?

Please correct me if I’m wrong, but this is the information I have to work with in understanding the Hubitat button definition:

  • The metadata{} section and the command "commandName" definition is not part of Groovy language, but rather are part of Hubitat’s Device Driver and App API.

  • Hubitat has not yet provided developer documentation for its API, so the only resource available for a similar (but not exactly matching) API is the SmartThings Developer Documentation. I am not aware of any other Groovy-based HA device API and simply looking at the Groovy programming language documentation does not help in understanding Hubitat’s Device Driver and App API.

  • SmartThings describes commands as “the things that a device can do”. Their example is for the “switch” capability, and its required on and off commands need to be linked to methods in the device driver code, which tell the device to perform some type of action.

So commands associated with a capability need to be defined to tell the device to do something.

A button can’t do anything on its own. It has to be pressed / released / actuated, and then it sends a message to the hub. It cannot receive a command from the hub to press / release / actuate itself. It is solely a trigger, as named in Rule Machine. For all of these reasons, the SmartThings Button capability definition has no associated commands. Neither does their Holdable Button capability.

Is the Hubitat Button capability only reserved for Switches with buttons? If yes, then in the Hubitat API what is the capability reserved for button devices that are simply a trigger, and do not perform any actions in and of themselves?


You’re not thinking of virtual devices; they need commands.

In any event, just because a capability has defined commands, it doesn’t mean you need to use them.
The device interface is a configuration interface, it is not a dashboard interface.

It therefore makes little sense to spin up a derivative button capability set with no command requirement just to prevent the commands from showing up in the device configuration UI.


That's precisely what I was thinking of when I said "I see how having the commands available is useful" a few posts up.

This seems to contradict your previous reply. But fair enough, I can either put in empty hold() and push() methods, or stick a sendEvent in each of them to make them virtual buttons.
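Those two options might look like this in a driver (a sketch; pick one approach per driver, of course):

```groovy
// Option 1: stub out the mandatory commands so they do nothing (sketch)
def push(buttonNumber) { }
def hold(buttonNumber) { }

// Option 2: have the commands generate events, i.e. a virtual button (sketch)
def push(buttonNumber) {
    sendEvent(name: "pushed", value: buttonNumber, isStateChange: true)
}
def hold(buttonNumber) {
    sendEvent(name: "held", value: buttonNumber, isStateChange: true)
}
```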

I have to guess the latter is preferable, since the requirement of those commands in the Button capability results in a button being displayed in Apps as both an available trigger and also a controllable device. I will have to hope the lowest common denominator rule does not apply to Hubitat’s users, and they won’t try to control a non-controllable button.


To be honest, I needed a reminder from the team on how those commands became part of the defined capability, when they should really only exist in their virtual counterparts as you’ve noted.
So you’re correct about this. In any event, this is something I will update in a future release.


Warning: The following is going to be extremely technical, but it’s important when considering a button model.

Z-Wave defines the command sets a device “supports” (can receive) and “controls” (can send). They are broken out on the conformance statement, but often just provided as a single list elsewhere.

Zigbee defines “in clusters” (can receive) and “out clusters” (can send) in a similar way. These are broken out in the device class specification, but again, often provided as a single list elsewhere.

(I have noticed that SmartThings documentation and documentation writers were often confused about this distinction, and never really broke it out consistently when discussing capabilities, particularly if they had used a pre-existing device as a model.)

Z-Wave complicated all of this when they introduced v3 notifications and “central scenes.” Most people assume that central scene would be a controlled command set, because the end device sends the scene number to the controller. But instead it’s listed under “supported” commands, because there are some configuration steps required by the controller. The same is true for v3 notifications. Consequently these are shown as “supported.”

This is paradigm drift, where something that was really simple in the initial design concept (in and out) ended up being shaped by the engineering implementation so that even experienced people sometimes have to look up whether a command is listed as supported or controlled.

Most Z-Wave buttons now use central scene commands. There have been changes between V1, V2, and V3 of central scenes, but in all cases the controller does send a command to the button, which has to do with timing the long hold. So even though normally you don’t think of a button as receiving a command, in order to implement central scenes, the buttons actually do.

All of which is to say that, depending on which buttons the firmware designers were looking at, it may very well be that the model requires that the button be able to receive commands as well as send them.

I know we’re into super technical tiny details here, but I did want to mention it before somebody throws the baby out with the bathwater , as my great grandmother used to say. :wink:

@mike.maxwell @patrick


Thanks - as usual - for your ever-enlightening and well-researched information!

Since I'm a "ZigBee guy" when it comes to device drivers, I just spent the last hour and a half going through the Central Scene Class section (and a few other sections) of the Z-Wave Application Command Class Specification document you linked to, reviewed the most up-to-date SmartThings Capabilities definitions on their new Developer Portal site, read through a dozen or so of your ST forum posts on the topic of Z-Wave Central Scene Notification support in SmartThings and why ST support for Local Scene Controllers, which communicate directly with other end devices without the need for a hub, wouldn't make sense, and finally I looked at two examples of Z-Wave devices with Central Scene Notification support in ST, the HomeSeer HS-WS100+/HS-WD100+ switch/dimmer and the Remotec ZRC-90 Scene Master, and delved deeply into their respective ST Device Handlers.

Out of all of that reading, I still don't see any need for a commands requirement in a capability definition for physical button devices.

Capabilities assigned to devices are what allow Apps to see the device as being available as a trigger and/or an action. Commands defined in a capability put the device in the action list. So the commands are the actions the device can take.

Physical buttons, on their own, can't take any actions. Taking the Remotec ZRC-90's device handler for ST as an example, it does/did not make the ZRC-90 available to Rule Machine in the action list of devices. That's because it's not physically connected to anything (like the HomeSeer switches are), and it doesn't make much sense to duplicate all of those physical buttons as virtual buttons when the purpose of the device in the first place is to act as a physical trigger device.

Even if the ZRC-90 could receive Z-Wave commands to make changes to its Central Scene configuration, that is not considered an action that you want available in any App such as Rule Machine. It should instead be handled as a preference setting in the device driver. The ZRC-90 device handler otherwise works very similarly, for example, to the device driver I've built for the "original" Xiaomi Button device: different types of presses of each of the physical buttons correspond to different button events sent to the hub. With a Z-Wave device that's presented as Central Scene Notifications with a scene number and key attribute, while a ZigBee device sends cluster and attribute reports.

Then, in the case of the HomeSeer HS-WS100+/HS-WD100+ device handlers for ST, there seems to be a case for having button commands, but that's because it is also a switch, and with the Central Scene Notification support, the physical buttons have the dual role of both being a direct physical trigger for the mains-powered lights, etc. that are directly connected to the HomeSeer switch/dimmer, as well as being capable of acting as triggers for Apps on a Z-Wave hub.

However, if you look at the device handler code for both of those HomeSeer devices, the commands used for all of the possible physical switch panel actions are custom-named, defined only in those device handlers, and not part of either SmartThings' switch or button capability definitions.

The main purpose of those custom-named commands is to link them to the "virtual" buttons presented in the device handler's user interface. They are not revealed to any Apps that adhere to the trigger and action lists model, based on capabilities. Only CoRE or webCoRE, that I'm aware of, make available device driver specific custom-named commands.

On the other hand, the HomeSeer HS-WS100+/HS-WD100+ device handlers for ST do provide Apps access to the switch and switch level capability-based commands. Those are independent of the button capability, however. So that's why the HomeSeer switch/dimmers show up in the action list. It's just not based on the button capability.

But again, even in the case of the device handler for the HomeSeer switch/dimmers, the way the Central Scene Notification support is handled is to map the different notifications to different button pressed events on the SmartThings hub (as explained on this Darwin's Den webpage):

Action            Button #   Button Action
Double-Tap Up     1          pressed
Double-Tap Down   2          pressed
Triple-Tap Up     3          pressed
Triple-Tap Down   4          pressed
Hold Up           5          pressed
Hold Down         6          pressed
Single-Tap Up     7          pressed
Single-Tap Down   8          pressed
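That mapping amounts to a simple lookup in the central scene handler; a sketch of the technique in the ST event style being described (a hypothetical helper, not the actual Darwin's Den handler code):

```groovy
// Sketch: map a central scene action to an ST-style button number
// (the action strings and helper names here are hypothetical)
def buttonNumberFor(String action) {
    def map = [
        "doubleTapUp"  : 1, "doubleTapDown"  : 2,
        "tripleTapUp"  : 3, "tripleTapDown"  : 4,
        "holdUp"       : 5, "holdDown"       : 6,
        "singleTapUp"  : 7, "singleTapDown"  : 8
    ]
    return map[action]
}

def reportButton(String action) {
    def btn = buttonNumberFor(action)
    if (btn) {
        // ST-style button event: the button number rides in the event data
        sendEvent(name: "button", value: "pushed",
                  data: [buttonNumber: btn], isStateChange: true)
    }
}
```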

I should point out that the events sent by the HS-WS100+/HS-WD100+ device handlers don't seem to strictly follow ST's most up-to-date switch or button capability definitions, in that they include a type: attribute that is either physical or digital, which I see was previously supported. It appears ST is going to deprecate that use of the type: attribute.

So, to sum up, I don't see how removing the commands from the (physical) button capability definition at all affects the ability to send Central Scene Class configuration commands to Z-Wave devices with version 3 notification support. Those should be handled by device drivers as preference settings, or automatically in the device driver's code.

And, I still feel that the button definition shouldn't include commands that put physical buttons on the action list of Apps. This opens up the possibility of infinite trigger-action loops, as well as giving users the impression that buttons on non-switch devices can be controlled, when in fact they can't. I'm still all in favor of a button implementation that delineates between physical and virtual buttons, though, or some kind of implementation that allows the use of virtual buttons for inter-hub operability or computer/mobile user-interface based buttons.


And here I thought you just gave up and went outside instead. :smiley: