Z-Wave has two completely different kinds of scenes. First, the original local scenes, where the scene controller sends a command directly to the end device without telling the primary controller (the hub) about it.
And second, the newer "central scenes," where the scene controller sends the hub a command with the central scene number, and the hub then acts on it.
Z-Wave introduced central scenes primarily because of the increasing use of mobile apps: if the hub doesn't know about the instruction, the mobile app gets out of date. But they are also very useful for multiprotocol platforms like SmartThings and Hubitat.
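To make the distinction concrete, here's a minimal sketch of how a hub-side driver can consume a central scene message. This is illustrative only: it assumes a v1 Central Scene notification and the usual key attribute values (0 = pressed once, 2 = held).

```groovy
// Illustrative sketch: a Hubitat Z-Wave driver fragment that turns a
// Central Scene notification from the device into a button event on the hub.
def zwaveEvent(hubitat.zwave.commands.centralscenev1.CentralSceneNotification cmd) {
    switch (cmd.keyAttributes) {
        case 0:  // key pressed once
            sendEvent(name: "pushed", value: cmd.sceneNumber, isStateChange: true)
            break
        case 2:  // key held down
            sendEvent(name: "held", value: cmd.sceneNumber, isStateChange: true)
            break
    }
}
```

Because the notification goes to the hub, the hub (and any mobile app) always knows the scene was triggered, unlike a local scene, which bypasses the hub entirely.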
So what the post that you linked to actually says is that it would not make sense for SmartThings to implement local Z-Wave scenes. It says nothing about central scenes.
Yes, I completely got that point, and the distinction between support for Central Scene Notifications to a hub versus Central Scene Controllers being able to send scene notifications directly to other nodes (end-devices) without any hub intervention. I just wasn't finished with my post.
Nevertheless, I've added a bit more explanation to that paragraph to make sure others understand the distinction as well. So thanks for pointing it out!
Yes, well now half my day is laid to waste, and the sun looks to be yielding to rain clouds. That's the last time I'll start digging in that deep on a nice day!
As far as switches go, SmartThings has made the decision to deprecate the use of "type: physical" for two reasons:
Not all Z-Wave switches report type physical.
The type is almost always lost if the message is routed through a repeater, which means type physical at the switch becomes type digital just because it passed through a repeater.
When you take those two things together, SmartThings found that different customers had very different experiences using identical code, because it depended on the exact switch and network layout that they were using. So they removed the type physical reporting from their stock Z-Wave switch device handlers a few months ago.
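For context, here's a rough sketch of what that reporting looked like, written in Hubitat-style driver code. The isDirectReport() helper is hypothetical; the point is that the handler has no reliable way to know whether the report took a direct or routed path.

```groovy
// Illustrative only: a Z-Wave switch handler tagging an event's type.
// A report heard directly from the switch could be tagged "physical", but
// the same report arriving through a repeater often lost that distinction.
def zwaveEvent(hubitat.zwave.commands.basicv1.BasicReport cmd) {
    sendEvent(name: "switch",
              value: cmd.value ? "on" : "off",
              type: isDirectReport() ? "physical" : "digital")  // isDirectReport() is hypothetical
}
```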
I'm not sure why I didn't notice it until today, but I observe that the hold / push command "buttons":
are no longer displayed on the Device Details page of the Hubitat web interface - which is great - while buttons with pushable / holdable capabilities still appear to be available to apps as devices that can be controlled.
So I am wondering whether there have been any changes to the button capability other than the removal of the hold / push commands from the Device Details page.
Hold, Push and doubleTap are no longer commands associated with the button capability suite, so those commands will no longer show up in the driver details.
They have been added (command "push", command "hold", etc.) to drivers that actually need them, i.e. virtual button controllers and some of the Lutron controllers.
There have been no functional changes to this capability set.
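A minimal sketch of what that looks like in driver code (the names here are illustrative): the capability alone no longer implies the command, so a driver that genuinely accepts digital presses declares it explicitly.

```groovy
metadata {
    definition(name: "Example Virtual Button", namespace: "example", author: "example") {
        capability "PushableButton"  // provides the "pushed" attribute/events only
        command "push", [[name: "buttonNumber", type: "NUMBER"]]  // explicitly opts back in to a digital push command
    }
}

def push(buttonNumber) {
    // Generate the same event a physical press would
    sendEvent(name: "pushed", value: buttonNumber, isStateChange: true)
}
```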
But for device drivers that have capability "PushableButton", etc., even if they don't manually include command "push", etc., is it normal for them to show up in the list of controllable buttons?
For example, in RM -> Select Actions for True -> Control switches, buttons, capture/restore, refresh, poll -> Push a button, I am still seeing my physical buttons listed in the Select Pushable Button window, and the device driver for these buttons does not include command "push".
Capabilities define what is shown in app input elements of type "capability.xyz"; there is no input of type command.
Commands are methods that can be run on a device: device.on(). on is a command, and one capability that defines "on" as a command is capability.switch…
When a capability has associated commands, the driver UI creates a visible element that, when pushed, runs that command on the device with any parameters that may be associated with it.
Think of capabilities, their associated commands, and their attributes as an interface agreement, because that is exactly what they are.
When you select a switch in an app, you have a rightful expectation that it will respond properly to on() and off().
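Put in code, that agreement looks something like this (a minimal app sketch; mySwitch and the handler name are illustrative):

```groovy
// The input is filtered by capability, so anything the user can select here
// is guaranteed to implement the Switch interface agreement.
preferences {
    section("Pick a switch") {
        input "mySwitch", "capability.switch", title: "Switch to control"
    }
}

def someHandler(evt) {
    mySwitch.on()  // safe: "on" is a command defined by capability.switch
}
```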
Having said that, there is nothing preventing an app that has a reference to a device from attempting to run a command on that device. If the device doesn't have that specific command defined by a capability, or via command "myCustomCommand", an error will be thrown in the log.
In the case of RM, think of the "Push a button" selection as a static implementation of a custom command.
So it cannot be had both ways: drivers either get the command tiles as defined by a given capability, as well as those defined via command "custom", or they don't. If a given command isn't defined by either of those two means, it cannot be run from an app.
The exception to this is devices created by an app (child devices); there the app does have access to every method defined within the driver.
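For example, an app can guard a speculative call with hasCommand() instead of letting the error land in the log (a sketch, reusing the myCustomCommand name from above):

```groovy
def tryCustomCommand(dev) {
    // hasCommand() checks whether the driver defines the command via a
    // capability or a command "..." declaration before we attempt it.
    if (dev.hasCommand("myCustomCommand")) {
        dev.myCustomCommand()
    } else {
        log.warn "${dev.displayName} does not define myCustomCommand"
    }
}
```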
Alexa doesn't work with buttons at all--buttons only report information; you cannot issue a command to "act" on them like you could with a switch, bulb, outlet, etc. (It's possible that in the future, native or third-party apps may enable things like "is there motion in the kitchen?" or "when was this button pressed?", but that isn't the case now.)
To make sure I understand: instead of a button having states (pushed, held, released, double tapped, etc.), we now have separate attributes that indicate which button was acted upon? This seems counter-intuitive to me.
Other devices, like a motion detector, don't have separate attributes for each of their states.
Buttons don't have states. They have events. Our button implementation uses capabilities (not attributes) to distinguish the sorts of events a button device is capable of.
There are four button capabilities: PushableButton, HoldableButton, ReleasableButton and DoubleTappableButton. These correspond to the ability of particular button devices to generate corresponding events. Not all devices can produce all of those event types. This is why we need the capabilities. Apps need to be able to discern which capabilities are valid for a given device, and hence which events it may want to subscribe to.
In each case, a button event carries the event type (pushed, held, released, doubleTapped) as the event name, and the button number as the event value.
An app can subscribe to an individual button if needed, or to all of the buttons -- in each case for the particular capability. Presumably all apps that work with buttons will want to distinguish between pushed and released, for example, and do different things in response. Simple apps might only respond to pushed -- the one capability each button driver must support (PushableButton).
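As a concrete sketch of that subscription model (the input and handler names here are illustrative):

```groovy
preferences {
    section {
        input "buttons", "capability.pushableButton", title: "Buttons", multiple: true
    }
}

def installed() {
    // Subscribe per event name; a richer app would also subscribe to
    // "held", "released", and "doubleTapped" on devices that declare
    // the matching capabilities.
    subscribe(buttons, "pushed", buttonHandler)
}

def buttonHandler(evt) {
    // evt.name is the event type, evt.value is the button number
    log.debug "${evt.device.displayName}: button ${evt.value} was ${evt.name}"
}
```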
EDITED TO ADD:
It is worth noting that this approach has made possible some cool things not otherwise possible. For example, using a Pico one can press and hold a button to raise a dimmer, releasing when the dimmer is at the correct level.
Yes, that's true, but a motion detector doesn't have multiple instances (normally) in one device either. If you think of a button controller having 1..N buttons (instances), it makes perfect sense.
If a certain company had thought about the reality of devices containing multiple endpoints providing the same services when the capability schema was designed, we wouldn't be in this child composite device pickle...
You would do something like switchOn(instance), with an event of switchOn, value instance id... or something else along these lines.
There are methods in some dimmer drivers called startLevelChange(direction), and stopLevelChange(). These get tied to button pushed and released events. This works for built-in drivers for Z-Wave and Zigbee. We have not done it for Lutron dimmers, and I'm not sure it's possible.
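In app terms, the wiring is roughly this (a sketch; `dimmer` is assumed to be a device selected with a capability.changeLevel input, and the handler names are illustrative):

```groovy
def buttonHeld(evt) {
    dimmer.startLevelChange("up")  // begin ramping; direction is "up" or "down"
}

def buttonReleased(evt) {
    dimmer.stopLevelChange()  // freeze the dimmer at its current level
}
```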