Extracting a token from a String Variable

In Release there is a new advanced feature in Rule Machine that allows you to extract a token from a string, including from a String Variable.

Notice that the string used above is 123-456-7890. The Delimiter is the character (or characters) that will be used to break the string into tokens; "-" above. The index selects which token we want, starting with 0 for the left-most token. So, in this case, the result stored in Charlie would be 123.

That's not very useful as shown, because obviously we know what the first token of that string is. So, let's try a more advanced version of the same thing:

Now, in this case, the String Variable Foxtrot holds a string, and we can pull the tokens out of it, where the tokens are separated by "-". This time we would expect to get 456 set into Charlie.
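The indexing described above can be sketched in plain Java rather than Groovy. For strings like these, with no empty fields, `String.split` yields the same tokens as Groovy's `tokenize`; the variable names Charlie and Foxtrot are taken from the examples above:

```java
public class TokenDemo {
    public static void main(String[] args) {
        String foxtrot = "123-456-7890";

        // Index 0 selects the left-most token.
        String charlie = foxtrot.split("-")[0];
        System.out.println(charlie); // prints "123"

        // Index 1 selects the second token, as in the String Variable example.
        charlie = foxtrot.split("-")[1];
        System.out.println(charlie); // prints "456"
    }
}
```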

Here is the log from running these actions:

Advanced Uses

For those of you familiar with Groovy, this feature uses the Groovy method:

tokenize(String Delimiter)[Number Index]

Care must be taken to be sure that Index is in bounds. Were we to ask for index 3 for the string used above, the rule would throw an index-out-of-range error. There is no protection against such an error; it will crash your rule.
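Since RM provides no guard of its own, one defensive pattern (sketched in Java, with `split` standing in for Groovy's `tokenize`) is to check the index against the token count before using it:

```java
public class IndexBoundsDemo {
    public static void main(String[] args) {
        // Three tokens, so valid indices are 0..2.
        String[] tokens = "123-456-7890".split("-");

        // Guard the index before using it; RM itself has no such guard.
        int index = 3;
        if (index >= 0 && index < tokens.length) {
            System.out.println(tokens[index]);
        } else {
            System.out.println("index " + index + " is out of bounds");
        }
    }
}
```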

One advanced use of this feature is to pass multiple parameters into a rule from an endpoint trigger that sets a String Global Variable. The rule actions can then break the passed-in string into its tokens and use those in a Custom Action to perform some task.

The example below shows a rule that uses this to set a lock code on a lock. Notice this rule uses another new feature to set a Number Variable from a String Variable that contains a numeric string.

Here is the URL for the endpoint:

This endpoint would create a lock code on the Garage Lock in slot number 6, code of 9876, and named "New Code".
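The parsing step in that rule can be sketched in Java. The slot (6), code (9876), and name ("New Code") come from the example above; the ":" delimiter and the payload layout are assumptions for illustration, not the actual format used by the endpoint:

```java
public class LockCodeParams {
    public static void main(String[] args) {
        // Hypothetical payload format: slot, code, and name separated by ":".
        String payload = "6:9876:New Code";
        String[] parts = payload.split(":");

        // Like RM's Number-Variable-from-String feature: parse the numeric token.
        int slot = Integer.parseInt(parts[0]);
        String code = parts[1];
        String name = parts[2];

        // Stand-in for the Custom Action that sets the lock code.
        System.out.println("setCode(" + slot + ", " + code + ", \"" + name + "\")");
    }
}
```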


THIS. IS. AWESOME! Thanks @bravenel!!! You just solved one of my biggest issues with HE now!


This does NOT suck, even a little bit. Complete and total lack of sucking.


@bravenel Is there a way to get a token based on a starting index and number of chars? For example, if I wanted to get "Contains" out of "ThisContainsMyToken", I could reference (4,8).

There is no index/number substring operation. You can subtract from a string, or tokenize it.

  1. Is there a way to get the number of tokens?
    (In Groovy: tokenize(String Delimiter).size())

  2. I'm curious: Why tokenize and not split?
    (IMHO split would be more powerful.)

Split returns a string and tokenize returns a list. So, it depends on what you're trying to do with the result.

IMHO split returns an array (of strings).

Well, it does. These are subtle differences. See this: Groovy : tokenize() vs split() | TO THE NEW Blog

But, irrespective, in the context of RM, it doesn't really matter which is used underneath, as they're both going to return the same element given the definition.
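The distinction the blog post describes can be sketched in Java, where `StringTokenizer` approximates Groovy's `tokenize` (both drop empty tokens) while `split` keeps them, making indices fixed positions:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.StringTokenizer;

public class TokenizeVsSplit {
    public static void main(String[] args) {
        String s = "a--b"; // note the empty field between the two delimiters

        // split() keeps the empty token, so each index is a fixed position.
        System.out.println(Arrays.toString(s.split("-"))); // prints [a, , b]

        // StringTokenizer (like Groovy's tokenize) drops empty tokens.
        List<String> tokens = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(s, "-");
        while (st.hasMoreTokens()) tokens.add(st.nextToken());
        System.out.println(tokens); // prints [a, b]
    }
}
```

For strings with no empty fields, the two return the same element at every index, which is why the choice doesn't matter for the examples above.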

On a related note, I noticed one cannot use a variable (number) for Index. Would be a helpful mod fwiw.

Yeah, I'll take a look at that.

No problem, next release.


OK this clearly has to be a dream or a sick joke.

User: “It would be great if the software could do X”
15 mins later from founder and top engineer: “yeah nbd we’ll take a look.”

The Hubitat team was clearly absent the day they taught all software companies how NOT to be responsive to their users. :rofl:

Gratitude & respect. Seriously.


So sorry to disappoint! :stuck_out_tongue_winking_eye:

The change is already merged into the next release --- coming soon.


You know it took so long because I thought I should at least test it before committing.


"Test before committing"? Where's the adventure in that? :stuck_out_tongue_winking_eye:



You guys totally do not suck! :joy:

Exactly, good example!

IMHO the "subtle" differences are very useful: the possibility to use a RegEx, delimiters with length > 1, and fixed index positions (badly needed, e.g., when transferring arguments as one string).

So you're saying that if the implementation used split instead of tokenize, you'd use a regex delimiter?

I'm just saying that split offers possibilities that are not in tokenize. :wink:

But delimiters with length > 1 (e.g. parsing sentences) and fixed index positions (even when there are empty strings) are things I would use right now.

Delimiters with length > 1 are allowed now.

I don't see any reason we couldn't use split() instead of tokenize(). That will allow fixed index positions with null tokens.
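The regex-delimiter capability mentioned above is straightforward to sketch, since Java's `split` takes a regex just as Groovy's does. The sample string and character class here are illustrative, not from the rule examples:

```java
import java.util.Arrays;

public class RegexSplitDemo {
    public static void main(String[] args) {
        // The regex matches "-" or "," or any run of whitespace,
        // so one delimiter expression handles several separators at once.
        String s = "123-456, 789 000";
        System.out.println(Arrays.toString(s.split("[-,\\s]+")));
        // prints [123, 456, 789, 000]
    }
}
```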