Node-RED Room-Aware Voice Control

I don't think it will do exactly what you want (that is still a little unclear), but you might want to look at Voiceflow. It allows you to build your own skill using a Node-RED-esque interface, including intent and slot info. I have experimented with it some and have been able to pass any information I have into Node-RED via HTTP calls from the skill to HTTP-in nodes in Node-RED.

Basically I want to grab a phrase like "Alexa, turn on ESPN." Within NR, I would have the room awareness via this Alexa integration, but I also want to be able to capture the intent and slot/entity values using NLP. So, in this case, I would call an API and get something back like...

```
intent: "ChangeChannel",
channelName: "ESPN",
channelNumber: 206
```

These channel number mappings would live inside the NLP platform.

This would allow me to bypass the skill-invocation process while leveraging NLP capabilities directly.
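To make the idea concrete, here is a minimal sketch of what the Node-RED side might look like: a function node that takes a hypothetical NLP API response (the field names are the ones from the example above, but the exact shape of any real NLP service's response is an assumption) and copies it onto msg properties for downstream nodes.

```javascript
// Hypothetical sketch: a Node-RED function-node body that maps an
// NLP API response onto msg properties. The response shape is assumed,
// based on the example payload above, not any specific NLP product.
function mapNlpResult(msg) {
  // Assume an upstream HTTP request node put the NLP response on msg.payload.
  const nlp = msg.payload;

  msg.intent = nlp.intent;
  msg.channelName = nlp.channelName;
  msg.channelNumber = nlp.channelNumber;
  return msg;
}

// Quick local check outside Node-RED:
const out = mapNlpResult({
  payload: { intent: "ChangeChannel", channelName: "ESPN", channelNumber: 206 },
});
console.log(out.intent, out.channelNumber); // ChangeChannel 206
```

In a real flow this would just be the function-node body (ending at `return msg;`), wired between the HTTP request node and whatever performs the channel change.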

Even if we got this to work, I think I would still need dummy channels/switches in Alexa to avoid the "I'm sorry, I can't find that device" response...

Any ideas? :slight_smile:

I think it would be more trouble than it's worth, but with Voiceflow you could create an Alexa skill that works with Node-RED to do this. You would have to invoke the skill, though. It would go something like this:
Alexa, open myskill (something like "channel selection")
Which channel?
Then the skill sends the device name and channel name to NR, where you could map the channel name to the number. You could also build the mapping inside Voiceflow, but I think it is easier in NR.
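The name-to-number mapping mentioned above could be a simple lookup in a function node. A rough sketch, with an illustrative (made-up) channel table:

```javascript
// Sketch of a Node-RED function node mapping a spoken channel name to a
// channel number. The table entries are illustrative, not real lineup data
// (206 and 12 are the numbers used elsewhere in this thread).
const channelMap = {
  espn: 206,
  cbs: 12,
};

function lookupChannel(msg) {
  const name = String(msg.channelName || "").toLowerCase();
  msg.channelNumber = channelMap[name]; // undefined if the name is unknown
  return msg;
}

const out = lookupChannel({ channelName: "ESPN" });
console.log(out.channelNumber); // 206
```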

Out of interest, what are you using to control the TV?

I'm not too familiar with these, but isn't there a way to connect them to HE?

I'm guessing you have that already, so I don't see why it can't be done in NR.

"Alexa, ESPN" if using the driver for HE, then doesn't this allow you to set commands in the Child?
So if "Alexa ESPN" was heard in the lounge, then you use that and switch on that switch in NR.

Or am I missing something?

This supports telnet commands to the device above; works great.


Ah, so you're able to do it?

I create dummy virtual switches in HE/Alexa. Then I say "Alexa, turn on ESPN." I hear the chime, the Alexa NR integration determines which Sonos device I spoke to, and then routes the request to the Global Cache device in the appropriate room via IR commands. You will need a mapping subflow which converts "ESPN" to the actual channel number, with a loop to send the IR command digits. I use this with an AT&T TV streaming box; works great!
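The "loop" part of that subflow could look like this: split the channel number into digits and emit one message per digit, since returning an array from a Node-RED function node sends each element as a separate message. The `room` property is just an illustrative pass-through:

```javascript
// Sketch of the digit loop: split a channel number into individual digits
// so a downstream node can fire one IR command per digit. Property names
// other than channelNumber are hypothetical.
function channelToDigitMsgs(msg) {
  const digits = String(msg.channelNumber).split("");
  // One message per digit; in Node-RED, returning an array sends them in order.
  return digits.map((d) => ({ payload: d, room: msg.room }));
}

const msgs = channelToDigitMsgs({ channelNumber: 206, room: "Living Room" });
console.log(msgs.map((m) => m.payload)); // [ '2', '0', '6' ]
```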


I do something similar with my Harmony; it's very cool :slight_smile:

Yes, but I wanted a solution with zero cloud dependency.


I really want to use Natural Language Processing for my smart home but also bypass skill invocation.

Does Amazon sell a business-level Alexa solution that would let me create my own bot at the root of the Alexa device? Ultimately these results would flow into Node-RED with the source Alexa device/group name. I don't want to say, "Alexa, launch my Home Bot." I just want to say the command directly.

"Alexa, change the channel to CBS." (using NLP to parse the channel name with a number-entity lookup; the info would be passed as msg properties)


I am not 100% sure what you are asking, but I think you want to be able to say "Change channel to CBS" when Alexa has to hear "Alexa, ask Harmony (or similar) to change channel to CBS." If that's the case, a String node could prepend "Ask Harmony to" to your command before sending it on. Your dummy routine would have to be the "Change channel to CBS" command.
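The prepend idea above is trivial as a function node, sketched here (the "Ask Harmony to" phrase is the one from this suggestion; whether Alexa would accept the result is a separate question):

```javascript
// Sketch of the prepend idea as a Node-RED function node: add the skill
// invocation phrase in front of the spoken command carried on msg.payload.
function prependInvocation(msg) {
  msg.payload = "Ask Harmony to " + msg.payload;
  return msg;
}

const out = prependInvocation({ payload: "change channel to CBS" });
console.log(out.payload); // Ask Harmony to change channel to CBS
```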

If your request is more complicated than that, I will sit down and shut up. :grin:

1 Like

Have a look at the Amazon developer site to see if this meets your needs.

I want to be able to say "Alexa, change the channel to CBS" without invocation, and without setting up hundreds of dummy routines in the Alexa app.

I would receive the following data in Node Red from the Alexa node.

```
msg.intent = "changeChannel"
msg.alexaSourceDevice = "Master Bedroom Echo"
msg.channelNumber = 12
```
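Given msg properties like those, the room-awareness piece in Node-RED could be a simple device-to-room lookup before routing. A sketch, with an invented mapping table:

```javascript
// Hypothetical sketch: derive the target room from which Echo heard the
// command. The device-to-room table is invented for illustration.
const deviceToRoom = {
  "Master Bedroom Echo": "master_bedroom",
  "Living Room Echo": "living_room",
};

function routeByDevice(msg) {
  msg.room = deviceToRoom[msg.alexaSourceDevice] || "unknown";
  return msg;
}

const out = routeByDevice({
  intent: "changeChannel",
  alexaSourceDevice: "Master Bedroom Echo",
  channelNumber: 12,
});
console.log(out.room); // master_bedroom
```

A switch node keyed on `msg.room` could then send the command to the right IR blaster.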

Is there any way to create this type of experience? :slight_smile:

Tbh, I haven't had a thorough look at the developer docs, but I know you can create your own skill (so no dummy routines), so I would imagine that if you integrate the Harmony into your own skill, it should be possible.

The whole Amazon developer and AWS services world is another deep rabbit hole I've only just started scratching the surface of!

I feel like the Alexa part above is your biggest hurdle. I can't come up with any logic you could implement to tell her not to go out and search the internet when a phrase isn't in her "quiver", while still handling your specific phrases, without telling her what those phrases are.

I can't seem to get around this. My Alexa skill always requires invocation, so there's no way to issue a direct command. We'd need HE to add support for this, but then again, if we push the command through HE, we lose room awareness, since Alexa doesn't support room awareness for TVs, fans, etc. Currently Alexa only supports room awareness for lights.

Have you looked at the custom Alexa routines? They added the capability to have them issue "voice" commands, so in theory you could have Alexa tell itself to do something.

Thanks, but this doesn't meet the variable and room-awareness requirements. I do not want to create 50 routines for 50 channels. FYI, I use an IR blaster driven from Node-RED for TV control.
