Possible Local Voice Control Option?

Sorry, I don't see anything with that link...

Search for Rhasspy Home Assistant on Google lol. Idk why the link is broken.

The cert is bad on the www subdomain; the link works without www.

https://rhasspy.readthedocs.io/en/latest/

Good work. How well does it work in comparison to Alexa and Google Assistant? Is it slow to recognize intents?

Sorry, I haven't used Alexa or Google Assistant, and I don't have any other computers to try this on besides a Raspberry Pi, where I understand speech recognition hasn't been ported over with Mono yet. So I don't know. I would guess it's quite responsive compared to the others, since the vocabulary is so limited. I could use some outside feedback and testing - want to try it and then tell me how it compares?

I didn't really want to get into a BIG project with a user interface and all, but the program needs user information - the hub's URL, the Maker API access token, a wakeup phrase, a go-to-sleep phrase, and a few other things. I've got it working off a small text file with that info, but there's no real error checking yet - a bad URL in particular isn't handled gracefully at all. As long as you can create a suitable text file, it should work. If you're interested, I'm not sure how to get it to you - I do have a website I could put it on??
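For anyone who wants to roll their own file in the meantime, here's a minimal sketch (in Python, just to illustrate) of the kind of loading and sanity checking described above. The key=value format and the key names are hypothetical - the actual program's file layout isn't shown in this thread.

```python
# Minimal sketch of loading and validating a settings file like the one
# described above. Format and key names are hypothetical.
import urllib.parse

REQUIRED = ("hub_url", "access_token", "wake_phrase", "sleep_phrase")

def load_config(path):
    """Read simple key = value lines into a dict, skipping blanks and comments."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

def validate(config):
    """Fail early on missing settings or a malformed hub URL."""
    for key in REQUIRED:
        if not config.get(key):
            raise ValueError(f"missing required setting: {key}")
    parts = urllib.parse.urlparse(config["hub_url"])
    if parts.scheme not in ("http", "https") or not parts.netloc:
        raise ValueError(f"hub_url doesn't look like a URL: {config['hub_url']!r}")
    return config

# Example file contents:
#   hub_url = http://192.168.1.50
#   access_token = abcd-1234
#   wake_phrase = computer wake up
#   sleep_phrase = computer go to sleep
```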

On my system, when I'm done speaking, the delay for the action to happen is about the same as the delay in clicking a button on my dashboard - it's pretty quick.

Just saw this come my way recently.
Does anyone know anything about the Candle Controller?
Candle Controller
Looks interesting, and they do speak about local voice control.

It feels very kludgy, but W10 has built-in voice recognition under Accessibility. When enabled, an applet appears at the top of your screen at startup. You say "Start listening" and then follow that with a command like "Open 'app or shortcut'", where the 'app or shortcut' can be a Maker API URL that activates something. It's local voice control - not at its finest, but it is local. I shut off my Wi-Fi to make sure the voice bit wasn't calling the mothership.

I was looking for a way to open and close the Alexa app via voice so it isn't always listening, and realized you can use it to trigger any shortcut or installed program.
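For anyone wanting to try the same trick: a shortcut target along those lines is just a single Maker API GET request. A rough Python equivalent is below; the hub address, app ID, device ID, and token are all placeholders you'd swap for your own values from the Maker API app page.

```python
# Rough Python equivalent of what opening such a shortcut URL does.
# Hub address, app ID, device ID, and token are placeholders.
import urllib.request

HUB = "http://192.168.1.50"   # placeholder hub address
APP_ID = "5"                  # placeholder Maker API app instance ID
DEVICE_ID = "12"              # placeholder device ID
TOKEN = "your-access-token"   # placeholder access token

def send_command(command):
    """Issue a Maker API device command, e.g. 'on' or 'off' for a switch."""
    url = (f"{HUB}/apps/api/{APP_ID}/devices/{DEVICE_ID}/{command}"
           f"?access_token={TOKEN}")
    with urllib.request.urlopen(url) as resp:
        return resp.read()

if __name__ == "__main__":
    send_command("on")  # same effect as opening the shortcut URL
```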

It has incredible cursor control and text string capabilities. Bridging it to a dashboard would take us a long way towards local voice control via a PC.

This is actually pretty damn neat. There's a "mouse grid", activated by saying "mouse grid", that divides the screen into nine numbered squares. You speak a number and the grid shrinks into that box. You keep doing that until the box is over the element you want to click, then say "click" and the cursor glides over to that spot and clicks. You can navigate and use HE's built-in dashboard that way.

Totally impractical for our needs, but fun for a few minutes.

I might try making a Word document full of Maker API links this weekend and see how awkward it is to search for and select a keyword to click.
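If hand-building that document gets tedious, the Maker API can list your devices itself, so the links could be generated. A rough sketch follows - it writes HTML rather than a Word file, since that's easier to produce programmatically; the hub address, app ID, and token are placeholders, and the JSON field names (id, label) assume the Maker API's /devices response format.

```python
# Sketch: build a clickable page of Maker API links from the device list.
# Hub address, app ID, and token are placeholders.
import json
import urllib.request

HUB, APP_ID, TOKEN = "http://192.168.1.50", "5", "your-access-token"
BASE = f"{HUB}/apps/api/{APP_ID}"

# Fetch the JSON device list the Maker API exposes.
with urllib.request.urlopen(f"{BASE}/devices?access_token={TOKEN}") as resp:
    devices = json.load(resp)

# Write one clickable link per device per command.
with open("maker_links.html", "w") as out:
    out.write("<html><body>\n")
    for dev in devices:
        for cmd in ("on", "off"):  # commands to link for each device
            url = f"{BASE}/devices/{dev['id']}/{cmd}?access_token={TOKEN}"
            out.write(f'<p><a href="{url}">{dev["label"]} {cmd}</a></p>\n')
    out.write("</body></html>\n")
```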

We need the computer to recognize maybe 200 words, and it only has to learn a few users' voices. Google and AWS are building something that can speak every language and monitor every conversation; our home automation goals are not so lofty. We don't need a 10,000-node data center. The Mycroft project has proven that point.
