Which voice assistant do you recommend?

This!

I work in IT security, and my sense is that if you're a target for the various three-letter agencies, then yes, your phone, Alexa, RCS messages, and SSL traffic can ALL be broken and monitored, even if your phone is sitting passively on your desk. That's the "CAN" part of the conversation above.

As for the "DOES" noted above, my sense is that breaking good encryption is hard (by design), and even nation-state actors have limited resources to do it. So conversations near all those passive phones aren't just routinely monitored; there isn't enough bandwidth in the world for that. And voice assistants aren't streaming everything to Google, Amazon, or Apple all the time without a wake word. Folks have actually tested this by looking at traffic monitoring.

That all said, three-letter agencies do routinely monitor text and email communications electronically (especially traffic crossing borders) and have "keyword" tools scanning nearly ALL that traffic. If you trigger a keyword in that automation, you may get a bit more "study." It's a bit like how Gmail "reads" your mail and suggests calendar entries for flights, just done on a different set of keywords and applied to nearly all international traffic.

But again, I don't believe large tech companies are streaming all VA audio back to their mothership for AI training; there are real bandwidth limits for that to scale. But everything you say after a wake word (even one triggered by accident) is certainly fair game, and I'd expect it to be used for AI training and potentially in court cases or foul-play investigations.

So CAN everything be monitored? Yes, even your local rPi or HA Voice Assistant (read up on NSA tailored access programs and HW implants). But that DOESN'T happen often, or for everyone. You need to get into their crosshairs for nation states to bother, and why would tech companies ever bother when they can just monitor your voluntary interactions with their apps?

JMHO

7 Likes

My mistake. Thought this was lounge, not "getting started". I agree with keeping the thread on topic. My comment is withdrawn. I'm here to talk about home automation, especially with Hubitat. I apologize for any comments contrary to that.

6 Likes

There is no need for a sniffer because they are not eavesdropping, they are legally recording. We give them access to whatever the device can record, which becomes their property and what they do with it is up to them.
"Samsung may record and store audio and video recordings made using it's devices, including voice assistants and video calls." (Excerpt taken from the Samsung EULA.)

2 Likes

Nah, it’s eavesdroppin.

I have a Google Home/Nest mini. I also have a rule to initialize it every 30 minutes, which might be a bit excessive. The action is simply, "initialize () on Hubitat Speaker" which is the name I gave the speaker.

1 Like

Yup - exactly the same for me. :slight_smile:

1 Like

Guilty as charged (me) as well; good to get the discussion back on track.

2 Likes

So I spent some time yesterday and today looking over and working with Home Assistant's DIY voice assistant stuff. Simply put, it seems promising, with some inherent issues.

What is a little depressing is that I don't think the concept is really the problem; it's the integration layer. I connected HA to one of two Ollama instances on my server, and it works just as it would if I was using Open WebUI to have a conversation. I loaded llama3.2, which has 3B parameters but can fit in my tiny 8GB GPU. It worked fairly well and was certainly sufficient for home automation. The problem is the integration component: the intents used to have Ollama submit commands back to HA. The intents are basically tools that Ollama uses to create the downstream communication back to the hub.

I actually said this in a different post, but what we need is someone who understands how to design those intents (tools); then it shouldn't be too bad to have it control Hubitat. It would work as well (or as badly) as the integration manages those intents. From a conversational perspective, it seemed to work really well. It was just the translation of actions from the conversation to Home Assistant. That all comes down to the voice pipeline.
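To make the intents-as-tools idea concrete, here's a minimal sketch: an OpenAI-style function schema of the kind Ollama's chat API accepts in its `tools` parameter, plus a tiny dispatcher that turns a tool call into a Hubitat Maker API URL. The tool name `set_device`, the hub address, app id, and token are all hypothetical examples for illustration, not anything from HA's actual integration.

```python
# Sketch: define one "intent" as an Ollama tool schema and route the
# model's tool call to Hubitat's Maker API. All names/values here
# (set_device, hubitat.local, app id "7", "TOKEN") are made up.
from urllib.parse import quote

# OpenAI-style function schema; Ollama's chat API accepts a list of
# these in its `tools` parameter.
SET_DEVICE_TOOL = {
    "type": "function",
    "function": {
        "name": "set_device",
        "description": "Turn a Hubitat device on or off",
        "parameters": {
            "type": "object",
            "properties": {
                "device_id": {"type": "string",
                              "description": "Hubitat device id"},
                "command": {"type": "string", "enum": ["on", "off"]},
            },
            "required": ["device_id", "command"],
        },
    },
}

def maker_api_url(hub: str, app_id: str, token: str,
                  device_id: str, command: str) -> str:
    """Build the Maker API URL that executes a device command."""
    return (f"http://{hub}/apps/api/{app_id}/devices/"
            f"{quote(device_id)}/{quote(command)}?access_token={token}")

# Example: a tool call the model might emit, dispatched to the hub.
call = {"name": "set_device",
        "arguments": {"device_id": "42", "command": "on"}}
if call["name"] == SET_DEVICE_TOOL["function"]["name"]:
    url = maker_api_url("hubitat.local", "7", "TOKEN",
                        call["arguments"]["device_id"],
                        call["arguments"]["command"])
    print(url)  # an HTTP GET to this URL would run the command
```

The point being: the schema half is easy; the hard part the post describes is designing a good set of these tools and reliably dispatching what the model emits back to the hub.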

All that said, I don't know if this will ever make it to Hubitat. There are a lot of parts to it on HA, many of which may not work well on Hubitat. Even if we got the integration component for something like this, it would still need external hardware to be reasonably quick: think at least a Jetson Nano to run the small LLM, TTS, and STT. Then whatever DIY device integrates with that still needs a decent mic and speaker. I can see it being very possible, but it will take a lot of focus to bring all the parts together.

Something else I should clarify is that you don't need an LLM and Ollama to do voice assistant tasks for home automation. If you just want to control devices and don't mind being specific with your requests, you can skip the LLM stuff altogether and just use the local conversation pieces in Home Assistant. That allows device control, it seems.

4 Likes

I run Josh.ai for voice. I also have hubitat and control4.

Josh.ai can do many of the things you can do in hubitat, but it's not hubitat. It also costs money and is positioned differently than hubitat.

Just throwing it out there. It's private.

2 Likes