My Josh.ai is acting up now that it updated its LLM choice. It's probabilistic, so it will probably work. It does an incredible job telling stories and adopting personalities when prompted, but it hates my Bond fan ceiling light. It knows how to turn it on as part of a group of lights, but it refuses to believe in a "ceiling light" with an alias "ceiling light". It can turn off the lights, though. It can't turn off the 'ceiling light'. I'd like to be able to train it to snicker and argue with me.
My work AI models are doing similar things: acting like 14-year-olds in advanced courses. They excel, but they don't think twice about ignoring instructions and cutting corners. They're happy to admit to making mistakes, and show no remorse or intention of doing anything differently next time.
I got the e-mail on October 8th... promising all kinds of "easy smart home automations", etc.
I do not think Google even knows what automations actually are... If I have to use my voice, that is remote control, not automation. And apparently, they can't even get that right.
No thanks, and glad I did not do it after seeing all the reviews.
That note 1 subscript? Yeah, that means it requires a subscription.
For home automation, I don't really want my AI to be human, or resemble a human in any way. I want it to be a robot and do the exact thing I tell it to, exactly the same way every single time. And to stfu whilst doing so.
Wait until you get into arguing about what you want to be reminded about:
You: turn the lights off
AI: got it, your lights are off, would you like me to lock the door?
You: No
AI: perfect, should I remind you to lock the door the next time?
You: no
AI: Got it, I will not remind you to lock the door the next time, should I remind you when the washer has finished its cycle....
....
3 hours later
You: No, I do not want to be reminded about anything
On one hand it's quite similar to my inner monologue while tinkering with my automations in HE, but somehow 100x more annoying when the voice is coming from outside my head.
So far my issue with Josh.ai is that it thinks it doesn't keep context. It does keep some unimportant context, but it refuses to believe it does. So I can't train it to do a better job on device recognition during a session the way I can with my other tools. I think that might be the key to individual success with home automation AI: developing detailed context and corrections that can be carried forward. If you have both text context and a voice prompt, you may get better results.
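What I mean is something like this minimal Python sketch. To be clear, this is purely illustrative: Josh.ai doesn't expose an API like this, and the names (`remember_correction`, `build_prompt`, `CORRECTIONS`) are all hypothetical. It just shows the idea of carrying alias corrections forward by prepending them to every new request.

```python
# Hypothetical sketch: carry corrections forward between voice requests.
# Nothing here is a real Josh.ai interface; it only illustrates prepending
# accumulated alias corrections to each new transcribed request.

CORRECTIONS: list[str] = []  # e.g. "'ceiling light' means the Bond fan's light"

def remember_correction(note: str) -> None:
    """Store a correction so later requests in the session can use it."""
    CORRECTIONS.append(note)

def build_prompt(voice_request: str) -> str:
    """Prepend the running list of corrections to the transcribed request."""
    context = "\n".join(f"- {c}" for c in CORRECTIONS)
    return (
        "Known device-name corrections from earlier in this session:\n"
        f"{context or '- none yet'}\n\n"
        f"User request: {voice_request}"
    )

# Usage: after one correction, the next request already knows about the alias.
remember_correction("'ceiling light' is an alias for the Bond fan light group")
print(build_prompt("turn off the ceiling light"))
```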
I want to actually interact with it on the configuration on a per-room basis. Today I have room context, so in each room I can just say "turn on the lights." What I want to do is ask it to list all the lights in the room and the names I should use to control them separately. Then ask Josh to group them into named scenes, or give it rules like "only turn on the table lamps past 10pm when I ask to turn the lights on." Then: "please tell me the lighting rules you have in this room..."
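As a rough sketch of the kind of rule I mean, something like this, assuming a hypothetical resolver that maps "turn on the lights" to a device list. The `Room` and `LateNightRule` classes, scene names, and device names are made up for illustration; they are not a Josh.ai or Hubitat API.

```python
# Hypothetical sketch: a per-room rule where, after a cutoff time,
# "turn on the lights" resolves to only the table lamps.

from dataclasses import dataclass, field
from datetime import time, datetime

@dataclass
class Room:
    name: str
    lights: dict[str, list[str]] = field(default_factory=dict)  # scene -> devices

@dataclass
class LateNightRule:
    """After `cutoff`, 'turn on the lights' only turns on the late-night scene."""
    cutoff: time = time(22, 0)
    late_scene: str = "table lamps"
    default_scene: str = "all lights"

    def resolve(self, room: Room, now: datetime | None = None) -> list[str]:
        now = now or datetime.now()
        scene = self.late_scene if now.time() >= self.cutoff else self.default_scene
        return room.lights.get(scene, [])

# Usage: a living room with two named scenes and the 10pm rule applied.
living = Room("living room", {
    "all lights": ["Ceiling Light", "Table Lamp Left", "Table Lamp Right"],
    "table lamps": ["Table Lamp Left", "Table Lamp Right"],
})
rule = LateNightRule()
print(rule.resolve(living, datetime(2024, 1, 1, 23, 0)))  # -> only the table lamps
```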
If Gemini could do that with Hubitat, it would be amazing.
Well, Josh does a lot more than just talk and GPT, but I get the point.
Things are moving very fast, so I'd imagine in less than a year we'll see viable, inexpensive AI voice control, or maybe AI-assisted voice control, for platforms like Hubitat. The key factor is the mic hardware.