Title: Free & Easy Way to Control Hubitat with Claude/ChatGPT using MCP
Hey everyone,
I've been experimenting with MCP (Model Context Protocol), the open standard that lets AI assistants connect to external tools and APIs, and wanted to share an easy, free way to get it working with Hubitat.
Once set up, you can control your smart home with natural language:
"Turn on the living room lights"
"Set the thermostat to 72 degrees"
"Lock all the doors and set the house to Away mode"
"What devices are currently on?"
"Dim the bedroom lights to 30%"
The AI figures out which Maker API endpoints to call based on what you ask. It's pretty slick.
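To make that concrete, here's roughly what those underlying Maker API calls look like. A minimal Python sketch; the hub address, app ID, token, and device ID 42 are all placeholders you'd swap for the values from your own Maker API install:

```python
import requests

# Placeholders: use your hub's IP, the Maker API app ID, and the
# access token shown on the Maker API app page in Hubitat.
HUB = "http://192.168.1.50"
APP_ID = "123"
TOKEN = "your-access-token"
BASE = f"{HUB}/apps/api/{APP_ID}"

def maker_get(path):
    """Maker API exposes everything as simple GET endpoints."""
    r = requests.get(f"{BASE}{path}", params={"access_token": TOKEN}, timeout=10)
    r.raise_for_status()
    return r.json()

# "What devices are currently on?" -> list devices, then check states
devices = maker_get("/devices")

# "Turn on the living room lights" -> a plain command
maker_get("/42/on")  # shortened for illustration; real path is /devices/42/on
maker_get("/devices/42/on")

# "Dim the bedroom lights to 30%" -> a command with a value appended
maker_get("/devices/42/setLevel/30")
```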
The Setup
I put together a guide and an OpenAPI spec that covers the full Maker API:
The README has full step-by-step instructions for setting everything up: creating the Workato project, the HTTP connection, the API proxy, and the MCP server, then connecting it to your AI assistant of choice.
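For a sense of what the spec contains, here's the general shape of one entry. This fragment is illustrative rather than copied from the repo, but it follows the Maker API URL structure:

```yaml
# Illustrative fragment -- the real spec covers every Maker API endpoint.
openapi: 3.0.3
info:
  title: Hubitat Maker API
  version: "1.0"
paths:
  /devices/{deviceId}/{command}:
    get:
      operationId: sendDeviceCommand
      summary: Send a command (on, off, setLevel, lock, ...) to a device
      parameters:
        - name: deviceId
          in: path
          required: true
          schema: { type: string }
        - name: command
          in: path
          required: true
          schema: { type: string }
        - name: access_token
          in: query
          required: true
          schema: { type: string }
      responses:
        "200":
          description: Command accepted
```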
Why This Approach?
I wanted something I could set up quickly without running infrastructure. Workato handles all the MCP protocol details and gives you a secure hosted endpoint. The free tier is generous enough for personal/experimental use.
If anyone gives this a try, let me know how it goes. Happy to help troubleshoot. And if you find issues with the setup or want to improve the docs, PRs are welcome.
What MCP actually does:
MCP (Model Context Protocol) is an open standard that lets AI assistants like Claude and ChatGPT connect to external systems and take actions on your behalf. Think of it as giving the AI "hands" to interact with tools beyond just chatting.
Without MCP, if you ask Claude "what lights are on?" it can only guess or say it doesn't know. With MCP connected to your Hubitat, it can actually call the Maker API, get the real device states, and give you an accurate answer. Same for commands: "turn off the kitchen lights" triggers a real API call to your hub.
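With the Workato approach you don't write any of this yourself, but to show the mechanics: an MCP server just advertises tools (a name, a description, and typed parameters) that the AI can invoke. A rough sketch of a self-hosted equivalent using the official `mcp` Python SDK, reusing the same placeholder hub values as above:

```python
import requests
from mcp.server.fastmcp import FastMCP

BASE = "http://192.168.1.50/apps/api/123"   # hypothetical hub + app id
TOKEN = "your-access-token"

mcp = FastMCP("hubitat")

@mcp.tool()
def list_devices() -> list:
    """List all devices exposed through Maker API, with current states."""
    r = requests.get(f"{BASE}/devices/all",
                     params={"access_token": TOKEN}, timeout=10)
    r.raise_for_status()
    return r.json()

@mcp.tool()
def send_command(device_id: str, command: str, value: str = "") -> dict:
    """Send a command (on, off, setLevel, lock, ...) to one device."""
    path = f"/devices/{device_id}/{command}" + (f"/{value}" if value else "")
    r = requests.get(f"{BASE}{path}",
                     params={"access_token": TOKEN}, timeout=10)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; point your AI client at this script
```

The docstrings become the tool descriptions the AI reads when deciding what to call, which is why it can map "turn off the kitchen lights" to a send_command call without you spelling out the endpoint. The Workato setup generates the same kind of tool surface from the OpenAPI spec instead.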
How input works:
Right now it's text-based through whatever interface you normally use to chat with Claude or ChatGPT. So you'd type "what lights are on?" in Claude's web interface or the app, and it responds with actual data from your hub (like in my screenshot).
Voice could work if you're using a voice-enabled interface to the AI. Claude's mobile app has voice input, for example. You could also build something with the Claude or OpenAI APIs that takes voice input, converts to text, sends to the AI with MCP tools available, and speaks the response back.
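As a sketch of that DIY route, here's one round of the loop using the OpenAI Python SDK. To keep it short it uses plain function calling rather than a full MCP client, and the file names, model choices, and hub placeholders are all assumptions; a real assistant would loop until the model stops requesting tools:

```python
import json
import requests
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BASE = "http://192.168.1.50/apps/api/123"   # hypothetical Maker API base
TOKEN = "your-access-token"

def maker_get(path: str):
    r = requests.get(f"{BASE}{path}", params={"access_token": TOKEN}, timeout=10)
    r.raise_for_status()
    return r.json()

# 1. Speech to text
with open("question.wav", "rb") as audio:
    text = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# 2. Let the model decide whether to call the hub (single tool-call round)
tools = [{
    "type": "function",
    "function": {
        "name": "maker_get",
        "description": "Call a Hubitat Maker API path, e.g. /devices or /devices/42/off",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]
messages = [{"role": "user", "content": text}]
reply = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = reply.choices[0].message
if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        result = maker_get(json.loads(call.function.arguments)["path"])
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": json.dumps(result)})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = reply.choices[0].message.content

# 3. Text back to speech
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
with open("answer.mp3", "wb") as f:
    f.write(speech.content)
```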
Why this is useful:
The AI understands context and intent, so you don't need to know the exact commands or device names. You can ask things like:
"Is the garage door closed?"
"Set the house to away mode and make sure all the doors are locked"
"What's the temperature in the basement?"
"Turn on the porch light at sunset" (though for scheduled stuff, Rule Machine is still better)
It's also good for querying status across multiple devices at once, or asking the AI to help you think through automation ideas based on what devices you actually have (like in my other screenshot).
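That multi-device case works because Maker API can return every device's state in one call. A small sketch of the aggregate query the AI effectively runs, reusing the earlier placeholders and assuming the attribute shape that /devices/all returns:

```python
import requests

BASE = "http://192.168.1.50/apps/api/123"   # same hypothetical placeholders
TOKEN = "your-access-token"

# /devices/all returns every device with its full attribute list in one call
devices = requests.get(f"{BASE}/devices/all",
                       params={"access_token": TOKEN}, timeout=10).json()

on_now = [d["label"] for d in devices
          if any(a["name"] == "switch" and a["currentValue"] == "on"
                 for a in d.get("attributes", []))]
print("Currently on:", ", ".join(on_now) or "nothing")
```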
It's not a replacement for dashboards or Rule Machine, but it's a nice additional way to interact with your setup, especially for ad hoc queries and commands.