Since taking a new job I have been working very heavily in data science and artificial intelligence. I was cobbling together some prototypes in my employer's lab and it was taking forever to get things done, so I decided to leverage my home lab since I can spin things up and down very quickly.
Behold, I have created Frankenstein. That is the name of the conversational AI I have built.
Perfect naming on multiple levels:
- Halloween themed - spooky season appropriate
- Test document - literally trained on the book Frankenstein
- "Frankensteined" solution - pieced together from different parts (Google APIs, Qdrant, n8n, Proxmox)
- The irony - an AI named after a novel about creating artificial life
- Social context - like the monster, it's powerful but needs careful handling
The Stack That Built "Frankenstein":
- Brain: Google Gemini 2.5 Flash (or soon Ollama)
- Memory: Qdrant vector database (1,567 chunks of knowledge)
- Body: n8n workflow orchestration
- Heart: Proxmox LXC infrastructure
- Soul: Whatever I train it on
"It's alive! IT'S ALIVE!"
When I switch to local Ollama, I can truly say: "I created life... from scratch... in my own lab... with no cloud dependencies!"
That looks very nice. I have been working with Ollama and OpenWebUI in a different thread, trying to see how far I could take an AI instance talking directly with the hub. I look forward to seeing how your solution moves things forward. I have a way to export a variety of hub device information to try to use with RAG.
Do you have an automated way to upload context for training into the Qdrant vector DB?
Are your thoughts to have the agent call directly into an app like Maker API, or a custom app for event processing?
My plan moving forward is to train it on recovery procedures for down services in my lab, to see how far I can take n8n automation. I am also going to stop using Google and switch to a local LLM and embeddings. I have OpenWebUI installed on an LXC with an N100 CPU; it's very slow, but for some of the things I am doing 7 to 10 tokens per second isn't bad. That step can wait while my experiments continue.
I have been saving up to get a mini PC that supports an external GPU, where I will host OpenWebUI / Ollama in the future; I just need to get a configuration worked out for $2,000 or less.
As for automating ingestion, I have another workflow that scrapes a directory on a local path where I mounted my NAS. In theory, any document I drop in that directory will be converted to text and then have embeddings run against it, roughly like the sketch below.
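For anyone who wants the same idea outside of n8n, here is a minimal Python sketch of that ingestion pass: walk a watched directory, chunk the text, embed each chunk with a local Ollama model, and upsert the vectors into Qdrant. The mount path, collection name, and embedding model here are my assumptions for illustration, not the actual workflow.

```python
"""Sketch: embed files dropped in a watched folder and upsert them into Qdrant.
Paths, collection name, and model are placeholders - adjust for your setup."""
import uuid
from pathlib import Path

import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

WATCH_DIR = Path("/mnt/nas/ingest")        # assumed NAS mount point
OLLAMA_URL = "http://localhost:11434"      # assumed local Ollama instance
COLLECTION = "homelab_docs"                # assumed collection name
EMBED_MODEL = "nomic-embed-text"           # 768-dimension embedding model

def embed(text: str) -> list[float]:
    """Call Ollama's embeddings endpoint for one chunk of text."""
    resp = requests.post(f"{OLLAMA_URL}/api/embeddings",
                         json={"model": EMBED_MODEL, "prompt": text})
    resp.raise_for_status()
    return resp.json()["embedding"]

def chunk(text: str, size: int = 1000) -> list[str]:
    """Naive fixed-size chunking; real pipelines usually split on structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

client = QdrantClient(url="http://localhost:6333")
if not client.collection_exists(COLLECTION):
    client.create_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    )

# One convert-and-embed pass over whatever is sitting in the watched directory.
for path in WATCH_DIR.glob("*.txt"):       # assumes docs are already plain text
    text = path.read_text(errors="ignore")
    points = [
        PointStruct(id=str(uuid.uuid4()), vector=embed(c),
                    payload={"source": path.name, "text": c})
        for c in chunk(text)
    ]
    client.upsert(collection_name=COLLECTION, points=points)
    print(f"Ingested {path.name}: {len(points)} chunks")
```

In the real setup the "convert to text" step for PDFs and other formats would sit in front of this, and n8n handles the scheduling.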
It's a fun project, but I know that as I write up recovery procedures and create vectors for actions it will be a lot of trial and error.
The flow will be a two-call prompt: the first call retrieves the procedure and ranks it; if viable, the second call prompts the AI to take action. If not viable, it texts me.
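A rough sketch of that two-call decision flow, assuming a local Ollama chat model and reusing the embed() helper from the ingestion sketch above. The model name, the 7/10 threshold, and the send_text() helper are hypothetical placeholders:

```python
"""Sketch of the two-call flow: call 1 retrieves and ranks a recovery procedure,
call 2 only fires if it looks viable; otherwise send a notification."""
import requests
from qdrant_client import QdrantClient

OLLAMA_URL = "http://localhost:11434"
client = QdrantClient(url="http://localhost:6333")

def ask_llm(prompt: str) -> str:
    """Single non-streaming completion against a local Ollama model."""
    resp = requests.post(f"{OLLAMA_URL}/api/generate",
                         json={"model": "llama3.1", "prompt": prompt, "stream": False})
    resp.raise_for_status()
    return resp.json()["response"]

def send_text(message: str) -> None:
    """Placeholder: wire this to whatever notification service you use."""
    print(f"TEXT ME: {message}")

def handle_down_service(alert: str, embed) -> None:
    # Call 1: pull the closest recovery procedure and ask the LLM to rank it.
    hit = client.search(collection_name="homelab_docs",
                        query_vector=embed(alert), limit=1)[0]
    procedure = hit.payload["text"]
    verdict = ask_llm(
        f"Alert: {alert}\nProcedure: {procedure}\n"
        "Rate 0-10 how likely this procedure fixes the alert. Reply with only the number.")
    try:
        score = int(verdict.strip().split()[0])
    except (ValueError, IndexError):
        score = 0

    if score >= 7:
        # Call 2: only now ask for concrete actions to hand off to n8n.
        actions = ask_llm(f"Turn this procedure into step-by-step shell actions:\n{procedure}")
        print(actions)   # in the real flow n8n would route/execute these
    else:
        send_text(f"No viable recovery procedure found for: {alert}")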
As for the Maker API, I have been thinking about what type of signals we can collect with n8n from the Hubitat, building rules using natural language, and having the AI instruct the Hubitat to perform the actions. That is for another time, but it would also be a great project.
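Since Maker API is just HTTP, the collection side is straightforward to prototype; something like the sketch below is the kind of call n8n (or the agent) would make to read device state and push a command back. The hub IP, app ID, token, and device ID are placeholders.

```python
"""Sketch of reading Hubitat device state via Maker API and sending a command back.
Hub address, app ID, and access token are placeholders."""
import requests

HUB = "http://192.168.1.50"     # placeholder hub address
APP_ID = "123"                  # Maker API app instance ID (placeholder)
TOKEN = "YOUR_ACCESS_TOKEN"     # placeholder access token

def list_devices() -> list[dict]:
    """Pull full device details exposed through Maker API; useful as RAG/context input."""
    url = f"{HUB}/apps/api/{APP_ID}/devices/all"
    return requests.get(url, params={"access_token": TOKEN}).json()

def send_command(device_id: str, command: str, value: str | None = None) -> dict:
    """Push an action back to the hub, e.g. send_command('42', 'on')."""
    url = f"{HUB}/apps/api/{APP_ID}/devices/{device_id}/{command}"
    if value is not None:
        url += f"/{value}"
    return requests.get(url, params={"access_token": TOKEN}).json()

if __name__ == "__main__":
    for dev in list_devices():
        print(dev["id"], dev["label"], dev.get("attributes"))
```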