"Create an occurrence of Hubitat Room Lighting app and Rule Machine app and other Hubitat apps to turn on all the lights in a room for 60 seconds when any motion detector in the room detects motion. Turn the level to 20% when the Mode is Night, 50% when the Mode is Evening and 100% when the Mode is Morning or Day. Only turn on lights that have dimmer function. Do not turn on lights that are switched. Do not turn on lights in the Family Room if the TV in the Family Room is on. Do not turn on lights in the Bedrooms with motion between 10 pm and 7 am."
For some reason ChatGPT replied “I’m afraid I can’t do that” and it knew your name was Dave.
Seriously, I was expecting code, not instructions.
Sure! Here's a sample occurrence of the Hubitat Room Lighting app and Rule Machine app, along with other Hubitat apps, to achieve the desired automation:
Hubitat Room Lighting app:
* Create a new Room in the Hubitat Room Lighting app for the room where you want the lights to turn on when motion is detected.
* Configure the motion detectors in the room to trigger the Room Lighting app when motion is detected.
Rule Machine app:
* Create a new rule in the Rule Machine app to handle the logic for turning on the lights based on the Mode and time restrictions.
* Set the trigger for the rule to be any motion detector in the room detecting motion.
* Add a conditional statement in the rule to check the Mode (e.g., Night, Evening, Morning, Day) and adjust the light level accordingly.
* Use another conditional statement to check if the TV in the Family Room is on, and if so, prevent the lights in the Family Room from turning on.
* Use another conditional statement to check if the time is between 10 pm and 7 am, and if so, prevent the lights in the Bedrooms from turning on.
Other Hubitat apps:
* You may need to use additional Hubitat apps, such as the Mode Manager app, to set up and manage the different Modes (e.g., Night, Evening, Morning, Day) based on time schedules.
* You may also need to use the Motion Lighting app, which is a built-in app in Hubitat, to configure the motion detectors and set up the motion triggers for the Room Lighting app and Rule Machine app.
Please note that the exact steps and configuration may vary depending on your specific setup and preferences, so you may need to refer to the documentation and help resources provided by Hubitat or consult with the Hubitat community for further assistance.
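Since the thread started out asking for code rather than instructions, here is a rough, untested Groovy sketch of how that rule logic might look as a per-room custom app instead. Everything in it (the input names, the Mode-to-level map, the simple hour check) is my own placeholder, not ChatGPT output or an official Hubitat example:

definition(
    name: "Motion Dimmers Sketch",
    namespace: "example",
    author: "example",
    description: "Turn on a room's dimmers on motion for 60 seconds, level by Mode, with exceptions",
    category: "Lighting",
    iconUrl: "",
    iconX2Url: "")

preferences {
    section {
        // Selecting only dimmers (capability.switchLevel) means plain switched lights are never touched
        input "motions", "capability.motionSensor", title: "Motion sensors in this room", multiple: true
        input "dimmers", "capability.switchLevel", title: "Dimmers in this room", multiple: true
        input "tv", "capability.switch", title: "TV that blocks the lights (Family Room only)", required: false
        input "nightRestrict", "bool", title: "Skip between 10 PM and 7 AM (Bedrooms)?", defaultValue: false
    }
}

def installed() { initialize() }
def updated() { unsubscribe(); initialize() }
def initialize() { subscribe(motions, "motion.active", motionHandler) }

def motionHandler(evt) {
    // Family Room: do nothing while the TV is on
    if (tv?.currentValue("switch") == "on") return
    // Bedrooms: do nothing between 10 PM and 7 AM
    if (nightRestrict) {
        int hr = new Date().format("H", location.timeZone) as int
        if (hr >= 22 || hr < 7) return
    }
    // Level by Mode: 20% Night, 50% Evening, 100% Morning or Day
    Integer level = [Night: 20, Evening: 50, Morning: 100, Day: 100][location.mode] ?: 100
    dimmers.each { it.setLevel(level) }
    // Turn the room back off after 60 seconds
    runIn(60, "lightsOff")
}

def lightsOff() { dimmers.each { it.off() } }

Installing one instance per room keeps the Family Room TV check and the Bedroom time restriction as simple per-room options rather than one tangle of conditionals.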
I haven’t tried Hubitat, but I did get ChatGPT to program a function node in Node-RED:
——
Explain how to create a node-red function node that will take the average of the past three msg.payload: number values and will return that average as the output payload.
——
To create a Node-RED function node that takes the average of the past three msg.payload number values and returns that average as the output payload, follow the steps below:
Drag a function node onto your Node-RED workspace and double-click it to open the node editor.
In the node editor, enter the following code:
// Fetch the rolling window of recent payloads from node context (or start empty)
let values = context.get("values") || [];
values.push(msg.payload);
// Keep only the three most recent values
if (values.length > 3) {
    values.shift();
}
// Average whatever is in the window (one to three values)
let sum = values.reduce((acc, curr) => acc + curr, 0);
let avg = sum / values.length;
// Persist the window for the next message, then emit the average
context.set("values", values);
msg.payload = avg;
return msg;
This code first retrieves the previous values array from the context object or initializes it to an empty array if it doesn't exist.
The current msg.payload value is then pushed onto the values array.
If the length of the values array is greater than 3, the oldest value is removed from the array using shift().
The remaining values array is then summed using reduce() and the average is calculated by dividing the sum by the length of the array.
The updated values array is then stored back in the context object for use in the next function call.
Finally, the output msg.payload is set to the calculated average and the message is returned.
Deploy your changes and test the function node by injecting some msg.payload values. You should see the output payload value being updated with the average of the last three input payloads.
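For example, injecting 2, 4, and 9 in sequence produces outputs of 2, 3, and 5: until the three-value window fills, the node simply averages however many values it has.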
I've definitely used it to help debug code. It's pretty good at catching syntax errors. I've posted elsewhere in this thread sample app code it spat out to rotate through light colors with a button press. It's also much better at commenting code than I am. You do have to be specific with the wording.
It gave instructions because you asked how to do something with built-in apps. If you said "I need a custom app for Hubitat to do x, y, and z" you'd get a different response.
I've also used it to generate PowerShell for me. I have a PS script that watches my Downloads folder for ZIP or STL files (formats for 3D printing models), unzips them if necessary, and moves the files to a Google Drive folder.
"* You may need to use additional Hubitat apps, such as the Mode Manager app, to set up and manage the different Modes (e.g., Night, Evening, Morning, Day) based on time schedules."
No, GPT.
YOU may need to use additional Hubitat apps, such as the Mode Manager app, to set up and manage the different Modes (e.g., Night, Evening, Morning, Day) based on time schedules.
Don't forget ChatGPT is based on data collected back in 2021, so any instructions or code are going to come from that period. I'm pretty sure Room Lighting wasn't around yet, and I expect other parts of the platform would also be quite different. With code, I doubt it would know the version of Groovy being run or the whitelist of methods we have access to.
All that said, on the positive side, I share the view that it can be quite useful for getting different ideas on where to look next or something to try out. So it can be used, but it is not likely to hand you a full solution; some assembly and interpretation will be required.
My experience with using GPT-4 to write code is mixed. It often goes in the right direction but requires nudging to get it to the place you want. As with old-school web searching, better prompting gets better answers.
In general knowledge, the technology is improving much faster; I imagine that coding won't be far behind for long.
I have mixed feelings about this. It is a very powerful tool, and has its uses. I personally think that if we allow this tool to simply do everything and decide everything for us, we will rot intellectually as the human race. We need to have some limits on this.
This AI was programmed by humans, and humans have biases and agendas. It may not be purposeful bias, but it can still come through if you ask the right (or wrong?) questions. This tool is censored against answering certain types of questions, and it is hard to get a real answer (no matter how hurtful or offensive) if you start censoring or limiting what this AI can process and output. So are we really getting the full benefit and knowledge of this AI if we are ignoring certain answers?
It will not surprise me when people start suing to remove things they don't agree with. Once that starts happening on a wide scale, this tool will be worthless.
Or the damn thing runs amok and we end up like John and Sarah Connor's version of hell.
I have mixed feelings as well. I can see its ultimate use in highly regulated, legal, or safety-related situations leading to some tough questions about potentially devastating outcomes. That doesn't mean we can't adapt as a race.
I know Terminator was mostly used for comic effect, but I get the more serious undertones of that kind of reference. I think the situation is different to most of those apocalyptic movies (not that they are the yardstick to judge real-world events, at least not until we see a real Jurassic Park). We have moved beyond the generation who saw the introduction of the Internet and social media, so both adults and children are much better prepared, or should be, to face some of the public uses and misuses of a technology like AI. It's not the molten-morphing machine with a Guns 'n' Roses soundtrack arriving out of the blue...
I expect the areas that will need the closest focus will be those with a commercial gain (which nowadays includes political gain) or where safety is at stake. Those are less obvious and harder to spot: cases where AI is used behind the scenes to drive an outcome, but the use of the technology is not easily identifiable.
One of my favorite topics is reading about technology failures, things like Chernobyl, etc. In my list of favorites is the Therac-25 machine. It is an interesting study in programming, human assumptions, updates to technology, and failures to recognize certain outcomes. Therac-25 - Wikipedia
If you take disasters like this and extrapolate them to AI designing a machine, I feel like you are going to have a string of similar outcomes. It is cute and funny when AI writes a fake term paper or paints a weird picture, but something else entirely with mission-critical devices and equipment.
If it wasn't so serious it would be funny... Dilbert-like in its accuracy to modern situations. So many of the same mistakes we see every day in various forms... Too much I could write, but I don't want to think about it on a Sunday night
I think it was on the FB group, but someone there said ChatGPT-4 made a Roku driver for them (obviously less detailed than @armand's driver, but functional). I think they just asked it to write it in Groovy.
What is really scary: our Artifactory instance is now building on GPT. In the coding pipeline, every bit of code is now analyzed against a best-practices GPT to look for coding errors in advance. If your code is scored low, it bails out of the pipeline and provides recommendations. If a developer improves on the suggested "solutions", the improvement then gets incorporated into the updated GPT neural net. The machines are now the coders' overseers.
If you look at many of the discussions here on the Community, there can be quite wildly differing opinions about various topics. While most of our discussions centre around what are relatively benign situations in automating our day-to-day, when it comes to determining whether someone should receive a certain insurance product over another, or whether they can afford a financial product, or whether the "smart" car should hit them rather than another pedestrian, those are more pertinent decisions that those who develop AI solutions still need time to work through (importantly, they are real people...).
I know we are taking a benign use of AI and running with it into much different situations than ChatGPT is trying to service, let alone the OP. So we are very much straying from the OP's intent for this topic. But that is where some of these comments are coming from, rightly or wrongly for this conversation.
The tools have their uses. I use GitHub Copilot almost every day. Super useful, as long as you think of it as a real fancy autocomplete. It’s not going to replace me. My main job is solving problems of people and processes. Code is just a small part of that.
I did ask ChatGPT to make some virtual device drivers, as a test. It did well with a virtual switch, though that is trivial and already documented all over the place. Then I picked a capability that, as far as I can tell, no one has ever made a virtual driver for. ChatGPT put the right attributes and commands in the metadata, but it didn’t even attempt to correctly implement the commands. Just spat out an on() and an off(). The thing to remember is that it doesn’t actually “understand” anything. It’s a parlor trick. It turns out to be a valuable parlor trick, that has applicability to a wide range of problems. But not all problems. And it doesn’t know, and won’t admit, which problems it can solve and which it can’t. It WILL lie to you. And the parlor trick part is that, by responding in language, like a Star Trek computer, it’s very convincing, even when it’s hallucinating/lying. We live in interesting times.
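For context on why the virtual switch case is trivial: a minimal Hubitat virtual switch driver is only a handful of lines, which is probably why ChatGPT handles it well. A rough sketch (names are placeholders, not an official example) looks something like this:

metadata {
    definition(name: "Virtual Switch Sketch", namespace: "example", author: "example") {
        capability "Switch"
    }
}

def on() {
    // A virtual device has no hardware behind it, so just report the new state
    sendEvent(name: "switch", value: "on")
}

def off() {
    sendEvent(name: "switch", value: "off")
}

Anything more exotic than "report the state you were commanded to" is exactly where it starts faking it.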
I also have mixed feelings about it. Yes, it could be a powerful tool, yet that is also its weakness. People being, well, people (I'm using that as a catch-all that also includes entities like corporations, governments, etc.) will use it for their own gain and to amass power, not for the betterment or benefit of mankind as a whole. Sure, it has great potential, but AI beyond the scope of a glorified search engine will ultimately be neutered and rendered mostly worthless, because those who stand to make the most money without it (which is most educated professions) and those with power will seek to sideline it to protect and grow their personal interests (money and power).
Interesting that you mentioned bias, as I didn't say anything about it, but that is also a concern. The truly frightening proposition to me is that it evolves to a point where it is no longer programmed by humans but by other AI, and that flaw of human bias from the beginning becomes hard-coded and systemic in future generations.
Oh, I bet if you look long enough, that's already started to happen. We are a sue-happy society; why wouldn't someone have filed a lawsuit yet? Again, that is the type of thing that will relegate it to the realm of a glorified search engine.