At its annual developers conference this year, Apple announced a set of sweeping changes across its software platforms, introducing a whole new design language, a massive iPadOS makeover, and some key updates to macOS elements like Spotlight. What was conspicuously missing from WWDC 2025 was homeOS.
Apple was expected to make a grand reveal of its smart home-focused operating system ahead of launching a couple of products. The first reportedly looks like a smart display with its own speaker assembly, while the other model could even get a robotic arm. It now seems the plans for homeOS and the smart home devices have been pushed into 2026.
But if you watched the event closely, there was a sneak peek of the future. Yes, AI would be very much part of it. And yes, it would make interacting with a smart home device a lot more meaningful and functional. Hint: Think apps, developers, AI, and voice commands. Read on:
What’s the current status?

Apple was apparently planning to introduce homeOS in March this year, but the delays with AI-related features killed those plans. “That operating system and device, however, rely heavily on the delayed Siri features. And that means they probably won’t be able to ship until the Siri upgrades are ready,” according to Bloomberg.
The outlet had previously reported that the first of the two planned devices packs a screen measuring roughly six inches. It can be mounted like a wall tablet, and there’s also a range of base attachments for it, such as a speaker accessory. The idea isn’t novel; the likes of Amazon and Google have already experimented with such products.
What was supposed to set the device apart was the deep integration with Apple’s ecosystem. Aside from serving as a central hub for controlling smart home devices, it would also enable video conferencing, and be capable of running apps such as Safari and Apple Music.
More importantly, the device is expected to rely heavily on AI chops, and that’s where Apple is currently falling behind. “The technology was part of a planned smart home hub that has now been pushed back as well, keeping Apple from moving into a new product category,” reports the outlet.

Apple is now eyeing a Spring 2026 release for the next-gen Siri features and on-device AI capabilities. What exactly is in the package for homeOS remains to be seen, but if the new Alexa+ from Amazon is anything to go by, we are in for a big leap.
Why is it the right move?
“This is a big lift,” Apple’s senior vice president of software engineering, Craig Federighi, told The Wall Street Journal when asked about Apple’s next-gen Siri plans last year. The tone hasn’t shifted a year later. “There’s no need to rush out with the wrong features and the wrong product just to be first,” he told the outlet earlier this month.
Apple isn’t the only company flagging the risks, and it certainly doesn’t want to be on the receiving end of more AI gaffes after the Apple Intelligence-BBC news misinterpretation fiasco. Google’s AI, despite firmly landing in the futuristic Project Astra age, still gets something as basic as the date wrong. Again and again.
Amazon claims Alexa has hundreds of millions of users, but the AI-powered Alexa+ has reached less than one percent of that audience. According to a Reuters report citing internal sources, Alexa+ suffers from slow responses and “occasionally generates inaccurate or fabricated information.”

The New York Times, after testing Alexa+, reports that a handful of its most promising features are either unavailable or “very much a work in progress.” But those are not insurmountable challenges. AI itself is the real issue.
AI, with its natural conversational capabilities, is still a risky affair. As per a devastating account in The New York Times, interacting with ChatGPT pushed two users into an emotional spiral, and one of them ended up dead.
It’s worth noting here that ChatGPT has been integrated within the Apple Intelligence stack to help Siri handle advanced queries. With the release of iOS 26 and the companion updates across other Apple platforms, it can now do even more.
Imagine an Apple-made smart home device running ChatGPT (with all its flaws) in your home, especially with kids and elders around. It’s safe and accurate for the most part, but there are scenarios where deep interactions have quickly turned harmful.
Apple certainly wouldn’t risk putting such a stack on a device that is always at home. Aside from those inherent risks, half-baked features and integrations would simply make the product less appealing and attract criticism.
Apple has already tasted that criticism: it had to pull one of its ambitious Siri AI ads a year after showcasing the features, because the tech just wasn’t there yet. But now that the company has set a 2026 release window, it’s plausible that the work on Siri and its AI tricks has progressed meaningfully.
A sneak peek of the future

Now, you might ask what the whole fuss around the next-gen Siri is all about. Well, Apple is reworking Siri’s fundamental architecture so that it acts more like a chatbot in the vein of Gemini, ChatGPT, or Claude.
Think of it as a change on the same magnitude as Google Assistant going away in favor of Gemini. Aside from handling user interactions and smart home controls, Gemini is now integrated everywhere: in apps such as Gmail, Docs, and Maps, and even external apps like Spotify.
Siri doesn’t offer that. Yet. That could change soon, and we already got a glimpse of it at WWDC. The secret sauce is the on-device Apple Intelligence foundation model. In a nutshell, developers will be able to build on-device AI experiences within their apps.
The best part? Free AI inference. Moreover, the on-device model can be integrated within apps using merely a few lines of code, giving apps more conversational and visual capabilities powered by AI.
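To give a sense of how small that footprint is, here is a rough sketch of what a developer integration could look like, based on the Foundation Models framework Apple previewed at WWDC 2025. It assumes the `LanguageModelSession` API with a `respond(to:)` call, as shown in Apple’s sessions; the final SDK may differ, and the model only runs on Apple Intelligence-capable devices.

```swift
import FoundationModels

// Sketch: ask the on-device model to summarize a smart home status
// report in one sentence. Everything runs locally, so the report
// never leaves the device.
func summarizeHomeStatus(_ status: String) async throws -> String {
    // A session carries the instructions (the "persona") across turns.
    let session = LanguageModelSession(
        instructions: "Summarize smart home status reports in one friendly sentence."
    )
    // A single prompt-response round trip against the local model.
    let response = try await session.respond(to: status)
    return response.content
}
```

That is essentially the whole integration: no API keys, no server, and no per-request cost, which is what makes “free AI inference” plausible for third-party apps on a future home device.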

And since it all happens on-device, user data privacy is not compromised. Simply put, apps powered by Apple’s AI models are going to be smarter and more intuitive. So, how does this benefit Apple’s smart home display?
As per Bloomberg, the device will integrate tightly with the iPhone and will even enable Handoff for seamlessly transferring a task from its screen to the smartphone in your hand. Overall, it seems Apple wants to keep its AI stack, apps, and the integrations ready so that when the device launches, it doesn’t run into the current limitations.
It will allow natural language conversations and let users perform tasks across different apps with voice commands that don’t sound like a math formula. A few more months in development sounds like the right approach at this point.