
Should your own AI rat you out? It’s complicated, says the man building it

Depending on who you ask, the future of artificial intelligence is either something to be excited about or something to fear. Elon Musk suggests AI’s ever-growing intelligence will put it at odds with humanity itself, while optimists like Mark Zuckerberg think AI can help us live more fruitful, efficient lives.

Like most technology, the kind of AI we end up with will depend on the people creating it. If it’s developed with privacy and end-user control in mind, we could end up with a firmer grasp of how AI operates.

Kuna Systems is one firm looking into that possibility. The smart security camera and cloud backup provider is starting to experiment with artificial intelligence, and that’s led to some interesting moral quandaries, which it’s in the process of solving.

Digital Trends spoke with Haomiao Huang, Kuna’s CTO, and picked his brain about the kinds of problems that arise when developing advanced artificial intelligence. He told us that, with the right mindset, we can retain control over AI while still reaping the benefits it offers.

How AI can improve already smart technology

Modern AI, though commonplace, is limited. We see it in chatbots, image recognition systems, fraud prevention checks, and voice assistants. While useful, it’s all pedestrian compared to the kind of intelligence we’re used to seeing in movies and TV shows. Soon, AI could make our already smart devices smarter, removing the need for humans to manually control our technology.


“What [Kuna] makes is a preventative security system,” Huang told us. “Instead of waiting until someone has broken a window or door, we allow our customers to respond before a crime has taken place.” He went on, explaining that, “a traditional security system is a responsive tool to a crime, but we’re moving into the realm of preventing a crime before it happens. The system can see and respond to a crime and prevent it from happening in the first place.”

Kuna Systems’ cameras require a measure of artificial intelligence to make that possible. They must interpret what the camera feeds are picking up, and then respond accordingly.

“We already have a system in place that can detect whether that’s a person, or a car, how many people, and so on. One of the capabilities we’re working on is detecting suspicious behaviors,” Huang continued. “It’s a pretty common tactic of thieves to ring the front door and if nobody answers, go to the back door and try and find a way in. The [AI] system we’re designing will be able to recognize that and register it as a priority, and then send an alert to our customers, or even potentially call the police.”
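The door-probing behavior Huang describes lends itself to simple sequence rules before any deep learning is involved. Here's a minimal, hypothetical sketch of such a check; the zone names, event format, and five-minute window are all assumptions for illustration, not Kuna's actual system.

```python
from datetime import datetime, timedelta

# Assumed time window: a back-door visit within 5 minutes of an
# unanswered front-door ring is treated as a priority event.
SUSPICIOUS_WINDOW = timedelta(minutes=5)

def is_door_probing(events):
    """events: list of (timestamp, camera_zone) tuples, oldest first."""
    front_rings = [t for t, zone in events if zone == "front_door"]
    back_visits = [t for t, zone in events if zone == "back_door"]
    # Flag any back-door visit that follows a front-door ring closely.
    return any(
        timedelta(0) < back - front <= SUSPICIOUS_WINDOW
        for front in front_rings
        for back in back_visits
    )

events = [
    (datetime(2017, 6, 1, 14, 0), "front_door"),
    (datetime(2017, 6, 1, 14, 3), "back_door"),
]
print(is_door_probing(events))  # True: back door visited 3 minutes later
```

A real system would layer learned classifiers on top of rules like this, but the shape of the logic — correlate events across cameras, then escalate — is the same.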

Today, such decisions are made with humans involved. The owner receives an alert that an “event” has taken place when someone, or something, trips the camera feed. They can then look at the live stream and respond accordingly. An advanced AI could automate this, responding faster than a human ever could, and do so when there’s no one around to check the camera feed.
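The automation step can be pictured as a small decision function sitting between the classifier and the owner. This is a hypothetical sketch only; the labels, confidence thresholds, and response names are assumptions, not a description of Kuna's software.

```python
# Hypothetical sketch of automating the human-in-the-loop flow described
# above: classify a camera event, then either ignore it, notify the
# owner, or escalate immediately when no one is watching the feed.
def respond_to_event(label, confidence, owner_available):
    # Low-confidence or benign detections (cars, birds) are dropped.
    if label in ("car", "bird") or confidence < 0.6:
        return "ignore"
    # High-confidence suspicious activity is acted on without waiting
    # for a human to check the live stream.
    if label == "suspicious_person" and confidence >= 0.9:
        return "escalate"
    return "notify_owner" if owner_available else "record_and_notify_later"

print(respond_to_event("bird", 0.95, True))                # ignore
print(respond_to_event("suspicious_person", 0.95, False))  # escalate
print(respond_to_event("person", 0.8, True))               # notify_owner
```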

“I used to be really worried about locking up my bike, but soon you’re going to be able to leave your bike by your house without locking it up, because the camera will cover it and will be able to check to see if the person taking it is authorized to do so,” Huang continued. “From there, it doesn’t make sense to steal things anymore, because you’re going to get caught and in the future, the items themselves will know whether you’re allowed to use them.”

This is similar to the work Microsoft has been doing with AI in various workplace scenarios. At Build 2017, the company showed an AI concept capable of spotting spillages, warning of workers using tools they aren’t trained for, and even noting those exceeding recommended activity levels after a life-changing operation.

Having an AI keep an eye on us all has myriad benefits, but even with Huang’s rosy vision of the future of AI, he and Kuna understand that there is danger in giving an AI too much control.

The moral implications of an AI in charge

Describing the authorization and oversight capabilities of future AI smart cameras as a “beautiful case,” where property crime is effectively eliminated, Huang held up a dystopian mirror to that same scenario, and showed what a murky world such technology could create.

How can artificial intelligence make decisions that are rooted in morality, and that have implications an AI could never understand? Autonomous vehicles, for example, face the “trolley problem.” Should a car swerve off the road to avoid a family crossing the street, if doing so will endanger the lives of the passengers?

The world envisioned by Kuna would expand the issue into nearly every part of our lives.

Kuna’s AI can already differentiate between humans and other sources of motion, like cars and birds. Now the company is focused on teaching it to recognize criminal behavioral patterns, to alert you before the crime even happens.

“With smart cameras, if the AI recognizes a crime being committed against the owner, then it’s obvious what it should do,” Huang said. “But if it recognizes a crime that the owner is committing, what should it do then? I think most people would agree, if you commit a bad crime, then it should be reported and you should get in trouble for it. But there’s a gray area of small crimes. Say your camera catches you watering your lawn when you shouldn’t be. Is that really something that should be reported? Probably not. If your security system sees you murdering someone, then it probably should.”

Even then, the concept of an AI security system that turns in its owner is sure to make some people uncomfortable. Security that is always on, always watching, puts society at risk of eliminating privacy altogether. And privacy isn’t the only issue that all-seeing, all-powerful AIs could bring to the table. They could also be co-opted for nefarious purposes.

IoT devices — in particular, connected security cameras — are some of the most widely hacked devices in the world, enlisted by the millions for denial-of-service attacks. That problem would only be compounded if those products had capable artificial intelligences of their own that could be tricked into performing their functions not at the behest of their owners, but at the whims of whoever infiltrated the device.

Giving owners the AI leash

For Huang, these problems can only be resolved by keeping the humans who own AI devices in charge of them. While AI can remove the need for regular human interaction, it should never eliminate human oversight.

“[It’s important to keep] the home owner involved in the loop […] It’s not just a convenience of product features, but a moral responsibility aspect of it,” he said. “Who does the responsibility actually lie with?”


Giving owners the option to modify the behavior of the AI they own is one possible solution. When you buy a driverless car, you could decide how it should act in certain scenarios. Do you want your car to prioritize you and your loved ones when your safety and that of a stranger must be weighed by the algorithm? What happens when the AI must decide between your safety and that of a group of jaywalking children?

When you buy a smart camera, you could decide if you want it to report crimes to the police, or only to you. You could set your preferences for crimes committed on your property, or on the street opposite. You could decide what scale of crimes it should report, and which ones it shouldn’t.

It could be that governments or developers mandate that serious crimes like murder or assault be reported regardless of preference, of course. That sort of system is already in place in certain human-run institutions, Huang points out. “School counselors are legally obligated to report abuse,” he said — so it may be that AI-powered devices have similar obligations. That’s an issue society, as a whole, will need to decide.
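Owner preferences combined with a mandated reporting floor could be expressed as a simple policy table. The sketch below is purely illustrative; the crime categories, severity scale, and the idea of a legal floor are assumptions drawn from the discussion, not any real product or regulation.

```python
# Hypothetical severity scale for camera-detected incidents.
SEVERITY = {
    "watering_violation": 1,
    "package_theft": 2,
    "burglary": 3,
    "assault": 4,
}

# Assumed legal floor, akin to a school counselor's reporting
# obligation: incidents at or above this level always go to police,
# no matter what the owner has configured.
MANDATORY_REPORT_LEVEL = 4

def report_target(crime, owner_threshold):
    level = SEVERITY[crime]
    if level >= MANDATORY_REPORT_LEVEL:
        return "police"  # owner preference cannot suppress this
    if level >= owner_threshold:
        return "owner"   # surfaced to the owner, who decides what's next
    return "none"        # below the owner's configured threshold

print(report_target("watering_violation", owner_threshold=2))  # none
print(report_target("package_theft", owner_threshold=2))       # owner
print(report_target("assault", owner_threshold=5))             # police
```

The design choice mirrors Huang's framing: the buyer sets the dial for the gray area, while society sets the limits the dial cannot cross.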

“Ultimately the decisions [these products] make come indirectly from the society they were built in and the company they were built by,” Huang said. “What we need to think about is giving that kind of authority to the users. If they’re buying for it and paying for it, then they’re the one who gets to decide what the AI is going to do in these sorts of situations.”

Despite this progressive outlook, Huang admits that Kuna could do better, and is keen to introduce more user control as AI becomes a more important facet of the service his company offers. Hopefully, others will do the same.

“When it’s automated, you explain to the user what it’s going to do and why it’s going to do it,” he said. “That’s just good design.”

Jon Martindale