Steve Wozniak has been sharing his thoughts about the new wave of AI-powered tools that have gained so much attention in recent months.
Speaking to the BBC this week, the Apple co-founder said he fears that the technology will be increasingly used by cybercriminals to make online scams more convincing and therefore harder to spot.
OpenAI’s ChatGPT chatbot and Google’s Bard equivalent are among a growing number of generative AI tools that are capable of conversing in written form in a natural, human-like way. They’re so powerful that a recent report by Goldman Sachs suggested the technology will impact an estimated 300 million workplace roles in the coming years, though it added that many of these jobs will probably be assisted by the technology rather than replaced.
Considering an altogether more unpleasant side to the technology, Wozniak said: “AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are.”
In an interview with CNN last week, Wozniak offered similar views but said he hoped AI will be trained to spot scams that deploy the very same technology, and then alert the target to take appropriate action to protect themselves.
But it’s not just email scams that can be turbocharged by AI. A recent Washington Post report revealed how criminals are already deploying AI technology to clone a person’s voice using just a short sample of their speech. They then use the fake but highly convincing voice in a phone scam to trick a relative or friend into handing over money.
In his interview with the BBC, Wozniak also called for the new AI technology to be regulated to ensure that its creators stay within certain boundaries.
Wozniak was one of around 1,000 technology experts who put their name to a letter in March calling for a six-month pause on the development of some AI tools so that a set of guidelines for their safe deployment could be drawn up. Elon Musk was another of the letter’s signatories.
The tech engineer who built the first Apple computer with Steve Jobs five decades ago told the BBC he wants regulation to target major tech companies that “feel they can kind of get away with anything.” At the same time, he questioned whether such regulation would prove effective, adding: “I think the forces that drive for money usually win out, which is sort of sad.”