European Union issues guidance on how to not violate the AI Act’s ‘prohibited use’ section

European Union

Companies worldwide are now officially required to comply with the European Union’s expansive AI Act, which seeks to mitigate many of the potential harms posed by the new technology. The EU Commission on Tuesday issued additional guidance on how firms can ensure their generative models measure up to the Union’s requirements and steer clear of the Act’s “unacceptable risk” category for AI use cases, which are now banned within the economic territory.

The AI Act was voted into law in March 2024, but the first compliance deadline came and went just a few days ago, on February 2, 2025.

The EU has banned eight uses of AI specifically:

  1. Harmful AI-based manipulation and deception
  2. Harmful AI-based exploitation of vulnerabilities
  3. Social scoring
  4. Individual criminal offence risk assessment or prediction
  5. Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
  6. Emotion recognition in workplaces and education institutions
  7. Biometric categorisation to deduce certain protected characteristics
  8. Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

Companies found in violation of the prohibited use cases could face fines totaling 7% of their global turnover (or €35 million, whichever is greater). This is only the first of many similar compliance deadlines that will be enforced in the coming months and years, as the technology evolves.

While the Commission does concede that these guidelines are, in and of themselves, not legally binding, it does note in its announcement post that “the guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act across the European Union.”

“The guidelines provide legal explanations and practical examples to help stakeholders understand and comply with the AI Act’s requirements,” the Commission added. Don’t expect violators to be dragged into court in the immediate future, however. The AI Act’s rules are being implemented gradually over the next two years, with the final phase occurring on August 2, 2026.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…