The AI Act, the EU's regulation on artificial intelligence, establishes four risk levels for AI applications:

  1. Unacceptable risk. These applications are prohibited: subliminal manipulation techniques, social scoring systems, and remote biometric identification used by public authorities.
  2. High risk. Covers applications related to transportation, education, employment, and welfare, among others. Before placing a high-risk AI system on the market or putting it into service in the EU, companies must conduct a conformity assessment.
  3. Limited risk. AI systems subject to specific transparency obligations: for example, users must be told when they are interacting with a chatbot.
  4. Minimal risk. All remaining systems, such as spam filters, AI-enabled video games, and inventory management systems, which face no additional obligations.