The AI Act, the EU's regulation on artificial intelligence, establishes four risk levels for AI applications:
- Unacceptable risk. These applications are prohibited: subliminal manipulation techniques, social scoring systems, and remote biometric identification used by public authorities.
- High risk. Applications in areas such as transportation, education, employment, and welfare, among others. Before placing a high-risk AI system on the market or putting it into service in the EU, companies must conduct a "conformity assessment."
- Limited risk. AI systems subject to specific transparency obligations: for example, users must be informed that they are interacting with a chatbot.
- Minimal risk. All remaining systems, such as spam filters, AI-enabled video games, and inventory management systems.

