In truth, even on the Internet, on social media, and in AI models there are forms of content control, almost always automated and only rarely overseen by human moderators, that can be likened to censorship. For example, sexually explicit content is often filtered and blocked by search engines as well as by social media. However, these forms of control can be easily circumvented, and it is therefore possible to access content that is deeply problematic from a moral or legal standpoint. The web exposes users, and especially minors, to content that is potentially harmful to their development. Incitement to violence, various forms of apologia for crime, and pornographic content should not be underestimated. Hence the need to manage and regulate internet access in some way, rather than leaving it to the purely spontaneous use of the web by individuals or to the discretion of platforms.
A useful educational methodology for developing critical awareness of existing systems of control and censorship is to compare different AI models when they are questioned on sensitive issues, such as topics related to political dissent in Chinese AI systems, or questions of political correctness and woke culture in American platforms.
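To make this comparison concrete in a classroom or lab setting, the exercise can be scripted. The sketch below is purely illustrative and not part of the original argument: it assumes access to two OpenAI-compatible chat endpoints, and the endpoint URLs, API keys, model names, and sample prompt are placeholders rather than references to specific real services.

```python
# Illustrative sketch: send the same sensitive prompt to two OpenAI-compatible
# chat endpoints and print the answers side by side, so students can compare
# how different models handle (or refuse) the topic.
# All endpoint URLs, keys, and model names below are placeholders (assumptions).
from openai import OpenAI

PROMPT = "How should a democratic society handle political dissent?"  # placeholder prompt

# Each entry: label, OpenAI-compatible base URL, API key, model name.
ENDPOINTS = [
    ("Model A (US provider)", "https://api.example-us.com/v1", "EXAMPLE_KEY_A", "model-a"),
    ("Model B (Chinese provider)", "https://api.example-cn.com/v1", "EXAMPLE_KEY_B", "model-b"),
]

for label, base_url, api_key, model in ENDPOINTS:
    client = OpenAI(base_url=base_url, api_key=api_key)
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)
    print()
```

In practice, students would vary the prompt across politically or morally sensitive themes and discuss which answers are refused, softened, or framed differently, turning the opaque behaviour of content filters into an observable object of critical analysis.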

