By looking beyond the limitations of generative and generalist AI, we can begin to identify the next trends. In a sense, it is about moving from words to deeds, designing and developing alternative solutions that are more specialized, reliable, and useful.

The next AI agents will not just answer questions: they will evolve to manage structured processes in both personal and business contexts, triggering actions in the digital world and, through robotic technologies, in the physical one as well.

Starting again from language models, we begin to move beyond simply answering a question.

Task execution thus becomes the new frontier.

This means enabling tools, e.g., Google searches and interactions with external APIs, and then moving on to complex operational, commercial, or business processes.
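
To make "enabling tools" concrete, the sketch below shows a hypothetical agent loop in which a planned sequence of tool calls is executed; the tool names (web_search, create_order), the stubbed functions, and the plan format are illustrative assumptions, not references to any specific product or API.

```python
# Minimal, hypothetical sketch of an agent "tool loop": a language model
# would propose tool calls step by step; here the plan is fixed so the
# example stays self-contained and runnable.

from typing import Any, Callable, Dict, List


def web_search(query: str) -> str:
    """Stand-in for an external search API call."""
    return f"Top results for '{query}' (stubbed)."


def create_order(product: str, quantity: int) -> str:
    """Stand-in for a business-process action exposed via an API."""
    return f"Order placed: {quantity} x {product}."


# Registry of tools the agent is allowed to trigger.
TOOLS: Dict[str, Callable[..., str]] = {
    "web_search": web_search,
    "create_order": create_order,
}


def run_agent(plan: List[Dict[str, Any]]) -> str:
    """Execute a plan of tool calls and collect the results."""
    transcript = []
    for step in plan:
        tool = TOOLS[step["tool"]]      # look up the enabled tool
        result = tool(**step["args"])   # trigger the action
        transcript.append(result)       # result would be fed back to the model
    return " | ".join(transcript)


if __name__ == "__main__":
    demo_plan = [
        {"tool": "web_search", "args": {"query": "industrial pump suppliers"}},
        {"tool": "create_order", "args": {"product": "pump", "quantity": 2}},
    ]
    print(run_agent(demo_plan))
```

The point of the sketch is the separation of concerns: the model decides which action to take, while the agent layer restricts execution to an explicit registry of enabled tools.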

The subsequent development of robotic actuators and sensors will enable AI technologies to “embody” themselves and perform actions in the physical world as well, while still maintaining verbal interaction. In devices such as Alexa we can already recognize the vanguard of increasingly complex, in some cases even anthropomorphic, domestic robots driven by voice interaction.

In contrast, in industrial settings, which involve complex and highly diverse operations, AI models are not always driven by voice interaction but rather by parametric interfaces and three-dimensional Digital Twins.

The interaction between 3D models and AI models is an important application scenario for industry, urban planning, construction and product marketing.
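
As a minimal sketch of what a parametric interface to a digital twin can look like, the hypothetical example below lets an AI-driven layer set named parameters of a simplified twin and query a derived quantity, instead of issuing voice commands; the class, parameter names, and formula are illustrative assumptions only.

```python
# Hypothetical sketch of a parametric interface to a digital twin:
# the AI layer updates named parameters and reads back derived values
# from the twin's (here, trivially simple) geometric model.

from dataclasses import dataclass
from math import pi


@dataclass
class TankTwin:
    """Toy digital twin of a cylindrical storage tank (illustrative only)."""
    radius_m: float
    height_m: float
    fill_ratio: float = 0.0  # 0.0 = empty, 1.0 = full

    def set_parameter(self, name: str, value: float) -> None:
        """Parametric interface: update a named design/state parameter."""
        if not hasattr(self, name):
            raise KeyError(f"Unknown parameter: {name}")
        setattr(self, name, value)

    def stored_volume_m3(self) -> float:
        """Derived quantity the AI layer can query (cylinder volume x fill)."""
        return pi * self.radius_m ** 2 * self.height_m * self.fill_ratio


if __name__ == "__main__":
    twin = TankTwin(radius_m=1.5, height_m=4.0)
    # An AI planner issues parameter updates rather than spoken commands.
    twin.set_parameter("fill_ratio", 0.6)
    print(f"Stored volume: {twin.stored_volume_m3():.2f} m^3")
```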