The research community (and society in general) has already realised that the current centralised approach to AI is neither an acceptable nor a sustainable model in the long run. SAI envisions a decentralised “collective” of local, machine-learning-based AI components that interpret data and interact according to human-centric design principles, with explainability guaranteed at both the local and the collective level.
In SAI each individual is associated with their own “Personal AI Valet” (PAIV), which acts as the individual’s proxy in a complex ecosystem of interacting PAIVs. PAIVs process individuals’ data via explainable AI models tailored to the specific characteristics of their human twins.
PAIVs interact with each other to build global decentralised AI models and reach explainable collective decisions starting from the local (i.e., individual) models. These interactions and AI algorithms are human-centric, i.e., driven by quantifiable models of the individual and social behaviour of their human users.
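The project description does not specify how local models are combined into global ones; as a purely illustrative sketch, the Python snippet below shows one way such decentralised aggregation could work, using gossip-style averaging of simple local linear models. The class PAIV, its methods, and all parameters are hypothetical and are not taken from the SAI design.

```python
# Illustrative sketch only: SAI does not prescribe this aggregation scheme.
# Each hypothetical PAIV fits a simple model on its individual's data and
# then converges toward a shared model via pairwise (gossip) averaging,
# without ever exchanging the raw personal data.
import numpy as np

class PAIV:
    """Hypothetical Personal AI Valet holding a local linear model."""
    def __init__(self, local_data, local_labels):
        # Fit a least-squares model on the individual's own data only.
        self.weights, *_ = np.linalg.lstsq(local_data, local_labels, rcond=None)

    def gossip_with(self, peer):
        # One gossip step: both peers move to the average of their models.
        avg = (self.weights + peer.weights) / 2.0
        self.weights = peer.weights = avg

# Toy usage: three PAIVs with private datasets drawn from the same process.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
paivs = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    paivs.append(PAIV(X, y))

for _ in range(10):  # a few gossip rounds over a line topology
    paivs[0].gossip_with(paivs[1])
    paivs[1].gossip_with(paivs[2])

print([p.weights.round(2) for p in paivs])  # all close to true_w
```

In this toy setting the shared model emerges from local interactions alone, which mirrors the idea of collective decisions built from individual models; the actual SAI algorithms would additionally need to preserve explainability at both levels.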
The SAI design principles will be validated on three concrete use cases: private traffic management, opinion diffusion and fake news detection in social media, and pandemic tracking and control, exploiting both real-life and synthetic datasets.