FAITH PROJECT
Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains
The need for trustworthy AI systems across application domains has become increasingly urgent, given AI's critical role in the ongoing digital transformation addressing socio-economic needs. Despite existing recommendations and standards, many AI practitioners continue to prioritize system performance, often neglecting to verify and quantify core attributes of trustworthiness such as traceability, robustness, security, transparency, and usability. Moreover, trustworthiness is rarely assessed throughout the entire AI system lifecycle, leading to a fragmented view of AI risks. The lack of a unified ecosystem for assessing AI trustworthiness across critical sectors hinders the development and implementation of a robust framework for trustworthy AI.
To address these gaps, the FAITH initiative will develop and validate a human-centric AI trustworthiness optimization ecosystem, enabling the measurement, optimization, and mitigation of risks associated with AI adoption in critical domains like robotics, education, media, transportation, healthcare, active aging, and industrial processes. The project will adopt a dynamic risk management approach, following EU guidelines, and deliver tools to be used across different countries and settings. It will also engage diverse stakeholder communities, producing seven sector-specific reports on trustworthiness to accelerate AI adoption.
FAITH aims to create a trustworthiness assessment framework (FAITH AI_TAF) that integrates EU regulations and international standards. This framework will be validated through seven large-scale pilots in critical domains. It will help assess generic and domain-specific trustworthiness threats, contributing to the development of a domain-independent, risk-management-driven framework for evaluating AI trustworthiness. The project brings together innovators from industry, research, and academia, with the goal of addressing AI trustworthiness holistically across all stages of the system lifecycle.
UNIPI will be responsible for the replication pilots on the use of artificial intelligence in the biomedical field. In particular, UNIPI will test and evaluate an artificial intelligence tool for prostate segmentation.
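The project description does not specify how the segmentation tool will be evaluated; a standard metric for this kind of assessment is the Dice similarity coefficient, which measures the overlap between a predicted mask and a reference annotation. The sketch below is a minimal, hypothetical illustration (the function name and toy masks are not from the project):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / total

# Toy 1-D example: a hypothetical model prediction vs. a reference annotation.
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

In practice the masks would be flattened 2-D or 3-D volumes from the segmentation tool, and Dice would typically be reported per case alongside complementary metrics such as Hausdorff distance.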