INESC TEC aims to make Artificial Intelligence more explainable, transparent and reliable

The TRUST-AI project, coordinated by INESC TEC’s Centre for Industrial Engineering and Management (CEGI), seeks to explain AI systems, making them more transparent and reliable.

Artificial Intelligence (AI) is progressively becoming the basis of decision-making processes. But can we trust what mathematical models tell us to do? How can doctors rely on a system that tells them the right time to operate on a patient with a rare tumour? How can a retailer be sure that the algorithm did not favour a supplier over one of the competitors? And what about the consumers? Don’t they have the right to know how the energy consumption forecasting models decide how much they pay?

“Although Artificial Intelligence techniques increasingly influence decisions, their reasoning involves stages that are too abstract and complex for users to understand. We are talking about ‘black boxes’, which find excellent factual solutions without being able to explain how they reach them. This raises ethical questions as AI influences more and more decisions. Understanding why a certain option was selected builds confidence and helps improve the decision-making process. This project will develop AI solutions that are more transparent, fair and explainable and, therefore, more suitable”, said Gonçalo Figueira, CEGI researcher and project coordinator.

The approach consists of making AI and humans work together towards better solutions (that is, models that are effective, understandable and generalizable), through the use of explainable symbolic models and learning algorithms developed in the project, as well as through the adoption of a human-centred empirical learning process that integrates cognition, machine learning and human-machine interaction.


The project will be applied in three case studies

The result will be a transparent, smart, reliable and unbiased tool, applied to three case studies: in the fields of healthcare (treatment of tumours), online retail (to select delivery times for orders) and energy (to support energy consumption forecasting in buildings). However, the project may also be applicable to other sectors, such as banking, insurance, industry and public administration.

In addition to INESC TEC (coordinator), the TRUST-AI project (Transparent, Reliable and Unbiased Smart Tool for AI) brings together LTPLabs and five other partners from five different countries: Tartu Ülikool (Estonia), Institut National de Recherche en Informatique et en Automatique (France), Stichting Nederlandse Wetenschappelijk Onderzoek Instituten (Netherlands), Applied Industrial Technologies (Cyprus) and TAZI Bilisim Teknolojileri AS (Turkey).

The project received a €4M budget from the European Union’s research and innovation programme, Horizon 2020, under grant agreement No. 952060.

