CAGING – Causality-driven Generative Models for Privacy-preserving Case-based Explanations – is the name of a new project led by INESC TEC, with the involvement of the Netherlands Cancer Institute (NKI). Funded by the Foundation for Science and Technology (FCT), the project aims to create a solution for explaining the deep learning models used to support medical diagnosis while ensuring the privacy of patients’ data.
The main goal of the project is to contribute to a better understanding of the behaviour of deep learning models used to support medical diagnosis. CAGING will develop a set of innovative solutions that balance the need for accurate diagnoses against the importance of protecting patients’ privacy.
“The difficulties in interpreting deep learning methods have prevented their adoption in diagnosis support systems”, said Luís Teixeira. According to the INESC TEC researcher, generating explanations for the models’ decisions is, in this context, an important tool that can improve the accuracy and reliability of diagnosis, e.g., the diagnosis of respiratory diseases from chest radiographs.
Hence, the CAGING project proposes the development of causal techniques that generate example-based explanations while protecting patients’ data. “We aim to separate medical and identity elements, preserving all the clinical information while protecting patients’ privacy”, explained the researcher.
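To make the idea of separating medical and identity elements more concrete, the sketch below illustrates one common way such a separation can be set up: an autoencoder whose latent code is split into a clinical part and an identity part, with the identity code swapped out before generating an anonymised, case-based explanation. All names, dimensions and losses here are hypothetical illustrations – the article does not describe the project’s actual causality-driven generative models.

```python
# Illustrative sketch only: a latent code split into a "medical" part (kept
# for explanations) and an "identity" part (replaced before generation).
# Class names and dimensions are hypothetical, not taken from CAGING.
import torch
import torch.nn as nn

class DisentangledAutoencoder(nn.Module):
    def __init__(self, in_dim=1024, med_dim=32, id_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.to_medical = nn.Linear(256, med_dim)    # disease-related factors
        self.to_identity = nn.Linear(256, id_dim)    # patient-identity factors
        self.decoder = nn.Sequential(
            nn.Linear(med_dim + id_dim, 256), nn.ReLU(), nn.Linear(256, in_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        z_med, z_id = self.to_medical(h), self.to_identity(h)
        return self.decoder(torch.cat([z_med, z_id], dim=-1)), z_med, z_id

    def anonymise(self, x):
        """Rebuild the input keeping the clinical factors but replacing the
        identity code, so the generated explanation resembles the medical
        case without exposing who the patient is."""
        _, z_med, z_id = self.forward(x)
        z_anon = torch.randn_like(z_id)  # swap identity for a random code
        return self.decoder(torch.cat([z_med, z_anon], dim=-1))

# Usage: flatten an image into a vector and train with a reconstruction loss.
model = DisentangledAutoencoder()
x = torch.randn(4, 1024)             # stand-in for a batch of images
recon, z_med, z_id = model(x)
loss = nn.functional.mse_loss(recon, x)
anonymised = model.anonymise(x)      # privacy-preserving case-based output
```

In practice, a plain reconstruction loss is not enough to force such a split; adversarial or causality-driven objectives are typically added so that no identity information leaks into the clinical code.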
With a budget of approximately €50K, the project is led by INESC TEC, with the participation of NKI researchers. CAGING started in February 2023 and is expected to be completed in July 2024.
The researcher mentioned in this news piece is associated with INESC TEC and UP-FEUP.