Trust, the soul of the European approach to data and AI

By Vasco Dias, Data Protection Officer

Inspired by Daniel Vasconcelos’ (Technology Licensing Office) recent comments on the importance of trust in society and in business, I’d like to add “my two cents” on the subject by pointing out the relevance of this principle in the context of the new European approach to the data economy and artificial intelligence.

Once again, the current pandemic has drawn our attention to the importance of timely access to data for the progress of science and the design of informed and effective public policies. Moreover, few people still doubt the advantages and benefits we can reap from the rapid breakthroughs of recent years in artificial intelligence, machine learning and big data. However, that progress also requires us to reflect on the ethical, legal and social consequences of certain uses of these technologies.

The debate is tied to the goal of building a single digital and data market, which can only be created and consolidated within the EU if these processes are carried out in the public interest and in line with European values of respect for citizens’ freedoms and fundamental rights. In this sense, transparency, trust, explainability and reliability are key elements of data processing, corresponding to fundamental ethical values and principles that everyone ought to uphold if a European artificial intelligence that is both sustainable and designed to serve people is to succeed.

One can find this European vision in recent documents of major importance, such as the European Commission’s White Paper on Artificial Intelligence – “A European approach to excellence and trust” – or the communication “A European Strategy for Data”. These documents emphasise the EU’s ambition to position itself as an example of a society empowered to make better data-based decisions, both in business and in the public sector, “based on a solid legal framework in terms of data protection, fundamental rights, security and cybersecurity”.

In addition, I’d like to point out the successful work of the High-Level Expert Group on Artificial Intelligence, appointed by the European Commission, which led to an important set of “Ethics Guidelines for Trustworthy Artificial Intelligence”. These guidelines focus on ensuring, through technical and non-technical means, that the development, deployment and use of AI systems meet “seven key requirements: 1) human agency and oversight; 2) technical robustness and safety; 3) privacy and data governance; 4) transparency; 5) diversity, non-discrimination and fairness; 6) societal and environmental well-being; 7) accountability”.

INESC TEC soon joined the pilot implementation of these guidelines. But the Institute’s involvement in this important debate goes further: it is also reflected in the code of ethics drawn up in the meantime, in the activities of the Data Protection team, and in the research carried out within the scope of important European projects coordinated by INESC TEC or with its participation. The Human AI and TRUST AI projects are fine examples, with their very names revealing their scope and objectives.
