Technology and Law: building a virtuous relationship

I was invited to write about the relationship between Technology and Law, for which I relied on the contributions and suggestions of our colleagues at INESC TEC’s Legal Support Service (AJ). My approach to this issue follows two symmetrical outlooks: on the one hand, how technology can influence or contribute to legal activities; and, on the other, how law can help to address R&D challenges. In both perspectives, we may perceive this relationship as virtuous – in the sense of providing more benefits than difficulties and fostering scientific and technical progress – or as conflictual, threatening or restricting said progress.

Given the unavoidable current debate on the immense potential of Artificial Intelligence (AI), but also on its limits and dangers, we could not fail to address AI as a technology that can be applied to the world of law and legal work. From the outset, the mere possibility of applying it to the decision-making process, which is the core of Justice and the judicial power, has sparked heated debates within the legal community and beyond. Can justice be “fairer and more impartial” if the decision-making process resorts to AI methods, and decisions are no longer “polluted” by the prejudices and biases (even unconscious ones) of the human beings who serve as judges? Or, more modestly, will AI only be useful to better substantiate and explain a court decision to the ordinary citizen, especially to those directly affected by it, with the required transparency? It could also provide greater coherence across decisions in similar cases. In other words, “the algorithm is used to ensure that, once a certain causal relationship is verified, the decision must always be the same. Obviously, it is a very fallible method with associated risks; but, when applicable to more routine tasks and decision-making processes, it can become quite beneficial. The fact that it is being applied to injunctions, for example, is not by mere chance” (Rita Barros, AJ). “Conversely, in this domain, there’s also the risk of accentuating the same biases and social inequalities, in a subliminal way; or even of putting the value of coherence before the creative dimension, the normative sense and, ultimately, the independence of decisions” (Vasco Rosa Dias, AJ).
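The consistency point lends itself to a brief illustration. The sketch below is purely hypothetical: the rule, the inputs, and the function name are invented, and no real court applies anything this crude. It only shows the core idea that a deterministic rule, given the same verified facts, can never produce two different decisions.

```python
# Minimal illustration of decision consistency: identical verified facts
# always yield the identical outcome. All rules and names are hypothetical.

def decide_injunction(claim_documented: bool, urgency_shown: bool,
                      irreparable_harm: bool) -> str:
    """Deterministic rule: the same inputs always map to the same decision."""
    if claim_documented and urgency_shown and irreparable_harm:
        return "grant"
    if claim_documented and urgency_shown:
        return "grant with conditions"
    return "deny"

# Evaluating the same case twice can never give two different results.
assert decide_injunction(True, True, False) == decide_injunction(True, True, False)
```

Of course, the difficulty noted above remains: if the rule itself encodes a bias, the algorithm will reproduce that bias with perfect consistency.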

Another area in which organisations, especially larger ones, have been investing recently is the use of AI in the field of compliance, given the abundance of legislation and regulations applicable to the various matters in an organisation’s day-to-day activities. “Many companies use computer systems that, in line with their policies, assess compatibility with applicable regulations and legal standards on a case-by-case basis, depending on the activity carried out by each service/sector” (Rita Barros, AJ).
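As a rough sketch of how such a system might work (the rules, sectors, and field names below are invented for illustration; real compliance engines are considerably richer), each activity is checked only against the rules that apply to its service/sector:

```python
# Illustrative rule-based compliance check: each rule covers one sector and
# flags activities that violate it. The rules shown are invented examples.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    sector: str                    # which service/sector the rule covers
    check: Callable[[dict], bool]  # returns True if the activity complies

RULES = [
    Rule("consent recorded", "marketing",
         lambda a: a.get("consent") is True),
    Rule("retention limit", "hr",
         lambda a: a.get("retention_years", 0) <= 5),
]

def assess(activity: dict) -> list[str]:
    """Return the names of the rules this activity violates, for its sector."""
    return [r.name for r in RULES
            if r.sector == activity["sector"] and not r.check(activity)]

print(assess({"sector": "hr", "retention_years": 10}))  # ['retention limit']
```

The case-by-case nature mentioned in the quote is reflected in the sector filter: the same activity is judged only against the rules relevant to the service that performs it.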

The growing openness of the world of Justice to technologies that facilitate or automate certain time-consuming, tiresome, and still largely manual tasks is exemplified by the initiative “Challenges of Justice – Govtech Justice”, which aims to contribute to the development of innovative technology solutions that address the concrete needs of Justice services to improve their response to citizens and businesses. It’s important to mention that INESC TEC is already a partner in at least one of the selected projects.

However, sweet and sour go together, and it turns out that, as we said, algorithms are neither neutral nor transparent, but susceptible to the biases of those who program them – in addition to the dangers of manipulation, whose consequences are unimaginable. The awareness of such dangers, and the increasing difficulty in anticipating them and correctly assessing their risks, may explain “the unusual proposal to suspend, for six months, certain AI research (the training of models more advanced than GPT-4), in the open letter signed by personalities like Elon Musk, Yuval Harari, etc. Several other open letters followed (like the one from KU Leuven), after a suicide in Belgium linked to the use of an application based on ChatGPT” (Vasco Rosa Dias, AJ).

The proposal for a European Union (EU) Regulation on Artificial Intelligence (the “AI Act”), which follows several ethical commitments and other instruments that proved insufficient to protect against the dangers of AI, is based on a risk analysis approach and aims to ensure a high level of protection of fundamental rights. It states that “the use of AI with its specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights (‘the Charter’)” – namely human dignity (Article 1), respect for private and family life and protection of personal data (Articles 7 and 8), non-discrimination (Article 21), and equality between men and women (Article 23), among many others.

“(…) Obligations relating to ex ante testing, risk management and human oversight will also facilitate respect for other fundamental rights by minimising the risk of wrong or biased AI-assisted decisions in critical areas like education and training, employment, essential services, preservation of public order and the judicial system”.
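The risk-based logic behind these obligations can be pictured as a simple tiering exercise. The sketch below is a loose illustration rather than a restatement of the Regulation’s annexes; the example use cases and the one-line obligations are abbreviated assumptions made for illustration:

```python
# Loose sketch of a risk-based approach: classify a use case into a tier,
# with each tier carrying stricter obligations. Example mappings only; the
# actual AI Act categories and lists are far more detailed.
PROHIBITED = {"social scoring by public authorities"}
HIGH_RISK = {"recruitment screening", "credit scoring", "judicial assistance"}
LIMITED_RISK = {"chatbot"}  # e.g. a duty to disclose that the user faces an AI

def risk_tier(use_case: str) -> str:
    if use_case in PROHIBITED:
        return "unacceptable risk: prohibited"
    if use_case in HIGH_RISK:
        return "high risk: ex ante testing, risk management, human oversight"
    if use_case in LIMITED_RISK:
        return "limited risk: transparency obligations"
    return "minimal risk: voluntary codes of conduct"

print(risk_tier("judicial assistance"))
```

The stricter the tier, the heavier the obligations, which is how the proposal seeks to balance the protection of fundamental rights with room for innovation.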

In this sense, we perceive law as a regulatory instrument for technology – a role it assumes whenever technology, despite its advantages and extraordinary advances, entails high risks for people and their rights, for the environment, and for other major values that society seeks to preserve. In short, law – through legislation, regulation, doctrine, and relevant jurisprudence – aims to safeguard higher values that may, in some way, be threatened by a given technology, seeking to promote an adequate balance between them.

As for the outlook on how law can help to address, and even advance, the challenges of research and development, what seems clearest and most concretely associated with technological development is the “discussion of the so-called ‘regulatory sandboxes’ for the development of technology” (Vasco Rosa Dias, AJ). One example is the creation in Portugal of the so-called “Technological Free Zones (ZLT)”: “geographically delimited physical environments designed for the testing, in a real or near-real setting, of innovative technologies and tech-based products, services, and processes, with direct and permanent control by the competent regulatory authorities, particularly in terms of testing, provision of information, guidelines and recommendations, corresponding to the concept of regulatory sandboxes”. The first approved ZLT, called “Infante D. Henrique”, was proposed by the Navy; it is located in a restricted area in the Tróia region and focuses on “testing, in the open sea and under real conditions, unmanned security and defence systems and other technologies in subsurface, surface (terrestrial and water) and aerial environments. Due to the geophysical characteristics of the site, this ZLT will also allow access to and study of the deep sea, which will be leveraged with the installation of an artificial island”. INESC TEC, through its Centre for Robotics and Autonomous Systems (CRAS), has already been benefiting from this ZLT – and is currently advancing the testing of a cable, as part of an ongoing project.

Another well-known case in this area is that of Revolut, one of the most valuable FinTechs in Europe, which benefited from the regulatory sandbox created by the British financial regulator (the Financial Conduct Authority) in 2015. Regulatory sandboxes are also among the matters that European legislators must address in the future AI Regulation.

Judging by these examples, which are far from exhausting the possibilities of the relationship between technology and law, I believe we can view it as a virtuous relationship, in the sense that it provides more benefits than difficulties while fostering scientific and technical progress. However, I also believe that legal experts must endeavour to better understand technology and the specificities of research, development, and innovation activities, seeking the most appropriate legal, institutional, and contractual frameworks for the issues at hand, and assuming the mission of bridging the two worlds and making them mutually intelligible. This way, the relationship will not become conflictual, and the actors of the science and technology domain will not perceive Law as an obstacle to scientific and technical progress. INESC TEC’s Legal Support Service has always sought to carry out this mission as best as possible.

Maria da Graça Barbosa, Member of the Board of Directors
