INESC TEC researcher collaborated with a German group to study the personification of Large Language Models

What happens when you ask Large Language Models, e.g., ChatGPT, to take on a certain role in different contexts? Isabel Rio-Torto, a researcher at INESC TEC, joined the Explainable Machine Learning (EML) group at the University of Tuebingen (Germany) for a study which concluded that the personification of models has an impact on performance and can also reveal bias. The study was accepted at the 2023 edition of NeurIPS – the Conference on Neural Information Processing Systems.

Have you ever given ChatGPT guidelines before it answers a question? Or asked it to play a certain role? For example, imagine you used the following prompt: “If you were a four-year-old, how would you describe a bicycle?” What do you think the answer would be? Would the age instruction influence the description provided by the model?

A team of researchers from the University of Tuebingen – joined by INESC TEC researcher Isabel Rio-Torto – carried out a study to understand how the representation of roles can affect the behaviour of Large Language Models (LLMs). “We assess whether LLMs can take on different roles when generating text, according to a certain context. In this sense, and using a predefined prompt, we asked the models to take on different personas, before solving vision and language tasks”, explained the researcher.

The study considered two Large Language Models – Vicuna and ChatGPT – and assessed whether the models could personify the behaviour of people of different ages and with different areas of expertise. The team also sought to analyse the existence of gender and ethnicity bias. “We found that impersonation can improve the performance of models. In other words, when instructed to reply as an ornithologist, the model described birds better than when instructed to ‘act’ as an automobile expert. Moreover, personification can also reveal bias: when we instructed the LLM to reply as if it were a man, we found that it described cars better than the LLM instructed to reply as a woman”.
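To make the setup concrete, the sketch below illustrates the general idea of persona prompting: the same task is prefixed with different persona instructions and the answers are then compared. It is a minimal, hypothetical example, not the authors' actual prompts or evaluation code; the function `query_llm` stands in for a call to whichever model (e.g., Vicuna or ChatGPT) is being studied.

```python
# Illustrative sketch of in-context impersonation via persona prompts.
# `query_llm` is a hypothetical placeholder for a real LLM API call;
# the prompt template is illustrative, not the paper's exact wording.

def build_persona_prompt(persona: str, task: str) -> str:
    """Prepend a persona instruction to the task description."""
    return f"If you were {persona}, how would you answer the following?\n{task}"

def query_llm(prompt: str) -> str:
    """Placeholder: replace with a call to the model under study."""
    raise NotImplementedError("Plug in an actual LLM backend here.")

def compare_personas(personas: list[str], task: str) -> dict[str, str]:
    """Collect one answer per persona so their outputs can be compared."""
    return {p: query_llm(build_persona_prompt(p, task)) for p in personas}

if __name__ == "__main__":
    task = "Describe this bird: a small bird with a bright red chest and a short beak."
    personas = ["an ornithologist", "a car mechanic", "a four-year-old child"]
    # answers = compare_personas(personas, task)  # requires a real LLM backend
```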

Isabel Rio-Torto collaborated, as a Visiting PhD Student, with the EML group led by Professor Zeynep Akata

According to Isabel Rio-Torto, the conclusions of the study “In-Context Impersonation Reveals Large Language Models’ Strengths and Biases” “demonstrate that personification in different contexts, i.e., guiding LLMs to take on different roles, can alter their performance and reveal their bias”. The paper was accepted as a Spotlight at the 37th edition of NeurIPS, an international conference on neural information processing systems, which took place this year in December, in the United States of America. The results of the research are available on GitHub.

“I worked on this paper while collaborating, as a Visiting PhD Student, with the Explainable Machine Learning (EML) group, led by Professor Zeynep Akata, at the University of Tuebingen (Germany). When I joined the project, the other authors had already started exploring the strengths and biases of LLMs, and I had the opportunity to help with code development and testing for the language-based reasoning tasks. It was a very challenging project, which allowed me to learn a lot. In addition, we managed to get the work accepted as a Spotlight at NeurIPS!”, she concluded.

It’s worth mentioning that other INESC TEC researchers are exploring the topic of algorithmic bias. One example is the paper “Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition” by Pedro Neto, Eduarda Caldeira, Jaime Cardoso and Ana Sequeira, researchers at INESC TEC, which was recognised at the 2023 edition of the International Conference of the Biometrics Special Interest Group (BIOSIG).
