INESC TEC is running a series of workshops to share experiences on using Generative AI tools to navigate a rapidly evolving landscape. The sessions will take place until February 2026.
Researchers across all fields have been using Generative AI (GenAI) tools to increase the effectiveness and efficiency of the research process. But behaviours vary widely, ranging from intensive use at (almost) every stage of research, to minimal use due to concerns about academic integrity or fear of peer rejection. This was the situation in 2023, when large language models (LLMs) were already beginning to be used in research workflows, changing routines and prompting discussions in the corridors of research institutes.
These findings come from a survey of more than 700 researchers carried out by the platforms ResearchGate and Academia.edu, which painted the picture of a domain where integrity and transparency are key values. For this reason, and because the impact of GenAI on research and scientific processes "requires reflection and careful consideration", INESC TEC has launched a set of AI literacy and training initiatives.
The first step was developing guidelines for the responsible use of these tools in research. This was followed by the launch of the AI Tertúlias, led by researchers in the field of Artificial Intelligence. With an open theme, the initiative has already held a well-attended session, “which created high expectations for those to come”, explains Gabriel David, member of the INESC TEC Board of Directors.
Meanwhile, the Digital Transformation Working Group, after clarifying current principles of responsible use through the new Guidelines, has promoted a cycle of workshops exploring the potential of multiple GenAI tools, discussing how they can support and improve different stages of the research process in a rapidly evolving context.
“The scientific landscape is changing quickly; journals and conferences are raising acceptance standards, placing greater value on transparency, openness and methodological accuracy. Generative AI represents a crucial opportunity not only to keep up with this evolution, but also to remain at the forefront of research,” said Yassine Baghoussi, INESC TEC AI researcher and one of the organisers of the workshop series Using AI as Your Research Assistant.
The current context of Open Science and GenAI shaped the first of four sessions, running until February 2026. With responsible use, sharing and collaboration at the core, this session focused on the use of GenAI as a lever in the early stages of generating research ideas, and in supporting literature search and review.
A central idea was that AI should not be perceived as a substitute for the researcher, but as a tool to increase productivity, which reinforces the essential need to verify everything that is generated. One good practice: allocate 30 to 40% of the time saved by using AI to verification. If a "hallucination" (a coherent and grammatically correct but factually incorrect answer) makes its way into a paper, it can undermine years of trust built within the scientific community.
“The strong turnout at the first session confirmed both the timeliness and the relevance of this initiative, as well as its alignment with similar efforts across Europe. We hope future sessions can create even more opportunities for participants to share their experiences, enriching the debate at a moment marked by diverse and abundant new offerings,” said Lia Patrício and Gabriel David, from the group leading digital transformation at INESC TEC.
This series complements the guidelines for the responsible use of GenAI, released in April 2025. Following the guidance of the European Research Area, the Board approved a set of "general and dynamic" guidelines, giving researchers a reference for how to benefit from GenAI while recognising that they remain fully responsible for their research and should use GenAI "with critical judgement and transparency, respecting privacy, confidentiality and intellectual property."
"GenAI tools are constantly evolving, with new forms of use continuously emerging. Researchers should remain up to date in terms of learning and best practices, and share their knowledge with others or with relevant stakeholders," the document states.
It also stresses that researchers must maintain a critical approach to the use of GenAI-generated results and specify which tools were used throughout research activities. For example: “Interpreting data analyses, conducting literature reviews, identifying research gaps, formulating research objectives, developing hypotheses, etc., may have a substantial impact and should therefore be explicitly reported.”
Privacy issues are also significant, and the text highlights the duty to protect unpublished or confidential work and to avoid uploading it to online AI systems, where it may be used for other purposes. The next workshops (with the second one taking place on 11 December) will focus on the subsequent stages of the research process.
