As a machine learning (ML) technique for training Artificial Intelligence (AI) models, federated learning has each participating client train a model locally, while a central server aggregates the clients’ contributions into a global model. This enables distributed ML training through the exchange of model updates rather than raw data, unlike earlier approaches that relied on data sharing and therefore posed greater privacy risks. The question of whether these model updates truly protect participants’ privacy led INESC TEC researcher Catarina Gomes to explore this issue during her master’s thesis, supervised by João Vilela (Faculty of Sciences of the University of Porto and INESC TEC) and Ricardo Mendes (Huawei). The work received the IEEE Portugal Outstanding Master Thesis Award.
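To make the aggregation step concrete, the sketch below shows the basic federated averaging idea in Python with NumPy. The function name, the weighting by client data size and the toy numbers are illustrative assumptions for exposition, not the specific setup studied in the thesis.

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Aggregate locally trained model parameters on the server.

    client_updates: list of parameter vectors, one per client.
    client_sizes: number of training examples held by each client,
                  used to weight its contribution (an illustrative choice).
    """
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()
    # Weighted average of the clients' parameters; only these updates,
    # never the raw data, are sent to the server.
    return sum(w * u for w, u in zip(weights, client_updates))

# Illustrative round: three clients send updated parameters of a tiny model.
updates = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
sizes = [100, 50, 150]
global_params = federated_average(updates, sizes)
print(global_params)
```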
Federated learning models are susceptible to different types of attacks, including passive attacks, which have been widely explored in the literature. In such cases, the attacker (typically the server) observes the model’s behaviour and exploits the fact that it tends to be more confident when predicting data it has seen during training. Generally, these attacks are effective against models with poor generalisation – i.e., models that perform very well on training data but fail to generalise to unseen test data.
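A minimal sketch of this confidence-based passive attack, assuming the attacker can query the model's predicted probability for a record's true label; the fixed threshold and the toy confidence values are placeholders an attacker would tune in practice (for example, using shadow models).

```python
import numpy as np

def membership_guess(confidences, threshold=0.9):
    """Guess 'member' when the model is unusually confident on a record.

    confidences: model's probability for the true label of each record.
    threshold: confidence above which a record is flagged as training data.
    """
    return confidences >= threshold

# Illustrative: a poorly generalising model is far more confident
# on records it saw during training than on unseen ones.
train_conf = np.array([0.97, 0.99, 0.95])   # records seen during training
test_conf = np.array([0.62, 0.71, 0.55])    # unseen records
print(membership_guess(train_conf))  # mostly True  -> inferred as members
print(membership_guess(test_conf))   # mostly False -> inferred as non-members
```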
In her work, Catarina Gomes proposes a new approach based on active attacks, showing that it is possible for an attacker to infer sensitive attributes even from models with strong generalisation (models that are often assumed to be secure because they are resistant to the most studied attacks). This contribution demonstrates that commonly used mitigation techniques “are not enough to protect the privacy of participants in federated learning systems”.
According to the researcher, “the results – particularly the fact that models with strong generalisation remain vulnerable – highlight the need for further research into the vulnerabilities exploited by different types of attacks, to understand why they remain effective despite the application of mitigation techniques designed to address similar risks”. This line of work is already being extended in her PhD research, where she is also exploring privacy risks associated with replacing real data with synthetic data as a mitigation strategy.
Among the mitigation strategies analysed in the award-winning work are “mechanisms for detecting suspicious changes, such as sudden drops in model performance or significant distortions in model parameters”, an innovative mitigation approach. “Because these strategies only act when suspicious behaviour is detected, they preserve the model’s utility under normal conditions,” she explained.
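As a rough sketch of what such a detection check could look like, the code below flags a training round when the global model's validation accuracy drops sharply or its parameters shift unusually far. The thresholds, the L2 distance metric and the function name are assumptions made for illustration; they are not the exact mechanism from the thesis.

```python
import numpy as np

def suspicious_round(prev_accuracy, new_accuracy, prev_params, new_params,
                     max_accuracy_drop=0.05, max_param_shift=1.0):
    """Flag a training round whose aggregated update looks anomalous.

    Two illustrative signals, mirroring the strategies described above:
    - a sudden drop in validation accuracy of the global model;
    - an unusually large change (L2 distance) in the model parameters.
    Both thresholds are placeholders an operator would tune.
    """
    accuracy_drop = prev_accuracy - new_accuracy
    param_shift = np.linalg.norm(new_params - prev_params)
    return accuracy_drop > max_accuracy_drop or param_shift > max_param_shift

# Countermeasures would only trigger when a round is flagged,
# so model utility is preserved under normal conditions.
flagged = suspicious_round(0.91, 0.78,
                           np.array([0.2, 1.1]), np.array([3.5, -2.0]))
print(flagged)  # True: both the accuracy drop and the parameter shift are large
```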
In practical terms, attacks on federated learning models can be used to extract sensitive attributes from the training data of any participant in the system. For example, in the context of predicting user responses to app permission requests on smartphones – the use case explored in the study – such attacks could disclose information about a user’s semantic location, such as whether they are at home, based on the phone’s state and the user’s response behaviour.
According to Catarina Gomes, these attacks are particularly concerning in common federated learning scenarios, since the server “is responsible for updating the global model using securely aggregated contributions from each participant, meaning that privacy can be compromised without violating the protocol itself”. She also mentioned that “the attacker model requires substantial prior knowledge to achieve the demonstrated level of performance”.
These findings can now inform future research, particularly by showing that techniques aimed at improving generalisation are not sufficient to guarantee the privacy of training data in federated learning systems. As such, it is necessary to monitor the behaviour of all participating entities to mitigate risks arising from orchestration by external actors.
While acknowledging that “people are generally aware that AI services involve risks, as they understand their data is used to train these models”, Catarina Gomes also emphasised a “lack of awareness of the concrete impact this can have on data privacy and what can be inferred about individuals simply through their interaction with such tools”.
The researcher mentioned in this news piece is associated with INESC TEC and the Faculty of Sciences of the University of Porto.
