ELUDING SECURE AGGREGATION IN FEDERATED LEARNING VIA MODEL INCONSISTENCY

Date: 
Thursday, October 27, 2022
Location: 
Hybrid. Physical location: Aula T1, Piano terra, Edificio E, Viale Regina Elena, 295, 00161, Roma
Time: 
2:00 PM - 4:00 PM

Title: Eluding Secure Aggregation in Federated Learning via Model Inconsistency. 

Speaker: Danilo Francati, Postdoctoral Researcher at Aarhus University, Denmark.

Affiliation: Aarhus University.

Description:

Secure aggregation is a cryptographic protocol that computes the sum of its participants' inputs while revealing nothing beyond that sum. It is pivotal to keeping model updates private in federated learning: secure aggregation prevents the server from learning the value and the source of the individual model updates provided by the users, hampering inference and data-attribution attacks.
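As background for the talk, a common way to realize secure aggregation is pairwise masking in the style of Bonawitz et al. (CCS 2017): each pair of users agrees on a shared random mask that one adds and the other subtracts, so all masks cancel in the sum. The sketch below is a toy illustration only (a string seed stands in for real pairwise key agreement, and dropout handling is omitted); it is not any particular deployed protocol.

```python
import random

def pairwise_masks(user_ids, modulus=2**16, seed="demo"):
    """Derive a cancelling mask per user: for each pair (i, j) with i < j,
    user i adds a shared random value and user j subtracts it, so the
    masks sum to zero mod the modulus. (The toy seed stands in for real
    pairwise key agreement.)"""
    masks = {u: 0 for u in user_ids}
    for i in user_ids:
        for j in user_ids:
            if i < j:
                shared = random.Random(f"{seed}-{i}-{j}").randrange(modulus)
                masks[i] = (masks[i] + shared) % modulus
                masks[j] = (masks[j] - shared) % modulus
    return masks

def secure_sum(updates, modulus=2**16):
    """Server-side view: the server receives only masked updates, yet their
    sum equals the sum of the raw updates because the masks cancel."""
    masks = pairwise_masks(sorted(updates), modulus)
    masked = {u: (v + masks[u]) % modulus for u, v in updates.items()}
    return sum(masked.values()) % modulus
```

Each individual masked value looks random to the server; only the aggregate is meaningful.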

In this talk, Danilo Francati, PhD, will give us an overview of his paper that has recently been accepted at the ACM CCS 2022 conference in Los Angeles.

He will show that a malicious server can easily elude secure aggregation as if it were not in place at all. In particular, he will present an attack strategy that allows the server to infer information about individual private training datasets, independently of the number of users participating in the secure aggregation.

He will detail how this represents a concrete threat in large-scale, real-world federated learning applications. In fact, the attack strategy is generic and equally effective regardless of the secure aggregation protocol used. It exploits a vulnerability of the federated learning protocol caused by incorrect usage of secure aggregation and lack of parameter validation. This demonstrates that current implementations of federated learning with secure aggregation offer only a “false sense of security”.
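As rough intuition for how such a model-inconsistency attack can work (an illustrative toy under assumed details, not the paper's actual construction): nothing forces a malicious server to send the same model to every user. If every user except a chosen target receives a "dead" model whose gradients are identically zero (here, a ReLU unit that never fires), then the securely aggregated sum of updates degenerates to the target's individual update, without violating the secure-aggregation protocol itself.

```python
import numpy as np

def relu(z):
    return max(z, 0.0)

def client_update(params, x, y, lr=0.1):
    """One SGD step on a toy one-hidden-unit network pred = w2 * relu(w1.x + b1).
    Returns the parameter deltas this client would submit to secure aggregation."""
    w1, b1, w2 = params
    pre = float(w1 @ x) + b1
    h = relu(pre)
    err = w2 * h - y
    relu_grad = 1.0 if pre > 0 else 0.0        # zero when the unit is "dead"
    g_w1 = 2 * err * w2 * relu_grad * x
    g_b1 = 2 * err * w2 * relu_grad
    g_w2 = 2 * err * h
    return (-lr * g_w1, -lr * g_b1, -lr * g_w2)

# Malicious round: the server hands a "dead" model (huge negative bias, so the
# ReLU never fires and every gradient vanishes) to all users except the target.
honest = (np.array([0.5, -0.2]), 0.1, 1.0)
dead   = (np.array([0.0, 0.0]), -1e6, 1.0)

data = {u: (np.array([1.0, 2.0]) * u, float(u)) for u in range(1, 4)}
target = 2
updates = {u: client_update(honest if u == target else dead, *data[u])
           for u in data}

# The secure aggregate (the sum of all updates) equals the target's update.
agg = tuple(sum(upd[i] for upd in updates.values()) for i in range(3))
```

The aggregation itself is computed correctly and privately; the leak comes from the inconsistent models, which is exactly the kind of missing parameter validation the talk discusses.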

Registration: Participation is free, but registration is required on Eventbrite at the following link: "Eluding Secure Aggregation in Federated Learning via Model Inconsistency".