Rethinking Privacy Protection in Federated Learning in the Face of Model Inversion Attacks
Wenjing Lou, Virginia Tech, United States
Securing Ultra-Large Scale Infrastructures: Challenges and Opportunities
Awais Rashid, University of Bristol, United Kingdom
Turing's Echo on Deceptive Machines: The Challenge of Distinguishing Human and AI Creations
Ahmad-Reza Sadeghi, Technical University of Darmstadt, Germany
Rethinking Privacy Protection in Federated Learning in the Face of Model Inversion Attacks
Wenjing Lou
Virginia Tech
United States
Brief Bio
Wenjing Lou is the W. C. English Endowed Professor of Computer Science at Virginia Tech and a Fellow of the IEEE and ACM. Her research spans many topics in cybersecurity, with her current work focusing on security and privacy problems in wireless networks, blockchain, trustworthy machine learning, and Internet of Things (IoT) systems. Prof. Lou has been named a Highly Cited Researcher by the Web of Science Group. She received the Virginia Tech Alumni Award for Research Excellence in 2018, the university's highest faculty research award, and the INFOCOM Test-of-Time Paper Award in 2020. She was the TPC chair for IEEE INFOCOM 2019 and ACM WiSec 2020, and served as the Steering Committee Chair for the IEEE CNS conference from 2013 to 2020. She is currently the vice chair of IEEE INFOCOM and a steering committee member of IEEE CNS. She served as a program director at the US National Science Foundation (NSF) from 2014 to 2017.
Abstract
The current success of machine learning has largely depended on centralized learning, which pools data from multiple sources into a central location. This presents significant challenges in domains like healthcare, where patient data is often siloed across multiple institutions and strict privacy regulations prevent centralized data sharing. Federated learning, a distributed learning paradigm that allows institutions to collaboratively train models without moving data across institutional boundaries, is thus highly advantageous: it maintains data locality and addresses legal and ethical barriers to data sharing. However, recent research has shown that federated learning is susceptible to privacy attacks, such as data reconstruction and membership inference, in which sensitive information can be inferred from model updates.
In this talk, we will explore privacy challenges in federated learning by introducing a sophisticated model inversion attack called scale-MIA. This attack efficiently reconstructs clients’ training samples from aggregated model updates in federated learning and undermines the effectiveness of secure aggregation protocols. We will also discuss the impact of such attacks and explore emerging solutions to enhance privacy in federated learning systems.
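To make the attack surface concrete: the model updates that inversion attacks exploit are typically combined on the server by weighted averaging (the FedAvg scheme). The sketch below is illustrative only and is not taken from the talk; the function name and data layout are my own assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg step: average each layer's parameters
    across clients, weighted by local dataset size.

    client_weights: one list of np.ndarray layers per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for weights, n in zip(client_weights, client_sizes):
            # Each client's contribution is proportional to its data share.
            acc += (n / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Two clients with a single-layer "model"; client 2 holds 3x the data.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
agg = fedavg(clients, client_sizes=[1, 3])
print(agg[0])  # [2.5 3.5]
```

Even when individual updates are hidden by secure aggregation, the server still observes this aggregate, which is precisely what attacks such as Scale-MIA reconstruct training samples from.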
Securing Ultra-Large Scale Infrastructures: Challenges and Opportunities
Awais Rashid
University of Bristol
United Kingdom
Brief Bio
Awais Rashid is Professor of Cyber Security at University of Bristol where he heads the Cyber Security Group. He is editor-in-chief and principal investigator for CyBOK. He is also Director of the EPSRC Centre for Doctoral Training in Trust, Identity, Privacy and Security in Large-Scale Infrastructures and Director of the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN). His research interests are in security of cyber-physical systems, software security and human factors. He leads projects as part of the UK Research Institute on Trustworthy Interconnected Cyber-Physical Systems (RITICS), UK Research Institute on Sociotechnical Cyber Security (RISCS), the Digital Security by Design Hub+ (Discribe) and the PETRAS National Centre of Excellence in Cyber Security of IoT.
Abstract
Digital infrastructures are seeing convergence and connectivity at unprecedented scale. This is true both for current critical national infrastructures, such as water and power, and for emerging future systems that are highly cyber-physical in nature, with complex intersections between humans and technologies, e.g., smart cities, intelligent transportation, high-value manufacturing and Industry 4.0. Diverse legacy and non-legacy software systems, underpinned by heterogeneous hardware, compose on the fly to deliver services to millions of users with varying requirements and unpredictable actions. This complexity is compounded by intricate supply chains, with many digital assets and services outsourced to third parties. The reality is that, at any particular point in time, there will be untrusted, partially trusted or compromised elements across the infrastructure. This poses a range of fundamental questions: How does one measure the security state of such infrastructures? What are the complexities of managing security in a landscape shaped by the often competing demands of a variety of stakeholders? How does one secure infrastructures of such complexity, or conduct incident response in such ultra-large-scale settings? In this keynote, I will discuss insights from a multi-year programme of research investigating these issues and the challenges to addressing them.
Turing's Echo on Deceptive Machines: The Challenge of Distinguishing Human and AI Creations
Ahmad-Reza Sadeghi
Technical University of Darmstadt
Germany
Brief Bio
Ahmad-Reza Sadeghi is a professor of Computer Science and the head of the System Security Lab at the Technical University of Darmstadt, Germany. He has led several Collaborative Research Labs with Intel since 2012 and Huawei since 2019.
He studied Mechanical and Electrical Engineering and holds a Ph.D. in Computer Science from the University of Saarland, Germany. Before academia, he worked in the R&D of IT enterprises, including Ericsson Telecommunications. He has continuously contributed to the field of security and privacy research. He was Editor-in-Chief of IEEE Security and Privacy Magazine and has served on the editorial boards of ACM TODAES, ACM TIOT, and ACM DTRAP.
He received the renowned German "Karl Heinz Beckurts" award for his influential research on Trusted and Trustworthy Computing. This award honors excellent scientific achievements that have significantly impacted industrial innovations in Germany. In 2018, he received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and pioneering contributions in content protection, mobile security, and hardware-assisted security. In 2021, he was honored with the Intel Academic Leadership Award at the USENIX Security Conference for his influential research on cybersecurity, particularly hardware-assisted security. In 2022, he received the prestigious European Research Council (ERC) Advanced Grant. In 2024, he received the DAC (Design Automation Conference) Service Award.
Abstract
As generative AI models evolve, distinguishing between human-generated and AI-generated content is becoming increasingly challenging, threatening trust across domains such as media misinformation, political campaigns, legal accountability, scientific integrity, and cybersecurity. Distinguishing between machine and human outputs will only grow more vital, not least in a dystopian future where machines could potentially be turned against humans in various ways.
This talk explores methods and technologies for identifying the origin of content, focusing on audio and text. We briefly review existing detection models and their limitations in capturing the subtle differences between human-generated and AI-generated content. Our approach integrates physical principles, such as the micro-Doppler effect, into machine learning frameworks, enriching models with prior knowledge and reducing bias in detection. We conclude by outlining key challenges and future directions in this rapidly evolving domain.
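Physics-informed audio detection of the kind described above typically starts from a time-frequency representation in which physical signatures (such as micro-Doppler-like modulations) become visible. The following is a minimal, generic sketch of that first step, a short-time Fourier magnitude spectrogram; it illustrates the kind of representation such pipelines feed into a classifier and is not the speaker's actual method.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Short-time Fourier magnitude spectrogram:
    frame the signal, apply a Hann window, take |FFT| per frame."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-negative frequency bins of real input.
    return np.abs(np.fft.rfft(frames, axis=1))

# Sanity check: a pure 440 Hz tone sampled at 8 kHz should concentrate
# energy near bin 440 / (8000 / 256) = 14.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
spec = stft_magnitude(tone)
peak_bin = spec.mean(axis=0).argmax()
print(peak_bin)  # 14
```

A detector would then learn, from features like these, the statistical regularities that separate recordings governed by real acoustics from synthesized audio that only approximates them.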