This four-volume set LNCS 16053-16056 constitutes the refereed proceedings of the 30th European Symposium on Research in Computer Security, ESORICS 2025, held in Toulouse, France, during September 22–24, 2025.
The 100 full papers presented in these proceedings were carefully reviewed and selected from 600 submissions. They were organized in topical sections as follows:
AI and Data-Centric Security, Systems and Hardware Security, Privacy, Cryptography and Secure Protocol Design, Blockchain and Financial Security, Privacy Policy and Identity Management, Adversarial and Backdoor Defenses.
Edited by: Vincent Nicomette, Abdelmalek Benzekri, Nora Boulahia-Cuppens, Jaideep Vaidya
Imprint: Springer Nature Switzerland AG
Country of Publication: Switzerland
Dimensions: Height: 235mm, Width: 155mm
ISBN-13: 9783032078834
ISBN-10: 3032078830
Series: Lecture Notes in Computer Science
Pages: 510
Publication Date: 13 October 2025
Audience: College/higher education, Professional and scholarly, Further/Higher education, Undergraduate
Format: Paperback
Publisher's Status: Active
.- Time-Distributed Backdoor Attacks on Federated Spiking Learning.
.- TATA: Benchmark NIDS Test Sets Assessment and Targeted Augmentation.
.- Abuse-Resistant Evaluation of AI-as-a-Service via Function-Hiding Homomorphic Signatures.
.- PriSM: A Privacy-friendly Support vector Machine.
.- Towards Context-Aware Log Anomaly Detection Using Fine-Tuned Large Language Models.
.- PROTEAN: Federated Intrusion Detection in Non-IID Environments through Prototype-Based Knowledge Sharing.
.- KeTS: Kernel-based Trust Segmentation against Model Poisoning Attacks.
.- Machine Learning Vulnerabilities in 6G: Adversarial Attacks and Their Impact on Channel Gain Prediction and Resource Allocation in UC-CF-mMIMO.
.- FuncVul: An Effective Function Level Vulnerability Detection Model using LLM and Code Chunk.
.- LUMIA: Linear probing for Unimodal and MultiModal Membership Inference Attacks leveraging internal LLM states.
.- Membership Privacy Evaluation in Deep Spiking Neural Networks.
.- DUMB and DUMBer: Is Adversarial Training Worth It in the Real World?
.- Countering Jailbreak Attacks with Two-Axis Pre-Detection and Conditional Warning Wrappers.
.- How Dataset Diversity Affects Generalization in ML-based NIDS.
.- Llama-based Source Code Vulnerability Detection: Prompt Engineering vs Finetuning.
.- DBBA: Diffusion-based Backdoor Attacks on Open-set Face Recognition Models.
.- Evaluation of Autonomous Intrusion Response Agents in Adversarial and Normal Scenarios.
.- Trigger-Based Fragile Model Watermarking for Image Transformation Networks.
.- Let the Noise Speak: Harnessing Noise for a Unified Defense Against Adversarial and Backdoor Attacks.
.- On the Adversarial Robustness of Graph Neural Networks with Graph Reduction.
.- SecureT2I: No More Unauthorized Manipulation on AI Generated Images from Prompts.
.- GANSec: Enhancing Supervised Wireless Anomaly Detection Robustness through Tailored Conditional GAN Augmentation.
.- Fine-Grained Data Poisoning Attack to Local Differential Privacy Protocols for Key-Value Data.
.- The DCR Delusion: Measuring the Privacy Risk of Synthetic Data.
.- StructTransform: A Scalable Attack Surface for Safety-Aligned Large Language Models.