NCSA Colloquiums

October NCSA Colloquium: Will Lai, Poisoning of Networks with Adversarial Particle Swarm Optimization

Event Type
Conference/Workshop
Sponsor
National Center for Supercomputing Applications
Location
NCSA Building, Room 1040, 1205 W. Clark St., Urbana, IL 61801
Virtual
Join online
Date
Oct 29, 2025, 2:00 pm
Registration
Join via Zoom at the scheduled time—no registration required.


The National Center for Supercomputing Applications (NCSA) hosts a monthly colloquium series and invites everyone to participate in the October session.

This month's event will be led by William K. M. Lai, Assistant Research Professor in Molecular Biology, Genetics, and Computational Biology at Cornell. William directs both the Cornell Center for Vertebrate Genomics and the Cornell EpiGenomics Core Facility. His research group integrates novel genomic assays and custom bioinformatic algorithms, often leveraging machine learning and explainable AI, to dissect how protein-DNA interactions vary across cell states and contribute to disease.

Title: Poisoning of networks with adversarial particle swarm optimization

Abstract:
Poisoned data poses a significant threat to the reliability and security of deep learning systems. We present Adversarial Particle Swarm Optimization (APSO), a black-box, optimization-based approach for crafting poisoned data. Unlike prior poisoning methods, APSO leverages swarm intelligence to efficiently search for perturbations that degrade a target model's performance without any a priori knowledge of the training data or access to the model's weights and biases. We evaluate APSO on models trained to classify MNIST, CIFAR-10, and AudioMNIST across a variety of architectures and adversarial hardening techniques, and demonstrate APSO's ability to quantify model resilience to adversarial attack. We also investigate the feasibility of black-box transfer attacks, in which poisoning crafted against one model carries over to another. The results highlight a critical concern: poisoned data is transferable and can propagate undetected across diverse models and architectures in real-world AI systems. These findings underscore the need for enhanced defense mechanisms to safeguard against adversarial threats in multi-model environments.
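
For readers unfamiliar with swarm-based search, the sketch below illustrates how a standard particle swarm optimizer could drive a black-box poisoning search of the kind the abstract describes: each particle is a candidate perturbation, and the fitness function is the victim model's accuracy after ingesting the poisoned data, queried without access to weights or training data. This is a generic illustration under assumed names and hyperparameters (`evaluate_poisoned_accuracy`, `epsilon`, `w`, `c1`, `c2`), not the APSO algorithm presented in the talk.

```python
import numpy as np

def pso_poison_search(evaluate_poisoned_accuracy, dim, n_particles=20,
                      n_iters=50, epsilon=0.1, w=0.7, c1=1.5, c2=1.5,
                      rng=None):
    """Search for a bounded perturbation that minimizes victim-model accuracy.

    evaluate_poisoned_accuracy: hypothetical callable mapping a flat
        perturbation vector (values in [-epsilon, epsilon]) to the victim
        model's accuracy after training on the poisoned data. It is treated
        as a black box: only queried, never inspected.
    dim: dimensionality of the perturbation (e.g. 28*28 for MNIST inputs).
    """
    rng = rng or np.random.default_rng(0)

    # Initialize particle positions (candidate perturbations) and velocities.
    pos = rng.uniform(-epsilon, epsilon, size=(n_particles, dim))
    vel = np.zeros_like(pos)

    # Fitness = poisoned-model accuracy; the attacker wants it as low as possible.
    fit = np.array([evaluate_poisoned_accuracy(p) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    g = np.argmin(fit)
    gbest, gbest_fit = pos[g].copy(), fit[g]

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -epsilon, epsilon)  # keep perturbations bounded

        fit = np.array([evaluate_poisoned_accuracy(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        g = np.argmin(pbest_fit)
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    return gbest, gbest_fit
```

In this toy formulation, the only information the search ever receives is the accuracy returned by the black-box evaluation, which mirrors the constraint that the attacker has no access to the model's parameters or training data.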
