Hi! I am Wenxiao Wang (汪文潇), a Computer Science Ph.D. student at the University of Maryland, working with Prof. Soheil Feizi. I received my B.S. degree in Computer Science from the Yao Class at Tsinghua University in 2020.

Previously, I was a research intern at Sony AI (summer 2023), working with Dr. Weiming Zhuang and Dr. Lingjuan Lyu on the Privacy-Preserving Machine Learning (PPML) team; a research intern at Bytedance (summer 2022), working with Dr. Linjie Yang, Dr. Heng Wang and Dr. Yu Tian; a research assistant at IIIS, Tsinghua University (2020-2021), working with Prof. Hang Zhao in his MARS Lab; a visiting student researcher at UC Berkeley (2019), working with Dr. Xinyun Chen, Prof. Ruoxi Jia and Prof. Dawn Song; and an intern at Bytedance AI Lab (2018), working with Dr. Yi He and Prof. Lei Li.

Preprints

On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Wenxiao Wang and Soheil Feizi
[arxiv]

Can AI-Generated Text be Reliably Detected?
Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang and Soheil Feizi
Media Coverage: [Washington Post] [Wired] [New Scientist] [The Register] [TechSpot] [UMD Science]
[arxiv]

Publications

Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks
Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang and Soheil Feizi
Media Coverage: [Wired] [MIT Tech Review] [Bloomberg News] [The Register]
International Conference on Learning Representations (ICLR), 2024.
[paper]

DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness
Shoumik Saha, Wenxiao Wang, Yigitcan Kaya, Soheil Feizi and Tudor Dumitras
International Conference on Learning Representations (ICLR), 2024.
[paper]

Temporal Robustness against Data Poisoning
Wenxiao Wang and Soheil Feizi
Conference on Neural Information Processing Systems (NeurIPS), 2023.
[paper]

Spuriosity Rankings: Sorting Data for Spurious Correlation Robustness
Mazda Moayeri, Wenxiao Wang, Sahil Singla and Soheil Feizi
Conference on Neural Information Processing Systems (NeurIPS), 2023. [spotlight]
[paper]

Lethal Dose Conjecture on Data Poisoning
Wenxiao Wang, Alexander Levine and Soheil Feizi
Conference on Neural Information Processing Systems (NeurIPS), 2022.
[paper] [code]

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
Wenxiao Wang, Alexander Levine and Soheil Feizi
International Conference on Machine Learning (ICML), 2022.
[paper] [code]

On Feature Decorrelation in Self-Supervised Learning
Tianyu Hua*, Wenxiao Wang*, Zihui Xue, Sucheng Ren, Yue Wang, Hang Zhao (*equal contribution)
International Conference on Computer Vision (ICCV), 2021. [oral]
[paper] [code]

DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing
Wenxiao Wang, Tianhao Wang, Lun Wang, Nanqing Luo, Pan Zhou, Dawn Song, Ruoxi Jia
Privacy Enhancing Technologies Symposium (PETS), 2021.
[paper] [code]

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen*, Wenxiao Wang*, Yiming Ding, Chris Bender, Ruoxi Jia, Bo Li, Dawn Song (*equal contribution)
ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2021.
[paper] [code]

The Secret Revealer: Generative Model Inversion Attacks Against Deep Neural Networks
Yuheng Zhang*, Ruoxi Jia*, Hengzhi Pei, Wenxiao Wang, Bo Li, Dawn Song (*equal contribution)
Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [oral]
[paper] [code]

Leveraging Unlabeled Data for Watermark Removal of Deep Neural Networks
Xinyun Chen*, Wenxiao Wang*, Yiming Ding, Chris Bender, Ruoxi Jia, Bo Li, Dawn Song (*equal contribution)
ICML Workshop on Security and Privacy of Machine Learning, 2019.
[paper]

Talks

  • Temporal Robustness against Data Poisoning, AI TIME Youth PhD Talk, November 2023.
  • Lethal Dose Conjecture: From Few-shot Learning to Potentially Nearly Optimal Defenses against Data Poisoning, TMLR Group, Hong Kong Baptist University, December 2022.
  • Lethal Dose Conjecture on Data Poisoning, AI TIME Youth PhD Talk, November 2022.
  • Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation, AI TIME Youth PhD Talk, August 2022.

Services

Program Committee Member / Reviewer for:

  • TPAMI
  • NeurIPS 2022, 2023
  • ICML 2022, 2023 (Outstanding Reviewer in 2022)
  • ICLR 2023
  • ICCV 2023
  • Workshop on Adversarial Robustness in the Real World (ECCV 2022, ICCV 2021)
  • Workshop on Socially Responsible Machine Learning (ICML 2021)
  • Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (CVPR 2021)
  • Workshop on Security and Safety in Machine Learning Systems (ICLR 2021)