[5] Model Inversion Attacks on Vision-Language Models: Do They Leak What They Learn?
Ngoc-Bao Nguyen, Sy-Tuyen Ho, Koh Jun Hao, Ngai-Man Cheung.
Research Question: To what extent do LVLMs leak sensitive information from their visual training data? How can we design model inversion attacks tailored to LVLMs?
arXiv'25
[paper]
[4] Revisiting Model Inversion Evaluation: From Misleading Standards to Reliable Privacy Assessment.
Sy-Tuyen Ho, Koh Jun Hao, Ngoc-Bao Nguyen, Alexander Binder, Ngai-Man Cheung.
Research Question: Why shouldn’t we rely on the most commonly used framework for computing attack success rates in MI research, and how can we compute them faithfully?
arXiv'25
[paper]
[benchmark+code]
[3] Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights.
Sy-Tuyen Ho*, Tuan Van Vo*, Somayeh Ebrahimkhani*, Ngai-Man Cheung. (* joint first authors)
Research Question: How do ViT architectural attributes affect OoD generalization, and why is the embedding dimension a key factor in optimizing it?
NeurIPS'24 (Main Track)
[paper]
[benchmark+code]
[2] On the Vulnerability of Skip Connections to Model Inversion Attacks.
Koh Jun Hao*, Sy-Tuyen Ho*, Ngoc-Bao Nguyen, Ngai-Man Cheung. (* joint first authors)
Research Question: What is the impact of a common DNN architectural module—skip connections—on model inversion attacks, and how can we leverage this understanding to design MI-resilient architectures?
ECCV'24
[paper]
[code]
[1] Model Inversion Robustness: Can Transfer Learning Help?
Sy-Tuyen Ho, Koh Jun Hao, Keshigeyan Chandrasegaran, Ngoc-Bao Nguyen, Ngai-Man Cheung.
Research Question: Why and how does transfer learning prevent data leakage in model inversion attacks?
CVPR'24
[paper]
[code]