Guojun Zhang
I am a senior researcher and the tech lead for federated learning at Huawei Noah's Ark Lab in Montréal. I obtained my Ph.D. [thesis] from the David R. Cheriton School of Computer Science at the University of Waterloo, where I was also a student affiliate of the Vector Institute. My Ph.D. supervisors were Prof. Pascal Poupart and Prof. Yaoliang Yu.
My current research focus is on federated learning and representation learning.
I obtained my master's degree from the Perimeter Institute, where I was fortunate to work with Prof. Freddy Cachazo on theoretical physics.
Research Interests: foundation models, transfer learning, federated learning, minimax optimization
I always aim for elegant formulations that help us understand real problems. As A. Einstein once said, "Everything should be made as simple as possible, but no simpler." Simplicity is the greatest beauty.
Hiring: If you are interested in an internship or a full-time position on federated learning and/or transfer learning at Huawei Noah's Ark Lab Montréal, please send your resume to my Huawei email.
Contact: "firstname"."lastname"@uwaterloo.ca; "firstname"."lastname"@huawei.com
News
- 2023.08 I will serve as an Area Chair for AISTATS 2024.
- 2023.08 Our paper ''Private GANs, Revisited'' has been accepted to TPDP 2023 and TMLR.
- 2023.07 Our paper ''Understanding Hessian Alignment for Domain Generalization'' has been accepted to ICCV 2023.
- 2022.12 Our paper ''Proportional Fairness in Federated Learning'' has been accepted at TMLR.
- 2022.11 Two papers to present at NeurIPS 2022: our JMLR paper at the Journal-to-Conference Track and our paper ''Private GANs, Revisited'' at the SyntheticData4ML Workshop.
- 2022.11 Reviewing for ICLR 2023.
- 2022.10 Reviewing for SyntheticData4ML Workshop in NeurIPS 2022.
- 2022.08 I will serve as an Area Chair for AISTATS 2023.
- 2022.07 I will serve as a Session Chair in ICML 2022.
- 2022.05 I will give an invited talk at the 2022 Optimization Days, organized by HEC Montréal.
- 2022.04 Our paper ''Federated Learning Meets Multi-objective Optimization'' is accepted at IEEE Transactions on Network Science and Engineering!
- 2022.03 I will serve as a Program Committee member for the FL-IJCAI 2022 workshop.
- 2022.01 Our paper ''Domain Adversarial Training: A Game Perspective'' has been accepted at ICLR 2022.
- 2022.01 Our paper ''Optimality and Stability in Non-convex Smooth Games'' has been accepted to Journal of Machine Learning Research.
- 2021.11 Our paper ''f-Mutual Information Contrastive Learning'' has been accepted as an oral to NeurIPS 2021 workshop on self-supervised learning!
- 2021.09 Our paper ''Quantifying and Improving Transferability in Domain Generalization'' is accepted at NeurIPS 2021! Thanks to all my collaborators!
- 2021.09 Excited to start my new job as a senior researcher at Huawei Montreal!
- 2021.07 Our paper ''Newton-type Methods for Minimax Optimization'' is accepted at the ICML 2021 workshop ''Beyond first-order methods in ML systems''.
- 2021.07 Completed my defence! Thanks to all committee members!
- 2021.05 Our paper ''f-Domain Adversarial Learning: Theory and Algorithms'' is accepted at ICML 2021.
- 2021.05 Happy to serve as a reviewer for CoRL 2021 and NeurIPS 2021.
- 2020.08 Attending CIFAR Deep Learning + Reinforcement Learning Summer School
- 2020.05 I'm excited to join Prof. Sanja Fidler's group as a summer intern at NVIDIA, Toronto.
- 2020.04 Attending ICLR 2020 and presenting my work on bilinear zero-sum games
Education
- 2017.09-2021.08 Ph.D. in Computer Science, University of Waterloo, Waterloo, Canada. Supervisors: Pascal Poupart and Yaoliang Yu
- 2015.08-2016.06 Master of Physics, University of Waterloo/Perimeter Institute, Waterloo, Canada. Supervisor: Freddy Cachazo
- 2011.09-2015.07 Bachelor of Physics, University of Science and Technology of China, Hefei, China. GPA: 4.13/4.3
Awards
Selected Publications
Transfer Learning (Domain Generalization/Adaptation):
- 2023.07 (DG) Sobhan Hemati*, Guojun Zhang*, Amir Estiri, Xi Chen. Understanding Hessian Alignment for Domain Generalization. ICCV 2023.
[arxiv][code]
- 2022.11 (DA) Ehsan Imani, Guojun Zhang, Jun Luo, Pascal Poupart, Yangchen Pan. Label Alignment Regularization for Distribution Shift. arXiv preprint arXiv:2211.14960.
[arxiv]
- 2022.01 (DA) David Acuna, Marc Law, Guojun Zhang, Sanja Fidler. Domain Adversarial Training: A Game Perspective. ICLR 2022.
[ICLR][arxiv]
- 2021.06 (DG) Guojun Zhang, Han Zhao, Yaoliang Yu and Pascal Poupart. Quantifying and Improving Transferability in Domain Generalization. NeurIPS 2021.
[NeurIPS][arxiv][code][video]
- 2020.10 (DA) David Acuna, Guojun Zhang, Marc Law and Sanja Fidler. f-Domain Adversarial Learning: Theory and Algorithms. ICML 2021 (spotlight).
[ICML][arxiv][code]
Federated learning and privacy:
- 2023.08 Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen. Normalization Is All You Need: Understanding Layer-Normalized Federated Learning under Extreme Label Shift.
[arxiv]
- 2022.11 Alex Bie, Gautam Kamath, Guojun Zhang. Private GANs, Revisited. TMLR 2023; NeurIPS 2022 SyntheticData4ML workshop; TPDP 2023.
[NeurIPS workshop][arxiv]
- 2022.06 Artur Back de Luca*, Guojun Zhang*, Xi Chen, Yaoliang Yu. Mitigating Data Heterogeneity in Federated Learning with Data Augmentation.
[arxiv][code]
- 2022.02 Guojun Zhang, Saber Malekmohammadi, Xi Chen and Yaoliang Yu. Proportional Fairness in Federated Learning. TMLR 2023.
[TMLR][arxiv][slides][code]
- 2020.06 Zeou Hu, Kiarash Shaloudegi, Guojun Zhang and Yaoliang Yu. Federated Learning Meets Multi-objective Optimization. IEEE Transactions on Network Science and Engineering 2022.
[IEEE TNSE][arxiv][code]
Contrastive Learning:
- 2021.12 Guojun Zhang*, Yiwei Lu*, Sun Sun, Hongyu Guo and Yaoliang Yu. f-Mutual Information Contrastive Learning. NeurIPS 2021 workshop on self-supervised learning (contributed talk).
[NeurIPS workshop][poster]
General:
- 2023.03 Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, Wulong Liu, Boxing Chen. Mathematical Challenges in Deep Learning.
[arxiv]
Minimax Optimization and Smooth Games:
- 2022.01 Guojun Zhang, Pascal Poupart and Yaoliang Yu. Optimality and Stability in Non-Convex Smooth Games. JMLR 2022.
[JMLR][arxiv][bib]
- 2020.06 Guojun Zhang, Kaiwen Wu, Pascal Poupart and Yaoliang Yu. Newton-type Methods for Minimax Optimization. ICML 2021 workshop ''Beyond first-order methods in ML systems''.
[arxiv][workshop][code]
- 2019.08 Guojun Zhang and Yaoliang Yu. Convergence of Gradient Methods on Bilinear Zero-Sum Games. ICLR 2020; also presented at the NeurIPS 2019 SGO&ML workshop.
[ICLR][arxiv][workshop][code]
Theoretical Physics:
- 2017.05 Sebastian Mizera and Guojun Zhang (α-β order). String-theoretical Deformation of the Parke-Taylor Factor. Phys. Rev. D 96 (2017) no.6, 066016.
[PRD][arxiv]
- 2016.12 Humberto Gomez, Sebastian Mizera and Guojun Zhang (α-β order). CHY Loop Integrands from Holomorphic Forms. JHEP 1703 (2017) 092.
[JHEP][arxiv]
- 2016.09 Freddy Cachazo, Sebastian Mizera and Guojun Zhang (α-β order). Scattering Equations: Real Solutions and Particles on a Line. JHEP 1703 (2017) 151.
[JHEP][arxiv]
- 2015.05 Xin Wang, Guojun Zhang and Min-xin Huang. New Exact Quantization Condition for Toric Calabi-Yau Geometries. Phys. Rev. Lett. 115, 121601 (2015).
[PRL][arxiv]
Academic Services
Session Chair: ICML 2022
Area Chair: AISTATS 2023-2024
PC Member: IJCAI
Conference Reviewer: NeurIPS, ICML, ICLR, AISTATS, CoRL, AAAI
Program Committee: FL-IJCAI 2022
Journal Reviewer: Journal of Scientific Computing, TMLR, ACM Transactions on Intelligent Systems and Technology
Workshop Reviewer: SyntheticData4ML Workshop, NeurIPS 2022.
Misc
I like writing poems, reading, working out and traveling.
I enjoy exploring different cultures. For example, I like visiting ancient churches all over the world. Japanese culture is also one of my favorites. Life is like sakura: short, yet beautiful.
Some good references:
Google Scholar · LinkedIn · Twitter · Facebook · GitHub
Guojun Zhang
7101 Avenue du Parc, Montréal, QC, Canada