George Lan

A. Russell Chandler III Chair and Professor

Contact

 Groseclose 412
  • Guanghui Lan (Google Scholar)

Education

  • Ph.D. Operations Research (2009), Georgia Institute of Technology
  • M.S. Industrial Engineering (2004), University of Louisville
  • M.S. Mechanical Engineering (1999), Shanghai Jiao Tong University
  • B.S. Mechanical Engineering (1996), Xiangtan University

Expertise

  • Optimization
  • Machine Learning/Intelligence

About

Guanghui (George) Lan is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He also serves as Associate Director of Machine Learning and Data Science at the Center for Machine Learning.

Dr. Lan's research interests lie in the theory, algorithms, and applications of stochastic optimization and nonlinear programming. Most of his current research concerns the design of efficient algorithms with strong theoretical guarantees and superior practical performance for solving challenging optimization problems. He is actively pursuing applications of stochastic and nonlinear optimization models and algorithms in machine learning/intelligence.

Dr. Lan received his Ph.D. from Georgia Tech in 2009 and served as a faculty member in the Department of Industrial and Systems Engineering at the University of Florida from 2009 to 2015. His research has been supported by the National Science Foundation (NSF), the Office of Naval Research (ONR), the Army Research Office, USDA, AFOSR, and AHA.

His academic honors include the INFORMS Frederick W. Lanchester Prize (2023), the INFORMS Computing Society Prize (2022), an NSF CAREER Award (2013), first place in the INFORMS JFIG Paper Competition (2012), finalist for the Mathematical Optimization Society Tucker Prize (2012), second place in the INFORMS George Nicholson Prize (2008), and first place in the INFORMS Computing Society Student Paper Competition (2008).

Dr. Lan serves as an associate editor for Computational Optimization and Applications (2014-present), Mathematical Programming (2016-present), SIAM Journal on Optimization (2016-present), and Operations Research (2023-present), and as a co-area editor for Mathematics of Operations Research (2026-present).

Research

My research focuses on building foundational optimization methods for machine learning and intelligent decision-making systems, with an emphasis on first-order and stochastic algorithms. I have also developed zeroth-order and higher-order methods in settings where they offer distinct advantages. Recent interests include optimization-based foundations for reinforcement learning, parameter-free and adaptive algorithms, and risk-averse optimization. Looking ahead, I aim to develop unified theoretical frameworks that connect optimization with modern machine learning, with the goal of enabling scalable, reliable, and trustworthy intelligent systems grounded in rigorous mathematical principles.
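
As a simple point of reference for the first-order stochastic methods mentioned above, consider minimizing f(x) = E_\xi[F(x, \xi)] over a closed convex set X. A generic projected stochastic gradient step (a textbook update included here only for illustration, not a description of any particular algorithm from my papers) reads

    x_{t+1} = \Pi_X( x_t - \gamma_t G(x_t, \xi_t) ),    with    E[ G(x_t, \xi_t) | x_t ] = \nabla f(x_t),

where \Pi_X denotes the Euclidean projection onto X, \gamma_t > 0 is a stepsize, and G(x_t, \xi_t) is an unbiased stochastic gradient computed from the sample \xi_t. Much of the work listed under Representative Publications concerns accelerated, structure-exploiting, or risk-aware refinements of updates of this general form, together with their complexity analysis.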

Teaching

I have extensive teaching experience at the undergraduate, master’s, and Ph.D. levels, guided by a strong commitment to integrating research with teaching and to preparing students for both scholarly and practical challenges. My teaching emphasizes foundational understanding, algorithmic thinking, and hands-on engagement with modern optimization and machine learning problems. At the Ph.D. level, I have taught ISyE 6663 (Nonlinear Optimization) and developed two advanced doctoral courses—ISyE 7683 and ISyE 8803—on Optimization for Machine Learning and Optimization for Reinforcement Learning, respectively. At the master’s level, I have taught ISyE 6740 (Computational Data Analysis / Machine Learning) multiple times; this core course serves our highly regarded master’s program and attracts Ph.D. students from across engineering. At the undergraduate level, I have taught ISyE 4133 (Advanced Optimization) and have served for several years as a senior design advisor, mentoring students as they transition from coursework to independent problem solving.

Awards and Honors

  • INFORMS Frederick W. Lanchester Prize
  • INFORMS Computing Society Prize (with S. Ghadimi and H. Zhang)
  • National Science Foundation CAREER Award
  • INFORMS Junior Faculty Interest Group Paper Competition, First Place
  • Mathematical Optimization Society Tucker Prize, Finalist
  • INFORMS George Nicholson Prize Paper Competition, Second Place
  • INFORMS Computing Society Student Paper Award, First Place

Representative Publications

G. Lan, First-order and Stochastic Optimization Methods for Machine Learning, Springer Nature, 2020, ISBN: 978-3-030-39567-4.

G. Lan, “An Optimal Method for Stochastic Composite Optimization”, Mathematical Programming, v.133, pp. 365-397, 2012.

S. Ghadimi and G. Lan, “Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming”, SIAM Journal on Optimization, v.23(4), pp. 2341-2368, 2013.

G. Lan, “Bundle-level Type Methods Uniformly Optimal for Smooth and Nonsmooth Convex Optimization”, Mathematical Programming, v.149(1), pp. 1-45, 2015.

G. Lan, “Gradient Sliding for Composite Optimization”, Mathematical Programming, v.159(1), pp. 201-235, 2016.

G. Lan and Y. Zhou, “An Optimal Randomized Incremental Gradient Method”, Mathematical Programming, v.171(1-2), pp. 167-215, 2018.

G. Lan, S. Lee and Y. Zhou, “Communication-efficient Algorithms for Decentralized and Stochastic Optimization”, Mathematical Programming, v.180, pp. 237-284, 2020.

G. Lan and Z. Zhou, “Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization”, Mathematical Programming, v.187, pp. 487-532, 2021.

G. Lan, “Complexity of Stochastic Dual Dynamic Programming”, Mathematical Programming, v.191(2), pp. 717-754, 2022.

D. Boob, Q. Deng and G. Lan, “Stochastic First-order Methods for Convex and Nonconvex Functional Constrained Optimization”, Mathematical Programming, v.197(1), pp. 215-279, 2023.

G. Lan, “Policy Mirror Descent for Reinforcement Learning: Linear Convergence, New Sampling Complexity, and Generalized Problem Classes”, Mathematical Programming, v.198(1), pp. 1059-1106, 2023.

Z. Jia, G. Lan and Z. Zhang, “Nearly Optimal Risk Lp Minimization”, arXiv preprint arXiv:2407.15368, December 2024; submitted to Mathematical Programming.