On Highly Parameterized Controls and Fusion of Generative Diffusions

The Lehigh ISE Department is pleased to announce that Professor Jose Blanchet, the William M. Keck Faculty Scholar and Professor of Management Science and Engineering at Stanford University, will give a Spencer C. Schantz Lecture, "On Highly Parameterized Controls and Fusion of Generative Diffusions," on Wednesday, August 14, 2024, from 9:00 a.m. to 10:00 a.m. in Rauch Business Center RB 184 (Perella Auditorium), 621 Taylor Street, Bethlehem, PA 18015.

Abstract:

We discuss two recent projects that involve first-order methods in connection with two very active research areas in operations research and artificial intelligence.
 
The first involves the design of efficient gradient estimators for dynamic optimization problems based on highly parameterized controls. The motivation is the application of stochastic gradient descent to the numerical solution of stochastic control problems using neural networks. Our estimator achieves at least a linear speed-up in the dimension of the parameter space compared to infinitesimal perturbation analysis, and it can be applied in situations where the likelihood ratio estimator may not be applicable (e.g., if the diffusion matrix depends on the parameter of interest). Experiments on high-dimensional control problems show very substantial gains.
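For readers unfamiliar with the baseline mentioned above, the sketch below illustrates classical infinitesimal perturbation analysis (IPA) on a toy problem; it is background only, not the estimator from the talk. The scalar dynamics, the quadratic objective, the linear control, and all constants are illustrative assumptions.

```python
# Illustrative sketch only (not the speaker's estimator): classical IPA for
# an Euler-discretized scalar diffusion with a linear control u_theta(x) =
# theta * x. Objective: J(theta) = E[X_T^2]; we differentiate each sample
# path with respect to theta and average.
import numpy as np

def ipa_gradient(theta, sigma=0.2, T=1.0, n_steps=100, n_paths=10_000, seed=0):
    """Pathwise (IPA) estimate of dJ/dtheta for J(theta) = E[X_T^2]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.ones(n_paths)      # X_0 = 1 on every sample path
    dx = np.zeros(n_paths)    # pathwise derivative dX_t / dtheta
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        drift = theta * x                # dX = theta * X dt + sigma dW
        ddrift = x + theta * dx          # chain rule: d(theta * X)/dtheta
        x = x + drift * dt + sigma * dw  # noise term is theta-free here,
        dx = dx + ddrift * dt            # so it drops out of dX/dtheta
    # dJ/dtheta = E[d(X_T^2)/dtheta] = E[2 * X_T * dX_T/dtheta]
    return np.mean(2.0 * x * dx)

print(ipa_gradient(theta=-0.5))
```

Note that sigma is independent of theta in this toy example; when the diffusion coefficient itself depends on the parameter, the likelihood ratio method breaks down, which is precisely the regime the abstract highlights.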
 
The second result involves the development of an efficient approach for merging diffusion-based generative models. We assume the existence of several auxiliary models that have been trained with an abundance of data. These models are assumed to contain features that, combined, can be useful for enhancing the training of a generative diffusion model for a target distribution (for which data are limited). We merge the models using a Kullback-Leibler (KL) barycenter given a set of weights representing the importance of the auxiliaries. In turn, we optimize the weights so that the fused model better fits the target. While the resulting double optimization problem (computing the KL barycenter and optimizing over the weights) is challenging in general, we show that diffusion-based generative modeling significantly reduces its complexity, making the approach practical. This approach also provides a mechanistic interpretation of popular fine-tuning approaches used in the literature.
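As background only (the abstract does not spell out the exact formulation, so the variant below is an assumption based on the standard definition), the KL barycenter of auxiliary densities p_1, ..., p_m under weights \(\lambda_i \ge 0\) with \(\sum_i \lambda_i = 1\) is

\[
\bar{p}_{\lambda} \;=\; \arg\min_{q} \sum_{i=1}^{m} \lambda_i \, \mathrm{KL}\!\left(q \,\|\, p_i\right),
\qquad \text{with closed form} \qquad
\bar{p}_{\lambda}(x) \;\propto\; \prod_{i=1}^{m} p_i(x)^{\lambda_i}.
\]

Under this definition the barycenter's score is the weighted average of the auxiliary scores, \(\nabla_x \log \bar{p}_{\lambda}(x) = \sum_{i} \lambda_i \nabla_x \log p_i(x)\), since the normalizing constant does not depend on x; this is one concrete sense in which score-based (diffusion) models can make the fusion step tractable.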
 
The results are based on two papers: the first (on gradient estimators) is joint work with Peter Glynn and Shengbo Wang, and the second (on fusion) is joint work with Hao Liu, Nian Si, and Tony Ye.
 
Bio:
Jose Blanchet is a Professor of Management Science and Engineering (MS&E) at Stanford University. Before joining MS&E, he was a professor at Columbia University in the Departments of Industrial Engineering and Operations Research, and Statistics (2008-2017). Prior to that, he was a professor in the Statistics Department at Harvard University (2004-2008). In 2010, he received the Presidential Early Career Award for Scientists and Engineers. Jose is a co-winner of the 2010 Erlang Prize, awarded every two years by the INFORMS Applied Probability Society. Several of his papers have been recognized by the biennial Best Publication Award given by the INFORMS Applied Probability Society (2007, 2023). His work has also received the Outstanding Simulation Publication Award from the INFORMS Simulation Society (2021) and other best publication awards from the Operations Management (2019) and Revenue Management (2021) societies of INFORMS. Previously, he worked as an analyst at Protego Financial Advisors, a leading investment bank in Mexico. His research interests include Applied Probability, Stochastic Optimization, and Monte Carlo methods. He is the Area Editor of Stochastic Models in Mathematics of Operations Research and has served on the editorial boards of Advances in Applied Probability, Bernoulli, Extremes, Insurance: Mathematics and Economics, Journal of Applied Probability, Queueing Systems: Theory and Applications, and Stochastic Systems, among others.