Colloquium, Department of Mathematics and Statistics

Friday, December 3, 2021, 11:15am-12:15pm via Zoom

November 25, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Carlos Lamarche from the University of Kentucky

Date and Time: Friday, December 3, 2021, 11:15am-12:15pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Wild Bootstrap Inference for Penalized Quantile Regression for Longitudinal Data

Abstract: The existing theory of penalized quantile regression for longitudinal data has focused primarily on point estimation. In this work, we investigate statistical inference. We propose a wild residual bootstrap procedure and show that it is asymptotically valid for approximating the distribution of the penalized estimator. The model puts no restrictions on individual effects, and the estimator achieves consistency by letting the shrinkage decay in importance asymptotically. The new method is easy to implement and simulation studies show that it has accurate small sample behavior in comparison with existing procedures. Finally, we illustrate the new approach using U.S. Census data to estimate a model that includes more than eighty thousand parameters.
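
As a rough illustration of the wild residual bootstrap idea, here is a minimal sketch for a plain, unpenalized, cross-sectional quantile regression (the simulated data, the two-point weight distribution, and the use of statsmodels are illustrative assumptions; the penalized longitudinal setting of the talk is not reproduced):

```python
# Minimal sketch of a wild residual bootstrap for (unpenalized, cross-sectional)
# quantile regression; the penalized longitudinal setting in the talk is richer.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, tau = 500, 0.5
X = sm.add_constant(rng.normal(size=(n, 2)))
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)

fit = sm.QuantReg(y, X).fit(q=tau)
beta_hat = fit.params
resid = y - X @ beta_hat

B = 200
boot = np.empty((B, X.shape[1]))
for b in range(B):
    # Wild weights: a simple two-point distribution on {-1, 1} (an assumption;
    # the paper's weight law is tailored to the quantile level tau).
    w = rng.choice([-1.0, 1.0], size=n)
    y_star = X @ beta_hat + w * np.abs(resid)
    boot[b] = sm.QuantReg(y_star, X).fit(q=tau).params

se = boot.std(axis=0)        # bootstrap standard errors for beta_hat
print(beta_hat, se)
```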

Friday, November 19, 2021, 11am-12pm via Zoom

November 13, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Molei Tao from Georgia Institute of Technology

Date and Time: Friday, November 19, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Examples of interactions between dynamics and machine learning

Abstract: This talk will report some of our progress in showing how dynamics can be a useful mathematical tool for machine learning. Three demonstrations will be given, namely, how dynamics help design (and analyze) optimization algorithms, how dynamics help quantitatively understand nontrivial observations in deep learning practice, and how deep learning can in turn help dynamics (or, more broadly, AI for sciences). More precisely, in part 1 (dynamics for algorithms), I will talk about how to add momentum to gradient descent on a class of manifolds known as Lie groups. The treatment will be based on geometric mechanics and dynamics in continuous and discrete time, and it will lead to accelerated optimization. Part 2 (dynamics for understanding deep learning) will be on how large learning rates could deterministically lead to escapes from local minima, which is an alternative mechanism to the commonly known noisy escapes due to stochastic gradients. If time permits, I will also talk about another example, on an implicit regularization effect of large learning rates (which we term "balancing"). Part 3 (AI for sciences) will be on data-driven prediction of mechanical dynamics, for which I will demonstrate one strong benefit of having physics hard-wired into deep learning models (more precisely, how to obtain symplectic predictions, and how that generically enables accurate long-time predictions).
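
As a toy illustration of the first theme, momentum can be viewed as a discretization of a damped second-order ODE; the following Euclidean sketch uses an illustrative quadratic objective, step size, and damping constant, and does not reproduce the talk's Lie-group construction:

```python
# Heavy-ball momentum as a discretization of the second-order ODE
#   x'' + gamma * x' + grad f(x) = 0   (Euclidean sketch only).
import numpy as np

def grad_f(x):                      # toy quadratic objective f(x) = 0.5 x^T A x
    A = np.diag([1.0, 10.0])
    return A @ x

x = np.array([5.0, 5.0])
v = np.zeros_like(x)
h, gamma = 0.05, 3.0                # step size and damping (assumed values)

for _ in range(500):
    v = v - h * (gamma * v + grad_f(x))   # semi-implicit Euler on the velocity
    x = x + h * v                          # then update the position
print(x)                                   # approaches the minimizer 0
```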

Friday, November 12, 2021, 1pm-2pm Hybrid (in-person at Fretwell 315 and via Zoom)

November 09, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Elizabeth Newman from Emory University

Date and Time: Friday, November 12, 2021, 1pm-2pm Hybrid (in-person at Fretwell 315 and via Zoom). Please contact Qingning Zhou to obtain the Zoom link.

Title: How to Train Better: Exploiting the Separability of Deep Neural Networks

Abstract: You would be hard-pressed to find anyone who hasn’t heard the hype about deep neural networks (DNNs). These high-dimensional function approximators, composed of simple layers parameterized by weights, have shown their success in countless applications. What the hype-sters won’t tell you is this: DNNs are challenging to train. Typically, the training problem is posed as a stochastic optimization problem with respect to the DNN weights. With millions of weights, a non-convex and non-smooth objective function, and many hyperparameters to tune, solving the training problem well is no easy task. In this talk, our goal is simple: we want to make DNN training easier. To this end, we will exploit the separability of commonly-used DNN architectures; that is, the weights of the final layer of the DNN are applied linearly. We will leverage this linearity using two different approaches. First, we will approximate the stochastic optimization problem via a sample average approximation (SAA). In this setting, we can eliminate the linear weights through partial optimization, a method affectionately known as Variable Projection (VarPro). Second, in the stochastic approximation (SA) setting, we will consider a powerful iterative sampling approach to update the linear weights, which notably incorporates automatic regularization parameter selection methods. Throughout the talk, we will demonstrate the efficacy of these two approaches to exploit separability using numerical examples.
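
A minimal sketch of the separability idea, assuming a one-hidden-layer network with a linear final layer and a plain least-squares solve for that layer (all names, shapes, and data below are illustrative):

```python
# Sketch of the separability idea behind VarPro: for fixed hidden weights, the
# optimal final linear layer solves a least-squares problem and can be
# eliminated from the outer optimization over the hidden weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 4))                    # inputs
Y = np.sin(X @ rng.normal(size=(4, 1)))          # targets

def features(X, W_hidden):
    return np.tanh(X @ W_hidden)                 # hidden-layer output

def reduced_loss(W_hidden):
    Z = features(X, W_hidden)
    # Partial optimization: best linear weights for these features.
    W_lin, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Z @ W_lin - Y
    return 0.5 * np.mean(resid**2)               # loss in the hidden weights only

W_hidden = rng.normal(size=(4, 16))
print(reduced_loss(W_hidden))                    # an outer solver would minimize this
```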

Thursday, November 11, 2021, 10am-11am via Zoom

November 05, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Quoc Tran-Dinh from the University of North Carolina at Chapel Hill

Date and Time: Thursday, November 11, 2021, 10am-11am via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Randomized Douglas-Rachford Splitting Algorithms for Composite Optimization in Federated Learning

Abstract: In this talk, we present two randomized Douglas-Rachford splitting algorithms to solve a class of composite nonconvex finite-sum optimization problems arising from federated learning. Our algorithms rely on a combination of three main techniques: a Douglas-Rachford splitting scheme, a randomized block-coordinate technique, and an asynchronous strategy. We show that our algorithms achieve the best-known communication complexity bounds under standard assumptions in the nonconvex setting, while allowing inexact updates of local models with only a subset of users in each round and handling nonsmooth convex regularizers. Our second algorithm can be implemented in an asynchronous mode using a general probabilistic model to capture different computational architectures. We illustrate our algorithms with many numerical examples and show that the new algorithms have promising performance compared to common existing methods. This talk is based on collaboration with Nhan Pham (UNC), Lam M. Nguyen (IBM), and Dzung Phan (IBM). Our paper is available at: https://arxiv.org/abs/2103.03452.
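
For orientation, a sketch of the classical (deterministic, centralized) Douglas-Rachford iteration for a composite problem min_x 0.5*||Ax - b||^2 + lam*||x||_1; the randomized block-coordinate and asynchronous federated variants in the talk build on this basic scheme, and the problem sizes and parameters below are illustrative:

```python
# Classical Douglas-Rachford splitting: x = prox_f(z), y = prox_g(2x - z),
# z <- z + y - x, for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
lam, gamma = 0.1, 1.0

def prox_f(v):    # prox of 0.5*||Ax-b||^2: solve (I + gamma*A^T A) x = v + gamma*A^T b
    return np.linalg.solve(np.eye(100) + gamma * A.T @ A, v + gamma * A.T @ b)

def prox_g(v):    # prox of lam*||x||_1: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

z = np.zeros(100)
for _ in range(200):
    x = prox_f(z)
    y = prox_g(2 * x - z)
    z = z + y - x
print(np.linalg.norm(x - x_true))   # x approximately recovers the sparse signal
```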

Thursday, November 4, 2021, 10am-11am via Zoom

October 30, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Viet Ha Hoang from Nanyang Technological University

Date and Time: Thursday, November 4, 2021, 10am-11am via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Multilevel Markov Chain Monte Carlo Methods for Bayesian Inversion of Partial Differential Equations

Abstract: The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the solution of the forward PDE, and the sampling of the probability space under the posterior distribution is essential for the design of efficient computational methods. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We consider both the case where the PDE is uniformly elliptic with respect to all the realizations, and the case where uniform ellipticity does not hold, i.e. the coefficient can get arbitrarily close to 0 and arbitrarily large, as in the log-normal model. We provide a complexity analysis of Markov chain Monte Carlo (MCMC) methods for numerical evaluation of expectations with respect to the posterior measure, in particular bounds on the overall work required to achieve a prescribed error level. We first bound the computational complexity of 'plain' MCMC, where a large number of realizations of the forward equation are solved with equally high accuracy. The work-versus-accuracy bounds show that the complexity of this approach can be quite prohibitive. We then present a novel multilevel Markov chain Monte Carlo strategy which utilizes sampling from a multilevel discretization of the posterior and the forward PDE. The strategy achieves an optimal complexity level that is equivalent to that of performing only one step of the plain MCMC. The optimal accuracy and complexity are rigorously proven. Numerical results confirm our analysis. This is joint work with Jia Hao Quek (NTU, Singapore), Christoph Schwab (ETH, Switzerland) and Andrew Stuart (Caltech, US).
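
Schematically, the multilevel strategy rests on a telescoping decomposition of the posterior expectation over discretization levels, written here in a generic form; the construction and analysis of the level-wise estimators is where the method's substance lies:

```latex
% Telescoping identity behind the multilevel strategy: Q_l is the quantity of
% interest computed with the level-l forward solver and pi_l the corresponding
% posterior; each difference term is estimated by its own MCMC run whose
% accuracy and cost are balanced across levels.
\mathbb{E}^{\pi_L}\!\left[Q_L\right]
  = \mathbb{E}^{\pi_0}\!\left[Q_0\right]
  + \sum_{\ell=1}^{L}
    \left( \mathbb{E}^{\pi_\ell}\!\left[Q_\ell\right]
         - \mathbb{E}^{\pi_{\ell-1}}\!\left[Q_{\ell-1}\right] \right).
```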

Friday, October 29, 2021, 11am-12pm via Zoom

October 23, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Andy Ai Ni from the Ohio State University

Date and Time: Friday, October 29, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Contrast Weighted Learning for Robust Optimal Treatment Regimen Estimation

Abstract: Personalized medicine aims to tailor medical decisions to patient-specific characteristics. Advances in data capturing techniques such as electronic health records have dramatically increased the availability of comprehensive patient profiles, promoting the rapid development of optimal treatment regimen (OTR) estimation methods. An archetypal OTR estimation approach is outcome weighted learning (OWL), where the OTR is determined under a weighted classification framework with clinical outcomes as the weights. Although OWL has been extensively studied and extended, existing methods are susceptible to irregularity of the outcome distribution such as outliers and heavy tails. Methods that involve modeling of the outcome are also sensitive to model misspecification. We propose a contrast weighted learning (CWL) framework that exploits the flexibility and robustness of contrast functions to enable robust OTR estimation for a wide range of clinical outcomes. The novel value function in CWL depends only on the pairwise contrast of clinical outcomes between patients, irrespective of their distributional features and supports. The Fisher consistency and convergence rate of the estimated decision rule via CWL are established. We illustrate the superiority of the proposed method under finite samples using comprehensive simulation studies with ill-distributed continuous outcomes and ordinal outcomes. We apply the CWL method to two real datasets from clinical trials on idiopathic pulmonary fibrosis and COVID-19 to demonstrate its real-world application.
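
For context, a minimal sketch of the OWL baseline described above, i.e. weighted classification with outcome-based weights; the simulated data, linear SVM, and known propensity of 0.5 are illustrative assumptions, and the pairwise-contrast weights of CWL are not shown:

```python
# Outcome weighted learning (OWL) baseline: estimate a treatment rule by
# weighted classification of the observed treatment, with weights proportional
# to outcome / propensity. CWL replaces raw outcomes with pairwise contrasts.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 3))                    # patient covariates
A = rng.choice([-1, 1], size=n)                # randomized treatment, P(A=1) = 0.5
Y = 1.0 + A * X[:, 0] + rng.normal(size=n)     # larger Y is better
Y = Y - Y.min() + 0.1                          # shift to positive weights

propensity = 0.5
clf = SVC(kernel="linear")
clf.fit(X, A, sample_weight=Y / propensity)    # weighted classification
rule = clf.predict(X)                          # estimated treatment rule
print(np.mean(rule == np.sign(X[:, 0])))       # rule should largely track sign of X[:, 0]
```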

Friday, October 22, 2021, 11am-12pm via Zoom

October 15, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Fei Lu from Johns Hopkins University

Date and Time: Friday, October 22, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: A statistical learning perspective of model reduction

Abstract: Stochastic closure models aim to make timely predictions with quantified uncertainty. We discuss a statistical learning framework that achieves this goal by accounting for the effects of the unresolved scales. A fundamental idea is the approximation of the discrete-time flow map for the dynamics of the resolved variables. The flow map is an infinite-dimensional functional of the history of the resolved scales, as suggested by the Mori-Zwanzig formalism, so its inference faces the curse of dimensionality. We investigate a semi-parametric approach that derives parametric models from numerical approximations of the full model. We show that this approach leads to effective reduced models for deterministic and stochastic PDEs, such as the Kuramoto-Sivashinsky equation and the viscous stochastic Burgers equation. In particular, we highlight the shift from classical numerical methods (such as the nonlinear Galerkin method) to statistical learning, and discuss space-time reduction.
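
As a crude illustration of learning a discrete-time flow map from the history of a resolved variable, a linear time-delay regression on a toy trajectory; the ansatz, memory depth, and toy dynamics are illustrative, and the talk's semi-parametric models derived from numerical schemes are more structured:

```python
# Fit a linear time-delay (memory) model for the one-step flow map of a
# resolved variable by least squares on a toy trajectory.
import numpy as np

rng = np.random.default_rng(4)
T, p = 2000, 3                          # trajectory length and memory depth
x = np.zeros(T)
for t in range(T - 1):                  # toy "full model" trajectory
    x[t + 1] = 0.9 * x[t] - 0.3 * np.sin(x[t]) + 0.05 * rng.normal()

# Regress x_{t+1} on the last p resolved states (the memory terms).
H = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
target = x[p:]
coef, *_ = np.linalg.lstsq(H, target, rcond=None)
print(coef)                             # fitted one-step flow-map coefficients
```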

Friday, October 15, 2021, 11:15am-12:15pm via Zoom

October 09, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Xuerong Wen from Missouri University of Science and Technology

Date and Time: Friday, October 15, 2021, 11:15am-12:15pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Variable dependent partial dimension reduction

Abstract: Sufficient dimension reduction reduces the dimension of a regression model without loss of information by replacing the original predictor with lower-dimensional linear combinations. Partial (sufficient) dimension reduction arises when the predictors naturally fall into two sets, X and W, and pursues a dimension reduction of X only. Although partial dimension reduction is a very general problem, only a few research results are available when W is continuous. To the best of our knowledge, existing methods generally perform poorly when X and W are related; furthermore, none can deal with the situation where the reduced lower-dimensional subspace of X varies with W. To address this issue, we propose a novel variable dependent partial dimension reduction framework and adapt classical sufficient dimension reduction methods to this general paradigm. The asymptotic consistency of our method is investigated. Extensive numerical studies and real data analysis show that our Variable Dependent Partial Dimension Reduction method has superior performance compared to existing methods.
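
For reference, a minimal sketch of sliced inverse regression (SIR), one of the classical sufficient dimension reduction methods such a framework would adapt; the simulated model and the number of slices are illustrative:

```python
# Sliced inverse regression (SIR): whiten X, average the whitened predictors
# within slices of the response, and take leading eigenvectors of the
# covariance of those slice means.
import numpy as np

rng = np.random.default_rng(5)
n, p, H = 1000, 6, 10                         # samples, predictors, slices
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])
y = (X @ beta) ** 3 + 0.5 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
Sigma = Xc.T @ Xc / n
L = np.linalg.cholesky(Sigma)
Z = Xc @ np.linalg.inv(L).T                   # whitened predictors

M = np.zeros((p, p))
for s in np.array_split(np.argsort(y), H):    # slice on the response
    m = Z[s].mean(axis=0)
    M += (len(s) / n) * np.outer(m, m)        # covariance of slice means

vals, vecs = np.linalg.eigh(M)
eta = vecs[:, np.argmax(vals)]                # leading direction, whitened scale
beta_hat = np.linalg.solve(L.T, eta)          # map back to the original scale
print(beta_hat / np.linalg.norm(beta_hat))    # proportional to beta, up to sign
```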

Thursday, October 7, 2021, 11am-12pm via Zoom

October 01, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Toan Nguyen from Penn State University

Date and Time: Thursday, October 7, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Landau damping in plasma physics

Abstract: After a quick overview on the classical notion of Landau damping discovered by Landau in 1946, the colloquium will highlight recent mathematical advances on understanding the damping and the large time behavior of a plasma modeled by Vlasov-Poisson and Vlasov-Poisson-Landau systems, including (1) an elementary proof of nonlinear Landau damping for analytic and Gevrey data (joint work with E. Grenier from ENS Lyon and I. Rodnianski from Princeton) and (2) nonlinear Landau damping in the weakly collisional regime for a threshold of initial data with Sobolev regularity (joint work with S. Chaturvedi and J. Luk from Stanford).
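
For reference, one common normalization of the Vlasov-Poisson system referenced above; sign conventions vary across references with the species and charge normalization:

```latex
% Vlasov-Poisson system for a distribution f(t,x,v) with a neutralizing
% background, in one common nondimensionalized form.
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0, \qquad
E = -\nabla_x \phi, \qquad
-\Delta_x \phi = \rho - 1, \qquad
\rho(t,x) = \int f(t,x,v)\, dv .
```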

Friday, October 1, 2021, 11am-12pm via Zoom

September 24, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Haiying Wang from the University of Connecticut

Date and Time: Friday, October 1, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Nonuniform Negative Sampling and Log Odds Correction with Rare Events Data

Abstract: We investigate the issue of parameter estimation with nonuniform negative sampling for imbalanced data. We first prove that, with imbalanced data, the available information about the unknown parameters is tied only to the relatively small number of positive instances, which justifies the use of negative sampling. However, if the negative instances are subsampled to the same level as the positive cases, there is information loss. To retain more information, we derive the asymptotic distribution of a general inverse probability weighted (IPW) estimator and obtain the optimal sampling probability that minimizes its variance. To further improve the estimation efficiency over the IPW method, we propose a likelihood-based estimator by correcting the log odds for the sampled data and prove that the improved estimator has the smallest asymptotic variance among a large class of estimators. It is also more robust to pilot misspecification. We validate our approach on simulated data as well as a real click-through rate dataset with more than 0.3 trillion instances, collected over a period of a month. Both theoretical and empirical results demonstrate the effectiveness of our method.
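
A minimal sketch of the IPW baseline described above, assuming a rare-event logistic model, a toy nonuniform sampling rate for the negatives, and an effectively unpenalized scikit-learn fit; the likelihood-based log-odds correction that improves on this baseline is not shown:

```python
# IPW with negative sampling: keep all positives, keep each negative with
# probability pi_i, and weight retained observations by 1/pi_i in the fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 200_000
X = rng.normal(size=(n, 2))
logits = -6.0 + X @ np.array([1.0, 1.5])            # rare-event regime
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

pi = np.where(y == 1, 1.0, 0.02)                    # nonuniform sampling rates (toy choice)
keep = rng.random(n) < pi
w = 1.0 / pi[keep]                                   # inverse probability weights

model = LogisticRegression(C=1e6, max_iter=1000)     # effectively unpenalized
model.fit(X[keep], y[keep], sample_weight=w)
print(model.intercept_, model.coef_)                 # roughly recovers (-6, [1, 1.5])
```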
