Colloquium, Department of Mathematics and Statistics
Colloquium Lectures

Spring 2022

Friday, February 11, 2022, 9:00-10:00 via Zoom

February 08, 2022 by William Brian
Categories: Spring 2022

Speaker: Dr. Luo Ye from Xiamen University

Date and Time: Friday, February 11, 2022, 9:00-10:00 via Zoom. Please contact Will Brian to obtain the Zoom link.

Title: Tropical convexity analysis and some applications

Abstract: The tropical semiring is an idempotent semiring in which the usual operations of addition and multiplication are replaced by minimum/maximum and addition, respectively. Tropical geometry is a theory of geometry over the tropical semiring; it has rich combinatorial features and can be described as a degenerate version of algebraic geometry over the field of complex numbers under Maslov dequantization, or over a non-archimedean field under the valuation map. The features of “linear combinations” in tropical geometry are captured by the notion of tropical convexity. In this talk, I will introduce a general theory of tropical convexity analysis based on so-called “B-pseudonorms” on tropical projective spaces, and present some subsequent results, e.g., a tropical version of Mazur’s Theorem on closed tropical convex hulls and a fixed point theorem for tropical projections. Two applications will also be presented: the first establishes a connection between tropical projections and reduced divisors on (metric) graphs, and the second constructs min-max-plus neural networks, a new type of artificial neural network.
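As a concrete, hedged illustration (my own toy, not code from the talk): in the min-plus convention, tropical “addition” is the minimum and tropical “multiplication” is ordinary addition. The sketch below shows these operations and a tropical matrix product, whose iterates compute shortest paths — one of the classic combinatorial facets of tropical geometry.

```python
INF = float("inf")  # the tropical additive identity in the min-plus semiring

def trop_mul(a, b):
    """Tropical 'multiplication' is ordinary addition."""
    return a + b

def trop_matmul(A, B):
    """Min-plus matrix product: (A . B)[i][j] = min_k (A[i][k] + B[k][j]).
    Iterating this on a graph's edge-weight matrix computes shortest paths."""
    return [[min(trop_mul(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Edge-weight matrix of a small directed graph (INF = no edge).
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
W2 = trop_matmul(W, W)  # tropical square: shortest paths using at most 2 edges
print(W2[0][2])         # 4 = 3 + 1, via the route 0 -> 1 -> 2
```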






Friday, February 4, 2022, 11:00-12:00 via Zoom

February 01, 2022 by William Brian
Categories: Spring 2022

Speaker: Dr. Yuexiao Dong from Temple University

Date and Time: Friday, February 4, 2022, 11:00-12:00 via Zoom. Please contact Will Brian to obtain the Zoom link.

Title: Testing the linear mean and constant variance conditions in sufficient dimension reduction

Abstract: Sufficient dimension reduction (SDR) methods characterize the relationship between the response and the predictors through a few linear combinations of the predictors. Sliced inverse regression and sliced average variance estimation are among the most popular SDR methods, as they do not involve multi-dimensional smoothing and are easy to implement. However, these inverse regression-based methods require the linear conditional mean (LCM) and/or the constant conditional variance (CCV) assumptions. We propose novel tests to check the validity of the LCM and CCV conditions through the martingale difference divergence. Extensive simulation studies and a real data application are performed to demonstrate the effectiveness of our proposed tests.
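As a hedged aside for readers new to inverse regression (my own minimal sketch, not the speaker's code): sliced inverse regression standardizes the predictors, slices on the response, and eigendecomposes the covariance of the slice means; the LCM condition tested in the talk is exactly what justifies reading the top eigenvectors as SDR directions.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced inverse regression, minimal version. Returns estimated
    SDR directions; valid under the linear conditional mean condition."""
    n = X.shape[0]
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}.
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - X.mean(axis=0)) @ Sigma_inv_half
    # Slice on y; the covariance of the slice means concentrates on the
    # central subspace when LCM holds.
    slices = np.array_split(np.argsort(y), n_slices)
    M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0))
            for s in slices)
    _, v = np.linalg.eigh(M)
    # Top eigenvectors, mapped back to the original predictor scale.
    return Sigma_inv_half @ v[:, ::-1][:, :n_dirs]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))          # Gaussian X satisfies LCM and CCV
y = (X[:, 0] + 2 * X[:, 1]) ** 3 + rng.normal(size=500)
print(sir_directions(X, y, n_dirs=1).ravel())  # ~ prop. to (1,2,0,0,0), up to sign
```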





Friday, January 28, 2022, 11:00-12:00 via Zoom

January 24, 2022 by William Brian
Categories: Spring 2022

Speaker: Dr. Eshita Mazumdar from Ahmedabad University

Date and Time: Friday, January 28, 2022, 11:00-12:00 via Zoom. Please contact Will Brian to obtain the Zoom link.

Title: Zero-sum problems

Abstract: Zero-sum problems are combinatorial in nature. They concern conditions which ensure that a given sequence over a finite group has a zero-sum subsequence with some prescribed property. There are many invariants associated with zero-sum problems; one such invariant is the Davenport constant. The original motivation for introducing the Davenport constant was to study the problem of non-unique factorization over number fields. The precise value of this group invariant for an arbitrary finite abelian group is still unknown. In this talk I will discuss an extremal problem related to the weighted Davenport constant and introduce several exciting combinatorial results for finite abelian groups. The behavior of these constants on restricted sequences will also be discussed. If time permits, I will talk about an ongoing project in which we introduce a new group invariant that is a natural generalization of the Davenport constant.
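For context (my own illustration, not from the talk): the Davenport constant D(G) is the smallest d such that every sequence of d elements of G has a nonempty zero-sum subsequence. The brute-force sketch below verifies the classical fact D(Z_n) = n for tiny cyclic groups.

```python
from itertools import combinations, product

def has_zero_sum_subsequence(seq, n):
    """True if some nonempty subsequence of seq sums to 0 modulo n."""
    return any(sum(sub) % n == 0
               for r in range(1, len(seq) + 1)
               for sub in combinations(seq, r))

def davenport_cyclic(n):
    """Smallest d such that EVERY length-d sequence over Z_n contains a
    nonempty zero-sum subsequence (brute force; tiny n only)."""
    d = 1
    while not all(has_zero_sum_subsequence(seq, n)
                  for seq in product(range(n), repeat=d)):
        d += 1
    return d

for n in range(2, 6):
    print(n, davenport_cyclic(n))   # prints D(Z_n) = n for each n
```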



Friday, January 21, 2022, 12:00-1:00 via Zoom

January 17, 2022 by William Brian
Categories: Spring 2022

Speaker: Dr. Steven Clontz from the University of South Alabama

Date and Time: Friday, January 21, 2022, 12:00-1:00 via Zoom. Please contact Will Brian to obtain the Zoom link.

Title: Games Topologists Play

Abstract: Several ideas from topology and set theory may be characterized by considering two-player infinite-length games. During each round n ∈ {0, 1, 2, …}, suppose Player 1 makes a move a_n (perhaps choosing an open cover of a given regular space), followed by Player 2 making a move b_n (perhaps choosing a finite subcollection from Player 1’s chosen cover); the winner of such a game is determined by the sequence of moves (a_0, b_0, a_1, b_1, …) (perhaps Player 2 wins if their choices form a cover).

The topological game specified above is known as Menger’s game, and Player 2 has an unbeatable strategy in this game that uses only the round number and the most recent move of Player 1 if and only if the given regular space is σ-compact. In this talk, we will explore various results of this flavor from the literature, including an interesting game-theoretic proof, appropriate for undergraduates, that the real numbers are uncountable.
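As a hedged toy version of the uncountability argument mentioned above (my own sketch; the talk's game may differ in detail): against any purported enumeration of the reals in [0, 1], a player who repeatedly shrinks a closed interval to a closed third avoiding the latest listed real produces a point that misses the entire list.

```python
from fractions import Fraction

def avoids(lo, hi, s):
    """True if the closed interval [lo, hi] misses the point s."""
    return s < lo or s > hi

def escape_point(listed, rounds=40):
    """Round n: the 'enumerator' reveals the n-th listed real; the other
    player shrinks the current closed interval to one of its three closed
    thirds that avoids it (a point can meet at most two of the thirds).
    The surviving point lies in [0, 1] but differs from every listed real."""
    lo, hi = Fraction(0), Fraction(1)
    for n in range(rounds):
        s = listed[n] if n < len(listed) else Fraction(-1)  # -1: nothing to dodge
        w = (hi - lo) / 3
        thirds = [(lo, lo + w), (lo + w, lo + 2 * w), (lo + 2 * w, hi)]
        lo, hi = next(t for t in thirds if avoids(t[0], t[1], s))
    return (lo + hi) / 2

listed = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 7)]
x = escape_point(listed)
print(all(x != s for s in listed))   # True: x dodges the whole list
```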

Friday, December 3, 2021, 11:15am-12:15pm via Zoom

November 25, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Carlos Lamarche from the University of Kentucky

Date and Time: Friday, December 3, 2021, 11:15am-12:15pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Wild Bootstrap Inference for Penalized Quantile Regression for Longitudinal Data

Abstract: The existing theory of penalized quantile regression for longitudinal data has focused primarily on point estimation. In this work, we investigate statistical inference. We propose a wild residual bootstrap procedure and show that it is asymptotically valid for approximating the distribution of the penalized estimator. The model puts no restrictions on individual effects, and the estimator achieves consistency by letting the shrinkage decay in importance asymptotically. The new method is easy to implement and simulation studies show that it has accurate small sample behavior in comparison with existing procedures. Finally, we illustrate the new approach using U.S. Census data to estimate a model that includes more than eighty thousand parameters.
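To make the wild residual bootstrap concrete, here is a hedged sketch of the general recipe for a simplified, unpenalized quantile regression (my own toy, not the authors' longitudinal estimator): rebuild responses from fitted values plus randomly weighted absolute residuals, with a two-point weight law chosen so the perturbed errors keep their τ-quantile at zero.

```python
import numpy as np
import statsmodels.api as sm

def wild_qr_bootstrap(X, y, tau=0.5, B=200, seed=0):
    """Wild residual bootstrap for quantile regression (simplified,
    unpenalized sketch). Returns B bootstrap coefficient draws."""
    rng = np.random.default_rng(seed)
    Xc = sm.add_constant(X)
    fit = sm.QuantReg(y, Xc).fit(q=tau)
    resid = y - Xc @ fit.params
    draws = []
    for _ in range(B):
        # One common two-point weight choice: P(w = -2*(1-tau)) = tau and
        # P(w = 2*tau) = 1 - tau, so w*|r| has its tau-quantile at zero.
        w = np.where(rng.random(len(y)) < tau, -2 * (1 - tau), 2 * tau)
        y_star = Xc @ fit.params + w * np.abs(resid)
        draws.append(sm.QuantReg(y_star, Xc).fit(q=tau).params)
    return np.array(draws)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = 1 + X @ np.array([2.0, -1.0]) + rng.standard_t(df=3, size=300)
draws = wild_qr_bootstrap(X, y, tau=0.5, B=200)
print(draws.std(axis=0))   # bootstrap standard errors for each coefficient
```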

Friday, November 19, 2021, 11am-12pm via Zoom

November 13, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Molei Tao from Georgia Institute of Technology

Date and Time: Friday, November 19, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Examples of interactions between dynamics and machine learning

Abstract: This talk will report some of our progress in showing how dynamics can be a useful mathematical tool for machine learning. Three demonstrations will be given: how dynamics help design (and analyze) optimization algorithms, how dynamics help quantitatively understand nontrivial observations in deep learning practice, and how deep learning can in turn help dynamics (or, more broadly, AI for sciences). More precisely, in part 1 (dynamics for algorithms), I will talk about how to add momentum to gradient descent on a class of manifolds known as Lie groups. The treatment will be based on geometric mechanics and dynamics in continuous and discrete time, and it will lead to accelerated optimization. Part 2 (dynamics for understanding deep learning) will be on how large learning rates could deterministically lead to escapes from local minima, an alternative to the commonly known noisy escapes due to stochastic gradients. If time permits, I will also talk about another example, on an implicit regularization effect of large learning rates (which we term ‘balancing’). Part 3 (AI for sciences) will be on data-driven prediction of mechanical dynamics, for which I will demonstrate one strong benefit of having physics hard-wired into deep learning models (more precisely, how to obtain symplectic predictions, and how that generically enables accurate long-time predictions).
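As a generic illustration of the “dynamics for algorithms” theme (a Euclidean toy of my own, not the Lie-group construction from the talk): heavy-ball momentum can be read as a discretization of the damped dynamics ẍ + γẋ + ∇f(x) = 0.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.01, gamma=2.0, steps=500):
    """Heavy-ball momentum as a semi-implicit Euler discretization of the
    damped dynamics x'' + gamma * x' + grad f(x) = 0, with step h = sqrt(lr).
    A Euclidean toy; the talk develops the analogous scheme on Lie groups."""
    h = np.sqrt(lr)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = (1 - gamma * h) * v - h * grad(x)   # damped velocity update
        x = x + h * v                           # position update
    return x

# Minimize the ill-conditioned quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 100.0])
print(heavy_ball(lambda x: A @ x, x0=[1.0, 1.0]))   # close to (0, 0)
```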

Friday, November 12, 2021, 1pm-2pm Hybrid (in-person at Fretwell 315 and via Zoom)

November 09, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Elizabeth Newman from Emory University

Date and Time: Friday, November 12, 2021, 1pm-2pm Hybrid (in-person at Fretwell 315 and via Zoom). Please contact Qingning Zhou to obtain the Zoom link.

Title: How to Train Better: Exploiting the Separability of Deep Neural Networks

Abstract: You would be hard-pressed to find anyone who hasn’t heard the hype about deep neural networks (DNNs). These high-dimensional function approximators, composed of simple layers parameterized by weights, have shown their success in countless applications. What the hype-sters won’t tell you is this: DNNs are challenging to train. Typically, the training problem is posed as a stochastic optimization problem with respect to the DNN weights. With millions of weights, a non-convex and non-smooth objective function, and many hyperparameters to tune, solving the training problem well is no easy task. In this talk, our goal is simple: we want to make DNN training easier. To this end, we will exploit the separability of commonly-used DNN architectures; that is, the weights of the final layer of the DNN are applied linearly. We will leverage this linearity using two different approaches. First, we will approximate the stochastic optimization problem via a sample average approximation (SAA). In this setting, we can eliminate the linear weights through partial optimization, a method affectionately known as Variable Projection (VarPro). Second, in the stochastic approximation (SA) setting, we will consider a powerful iterative sampling approach to update the linear weights, which notably incorporates automatic regularization parameter selection methods. Throughout the talk, we will demonstrate the efficacy of these two approaches to exploit separability using numerical examples.
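A hedged sketch of the separability idea (my own minimal example, not the speaker's implementation): when the final layer is linear, the outer weights can be eliminated inside the objective by a closed-form least-squares solve, leaving a reduced problem in the nonlinear inner weights alone — the Variable Projection step described above.

```python
import numpy as np

def features(X, theta):
    """Hidden-layer features; theta collects the nonlinear inner weights."""
    return np.tanh(X @ theta)

def varpro_loss(X, Y, theta, reg=1e-6):
    """Eliminate the final linear layer W by a closed-form (regularized)
    least-squares solve, then return the reduced objective in theta alone."""
    Phi = features(X, theta)
    W = np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ Y)
    return 0.5 * np.linalg.norm(Phi @ W - Y) ** 2, W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = np.sin(X[:, :1])                    # toy regression target
theta = 0.5 * rng.normal(size=(3, 8))   # inner weights, to be optimized
loss, W = varpro_loss(X, Y, theta)
print(loss)   # reduced objective; optimize over theta with any outer method
```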

Thursday, November 11, 2021, 10am-11am via Zoom

November 05, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Quoc Tran-Dinh from the University of North Carolina at Chapel Hill

Date and Time: Thursday, November 11, 2021, 10am-11am via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Randomized Douglas-Rachford Splitting Algorithms for Composite Optimization in Federated Learning

Abstract: In this talk, we present two randomized Douglas-Rachford splitting algorithms to solve a class of composite nonconvex finite-sum optimization problems arising from federated learning. Our algorithms rely on a combination of three main techniques: the Douglas-Rachford splitting scheme, a randomized block-coordinate technique, and an asynchronous strategy. We show that our algorithms achieve the best-known communication complexity bounds under standard assumptions in the nonconvex setting, while allowing inexact updates of local models with only a subset of users participating in each round, and handling nonsmooth convex regularizers. Our second algorithm can be implemented in an asynchronous mode using a general probabilistic model to capture different computational architectures. We illustrate our algorithms with many numerical examples and show that the new algorithms have promising performance compared to common existing methods. This talk is based on collaboration with Nhan Pham (UNC), Lam M. Nguyen (IBM), and Dzung Phan (IBM). Our paper is available at: https://arxiv.org/abs/2103.03452.
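For orientation, here is the classical deterministic Douglas-Rachford iteration on a two-term problem (a baseline sketch of my own, without the paper's randomized block-coordinate and asynchronous ingredients): alternate proximal steps on the two terms through a reflected auxiliary variable.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(A, b, lam=0.1, t=1.0, iters=300):
    """Classical Douglas-Rachford splitting for
        min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    The prox of the quadratic term is a linear solve; the prox of the l1
    term is soft thresholding; z carries the reflected auxiliary state."""
    n = A.shape[1]
    M = np.linalg.inv(np.eye(n) + t * (A.T @ A))
    z = np.zeros(n)
    for _ in range(iters):
        x = M @ (z + t * (A.T @ b))        # x = prox_{t f}(z)
        y = prox_l1(2 * x - z, t * lam)    # y = prox_{t g}(2x - z)
        z = z + y - x                      # averaged reflection update
    return y

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=60)
print(np.nonzero(np.abs(douglas_rachford(A, b)) > 1e-3)[0])  # ~ indices 0..4
```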

Thursday, November 4, 2021, 10am-11am via Zoom

October 30, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Viet Ha Hoang from Nanyang Technological University

Date and Time: Thursday, November 4, 2021, 10am-11am via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Multilevel Markov Chain Monte Carlo Methods for Bayesian Inversion of Partial Differential Equations

Abstract: The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the solution of the forward PDE, and the sampling of the probability space under the posterior distribution is essential for the design of efficient computational methods. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We consider both the case where the PDE is uniformly elliptic with respect to all the realizations, and the case where uniform ellipticity does not hold, i.e., the coefficient can get arbitrarily close to 0 and arbitrarily large, as in the log-normal model. We provide complexity analysis of Markov chain Monte Carlo (MCMC) methods for numerical evaluation of expectations with respect to the posterior measure, in particular bounds on the overall work required to achieve a prescribed error level. We first bound the computational complexity of ‘plain’ MCMC, where a large number of realizations of the forward equation are solved with equally high accuracy. The work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. We then present a novel multilevel Markov chain Monte Carlo strategy which utilizes sampling from a multilevel discretization of the posterior and the forward PDE. The strategy achieves an optimal complexity level that is equivalent to that of performing only one step of plain MCMC. The optimal accuracy and complexity are mathematically rigorously proven. Numerical results confirm our analysis. This is joint work with Jia Hao Quek (NTU, Singapore), Christoph Schwab (ETH, Switzerland) and Andrew Stuart (Caltech, US).
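The multilevel idea is easiest to see in its plain Monte Carlo form (a hedged toy of my own for the forward problem only; the talk lifts this telescoping construction to posterior sampling with MCMC): write E[Q_L] = E[Q_0] + Σ_{l=1}^{L} E[Q_l − Q_{l−1}] and spend fewer samples on the finer, more expensive levels, where the coupled differences have small variance.

```python
import numpy as np

def Q(theta, level):
    """Quantity of interest at a level-dependent 'mesh' h = 2^{-level};
    a stand-in for a PDE solve, with O(h) discretization bias."""
    return np.exp(theta) * (1.0 + 2.0 ** -level)

def mlmc_estimate(L=6, n0=100_000, seed=0):
    """Telescoping multilevel estimator
        E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}],
    spending fewer samples on finer (more expensive) levels; the coupled
    differences Q_l - Q_{l-1} have small variance, which is the whole point."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for l in range(L + 1):
        n = max(n0 // 4 ** l, 100)          # geometric decay in sample sizes
        theta = rng.normal(size=n)          # same draws couple the two levels
        diff = Q(theta, l) - (Q(theta, l - 1) if l > 0 else 0.0)
        total += diff.mean()
    return total

print(mlmc_estimate())   # ~ E[exp(theta)] * (1 + 2^-6) with theta ~ N(0, 1)
```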

Friday, October 29, 2021, 11am-12pm via Zoom

October 23, 2021 by Qingning Zhou
Categories: Spring 2022

Speaker: Dr. Andy Ai Ni from the Ohio State University

Date and Time: Friday, October 29, 2021, 11am-12pm via Zoom. Please contact Qingning Zhou to obtain the Zoom link.

Title: Contrast Weighted Learning for Robust Optimal Treatment Regimen Estimation

Abstract: Personalized medicine aims to tailor medical decisions based on patient-specific characteristics. Advances in data capturing techniques such as electronic health records have dramatically increased the availability of comprehensive patient profiles, promoting the rapid development of optimal treatment regimen (OTR) estimation methods. An archetypal OTR estimation approach is outcome weighted learning (OWL), where the OTR is determined under a weighted classification framework with clinical outcomes as the weights. Although OWL has been extensively studied and extended, existing methods are susceptible to irregularities in the outcome distribution, such as outliers and heavy tails. Methods that involve modeling of the outcome are also sensitive to model misspecification. We propose a contrast weighted learning (CWL) framework that exploits the flexibility and robustness of contrast functions to enable robust OTR estimation for a wide range of clinical outcomes. The novel value function in CWL depends only on the pairwise contrasts of clinical outcomes between patients, irrespective of their distributional features and supports. The Fisher consistency and convergence rate of the decision rule estimated via CWL are established. We illustrate the superiority of the proposed method in finite samples using comprehensive simulation studies with ill-distributed continuous outcomes and ordinal outcomes. We apply the CWL method to two real datasets from clinical trials on idiopathic pulmonary fibrosis and COVID-19 to demonstrate its real-world application.
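To fix ideas, here is a hedged sketch of the outcome weighted learning baseline that CWL improves on (my own toy with hypothetical variable names): estimating a treatment rule reduces to weighted classification of the received treatment, with weights driven by the outcome; CWL replaces the raw outcomes with pairwise contrasts to gain the robustness described above.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.choice([-1, 1], size=n)                 # randomized treatment, pi = 0.5
# Hypothetical truth: treatment A = 1 helps iff X[:, 0] > 0.
Y = 1.0 + (A == np.sign(X[:, 0])) + rng.normal(scale=0.5, size=n)

# OWL reduces rule estimation to weighted classification of the received
# treatment, weighted by outcome / propensity; outcomes are shifted to be
# nonnegative (a common practical tweak, and one source of the fragility
# to outliers and heavy tails that CWL's pairwise contrasts address).
w = (Y - Y.min()) / 0.5
clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, A, sample_weight=w)

rule = clf.predict(X)                           # estimated treatment rule
print((rule == np.sign(X[:, 0])).mean())        # agreement with the true rule
```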
