Title: | Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis: part I |
Speaker: | Dr Zhang Jin, Department of Mathematics, Hong Kong Baptist University |
Time/Place: | 10:30 - 12:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | Despite the rich literature, the linear convergence of the alternating direction method of multipliers (ADMM) has not been fully understood, even in the convex case. For example, linear convergence of ADMM is empirically observed in a wide range of applications, yet existing theoretical results are either too stringent to be satisfied or too ambiguous to be checked, so it remains unclear why ADMM converges linearly for these applications. In this paper, we systematically study the linear convergence of ADMM in the context of convex optimization through the lens of variational analysis. We show that the linear convergence of ADMM can be guaranteed without the strong convexity of the objective functions together with the full rank assumption on the coefficient matrices, or the full polyhedricity assumption on their subdifferentials, and that it is possible to discern linear convergence in various concrete applications, especially for some representative models arising in statistical learning. Our analysis makes sophisticated use of variational analysis techniques, and it is conducted for the most general proximal version of ADMM with Fortin and Glowinski's larger step size, so that all major variants of ADMM known in the literature are covered. We also deepen the discussion from the dual perspective and show, as byproducts, how to discern the linear convergence of other methods closely related to variants of ADMM, including the Douglas-Rachford splitting method in its general operator form and the primal-dual hybrid gradient method for saddle-point problems. |
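For readers unfamiliar with the method, the generic two-block model and the proximal ADMM iteration the abstract refers to can be written as follows; the notation (penalty parameter β > 0, proximal matrices P1, P2 ⪰ 0, relaxation factor γ) is the standard one from the literature, not taken from the talk itself.

```latex
% Generic two-block convex model and the proximal ADMM scheme with the
% Fortin--Glowinski step size; standard notation, not the talk's own.
\[
  \min_{x,\,y}\; f(x) + g(y) \quad \text{s.t.}\quad Ax + By = b,
\]
with augmented Lagrangian
\[
  \mathcal{L}_\beta(x,y,\lambda) \;=\; f(x) + g(y)
    - \langle \lambda,\, Ax + By - b\rangle
    + \tfrac{\beta}{2}\,\|Ax + By - b\|^2 ,
\]
and, for $\gamma \in \bigl(0, \tfrac{1+\sqrt{5}}{2}\bigr)$ and $P_1, P_2 \succeq 0$,
\begin{align*}
  x^{k+1} &= \arg\min_{x}\; \mathcal{L}_\beta(x, y^k, \lambda^k)
             + \tfrac{1}{2}\|x - x^k\|_{P_1}^2, \\
  y^{k+1} &= \arg\min_{y}\; \mathcal{L}_\beta(x^{k+1}, y, \lambda^k)
             + \tfrac{1}{2}\|y - y^k\|_{P_2}^2, \\
  \lambda^{k+1} &= \lambda^k - \gamma\beta\,\bigl(Ax^{k+1} + By^{k+1} - b\bigr).
\end{align*}
```

Setting P1 = P2 = 0 and γ = 1 recovers the classical ADMM, which is why this proximal form with the larger step size covers the major variants at once.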
Title: | Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis: part II |
Speaker: | Dr Zhang Jin |
Time/Place: | 10:30 - 12:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | See the abstract for Part I above. |
Title: | Preconditioning for symmetrized Toeplitz systems |
Speaker: | Mr Sean Hon, University of Oxford, United Kingdom |
Time/Place: | 14:30 - 15:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | Preconditioning for Toeplitz systems has been well developed over the past few decades. For (real) symmetric Toeplitz systems, descriptive convergence bounds for the conjugate gradient (CG) method are largely available. For nonsymmetric Toeplitz systems, however, CG does not apply directly, and most work in the literature has focused on their normal equations. In this work, without normalizing the given Toeplitz system, we symmetrize it using a simple permutation matrix in order to establish theoretical convergence guarantees for the minimal residual (MINRES) method. In the well-conditioned case, we prove that a suitable absolute value circulant matrix, used as a preconditioner for the symmetrized matrix, ensures rapid convergence. In the ill-conditioned case, where circulant preconditioners are known to be suboptimal, we propose a band Toeplitz preconditioner and show that the preconditioned matrix has clustered spectra. An extension of our results to the block Toeplitz case is also discussed, and numerical examples illustrate the effectiveness of the proposed preconditioning approach. Finally, we give the asymptotic spectral distribution of the symmetrized Toeplitz matrices, which is crucial for the precise convergence analysis and justifies the use of MINRES in this strategy. |
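The symmetrization step is simple enough to sketch in a few lines of NumPy/SciPy. The sketch below is illustrative only: the test matrix, the Strang-style circulant approximation, and all parameter choices are our own assumptions, not examples from the talk. It flips the Toeplitz matrix T with the exchange permutation Y (so that YT is symmetric), builds the absolute value circulant from the FFT eigenvalues of the circulant approximation, and runs preconditioned MINRES.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, minres

# A nonsymmetric Toeplitz test matrix (hypothetical symbol, for illustration).
n = 256
col = 2.0 ** -np.arange(n)          # lower diagonals t_0, t_1, t_2, ...
row = 3.0 ** -np.arange(n)          # upper diagonals t_0, t_{-1}, t_{-2}, ...
col[0] = row[0] = 4.0               # strong diagonal keeps T well conditioned
T = toeplitz(col, row)

# Symmetrize with the flip (exchange) permutation Y: Y @ T is symmetric
# (and generally indefinite), so MINRES applies.
Y = np.flipud(np.eye(n))
YT = Y @ T

# Strang-type circulant approximation C of T, diagonalized by the FFT;
# the preconditioner is the absolute value circulant |C| = (C^T C)^{1/2}.
half = n // 2
c = np.empty(n)
c[0] = col[0]
c[1:half + 1] = col[1:half + 1]        # keep the lower diagonals t_1..t_{n/2}
c[half + 1:] = row[1:n - half][::-1]   # wrap in the upper diagonals
d = np.fft.fft(c)                      # eigenvalues of C
absd = np.abs(d)                       # eigenvalues of |C|; nonzero here

# Apply |C|^{-1} via two FFTs per MINRES iteration.
M = LinearOperator(
    (n, n),
    matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(np.ravel(v)) / absd)),
)

b = np.ones(n)
x, info = minres(YT, b, M=M)
print(info, np.linalg.norm(YT @ x - b))   # info == 0 on convergence
```

Each application of the preconditioner costs only two FFTs, which is what makes the absolute value circulant attractive in practice.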
Title: | Multiline Queues With Spectral Parameters |
Speaker: | Dr Travis Scrimshaw, School of Mathematics and Physics, University of Queensland, Australia |
Time/Place: | 14:30 - 15:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | The (multispecies) Totally Asymmetric Simple Exclusion Process (TASEP) is a Markov chain on a 1D lattice that can be used to model cars moving through traffic. When the process runs on a ring, Ferrari and Martin gave a solution for the steady-state distribution using multiline queues (MLQs): sequences of subsets of {1,2,...,n}, called queues. Generalizing MLQs to functions on words, the commutativity conjecture of Arita et al. roughly says that, for a fixed word u and a fixed collection of queue sizes, the number of MLQs that produce u when applied to the word 11...1 does not depend on the order chosen for the sizes of the queues. In this talk, we refine this conjecture by putting a weighting on MLQs, which we call spectral parameters, and show that with this weighting the resulting sum does not depend on the order chosen. Our proof uses the relation between MLQs, tensor products of Kirillov-Reshetikhin crystals, and (combinatorial) R-matrices via a corner transfer matrix given by Kuniba, Maruyama, and Okado. This is joint work with Erik Aas and Darij Grinberg. |
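As background for the abstract's first sentence, here is a minimal random-sequential simulation of the single-species TASEP on a ring; the ring size, density, and step count are arbitrary illustrative choices. (For the single-species process on a ring the stationary distribution is uniform over configurations with a fixed particle number; it is the multispecies steady state that requires the Ferrari-Martin multiline-queue construction.)

```python
import random

def tasep_step(config, rng):
    """One random-sequential update of single-species TASEP on a ring.

    config: list of 0/1 site occupancies (1 = particle).  Pick a uniform
    site; if it holds a particle and the next site clockwise is empty,
    the particle hops.  Returns 1 if a hop occurred, else 0.
    """
    n = len(config)
    i = rng.randrange(n)
    j = (i + 1) % n
    if config[i] == 1 and config[j] == 0:
        config[i], config[j] = 0, 1
        return 1
    return 0

rng = random.Random(0)
config = [1, 0] * 8                  # ring of 16 sites at density 1/2
steps = 200_000
hops = sum(tasep_step(config, rng) for _ in range(steps))
# Under the uniform stationary measure the hop probability per attempt is
# k(n-k)/(n(n-1)) ~ rho(1-rho); about 0.267 for n = 16, k = 8.
print(f"empirical hop rate: {hops / steps:.3f}")
```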
We organize conferences and workshops every year. We hope to see you at one in the future.
Prof. M. Cheng, Dr. Y. S. Hon, Dr. K. F. Lam, Prof. L. Ling, Dr. T. Tong and Prof. L. Zhu have been awarded research grants by the Hong Kong Research Grants Council (RGC). Congratulations!