Colloquium/Seminar


Coming events


  • Monday, 25th March, 2019

    Title: Sparse Grid Meets Random Hashing: Learning High Dimensional Functions of Few Variables
    Speaker: Prof Ming YUAN, Statistics Department, Columbia University, New York, USA
    Time/Place: 15:00 - 16:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: We investigate the optimal sample complexity of recovering a general high-dimensional smooth and sparse function from point queries. Our result provides a precise characterization of the potential loss of information when one is restricted to point queries as opposed to the more general linear queries, as well as of the benefit of adaptation. In addition, we develop a general framework for function approximation that mitigates the curse of dimensionality and can easily be adapted to incorporate further structure such as mixed smoothness.
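
    To make the point-query setting concrete, here is a small Python sketch (an illustration only, not the sparse-grid/random-hashing method of the talk) that recovers the few active coordinates of a high-dimensional function from point queries via randomized finite differences; all names and tolerances below are illustrative assumptions.

        import numpy as np

        # Illustrative sketch: f may be evaluated only at chosen points
        # (point queries), and we look for the few coordinates it depends on.
        # This randomized finite-difference probe is NOT the algorithm of the
        # talk; it merely shows what "learning from point queries" means.
        def active_coordinates(f, d, n_points=20, h=1e-5, tol=1e-8, seed=0):
            rng = np.random.default_rng(seed)
            hits = set()
            for _ in range(n_points):
                x = rng.uniform(0.0, 1.0, size=d)    # random base point
                fx = f(x)                            # one point query
                for j in range(d):
                    e = np.zeros(d)
                    e[j] = h
                    # coordinate j is active if perturbing it changes f
                    if abs(f(x + e) - fx) > tol:
                        hits.add(j)
            return sorted(hits)

        # Example: a function of 100 variables that depends on only two.
        f = lambda x: np.sin(x[3]) + x[17] ** 2
        print(active_coordinates(f, d=100))          # -> [3, 17]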


  • Thursday, 28th March, 2019

    Title: A stochastic semismooth Newton method for nonconvex nonsmooth optimization
    Speaker: Dr Andre Milzarek, Beijing International Center for Mathematical Research, Peking University, Beijing, China
    Time/Place: 10:30 - 11:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: We present a globalized stochastic semismooth Newton method for solving optimization problems involving smooth nonconvex and nonsmooth convex terms in the objective function. The class of problems that can be solved within our algorithmic framework comprises a large variety of important applications such as expected and empirical risk minimization, structured dictionary learning, and other problems arising in machine learning, statistics, or image processing. We assume that only noisy gradient and Hessian information of the smooth part of the objective function is available, via calls to stochastic first- and second-order oracles. Our approach utilizes approximate second-order information and stochastic semismooth Newton steps for a prox-type fixed-point equation to accelerate the basic stochastic proximal gradient method for convex composite programming. Inexact growth conditions are incorporated to monitor the quality and acceptance of the Newton steps. We prove that the proposed algorithm converges globally to stationary points in expectation and almost surely. Moreover, under standard assumptions, the method can be shown to turn locally into a pure semismooth Newton method, and fast local convergence can be established with high probability. Finally, we provide numerical experiments demonstrating the efficiency of the stochastic semismooth Newton method.
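
    For orientation, the following Python sketch shows the basic stochastic proximal gradient method that the talk's semismooth Newton steps are designed to accelerate, applied to l1-regularized least squares; batch size, step sizes, and all names are illustrative assumptions, not taken from the talk.

        import numpy as np

        # Sketch of the *basic* stochastic proximal gradient method (the
        # baseline the talk accelerates) for the composite problem
        #     min_x (1/(2n)) * ||A x - b||^2 + lam * ||x||_1 .
        def soft_threshold(z, t):
            # proximal operator of t*||.||_1
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def stochastic_prox_grad(A, b, lam, n_iters=5000, batch=16, seed=0):
            rng = np.random.default_rng(seed)
            n, d = A.shape
            x = np.zeros(d)
            for k in range(1, n_iters + 1):
                idx = rng.integers(0, n, size=batch)           # mini-batch
                g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # noisy gradient oracle
                t = 0.5 / np.sqrt(k)                           # diminishing step (illustrative)
                # Fixed-point map x -> prox(x - t*grad f(x)); the residual of
                # this equation is the object to which the talk applies
                # (stochastic) semismooth Newton steps.
                x = soft_threshold(x - t * g, t * lam)
            return x

        # Example: recover a 2-sparse signal from noisy linear measurements.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 50))
        x_true = np.zeros(50); x_true[5], x_true[12] = 1.5, -2.0
        b = A @ x_true + 0.01 * rng.standard_normal(200)
        print(stochastic_prox_grad(A, b, lam=0.1)[[5, 12]])    # approx. [1.5, -2.0]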


  • Wednesday, 17th April, 2019

    Title: Is There a Market for Modified Moments?
    Speaker: Prof Martin Gutknecht, Department of Mathematics, ETH Zurich, Switzerland
    Time/Place: 11:30 - 12:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: What the engineers call ‘partial realization’ is known to mathematicians and physicists as (matrix) Padé approximation at ∞ of the transfer function H(s) = C(sI − A)⁻¹B of a linear time-invariant system. The approach is also known as moment (or Markov parameter) matching. Traditionally, this matching has been achieved by solving a Hankel or block Hankel system of linear equations, typically by means of fast recursive algorithms. But already in the mid-1950s Rutishauser became aware of the ill-conditioning of this moment matching, which hampered his qd algorithm. So, for computing the continued fraction of a scalar transfer function H, he suggested applying the Lanczos algorithm; for computing the poles of H he would subsequently apply the progressive form of his qd algorithm, which is the same as applying his LR algorithm to the tridiagonal matrix of the Lanczos recurrence coefficients. Thus, in the scalar case, the computation of Padé approximations at ∞ via Lanczos was introduced nearly 40 years before Padé-via-Lanczos (PVL) became widely used in the control community following the publications of Feldmann and Freund (1994, 1995). In the 1970s and 1980s such applications of the Lanczos algorithm were also much promoted by G.H. Golub and W.B. Gragg. However, all these algorithms can break down if A is not Hermitian positive definite (Hpd).

    Another efficient but unstable alternative to solving a Hankel system for moment matching had been known long before: the Chebyshev algorithm (1859), which, in fact, can also be viewed as a fast Hankel solver providing the recursions of the corresponding orthogonal polynomials. In the 1960s Gautschi linked the instability of the Chebyshev algorithm to the ill-conditioning of the moment matching, and he also showed that the so-called modified Chebyshev algorithm of Sack and Donovan (1972) and Wheeler (1974) may behave much better. However, the modified Chebyshev algorithm can break down too, in the same way the nonsymmetric Lanczos algorithm can, because it produces the same continued fraction and the same Padé approximants. In 1990, Golub and Gutknecht came up with a version of the modified Chebyshev algorithm that could overcome breakdowns in exact arithmetic. However, unlike the look-ahead Lanczos algorithm, this ‘reliable’ or ‘non-generic’ modified Chebyshev algorithm does not remain stable in the case of near-breakdowns in finite-precision arithmetic, and its extension to a look-ahead algorithm is not at all straightforward.

    The first aim of our renewed interest in this area was to fill this long-standing gap. Achieving it turned out to be a bit tricky, but simpler than expected. The resulting look-ahead modified Chebyshev algorithm generates (in the scalar, SISO case) the same sequence of block tridiagonal upper Hessenberg matrices as the look-ahead Lanczos algorithm. These matrices are Petrov–Galerkin projections of A. Other challenges remain: What about the MIMO case? What about rational interpolation instead of Padé approximation at ∞? And what about applications? Golub's main intention, realized in the PhD thesis of Mark Kent (1989), was to use the modified Chebyshev algorithm first to approximate the extremal eigenvalues of a symmetric positive definite (spd) matrix A and then to use this information for determining the parameters of the Chebyshev iteration, a classical Krylov solver for matrices with positive real spectrum. Today, computing approximate eigenvalues with the modified Chebyshev algorithm may still be competitive, in particular since the algorithm is naturally communication-avoiding and not restricted to the Hermitian case. Can it be made more stable than Lanczos? As we indicated, the modified Chebyshev algorithm can also be applied to model reduction by partial realization. Yet other applications may come into focus; for example, the resulting spectral information could be used for the augmentation and deflation of Krylov subspaces.
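
    To make the central object of the talk concrete, here is a small Python sketch of the classical (unmodified) Chebyshev algorithm mentioned in the abstract, mapping ordinary moments to the three-term recurrence coefficients of the associated orthogonal polynomials. It follows the textbook formulation (cf. Gautschi) and is an illustration only, not the speaker's look-ahead algorithm; the modified variant has the same structure but consumes modified moments for better conditioning.

        # Classical Chebyshev algorithm (1859): given the first 2n ordinary
        # moments m_l = integral of x^l dmu(x), compute the coefficients
        # (alpha_k, beta_k) of the three-term recurrence of the orthogonal
        # polynomials of dmu. Equivalent to a fast Hankel solver, and
        # notoriously ill-conditioned as n grows (Gautschi's observation).
        def chebyshev_algorithm(m):
            n = len(m) // 2
            alpha = [m[1] / m[0]]
            beta = [m[0]]
            sig_prev = [0.0] * (2 * n)   # sigma_{k-2, l}
            sig = list(m)                # sigma_{k-1, l}; sigma_{0, l} = m_l
            for k in range(1, n):
                sig_new = [0.0] * (2 * n)
                for l in range(k, 2 * n - k):
                    sig_new[l] = (sig[l + 1] - alpha[k - 1] * sig[l]
                                  - beta[k - 1] * sig_prev[l])
                alpha.append(sig_new[k + 1] / sig_new[k] - sig[k] / sig[k - 1])
                beta.append(sig_new[k] / sig[k - 1])
                sig_prev, sig = sig, sig_new
            return alpha, beta

        # Check against the Legendre weight on [-1, 1]: m_l = 2/(l+1) for even
        # l and 0 for odd l; exactly, alpha_k = 0, beta_0 = 2, and
        # beta_k = k^2 / (4k^2 - 1) for k >= 1. For larger n the computed
        # values degrade rapidly, exhibiting the instability the abstract ties
        # to the ill-conditioning of moment matching.
        n = 5
        m = [2.0 / (l + 1) if l % 2 == 0 else 0.0 for l in range(2 * n)]
        print(chebyshev_algorithm(m))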