Colloquium/Seminar


Event(s) on April 2019


  • Thursday, 18th April, 2019

    Title: Is There a Market for Modified Moments?
    Speaker: Prof Martin Gutknecht, Department of Mathematics, ETH Zurich, Switzerland
    Time/Place: 10:30  -  11:30
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: What engineers call ‘partial realization’ is known to mathematicians and physicists as (matrix) Padé approximation at ∞ of the transfer function H(s) = C(sI − A)⁻¹B of a linear time-invariant system. The approach is also known as moment (or Markov parameter) matching. Traditionally, this matching has been achieved by solving a Hankel or block Hankel system of linear equations, typically by fast recursive algorithms. But already in the mid-1950s Rutishauser became aware of the ill-conditioning of this moment matching, which hampered his qd algorithm. So, for computing the continued fraction of a scalar transfer function H, he suggested applying the Lanczos algorithm, and for computing the poles of H he would subsequently apply the progressive form of his qd algorithm, which is the same as applying his LR algorithm to the tridiagonal matrix of the Lanczos recurrence coefficients. So, in the scalar case, this way of computing Padé approximations at ∞ was introduced nearly 40 years before Padé-via-Lanczos (PVL) became widely used in the control community following the publications of Feldmann and Freund (1994, 1995). In the 1970s and 1980s such applications of the Lanczos algorithm were also much promoted by G.H. Golub and W.B. Gragg. However, all these algorithms can break down if A is not Hpd. Another efficient but unstable alternative to solving a Hankel system for moment matching had been known long before: the Chebyshev algorithm (1859), which can in fact also be viewed as a fast Hankel solver providing the recursions of the corresponding orthogonal polynomials. In the 1960s Gautschi linked the instability of the Chebyshev algorithm to the ill-conditioning of the moment matching, and he also showed that the so-called modified Chebyshev algorithm of Sack and Donovan (1972) and Wheeler (1974) may behave much better.

    However, the modified Chebyshev algorithm can also break down, in the same way the nonsymmetric Lanczos algorithm can, because it produces the same continued fraction and the same Padé approximants. In 1990, Golub and Gutknecht came up with a version of the modified Chebyshev algorithm that could overcome breakdowns in exact arithmetic. However, unlike the look-ahead Lanczos algorithm, this ‘reliable’ or ‘non-generic’ modified Chebyshev algorithm does not remain stable in the case of near-breakdowns in finite-precision arithmetic, and its extension to a look-ahead algorithm is not at all straightforward. The first aim of our renewed interest in this area was to fill this long-standing gap. Achieving it turned out to be a bit tricky, but simpler than expected. The resulting look-ahead modified Chebyshev algorithm generates (in the scalar, SISO case) the same sequence of block tridiagonal upper Hessenberg matrices as the look-ahead Lanczos algorithm. These matrices are Petrov–Galerkin projections of A. Other challenges remain: what about the MIMO case? What about rational interpolation instead of Padé approximation at ∞? Moreover, what about applications? Golub’s main intention, realized in the PhD thesis of Mark Kent (1989), was to use the modified Chebyshev algorithm first to approximate the extremal eigenvalues of an spd matrix A and then to use this information to determine the parameters of the Chebyshev iteration, a classical Krylov solver for matrices with positive real spectrum. Today, computing approximate eigenvalues with the modified Chebyshev algorithm may still be competitive, in particular since the algorithm is naturally communication-avoiding and not restricted to the Hermitian case. Can it be made more stable than Lanczos? As indicated, this modified Chebyshev algorithm can also be applied to model reduction by partial realization. Yet other applications may come into focus: for example, using the resulting spectral information for the augmentation and deflation of Krylov subspaces.
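    The following is not from the talk, but a minimal NumPy sketch of the setting the abstract describes: for a small SISO system (A, B, C) it computes the moments (Markov parameters) m_j = C A^j B and assembles the Hankel matrix whose solve underlies classical moment matching; the matrix's large condition number is one way to see the ill-conditioning Rutishauser and Gautschi pointed to. The particular system here is an arbitrary illustrative choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    A = np.diag(np.linspace(0.1, 1.0, n))   # simple diagonalizable test matrix
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))

    def moments(A, B, C, k):
        """Markov parameters m_j = C A^j B for j = 0..k-1 (scalars here)."""
        out, v = [], B
        for _ in range(k):
            out.append((C @ v).item())
            v = A @ v
        return out

    m = moments(A, B, C, 2 * n)

    # Hankel matrix H[i, j] = m[i + j]; solving a (block) Hankel system with
    # these entries is the traditional route to the partial realization.
    H = np.array([[m[i + j] for j in range(n)] for i in range(n)])
    print("cond(H) =", np.linalg.cond(H))   # grows rapidly with n
    ```

    Even at this small size the Hankel matrix is severely ill-conditioned, which is why direct moment matching loses accuracy and why Lanczos- or Chebyshev-style recursions on the original data are preferred.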


  • Monday, 29th April, 2019

    Title: Learning Causality and Learning Based on Causality
    Speaker: Dr Zhang Kun, Department of Philosophy, Carnegie Mellon University, Pittsburgh, US
    Time/Place: 14:00  -  15:00
    FSC1111, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Can we find causal structure by statistical analysis of observational data? How can we find the causal direction between two variables? How can we make optimal predictions in the presence of distribution shift? We are often faced with such causal modeling or prediction problems. Recently, with the rapid accumulation of huge volumes of data, both causal discovery, i.e., learning causal information from purely observational data, and machine learning are seeing exciting opportunities as well as great challenges. This talk will be focused on recent advances in causal discovery and how causal information facilitates understanding and solving certain problems of learning from heterogeneous data. In particular, I will talk about basic approaches to causal discovery and address practical issues including data heterogeneity and the presence of measurement error. Finally, I will discuss why and how underlying causal knowledge helps in learning from heterogeneous data when the i.i.d. assumption is dropped, with transfer learning as a particular example.
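    As a hedged illustration of one of the questions above (my sketch, not the speaker's method): in an additive-noise model Y = f(X) + noise with noise independent of X, regressing in the causal direction leaves residuals (nearly) independent of the regressor, while the reverse regression typically does not. The simple squared-correlation score below is a crude stand-in for the independence tests used in actual causal discovery.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-2, 2, 5000)
    y = x ** 3 + rng.standard_normal(5000)   # ground truth: X causes Y

    def dependence_score(cause, effect, deg=3):
        """Polynomial regression of effect on cause; return |corr| between
        the squared residual and the squared regressor (a rough proxy for
        residual/regressor dependence)."""
        coef = np.polyfit(cause, effect, deg)
        resid = effect - np.polyval(coef, cause)
        return abs(np.corrcoef(resid ** 2, cause ** 2)[0, 1])

    forward = dependence_score(x, y)    # X -> Y: residual ~ independent noise
    backward = dependence_score(y, x)   # Y -> X: residual stays dependent
    print("forward:", forward, "backward:", backward)
    ```

    The lower forward score points to X → Y, matching how the data were generated; real causal discovery methods replace this proxy with proper independence tests and handle far more general settings.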