Event(s) in April 2019
- Thursday, 18th April, 2019
Title: Is There a Market for Modified Moments?
Speaker: Prof Martin Gutknecht, Department of Mathematics, ETH Zurich, Switzerland
Time/Place: 10:30 - 11:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: What engineers call 'partial realization' is known to mathematicians and physicists as (matrix) Padé approximation at ∞ of the transfer function H(s) = C(sI − A)^{-1}B of a linear time-invariant system. The approach is also known as moment (or Markov parameter) matching. Traditionally, this matching has been achieved by solving a Hankel or block Hankel system of linear equations, typically by fast recursive algorithms. But already in the mid-1950s Rutishauser became aware of the ill-conditioning of this moment matching, which hampered his qd algorithm. So, for computing the continued fraction of a scalar transfer function H, he suggested applying the Lanczos algorithm, and for computing the poles of H he would subsequently apply the progressive form of his qd algorithm, which is the same as applying his LR algorithm to the tridiagonal matrix of the Lanczos recurrence coefficients. So, in the scalar case, the computation of Padé approximations at ∞ was introduced nearly 40 years before Padé-via-Lanczos (PVL) became widely used in the control community following the publications of Feldmann and Freund (1994, 1995). In the 1970s and 1980s such applications of the Lanczos algorithm were also much promoted by G.H. Golub and W.B. Gragg. However, all these algorithms can break down if A is not Hpd. Another efficient but unstable alternative to solving a Hankel system for moment matching had been known long before: the Chebyshev algorithm (1859), which can, in fact, also be viewed as a fast Hankel solver providing the recursions of the corresponding orthogonal polynomials. In the 1960s Gautschi linked the instability of the Chebyshev algorithm to the ill-conditioning of the moment matching, and he also showed that the so-called modified Chebyshev algorithm of Sack and Donovan (1972) and Wheeler (1974) may behave much better.
However, the modified Chebyshev algorithm can break down in the same way the nonsymmetric Lanczos algorithm can, because it produces the same continued fraction and the same Padé approximants. In 1990, Golub and Gutknecht came up with a version of this modified Chebyshev algorithm that could overcome breakdowns in exact arithmetic. However, unlike the look-ahead Lanczos algorithm, this 'reliable' or 'non-generic' modified Chebyshev algorithm does not remain stable in the case of near-breakdowns in finite-precision arithmetic, and its extension to a look-ahead algorithm is not at all straightforward. The first aim of our renewed interest in this area was to fill this long-standing gap. Achieving it turned out to be a bit tricky, but simpler than expected. The resulting look-ahead modified Chebyshev algorithm generates (in the scalar, SISO case) the same sequence of block tridiagonal upper Hessenberg matrices as the look-ahead Lanczos algorithm. These matrices are Petrov–Galerkin projections of A. Other challenges remain: what about the MIMO case? What about rational interpolation instead of Padé approximation at ∞? Moreover: what about applications? Golub's main intention, realized in the PhD thesis of Mark Kent (1989), was to use the modified Chebyshev algorithm first to approximate the extremal eigenvalues of an spd matrix A and then to use this information to determine the parameters of the Chebyshev iteration, a classical Krylov solver for matrices with positive real spectrum. Today, computing approximate eigenvalues with the modified Chebyshev algorithm may still be competitive, in particular since the algorithm is naturally communication-avoiding and not restricted to the Hermitian case. Can it be made more stable than Lanczos? As we indicated, this modified Chebyshev algorithm can also be applied for model reduction by partial realization. Yet other applications may come into focus: for example, using the resulting spectral information for the augmentation and deflation of Krylov subspaces.
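The classical moment-matching route the abstract starts from can be sketched in a few lines. The following NumPy example is a minimal illustration, not anything from the talk: the matrices A, B, C are an invented toy SISO system. It computes the Markov parameters m_k = C A^k B and solves the (in general ill-conditioned, as the speaker stresses) Hankel system for the monic denominator of the Padé approximant at ∞, whose roots recover the poles of H, i.e. the eigenvalues of A.

```python
import numpy as np

# Toy SISO system (illustrative choice, not from the talk):
# A is diagonal so the poles of H(s) = C (sI - A)^{-1} B are known exactly.
A = np.diag([0.9, 0.5, 0.2])
B = np.ones((3, 1))
C = np.ones((1, 3))
n = A.shape[0]

# Markov parameters (moments at infinity): m_k = C A^k B,
# since H(s) = sum_{k>=0} m_k s^{-(k+1)}.
m = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(2 * n)]

# Moment matching via the Hankel system: by Cayley-Hamilton, the monic
# denominator q(s) = s^n + a[n-1] s^{n-1} + ... + a[0] satisfies
# sum_j a[j] m[i+j] = -m[i+n] for i = 0, ..., n-1.
H = np.array([[m[i + j] for j in range(n)] for i in range(n)])
a = np.linalg.solve(H, -np.array(m[n:2 * n]))

# The roots of the denominator recover the poles of H.
poles = np.sort(np.roots(np.concatenate(([1.0], a[::-1]))))
print(poles)  # close to [0.2, 0.5, 0.9], the eigenvalues of A
```

For this well-separated 3x3 example the Hankel solve is harmless; the ill-conditioning discussed in the abstract bites as the moment count grows, which is what motivates the Lanczos and (modified) Chebyshev alternatives.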
- Monday, 29th April, 2019
Title: Learning Causality and Learning Based on Causality
Speaker: Dr Zhang Kun, Department of Philosophy, Carnegie Mellon University, Pittsburgh, US
Time/Place: 14:00 - 15:00
FSC1111, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Can we find causal structure by statistical analysis of observational data? How can we find the causal direction between two variables? How can we make optimal predictions in the presence of distribution shift? We are often faced with such causal modeling or prediction problems. Recently, with the rapid accumulation of huge volumes of data, both causal discovery, i.e., learning causal information from purely observational data, and machine learning are seeing exciting opportunities as well as great challenges. This talk will focus on recent advances in causal discovery and on how causal information facilitates understanding and solving certain problems of learning from heterogeneous data. In particular, I will talk about basic approaches to causal discovery and address practical issues including data heterogeneity and the presence of measurement error. Finally, I will discuss why and how underlying causal knowledge helps in learning from heterogeneous data when the i.i.d. assumption is dropped, with transfer learning as a particular example.
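As a loose illustration of the second question above (the causal direction between two variables), here is a toy sketch of the additive-noise idea, not the speaker's method: the data-generating model, the polynomial regression, and the heteroscedasticity score are all invented for this example. If y = f(x) + noise with noise independent of x, residuals of a regression in the causal direction have a spread that does not depend on the input, while the anti-causal direction typically shows input-dependent residual spread.

```python
import numpy as np

# Synthetic data with known ground truth: x causes y.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 2000)
y = x**3 + rng.normal(0.0, 1.0, 2000)

def residual_score(cause, effect, deg=5):
    """Fit a polynomial regression of effect on cause and measure how much
    the residual variance differs between small-|cause| and large-|cause|
    samples (a crude heteroscedasticity proxy for residual dependence)."""
    coef = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coef, cause)
    low = np.abs(cause) < np.median(np.abs(cause))
    return abs(np.var(resid[low]) - np.var(resid[~low]))

score_xy = residual_score(x, y)  # candidate direction x -> y
score_yx = residual_score(y, x)  # candidate direction y -> x
print(score_xy < score_yx)       # the true causal direction scores lower here
```

Practical causal-discovery methods replace this crude variance-difference score with a proper independence test between residual and input (e.g. a kernel-based test), which is closer to what such talks cover.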