Abstract:
What engineers call 'partial realization' is known to mathematicians and physicists as (matrix) Padé approximation at ∞ of the transfer function H(s) = C(sI − A)⁻¹B of a linear time-invariant system. The approach is also known as moment (or Markov parameter) matching. Traditionally, this matching has been achieved by solving a Hankel or block Hankel system of linear equations, typically by fast recursive algorithms. But already in the mid-1950s Rutishauser became aware of the ill-conditioning of this moment matching, which hampered his qd algorithm. So, for computing the continued fraction of a scalar transfer function H, he suggested applying the Lanczos algorithm, and for computing the poles of H he would subsequently apply the progressive form of his qd algorithm, which is the same as applying his LR algorithm to the tridiagonal matrix of the Lanczos recurrence coefficients. So, in the scalar case, the computation of Padé approximations at ∞ was introduced nearly 40 years before Padé-via-Lanczos (PVL) became widely used in the control community following the publications of Feldmann and Freund (1994, 1995). In the 1970s and 1980s such applications of the Lanczos algorithm were also much promoted by G.H. Golub and W.B. Gragg. However, all these algorithms can break down if A is not Hermitian positive definite (Hpd).
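As a toy illustration of the Hankel-system approach to moment matching described above, the following sketch recovers the poles of a small SISO system directly from its Markov parameters. The test system (a diagonal A with known poles {1, 2, 3}) and all numerical details are our own illustrative choices, not taken from the text.

```python
import numpy as np

# Hypothetical SISO test system with known poles {1, 2, 3}.
A = np.diag([1.0, 2.0, 3.0])
B = np.ones((3, 1))
C = np.ones((1, 3))
n = A.shape[0]

# Markov parameters (moments at infinity): m_k = C A^k B.
m = [float(C @ np.linalg.matrix_power(A, k) @ B) for k in range(2 * n)]

# Moment matching via a Hankel system: the monic denominator
# s^n + a_{n-1} s^{n-1} + ... + a_0 of the degree-n Padé approximant
# at infinity satisfies  sum_j a_j m_{k+j} = -m_{k+n},  k = 0..n-1.
# This direct solve is the classical (ill-conditioned) approach.
H = np.array([[m[i + j] for j in range(n)] for i in range(n)])
rhs = -np.array([m[n + k] for k in range(n)])
a = np.linalg.solve(H, rhs)

# The denominator's roots are the poles of H(s), i.e. the eigenvalues of A.
poles = np.sort(np.roots(np.concatenate(([1.0], a[::-1]))))
print(poles)  # → approximately [1. 2. 3.]
```

For this tiny, well-separated spectrum the Hankel solve is harmless; the ill-conditioning Rutishauser observed shows up as n grows or as the moments decay or grow rapidly.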
Another efficient but unstable alternative to solving a Hankel system for moment matching had been known long before: the Chebyshev algorithm (1859), which, in fact, can also be viewed as a fast Hankel solver providing the recursions of the corresponding orthogonal polynomials. In the 1960s Gautschi linked the instability of the Chebyshev algorithm to the ill-conditioning of the moment matching, and he also showed that the so-called modified Chebyshev algorithm of Sack and Donovan (1972) and Wheeler (1974) may behave much better.
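A minimal sketch of the classical (unmodified) Chebyshev algorithm may help: it maps the ordinary moments ν_k = ∫ x^k dμ(x) to the three-term recurrence coefficients (α_k, β_k) of the orthogonal polynomials of μ. The modified variant replaces the powers x^k by auxiliary polynomials with a known recurrence and adds the corresponding terms. The Legendre test measure and exact rational arithmetic (which sidesteps the ill-conditioning for this tiny example) are our own choices, not from the text.

```python
from fractions import Fraction

# Classical Chebyshev algorithm (1859): moments nu_0..nu_{2n-1}
# -> recurrence coefficients alpha_0..alpha_{n-1}, beta_0..beta_{n-1}.
def chebyshev_algorithm(nu, n):
    sigma = [[Fraction(0)] * len(nu) for _ in range(n)]  # sigma[k][l]
    sigma[0] = [Fraction(x) for x in nu]
    alpha = [sigma[0][1] / sigma[0][0]]
    beta = [sigma[0][0]]
    for k in range(1, n):
        for l in range(k, 2 * n - k):
            sigma[k][l] = (sigma[k - 1][l + 1]
                           - alpha[k - 1] * sigma[k - 1][l]
                           - beta[k - 1] * (sigma[k - 2][l] if k >= 2 else 0))
        alpha.append(sigma[k][k + 1] / sigma[k][k]
                     - sigma[k - 1][k] / sigma[k - 1][k - 1])
        beta.append(sigma[k][k] / sigma[k - 1][k - 1])
    return alpha, beta

# Moments of the Legendre measure dμ = dx on [-1, 1]: nu_k = 2/(k+1), k even.
n = 4
nu = [Fraction(2, k + 1) if k % 2 == 0 else Fraction(0) for k in range(2 * n)]
alpha, beta = chebyshev_algorithm(nu, n)
# alpha: all zero (symmetric measure); beta: 2, 1/3, 4/15, 9/35.
print(alpha, beta)
```

In floating point the same recursion amplifies errors in the ν_k roughly by the condition number of the underlying moment map, which is the instability Gautschi analyzed.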
However, the modified Chebyshev algorithm can also break down in the same way the nonsymmetric Lanczos algorithm can break down, because it produces the same continued fraction and the same Padé approximants.
In 1990, Golub and Gutknecht came up with a version of this modified Chebyshev algorithm that could overcome breakdowns in exact arithmetic. However, unlike the look-ahead Lanczos algorithm, this 'reliable' or 'non-generic' modified Chebyshev algorithm does not remain stable in the case of near-breakdowns in finite-precision arithmetic, and its extension to a look-ahead algorithm is not at all straightforward. The first aim of our renewed interest in this area was to fill this long-standing gap. Achieving it turned out to be a bit tricky, but simpler than expected. The resulting look-ahead modified Chebyshev algorithm generates (in the scalar, SISO case) the same sequence of block tridiagonal upper Hessenberg matrices as the look-ahead Lanczos algorithm. These matrices are Petrov–Galerkin projections of A.
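To make the projection property concrete, here is a minimal symmetric Lanczos sketch (no look-ahead, no reorthogonalization): after k steps the tridiagonal T_k is the Galerkin projection of A onto the Krylov space span{b, Ab, ..., A^{k-1}b}, and its eigenvalues (Ritz values) approximate eigenvalues of A. The test matrix and starting vector are our own arbitrary choices.

```python
import numpy as np

def lanczos(A, b, k):
    # Plain symmetric Lanczos: returns the k-by-k tridiagonal T_k.
    n = len(b)
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = b / np.linalg.norm(b)
    q_prev = np.zeros(n)
    for j in range(k):
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

A = np.diag(np.arange(1.0, 11.0))   # spd test matrix, eigenvalues 1..10
T = lanczos(A, np.ones(10), 10)
ritz = np.sort(np.linalg.eigvalsh(T))
print(ritz)  # close to 1, 2, ..., 10 up to roundoff
```

In the nonsymmetric (two-sided) case the projection is oblique, i.e. Petrov–Galerkin rather than Galerkin, and it is exactly there that breakdowns and the need for look-ahead (and block tridiagonal T_k) arise.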
Other challenges remain: what about the MIMO case? What about rational interpolation instead of Padé approximation at ∞? Moreover: what about applications? Golub's main intention, realized in the PhD thesis of Mark Kent (1989), was to use the modified Chebyshev algorithm first to approximate the extremal eigenvalues of an spd matrix A and then to use this information to determine the parameters of the Chebyshev iteration, a classical Krylov solver for matrices with positive real spectrum. Today, computing approximate eigenvalues with the modified Chebyshev algorithm may still be competitive, in particular since the algorithm is naturally communication-avoiding and not restricted to the Hermitian case. Can it be made more stable than Lanczos? As we indicated, this modified Chebyshev algorithm can also be applied to model reduction by partial realization. Yet other applications may come into focus, for example, using the resulting spectral information for the augmentation and deflation of Krylov subspaces.
