Accepted Minisymposia
 
 
Advances in High-Performance Sparse Matrix Computation (4 talks)
Mathias Jacquelin, Esmond Ng

Summary: This minisymposium will focus on advances in the solution of sparse systems of linear equations. We will consider recent work in both direct and iterative methods, with an emphasis on performance and scalability on current and future computer architectures. We will also discuss the roles of such solvers in the solution of large-scale scientific problems.
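
As a minimal illustration of the direct/iterative contrast discussed above (a sketch assuming SciPy, with a 1-D Laplacian standing in for a genuinely large sparse system), the same SPD system can be solved by a sparse LU factorization and by the conjugate gradient method:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1-D Poisson matrix: tridiagonal, symmetric positive definite
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct method: sparse LU factorization followed by triangular solves
x_direct = spla.splu(A).solve(b)

# Iterative method: conjugate gradient on the same SPD system
x_iter, info = spla.cg(A, b, maxiter=1000)
```

The trade-off between factorization cost, memory footprint, and iteration count is exactly what scalable solvers on modern architectures must negotiate.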

  • Towards highly scalable asynchronous sparse solvers for symmetric matrices
    Mathias Jacquelin
  • Fine-grained Parallel Incomplete LU Factorization
    Edmond Chow
  • Approximate sparse matrix factorization using low-rank compression
    Pieter Ghysels
  • Hiding latencies and avoiding communications in Krylov solvers
    Wim Vanroose
Advances in preconditioning and iterative methods (4 talks)
Jennifer Pestana, Alison Ramage

As mathematical models become increasingly complex, efficiently solving large sparse linear systems remains a key concern in many applications. Iterative solvers are often the method of choice, in which case effective preconditioners are usually required. In this minisymposium, speakers will present recent advances in iterative methods and preconditioners.
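
As a small, hedged illustration of the theme (not any particular speaker's method): an incomplete LU factorization from SciPy used as a preconditioner for GMRES on a nonsymmetric tridiagonal test system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric tridiagonal test system (a stand-in for a convection-diffusion matrix)
n = 100
A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization, wrapped as a linear operator M ~ A^{-1}
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

An effective preconditioner clusters the spectrum of the preconditioned operator, so the Krylov iteration converges in far fewer steps than on the raw system.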

  • Symmetrizing nonsymmetric Toeplitz matrices in fractional diffusion problems
    Jennifer Pestana
  • An Approximate Inverse-Based Preconditioner for Incompressible Magnetohydrodynamics
    Michael Wathen
  • Commutator Based Preconditioning for Incompressible Two-Phase Flow
    Niall Bootland
  • The efficient solution of linear algebra subproblems arising in optimization methods
    Tyrone Rees
Asynchronous Iterative Methods in Numerical Linear Algebra and Optimization (4 talks)
Hartwig Anzt, Edmond Chow, Daniel B. Szyld

In asynchronous iterative methods, a processing unit that normally depends on the data computed by other processing units is allowed to proceed even if not all of these other processing units have completed their computations. Originally called Chaotic Relaxation for fixed-point iterations, asynchronous iterative methods have now also been developed for numerical optimization. In this minisymposium, recent research is presented on both the theory and implementation of asynchronous iterative methods.
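
A toy serial simulation of the idea (assuming nothing beyond NumPy): components of a Jacobi-type fixed-point iteration are updated in arbitrary order, each using whatever values of the other components are currently available, and the iteration still converges for a strictly diagonally dominant system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Strictly diagonally dominant system, for which chaotic relaxation converges
# as long as every component keeps being updated
n = 50
A = rng.standard_normal((n, n))
np.fill_diagonal(A, 2.0 * np.abs(A).sum(axis=1))
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(200 * n):
    i = rng.integers(n)   # an arbitrary component updates next...
    # ...using whatever values of the other components are currently available
    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

On real hardware the random update order models processors running at different speeds with no synchronization barriers.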

  • ARock: Asynchronous Parallel Coordinate Updates
    Ming Yan
  • Asynchronous Domain Decomposition Solvers
    Christian Glusa
  • Asynchronous Linear System Solvers on Supercomputers
    Teodor Nikolov
  • Asynchronous Optimized Schwarz Methods for Poisson's Equation in Rectangular Domains
    José Garay
Constrained Low-Rank Matrix and Tensor Approximations (8 talks)
Grey Ballard, Ramakrishnan Kannan, Haesun Park

Constrained low rank matrix and tensor approximations are extremely useful in large-scale data analytics with applications across data mining, signal processing, statistics, and machine learning. Tensors are multidimensional arrays, or generalizations of matrices to more than two dimensions. The talks in this minisymposium will span various matrix and tensor decompositions and discuss applications and algorithms, as well as available software, with a particular focus on computing solutions that satisfy application-dependent constraints.
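
One concrete instance of a constrained low-rank approximation is nonnegative matrix factorization (NMF). The following sketch (illustrative only, assuming NumPy; not the speakers' algorithms) runs the classical Lee–Seung multiplicative updates, which keep both factors nonnegative by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonnegative data with an exact rank-r nonnegative factorization
m, n, r = 60, 40, 5
X = rng.random((m, r)) @ rng.random((r, n))

# Lee-Seung multiplicative updates: W and H stay nonnegative by construction
W = rng.random((m, r))
H = rng.random((r, n))
eps = 1e-12
for _ in range(1000):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The nonnegativity constraint is what makes the factors interpretable as additive parts, at the price of a nonconvex optimization problem.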

  • Constrained Low Rank Approximations for Scalable Data Analytics
    Haesun Park
  • Tensor decompositions for big multi-aspect data analytics
    Evangelos Papalexakis
  • Speeding Up Tensor Contractions through Extended BLAS Kernels
    Animashree Anandkumar
  • Scalable & Constrained PARAFAC2 for Sparse Data
    Ioakeim Perros
  • PLANC: Software for Parallel Low-rank Approximation with Nonnegative Constraints
    Ramakrishnan Kannan
  • Efficient CP-ALS and Reconstruction from CP Form
    Jed Duersch
  • Non-negative Sparse Tensor Decomposition on Distributed Systems
    Jiajia Li
  • Communication-Optimal Algorithms for CP Decompositions of Dense Tensors
    Grey Ballard
Coupled matrix and tensor decompositions: Theory and methods (3 talks)
Dana Lahat

Matrices and higher-order arrays, also known as tensors, are natural structures for data representation, and their factorizations in low-rank terms are fundamental tools in data analysis. In recent years, there has been increasing interest in more elaborate data structures and coupled decompositions that provide more efficient ways to exploit the various types of diversity and structure in a single dataset or in an ensemble of possibly heterogeneous linked datasets. Such data arise in multidimensional harmonic retrieval, biomedical signal processing, and social network analysis, to name a few. However, understanding these new types of decompositions necessitates the development of new analytical and computational tools. In this minisymposium, we present several different frameworks that provide new insights into some of these types of coupled matrix and tensor decompositions. We show how the concept of irreducibility, borrowed from representation theory, is related to the uniqueness of coupled decompositions in low-rank terms, as well as to coupled Sylvester-type matrix equations. We compare the gain that can be achieved from computing coupled CP decompositions of tensors in a semi-algebraic framework, in several scenarios. Finally, we study connections between different tensorization approaches that are based on decoupling multivariate polynomials. We discuss advantages and drawbacks of these approaches, as well as their potential applications.

  • Understanding the uniqueness of decompositions in low-rank block terms using Schur's lemma on irreducible representations
    Dana Lahat
  • What can we gain from coupled SECSI in the rank 2 case? An analytical first-order performance analysis
    Yao Cheng
  • Decoupling multivariate polynomials: comparing different tensorization methods
    Philippe Dreesen
Discovery from Data (12 talks)
Theodore E. Schomay, Katherine A. Aiello, Sri Priya Ponnapalli, Orly Alter

The number of large-scale high-dimensional datasets recording different aspects of interrelated phenomena is growing, accompanied by a need for mathematical frameworks for discovery from data arranged in structures more complex than that of a single matrix. In the three sessions of this minisymposium we will present recent studies demonstrating "Discovery from Data," in "I: Systems Biology," and "II: Personalized Medicine," by developing and using the mathematics of "III: Tensors."

  • Patterns of DNA Copy-Number Alterations Revealed by the GSVD and Tensor GSVD Encode for Cell Transformation and Predict Survival and Response to Platinum in Adenocarcinomas
    Sri Priya Ponnapalli
  • Systems Biology of Drug Resistance in Cancer
    Antti Hakkinen
  • Single-Cell Entropy for Estimating Differentiation Potency in Waddington's Epigenetic Landscape
    Andrew E. Teschendorff
  • Dimension Reduction for the Integrative Analysis of Multilevel Omics Data
    Gerhard G. Thallinger
  • Mathematically Universal and Biologically Consistent Astrocytoma Genotype Encodes for Transformation and Predicts Survival Phenotype
    Katherine A. Aiello
  • Statistical Methods for Integrative Clustering Analysis of Multi-Omics Data
    Qianxing Mo
  • Structured Convex Optimization Method for Orthogonal Nonnegative Matrix Factorization with Applications to Gene Expression Data
    Junjun Pan
  • Mining the ECG Using Low Rank Tensor Approximations with Applications in Cardiac Monitoring
    Sabine Van Huffel
  • Tensor Higher-Order GSVD: A Comparative Spectral Decomposition of Multiple Column-Matched but Row-Independent Large-Scale High-Dimensional Datasets
    Theodore E. Schomay
  • The GSVD: Where are the Ellipses?
    Alan Edelman
  • Tensor convolutional neural networks (tCNN): Improved featurization using high-dimensional frameworks
    Elizabeth Newman
  • Three-way Generalized Canonical Correlation Analysis
    Arthur Tenenhaus
Domain decomposition methods for heterogeneous and large-scale problems (8 talks)
Eric Chung, Hyea Hyun Kim

Many applications involve the solution of coupled heterogeneous systems, and the resulting discretizations give huge linear or nonlinear systems of equations, which are in general expensive to compute. One popular and efficient approach is the domain decomposition method. While the method is successful for many problems, there are still many challenges in applying domain decomposition methods to heterogeneous and multiscale problems. In this minisymposium, we will review some recent advances in the use of domain decomposition and related methods to solve complex heterogeneous and large-scale problems.

  • Fast solvers for multiscale problems
    Hyea Hyun Kim
  • Space-Time Schwarz Preconditioning and Applications
    Xiao-Chuan Cai
  • Robust BDDC and FETI-DP methods in PETSc
    Stefano Zampini
  • A GMsFEM based domain decomposition method for linear elasticity in high contrast media
    Yanfang Yang
  • A Parareal Algorithm for Coupled Systems Arising from Optimal Control Problems
    Felix Kwok
  • Convergence of Adaptive Weak Galerkin Finite Element Methods
    Liuqiang Zhong
  • A nonoverlapping DD method for an interior penalty method
    Eun-Hee Park
  • A two-grid preconditioner for flow simulations in highly heterogeneous media with an adaptive coarse space
    Shubin Fu
Efficient Kernel Methods and Numerical Linear Algebra (8 talks)
Evgeny Burnaev, Ivan Oseledets

Summary: Despite their theoretical appeal and grounding in tractable convex optimization techniques, kernel methods are often not the first choice for large-scale machine learning applications because of their significant memory requirements and computational expense. It is thus not surprising that recent progress in many machine learning tasks has been driven mainly by advances in deep learning. In recent years, however, elegant mechanisms for scaling up kernel methods (such as randomized approximate feature maps) have emerged from computational mathematics and applied linear algebra, indicating that kernel methods are not dead and could match or even outperform deep nets. Tackling this challenging area calls for new approaches at the intersection of numerical linear algebra and kernel methods. The purpose of this minisymposium is therefore to bring together experts in modern machine learning and scientific computing to discuss current results in numerical approximation and its use in scaling up kernel methods, as well as potential areas of application. The emphasis is on original theoretical and algorithmic developments, but interesting application results are welcome as well.
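
The randomized approximate feature maps mentioned above can be illustrated with random Fourier features for the Gaussian kernel (a sketch assuming NumPy; the bandwidth, dimensions, and feature count are arbitrary choices, not from any talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features approximating the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / 2) by an inner product of D-dimensional features
d, D, n = 5, 2000, 30
X = rng.standard_normal((n, d))

W = rng.standard_normal((D, d))                   # frequencies sampled from N(0, I)
phase = rng.uniform(0.0, 2.0 * np.pi, D)          # random phases
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + phase)    # feature map, shape (n, D)

K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K_approx = Z @ Z.T
max_err = np.abs(K_exact - K_approx).max()
```

With such features, kernel methods can be trained with linear-model machinery in the D-dimensional feature space instead of forming the full n-by-n kernel matrix.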

  • Overview of Large Scale Kernel Methods
    Evgeny Burnaev
  • Kernel Methods and Tensor Decompositions
    Ivan Oseledets
  • Quadrature-based features for kernel approximation
    Ermek Kapushev
  • Convergence Analysis of Deterministic Kernel-based Quadrature Rules in Sobolev Spaces
    Motonobu Kanagawa
  • Sequential Sampling for Kernel Matrix Approximation and Online Learning
    Michal Valko
  • Tradeoffs of Stochastic Approximation in Hilbert Spaces
    Aymeric Dieuleveut
  • Deep Kernel Learning and Structured Kernel Interpolation
    Andrew Gordon Wilson
  • Kernel Methods for Causal Inference
    Krikamol Muandet
Exploiting Low-Complexity Structures in Data Analysis: Theory and Algorithms (8 talks)
Ju Sun, Ke Wei

Summary: Low-complexity structures are central to modern data analysis --- they are exploited to tame data dimensionality, to rescue ill-posed problems, and to ease and speed up hard numerical computation. In this line, the past decade has featured remarkable advances in the theory and practice of estimating sparse vectors or low-rank matrices from few linear measurements. Looking ahead, there are numerous fundamental problems in data analysis that come with more complex data formation processes. For example, the dictionary learning and blind deconvolution problems have intrinsic bilinear structures, whereas the phase retrieval problem and its variants pertain to quadratic measurements. Moreover, many of these applications can be naturally formulated as nonconvex optimization problems, which worst-case theory deems hard. In practice, however, simple numerical methods are surprisingly effective in solving them, and partial explanations of this curious gap have started to appear very recently. This minisymposium highlights the intersection between numerical linear algebra/numerical optimization and the mathematics of modern signal processing and data analysis. Novel results on both the theoretical and algorithmic sides of exploiting low-complexity structures will be discussed, with an emphasis on addressing the new challenges.

  • When Are Nonconvex Optimization Problems Not Scary?
    Ju Sun
  • The Scaling Limit of Online Lasso, Sparse PCA and Related Algorithms
    Yue M. Lu
  • Accelerated Alternating Projection for Robust Principal Component Analysis
    Jian-Feng Cai
  • Numerical integrators for rank-constrained differential equations
    Bart Vandereycken
  • Foundations of Nonconvex and Nonsmooth Robust Subspace Recovery
    Tyler Maunu
  • Geometry and Algorithm for Sparse Blind Deconvolution
    Yuqian Zhang
  • Convergence of the randomized Kaczmarz method for phase retrieval
    Halyun Jeong
  • Nonconvex Optimization for High-dimensional Learning
    Mahdi Soltanolkotabi
Generalized Inverses and the Linear Least Squares (8 talks)
Dragana Cvetkovic Ilic, Ken Hayami, Yimin Wei

Summary: Within this minisymposium we will consider several current problems concerning generalized inverses: the generalized invertibility of operators, representations of the Drazin inverse, the least squares problem, and the computation of generalized inverses using gradient neural networks and database stored procedures. We will develop the relationship between generalized inverses and the linear least squares problem, with applications in signal processing.
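
For a concrete connection between the two topics: the Moore–Penrose pseudoinverse yields the minimum-norm least squares solution and is characterized by the four Penrose conditions (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# An overdetermined least squares problem min ||A x - b||_2
m, n = 20, 8
A = rng.standard_normal((m, n))   # full column rank with probability 1
b = rng.standard_normal(m)

# The Moore-Penrose pseudoinverse gives the minimum-norm least squares solution
Ap = np.linalg.pinv(A)
x_pinv = Ap @ b
x_lsq = np.linalg.lstsq(A, b, rcond=None)[0]

# A^+ is the unique matrix satisfying the four Penrose conditions
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)
assert np.allclose((Ap @ A).T, Ap @ A)
```

Other generalized inverses, such as the Drazin inverse, relax some of these conditions while preserving others.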

  • Recovery of sparse integer-valued signals
    Xiao-Wen Chang
  • Various properties of operator matrices and their different applications
    Dragana Cvetkovic Ilic
  • Computing time-varying ML-weighted pseudoinverse by the Zhang neural networks
    Sanzheng Qiao
  • GNN and ZNN solutions of linear matrix equations
    Predrag S. Stanimirović
  • Randomized Algorithms for Total Least Squares Problems
    Hua Xiang
  • Randomized Algorithms for Core Problem and TLS problem
    Liping Zhang
  • Condition Numbers of the Multidimensional Total Least Squares Problem
    Bing Zheng
  • Fast solution of nonnegative matrix factorization via a matrix-based active set method
    Ning Zheng
Iterative Solvers for Parallel-in-Time Integration (4 talks)
Xiao-Chuan Cai, Hans De Sterck

Due to stagnating processor speeds and increasing core counts, the current paradigm of high performance computing is to achieve shorter computing times by increasing the concurrency of computations. Sequential time-stepping is a computational bottleneck when attempting to implement highly concurrent algorithms, thus parallel-in-time methods are desirable. This minisymposium will present recent advances in iterative solvers for parallel-in-time integration. This includes methods like parareal, multigrid reduction, and parallel space-time methods, with application to linear and nonlinear PDEs of parabolic and hyperbolic type.
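
A minimal serial simulation of the parareal idea for the scalar model problem y' = λy (assumptions: backward Euler as the coarse propagator, and the exact flow standing in for an expensive fine propagator):

```python
import numpy as np

# Parareal for the scalar model problem  y' = lam * y  on [0, 1]
lam = -1.0
N = 10                          # number of time slices
dT = 1.0 / N

def G(y):                       # coarse propagator: one backward Euler step
    return y / (1.0 - lam * dT)

def F(y):                       # fine propagator: exact flow over one slice
    return y * np.exp(lam * dT) # (stands in for many expensive fine steps)

U = np.empty(N + 1)
U[0] = 1.0
for j in range(N):              # serial coarse sweep gives the initial guess
    U[j + 1] = G(U[j])

for k in range(5):              # parareal corrections
    Fk = np.array([F(U[j]) for j in range(N)])  # fine solves: parallel in principle
    Gk = np.array([G(U[j]) for j in range(N)])
    for j in range(N):          # cheap serial coarse sweep with correction
        U[j + 1] = G(U[j]) + Fk[j] - Gk[j]

err = abs(U[-1] - np.exp(lam))  # compare with the exact solution at t = 1
```

The expensive fine solves within one correction are independent across slices, which is where the parallelism in time comes from.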

  • Space-Time Schwarz Preconditioning and Applications
    Xiao-Chuan Cai
  • Parallel-in-Time Multigrid with Adaptive Spatial Coarsening for the Linear Advection and Inviscid Burgers Equations
    Hans De Sterck
  • On the convergence of PFASST
    Matthias Bolten
  • Waveform Relaxation with Adaptive Pipelining (WRAP)
    Felix Kwok
Large-scale eigenvalue problems and applications (11 talks)
Haizhao Yang, Yingzhou Li

The eigenvalue problem is an essential and computationally intensive part of many applications in a variety of areas, including electronic structure calculation, dynamical systems, and machine learning. In all these areas, efficient algorithms for solving large-scale eigenvalue problems are in great demand, and many novel scalable eigensolvers have recently been developed to meet this demand. The choice of an eigensolver depends strongly on the properties and structure of the application. This minisymposium invites eigensolver developers to discuss the applicability and performance of their new solvers. The ultimate goal is to assist computational specialists with the proper choice of eigensolvers for their applications.

  • Interior Eigensolver for Sparse Hermitian Definite Matrices Based on Zolotarev's Functions
    Haizhao Yang
  • The ELSI Infrastructure for Large-Scale Electronic Structure Theory
    Volker Blum
  • Recent Progress on Solving Large-scale Eigenvalue Problems in Electronic Structure Calculations
    Chao Yang
  • Polynomial and rational filtering, spectrum slicing and EVSL package
    Yuanzhe Xi
  • The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) through the lens of inexact power iteration
    Zhe Wang
  • A FEAST Algorithm with oblique projection for generalized eigenvalue problems
    Guojian Yin
  • Real eigenvalues in linear viscoelastic oscillators
    Heinrich Voss
  • Error bounds for Ritz vectors and approximate singular vectors
    Yuji Nakatsukasa
  • Consistent symmetric greedy coordinate descent method
    Yingzhou Li
  • On the accuracy of fast structured eigenvalue solutions
    Jimmy Vogel
  • Generation of large sparse test matrices to aid the development of large-scale eigensolvers
    Peter Tang
Large-scale matrix and tensor optimization (4 talks)
Yangyang Xu

Summary: Matrix and tensor optimization problems naturally arise from applications that involve two-dimensional or multi-dimensional array data, such as social network analysis, neuroimaging, and the Netflix recommendation system. Unfolding the matrix or tensor variable and/or data into a vector may destroy the intrinsic structure, so it is important to retain the matrix and tensor format. This minisymposium includes talks about recently proposed models and algorithms, with complexity analysis, for large-scale matrix and tensor optimization.

  • Greedy method for orthogonal tensor decomposition
    Yangyang Xu
  • Second Order Sparsity and Large Scale Matrix Optimization
    Defeng Sun
  • Vector Transport-Free SVRG with General Retraction for Riemannian Optimization: Complexity Analysis and Practical Implementation
    Bo Jiang
  • On conjugate partial-symmetric complex tensors
    Bo Jiang
Low Rank Matrix Approximations with HPC Applications (8 talks)
Hatem Ltaief, David Keyes

Low-rank matrix approximations have demonstrated attractive theoretical bounds, both in memory footprint and arithmetic complexity. In fact, they have even become numerical methods of choice when designing high performance applications, especially when looking at the forthcoming exascale era, where systems with billions of threads will be routine resources at hand. This minisymposium aims at bringing together experts from the field to assess the software adaptation of low-rank matrix computations into HPC applications.
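
The memory and arithmetic gains mentioned above can be sketched with a randomized range finder in the Halko–Martinsson–Tropp style (illustrative NumPy code under a stated exact-rank assumption, not any speaker's software):

```python
import numpy as np

rng = np.random.default_rng(3)

# Exactly rank-k data matrix
m, n, k = 300, 200, 10
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

# Randomized range finder: sample the range of A with a Gaussian test matrix
Omega = rng.standard_normal((n, k + 5))        # slight oversampling
Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal basis, m x (k+5)

# Project to the small space and take an exact SVD there
U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
U = Q @ U_small

A_k = (U[:, :k] * s[:k]) @ Vt[:k]              # rank-k approximation
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
```

The dominant cost is a handful of tall-skinny matrix products, which map well onto massively parallel hardware.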

  • Fast Low-Rank Solvers for HPC Applications on Massively Parallel Systems
    Hatem Ltaief
  • GOFMM: A geometry-oblivious fast multipole method for approximating arbitrary SPD matrices
    George Biros
  • A Parallel Implementation of a High Order Accurate Variable Coefficient Helmholtz Solver
    Natalie Beams
  • Low-Rank Matrix Approximations for Oil and Gas HPC Applications
    Issam Said
  • STARS-H: a Hierarchical Matrix Market within an HPC Framework
    Alexandr Mikhalev
  • Matrix-free construction of HSS representations using adaptive randomized sampling
    Sherry Li
  • Low Rank Approximations of Hessians for PDE Constrained Optimization
    George Turkiyyah
  • Simulations for the European Extremely Large Telescope using Low-Rank Matrix Approximations
    Damien Gratadour
Low-Rank and Toeplitz-Related Structures in Applications and Algorithms (8 talks)
Stefano Serra-Capizzano, Eugene Tyrtyshnikov

The minisymposium is focused on structured matrix analysis, with the special target of shedding light on low-rank and Toeplitz-related structures. On sufficiently regular domains, certain combinations of such matrix objects, weighted with proper diagonal sampling matrices, suffice to describe in great generality approximations of integro-differential operators with variable coefficients obtained by (virtually) any type of discretization technique (finite elements, finite differences, isogeometric analysis, finite volumes, etc.). The considered topics and the young age of the speakers are aimed at fostering contacts between PhD students, postdocs, and young researchers, with a balanced choice of talks intended to improve collaboration between analysis and applied research, to show connections among different methodologies, and to use the applications as a challenge in the search for more advanced algorithms.
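
A basic computational consequence of Toeplitz structure (a sketch assuming NumPy/SciPy): embedding a Toeplitz matrix in a circulant one lets the FFT apply it in O(n log n):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(4)

# A Toeplitz matrix is defined by its first column and first row
n = 128
col = rng.standard_normal(n)
row = rng.standard_normal(n)
row[0] = col[0]
x = rng.standard_normal(n)

# Embed T in a 2n-by-2n circulant, which the FFT diagonalizes: O(n log n) product
c = np.concatenate([col, [0.0], row[:0:-1]])   # circulant's first column
y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
y_fast = y[:n].real

y_dense = toeplitz(col, row) @ x               # O(n^2) dense reference
```

This fast matrix-vector product is what makes Krylov and multilevel methods attractive for Toeplitz-structured discretizations.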

  • Multilinear and Linear Structures in Theory and Algorithms
    Eugene Tyrtyshnikov
  • Generalized Locally Toeplitz Sequences: a Link between Measurable Functions and Spectral Symbols
    Giovanni Barbarino
  • On the study of spectral properties of matrix sequences
    Stanislav Morozov
  • Cross method accuracy estimates in consistent norms
    Alexander Osinsky
  • Spectral and convergence analysis of the discrete Adaptive Local Iterative Filtering method by means of Generalized Locally Toeplitz sequences
    Antonio Cicone
  • Asymptotic expansion and extrapolation methods for the fast computation of the spectrum of large structured matrices
    Sven-Erik Ekstrom
  • Isogeometric analysis for 2D and 3D curl-div problems: Spectral symbols and fast iterative solvers
    Hendrik Speleers
  • Rissanen-like algorithm for block Hankel matrices in linear storage
    Ivan Timokhin
Machine Learning: theory and practice (4 talks)
Haixia Liu, Yuan Yao

Summary: Machine learning is experiencing a period of rising impact on many areas of science and engineering, such as imaging, advertising, genetics, robotics, and speech recognition. At the same time, it has deep roots in various areas of mathematics, from optimization and approximation theory to statistics. This mini-symposium aims to bring together researchers working on different aspects of machine learning to discuss state-of-the-art developments in theory and practice. It consists of four talks, covering fast algorithms for solving linear inequalities, genetic data analysis, and the theory and practice of deep learning.

  • Approximation of inconsistent systems of linear inequalities: fast solvers and applications
    Mila Nikolova
  • Theory of Distributed Learning
    Ding-Xuan Zhou
  • Scattering Transform for the Analysis and Classification of Art Images
    Roberto Leonarduzzi
  • TBA
    Can Yang
Matrix Functions and their Applications (8 talks)
Andreas Frommer, Kathryn Lund, Massimiliano Fasi

Matrix functions are an important tool in many areas of scientific computing. They arise in the solution of differential equations, as the exponential, sine, or cosine; in graph and network analysis, as measurements of communicability and betweenness; and in lattice quantum chromodynamics, as the sign of the Dirac overlap operator. They also have many applications in statistics, theoretical physics, control theory, and machine learning. Methods for computing matrix functions times a vector encompass a variety of numerical linear algebra tools, such as Gauss quadrature, Krylov subspaces, rational and polynomial approximations, and singular value decompositions. Furthermore, many numerical analysis tools are used for analyzing the convergence and stability of these methods, as well as the condition number of $f(A)$ and decay bounds of its entries. Given the rapid expansion of the literature on matrix functions in the last few years, this minisymposium fills an ongoing need to present and discuss state-of-the-art techniques pertaining to matrix functions, their analysis, and applications.
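
The Krylov-subspace approach to f(A)b mentioned above can be sketched as follows (illustrative NumPy/SciPy code for f = exp; the standard Arnoldi approximation is ||b|| V_m f(H_m) e_1, and the test matrix here is an arbitrary mild-spectrum example):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)

# Arnoldi approximation  f(A) b  ~  ||b|| * V_m @ f(H_m) @ e_1,  here f = exp
n, m = 200, 30
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
beta = np.linalg.norm(b)
V[:, 0] = b / beta
for j in range(m):                    # Arnoldi with modified Gram-Schmidt
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

fAb_krylov = beta * (V[:, :m] @ expm(H[:m])[:, 0])
fAb_exact = expm(A) @ b
rel_err = np.linalg.norm(fAb_krylov - fAb_exact) / np.linalg.norm(fAb_exact)
```

Only the small m-by-m function evaluation f(H_m) is needed, so f(A) itself is never formed.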

  • A harmonic Arnoldi method for computing the matrix function $f(A)v$
    Zhongxiao Jia
  • Superlinear convergence for matrix functions times a vector
    Bernhard Beckermann
  • A new framework for understanding block Krylov methods applied to matrix functions
    Kathryn Lund
  • Bounds for the decay of the entries in inverses and Cauchy-Stieltjes functions of certain sparse normal matrices
    Claudia Schimmel
  • Matrix Means for Signed and Multi-layer Graphs Clustering
    Pedro Mercado Lopez
  • A Daleckii--Krein formula for the Fréchet derivative of SVD-based matrix functions
    Vanni Noferini
  • Computing matrix functions in arbitrary precision
    Massimiliano Fasi
  • Matrix function approximation for computational Bayesian statistics
    Markus Hegland
Matrix Optimization and Applications (12 talks)
Xin Liu, Ting Kei Pong

In this session, we focus on optimization problems with matrix variables, including semidefinite programming problems, low rank matrix completion / decomposition problems, and orthogonal constrained optimization problems, etc. These problems arise in various applications such as bio-informatics, data analysis, image processing and materials science, and are also abundant in combinatorial optimization.
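
A building block common to several of these matrix optimization problems is the proximal operator of the nuclear norm, singular value soft-thresholding (an illustrative NumPy sketch, not any speaker's method):

```python
import numpy as np

rng = np.random.default_rng(8)

def svt(Y, tau):
    """Prox of tau * ||.||_* at Y: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

Y = rng.standard_normal((20, 15))
s = np.linalg.svd(Y, compute_uv=False)

X = svt(Y, tau=s[4])              # threshold at the 5th singular value
rank_X = np.linalg.matrix_rank(X)
```

Applied inside a proximal gradient loop, this step promotes low-rank iterates in matrix completion and related problems.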

  • Faster Riemannian Optimization using Randomized Preconditioning
    Haim Avron
  • Proximal gradient method and nonsmooth convex regression with cardinality penalty
    Wei Bian
  • Implementation of an ADMM-type first-order method for convex composite conic programming
    Liang Chen
  • Relationship between three sparse optimization problems for multivariate regression
    Xiaojun Chen
  • Euclidean distance embedding for collaborative position localization with NLOS mitigation
    Chao Ding
  • An exact penalty method for semidefinite-box constrained low-rank matrix optimization problems
    Tianxiang Liu
  • A parallelizable algorithm for orthogonally constrained optimization problems
    Xin Liu
  • A non-monotone alternating updating method for a class of matrix factorization problems
    Ting Kei Pong
  • Quadratic Optimization with Orthogonality Constraint: Explicit Lojasiewicz Exponent and Linear Convergence of Retraction-Based Line-Search and Stochastic Variance-Reduced Gradient Methods
    Anthony Man-Cho So
  • Algebraic properties for eigenvalue optimization
    Yangfeng Su
  • Local Geometry of Matrix Completion
    Ruoyu Sun
  • Quantum correlations: Conic formulations and matrix factorizations
    Antonios Varvitsiotis
Nonlinear Eigenvalue Problems and Applications (8 talks)
Meiyue Shao, Roel Van Beeumen

Eigenvalue problems arise in many fields of science and engineering, and the mathematical properties and numerical solution methods for standard, linear eigenvalue problems are well understood. Recent advances in several application areas resulted in a new type of eigenvalue problem---the nonlinear eigenvalue problem, $A(\lambda)x = 0$---which exhibits nonlinearity in the eigenvalue parameter. The nonlinear eigenvalue problem has received increasing attention from the numerical linear algebra community during the last decade, but so far the majority of the work has focused on polynomial eigenvalue problems. In this minisymposium we will address the general nonlinear eigenvalue problem involving nonlinear functions such as exponential, rational, and irrational ones. Recent literature on numerical methods for solving these general nonlinear eigenvalue problems can, roughly speaking, be subdivided into three main classes: Newton-based techniques, Krylov subspace methods applied to linearizations, and contour integration and rational filtering methods. Within this minisymposium we would like to address all three classes as used to solve large-scale nonlinear eigenvalue problems in different application areas.
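
Among the Newton-based techniques, the simplest is Newton's method on the augmented system [A(λ)v; c*v - 1] = 0. A small sketch for a delay-type NEP (illustrative only; the matrices, the starting guess from the linearized problem, and the normalization vector are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# A small delay-type nonlinear eigenvalue problem A(lam) x = 0 with
# A(lam) = -lam*I + A0 + A1*exp(-lam)
n = 8
A0 = rng.standard_normal((n, n))
A1 = 0.05 * rng.standard_normal((n, n))
I = np.eye(n)

def A(lam):
    return -lam * I + A0 + A1 * np.exp(-lam)

def dA(lam):                         # derivative of A with respect to lam
    return -I - A1 * np.exp(-lam)

# Start from an eigenpair of the linear part (A1 = 0), then correct with Newton
w, V = np.linalg.eig(A0)
i0 = np.argmin(np.abs(w))
lam, v = w[i0], V[:, i0]
c = v.conj()                         # normalization: c @ v = 1 at the start

for _ in range(20):                  # Newton on [A(lam) v; c @ v - 1] = 0
    J = np.block([[A(lam), (dA(lam) @ v)[:, None]],
                  [c[None, :], np.zeros((1, 1))]])
    r = np.concatenate([A(lam) @ v, [c @ v - 1.0]])
    d = np.linalg.solve(J, -r)
    v = v + d[:n]
    lam = lam + d[n]

res = np.linalg.norm(A(lam) @ v) / np.linalg.norm(v)
```

Krylov and contour-based methods scale this idea to large problems by avoiding repeated dense solves.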

  • Handling square roots in nonlinear eigenvalue problems
    Roel Van Beeumen
  • Solving nonlinear eigenvalue problems using contour integration
    Marc Van Barel
  • Automatic rational approximation and linearization for nonlinear eigenvalue problems
    Karl Meerbergen
  • Robust Rayleigh quotient optimization and nonlinear eigenvalue problems
    Ding Lu
  • Conquering algebraic nonlinearity in nonlinear eigenvalue problems
    Meiyue Shao
  • Solving different rational eigenvalue problems via different types of linearizations
    Froilán M. Dopico
  • NEP-PACK: A Julia package for nonlinear eigenvalue problems
    Emil Ringh
  • A Padé Approximate Linearization for solving nonlinear eigenvalue problems in accelerator cavity design
    Zhaojun Bai
Nonlinear Perron-Frobenius theory and applications (4 talks)
Antoine Gautier, Francesco Tudisco

Nonlinear Perron-Frobenius theory addresses problems such as the existence, uniqueness, and maximality of positive eigenpairs of various types of nonlinear, order-preserving mappings. In recent years, tools from this theory have been successfully exploited to address problems arising from a range of diverse applications and areas, such as graph and hypergraph analysis, machine learning, signal processing, optimization, and spectral problems for nonnegative tensors. This minisymposium samples some recent contributions in this field, covering advances in both the theory and the applications of Perron-Frobenius theory for nonlinear mappings.
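
The classical special case behind much of this theory is the Perron eigenpair of an entrywise positive matrix, computed by power iteration in the positive cone (a NumPy sketch; nonlinear and tensor versions replace the map x ↦ Ax by a more general order-preserving homogeneous map):

```python
import numpy as np

rng = np.random.default_rng(6)

# Power iteration in the positive cone for an entrywise positive matrix:
# converges to the unique positive (Perron) eigenvector
n = 40
A = rng.random((n, n)) + 0.1

x = np.ones(n)
for _ in range(200):
    y = A @ x
    x = y / np.linalg.norm(y)

lam = x @ A @ x        # Rayleigh-quotient estimate of the Perron root
```

Existence, uniqueness, and convergence for the nonlinear generalizations are exactly what the theory above establishes.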

  • Nonlinear Perron-Frobenius theorem and applications to nonconvex global optimization
    Antoine Gautier
  • Node and Layer Eigenvector Centralities for Multiplex Networks
    Francesca Arrigo
  • Inequalities for the spectral radius and spectral norm of nonnegative tensors
    Shmuel Friedland
  • Some results on the spectral theory of hypergraphs
    Jiang Zhou
Numerical Linear Algebra Algorithms and Applications in Data Science (7 talks)
Shuqin Zhang, Limin Li

Summary: Data science is currently one of the hottest research fields, with applications in medicine, business, finance, transportation, and many other areas. Many computational problems arise in the process of data modelling and data analysis, and because data samples are finite-dimensional, most of these problems can be transformed into linear algebra problems. Numerical linear algebra has therefore played an important role in data science. With the fast development of experimental techniques and the growth of internet communications, more and more data are generated nowadays, and the availability of huge amounts of data brings big challenges for traditional computational methods. On the one hand, handling big data matrices (high dimension, large sample size) requires algorithms with high computational speed and accuracy, which raises the problem of improving traditional methods such as SVD methods, the conjugate gradient method, and matrix preconditioning methods. On the other hand, as more data are generated, many new models are being proposed, which creates opportunities for developing novel algorithms. Taking the properties of the data into account to build good models, and designing fast and accurate algorithms for them, will greatly accelerate the development of data science, and numerical linear algebra, as the essential technique for numerical algorithm development, deserves more attention. The speakers in this minisymposium will discuss work arising in data modelling, including multiview data learning, data dimension reduction, data approximation, and stochastic data analysis. The numerical linear algebra methods cover low-dimensional projection, matrix splitting, parallel SVD, the conjugate gradient method, matrix preconditioning, and so on. This minisymposium brings together researchers from different data analysis fields who focus on the development of numerical linear algebra related algorithms. It will emphasize the importance and strengthen the role of linear algebra in data science, thereby advancing collaboration between researchers from different fields.

  • Simultaneous clustering of multiview data
    Shuqin Zhang
  • Uniform Projection for Multi-view Learning
    Limin Li
  • Averaged information splitting for heterogeneously high-throughput data analysis
    Shengxin Zhu
  • A distributed parallel SVD algorithm based on the polar decomposition via Zolotarev's function
    Shengguo Li
  • A modified seasonal grey system model with fractional order accumulation for forecasting traffic flow
    Yang Cao
  • A Riemannian variant of Fletcher-Reeves conjugate gradient method for stochastic inverse eigenvalue problems with partial eigendata
    Zheng-Jian Bai
  • A splitting preconditioner for implicit Runge-Kutta discretizations of a differential-algebraic equation
    Shuxin Miao
Numerical Methods for Ground and Excited State Electronic Structure Calculations (8 talks)
Anil Damle, Lin Lin, Chao Yang

Electronic structure theory and first-principles calculations are among the most challenging and computationally demanding problems in science and engineering. At their core, many of the methods used require the development of efficient and specialized linear algebraic techniques. This minisymposium aims to discuss new developments in the linear algebraic tools, numerical methods, and mathematical analysis used to achieve high levels of accuracy and efficiency in electronic structure theory. We bring together experts on electronic structure theory representing a broad set of computational approaches used in the field.

  • A unified approach to Wannier interpolation
    Anil Damle
  • Potentialities of wavelet formalism towards a reduction of the complexity of large scale electronic structure calculations
    Luigi Genovese
  • Convergence analysis for the EDIIS algorithm
    Tim Kelley
  • A Semi-smooth Newton Method for Solving Semidefinite Programs in Electronic Structure Calculations
    Zaiwen Wen
  • Adaptive compression for Hartree-Fock-like equations
    Michael Lindsey
  • Latest developments of the polarizable continuum model within a domain-decomposition paradigm
    Paolo Gatto
  • Projected Commutator DIIS method for linear and nonlinear eigenvalue problems
    Lin Lin
  • Parallel transport evolution of time-dependent density functional theory
    Dong An
Optimization Methods on Matrix and Tensor Manifolds (8 talks)
Gennadij Heidel, Wen Huang

Riemannian optimization methods are a natural extension of Euclidean optimization methods: the search space is generalized from a Euclidean space to a manifold endowed with a Riemannian structure. This allows for many constrained Euclidean optimization problems to be formulated as unconstrained problems on Riemannian manifolds; the geometric structure can be exploited to provide mathematically elegant and computationally efficient solution methods by using tangent spaces as local linearizations. Many important structures from linear algebra admit a Riemannian manifold structure, such as matrices with mutually orthogonal columns (Stiefel manifold), subspaces of fixed dimension (Grassmann manifold), positive definite matrices, or matrices of fixed rank. The first session of this minisymposium will present some applications of the Riemannian optimization framework, such as blind deconvolution, computation of the Karcher mean, and low-rank matrix learning. It will also present novel results on subspace methods in Riemannian optimization. The second session will be centered on the particular class of low-rank tensor manifolds, which make computations with multiway arrays of large dimension feasible and have attracted particular interest in recent research. It will present novel results on second-order methods on tensor manifolds, such as trust-region or quasi-Newton methods. It will also present results on dynamical approximation of tensor differential equations.
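The summary's description of a Riemannian optimization step, projecting the Euclidean gradient onto the tangent space (the local linearization) and retracting back onto the manifold, can be illustrated on the simplest matrix manifold, the unit sphere. The following is a minimal sketch, not any speaker's method; the function name and step size are illustrative, and maximizing the Rayleigh quotient is used only as a familiar example objective.

```python
import numpy as np

def rayleigh_ascent_sphere(A, x0, steps=200, lr=0.1):
    """Riemannian gradient ascent of the Rayleigh quotient x^T A x
    on the unit sphere. The Euclidean gradient is projected onto the
    tangent space at x, and normalization retracts the step back onto
    the manifold; the iterates converge to a leading eigenvector."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = 2 * A @ x                  # Euclidean gradient of x^T A x
        rg = g - (x @ g) * x           # projection onto tangent space at x
        x = x + lr * rg                # step in the tangent direction
        x = x / np.linalg.norm(x)      # retraction onto the sphere
    return x

A = np.diag([3.0, 2.0, 1.0])
x = rayleigh_ascent_sphere(A, np.array([1.0, 1.0, 1.0]))
```

The constraint `||x|| = 1` never appears explicitly: it is enforced by the geometry, which is exactly the sense in which constrained Euclidean problems become unconstrained Riemannian ones.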

  • Blind deconvolution by Optimizing over a Quotient Manifold
    Wen Huang
  • Riemannian optimization and the computation of the divergences and the Karcher mean of symmetric positive definite matrices
    Kyle A. Gallivan
  • A manifold approach to structured low-rank matrix learning
    Bamdev Mishra
  • Subspace methods in Riemannian manifold optimization
    Weihong Yang
  • Quasi-Newton optimization methods on low-rank tensor manifolds
    Gennadij Heidel
  • Robust second order optimization methods on low rank matrix and tensor varieties
    Valentin Khrulkov
  • A Riemannian trust region method for the canonical tensor rank approximation problem
    Nick Vannieuwenhoven
  • Dynamical low-rank approximation of tensor differential equations
    Hanna Walach
Parallel Sparse Triangular Solve on Emerging Platforms (4 talks)
Weifeng Liu, Wei Xue

Summary: Sparse triangular solve (SpTRSV) is an important building block in a number of numerical linear algebra routines such as sparse direct solvers and preconditioned sparse iterative solvers. Compared to dense triangular solve and other sparse basic linear algebra subprograms, SpTRSV is more difficult to parallelize since it is inherently sequential. Set-based methods (i.e., level-set and color-set) introduced parallelism but incur high costs for preprocessing and runtime synchronization. In this minisymposium, we will discuss current challenges and novel algorithms for SpTRSV on shared-memory processors with homogeneous architectures (such as GPU and Xeon Phi), on heterogeneous architectures (such as Sunway and APU), and on distributed-memory clusters. The objective of this minisymposium is to explore and discuss how emerging parallel platforms can help next-generation SpTRSV algorithm design.
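The level-set idea mentioned in the summary can be sketched in a few lines: each row of a lower-triangular matrix is assigned a level one greater than the maximum level of the rows it depends on, so rows in the same level can be solved concurrently. This is a minimal illustration under a simplified row-list representation, not any speaker's implementation; the function name is invented.

```python
def level_sets(rows):
    """Compute level sets for parallel sparse triangular solve.

    `rows[i]` lists the column indices of the nonzeros in row i of a
    lower-triangular matrix (diagonal included). Rows within one level
    depend only on rows in earlier levels and can be solved in parallel;
    the levels themselves must still be processed sequentially.
    """
    level = []
    for i, cols in enumerate(rows):
        level.append(max((level[j] + 1 for j in cols if j != i), default=0))
    sets = {}
    for i, lv in enumerate(level):
        sets.setdefault(lv, []).append(i)
    return [sets[k] for k in sorted(sets)]

# Rows 1 and 2 depend only on row 0 and form one parallel level;
# row 3 depends on rows 1 and 2 and must wait for that whole level.
L_rows = [[0], [0, 1], [0, 2], [1, 2, 3]]
levels = level_sets(L_rows)
```

The preprocessing sweep itself is sequential over the rows, which hints at the preprocessing cost the summary refers to.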

  • Scalability Analysis of Sparse Triangular Solve
    Weifeng Liu
  • Parallel Sparse Triangular Solve on Modern Many-core Processors: Synchronization-Free and 2D Blocking
    Ang Li
  • Refactoring Sparse Triangular Solve on Sunway TaihuLight Many-core Supercomputer
    Wei Xue
  • Enhancing Scalability of Parallel Sparse Triangular Solve in SuperLU
    Yang Liu
Polynomial and Rational Matrices (8 talks)
Javier Pérez, Andrii Dmytryshyn

Polynomial and rational matrices have attracted much attention in recent years. Their appearance in numerous modern applications requires revising and improving known theories and algorithms, as well as developing new ones, concerning the associated eigenvalue problems, error and perturbation analyses, efficient numerical implementations, etc. This minisymposium aims to give an overview of recent research on these topics, focusing on the numerical stability of the quadratic eigenvalue problem; canonical forms that transparently reveal complete eigenstructures; the sensitivity of complete eigenstructures to perturbations; low-rank matrix pencils and matrix polynomials; block-tridiagonal linearizations; and structured eigenproblems associated with port-Hamiltonian systems.

  • Stratifying complete eigenstructures: From matrix pencils to polynomials and back
    Bo Kågström
  • Block-symmetric linearizations of odd degree matrix polynomials with optimal condition number and backward error
    Maria Isabel Bueno
  • Transparent Realizations for Polynomial and Rational Matrices
    Steve Mackey
  • Generic eigenstructures of matrix polynomials with bounded rank and degree
    Andrii Dmytryshyn
  • A backward stable quadratic eigenvalue solver
    Françoise Tisseur
  • A geometric description of the sets of palindromic and alternating matrix pencils with bounded rank
    Fernando De Terán
  • Second and higher order port-Hamiltonian systems and their first order representations
    Volker Mehrmann
  • On the stability of the two-level orthogonal Arnoldi method for quadratic eigenvalue problems
    Javier Pérez
Preconditioners for fractional partial differential equations and applications (4 talks)
Daniele Bertaccini

Summary: Fractional partial differential equations (FDEs) are a strongly emerging tool, increasingly present in models in many applied fields where, e.g., nonlocal dynamics and anomalous diffusion arise, such as viscoelastic and polymeric materials, control theory, economics, etc. In this minisymposium we plan to give a short but illustrative overview of the potential of, and the related convergence theory for, some ad hoc innovative preconditioning techniques for the iterative solution of the large (but dense!) linear systems generated by the discretization of the underlying FDE models. The numerical solution of these linear systems is an important research area, as such equations pose substantial challenges to existing algorithms.

  • Limited memory block preconditioners for fast solution of time-dependent fractional PDEs
    Fabio Durastante
  • Spectral analysis and multigrid preconditioners for space-fractional diffusion equations
    Maria Rosa Mazza
  • Fast tensor solvers for optimization problems with FDE-constraints
    Martin Stoll
  • Preconditioner for Fractional Diffusion Equations with Piecewise Continuous Coefficients
    Hai-wei Sun
Preconditioners for ill-conditioned linear systems in large scale scientific computing (8 talks)
Luca Bergamaschi, Massimiliano Ferronato, Carlo Janna

Summary: The efficient solution of sparse linear systems is a common issue in several real-world applications and often represents the main memory- and time-consuming task in a computer simulation. In many areas of large-scale engineering and scientific computing, the solution of large, sparse, and very ill-conditioned systems relies on iterative methods, which need appropriate preconditioning to achieve convergence in a reasonable number of iterations. The aim of this minisymposium is to present state-of-the-art scalar and parallel preconditioning techniques, with particular focus on: 1. block preconditioners for indefinite systems; 2. multilevel preconditioners; 3. preconditioners for least-squares problems; 4. low-rank updates of preconditioners.

  • Robust AMG interpolation with target vectors for elasticity problems
    Ruipeng Li
  • A Multilevel Preconditioner for Data Assimilation with 4D-Var
    Alison Ramage
  • Algebraic Multigrid: Theory and Practice
    James Joseph Brannick
  • Preconditioning for multi-physics problems: A general framework
    Massimiliano Ferronato
  • Preconditioners for inexact-Newton methods based on compact representation of Broyden class updates
    J. Marín
  • Preconditioning for Time-Dependent PDE-Constrained Optimization
    John Pearson
  • Multigrid preconditioning techniques for geophysical electromagnetics
    Hisham bin Zubair Syed
  • Spectral preconditioners for sequences of ill-conditioned linear systems
    Luca Bergamaschi
Preconditioning for PDE Constrained Optimization (4 talks)
Roland Herzog, John Pearson

The field of PDE-constrained optimization provides a gateway to the study of many real-world processes from science and industry. As these problems typically lead to huge-scale matrix systems upon discretization, it is crucial to develop fast and efficient numerical solvers tailored specifically to the application at hand. Significant progress has been made in recent years, and research is now shifting to more challenging problems, e.g., obtaining parameter-robust iterations and solving coupled multi-physics systems. In this minisymposium we wish to draw upon state-of-the-art preconditioners to accelerate the convergence of iterative methods when applied to such problems. Speakers in this session will also provide an outlook on future challenges in the field.

  • Preconditioners for PDE constrained optimization problems with coarse distributed observations
    Kent-Andre Mardal
  • Preconditioning for multiple saddle point problems
    Walter Zulehner
  • New Poisson-like preconditioners for fast and memory-efficient PDE-constrained optimization
    Lasse Hjuler Christiansen
  • Preconditioning for Time-Dependent PDEs and PDE Control
    Andrew Wathen
Randomized algorithms for factorizing matrices (4 talks)
Per-Gunnar Martinsson

Summary: Methods based on randomized projections have, over the last several years, proven to provide powerful tools for computing low-rank approximations to matrices. This minisymposium will explore recent research demonstrating that the underlying ideas can also be used to solve other linear algebraic problems of high importance in applications. Problems addressed include how to compute full factorizations of matrices, how to compute matrix factorizations where the factors are required to have non-negative entries, how to compute matrix factorizations under constraints on how matrix entries can be accessed, how to solve linear systems, and more. The common theme is a focus on high practical efficiency.
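The low-rank workhorse the summary builds on can be sketched in a few lines: sample the range of the matrix with a Gaussian test matrix, orthonormalize, and solve a small dense problem. This is a generic textbook-style sketch (in the spirit of Halko, Martinsson, and Tropp), not the specific methods of the speakers; the function name and oversampling default are illustrative.

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=None):
    """Rank-k approximate SVD via a randomized range finder.

    Multiply A by a Gaussian test matrix to sample its range,
    orthonormalize the samples, then compute an exact SVD of the
    small projected matrix. `p` is the oversampling parameter.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # approximate range basis
    B = Q.T @ A                                       # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# For an exactly rank-3 matrix the sketch recovers it to round-off.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
U, s, Vt = randomized_svd(A, 3, seed=1)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The only accesses to `A` are two matrix-matrix products, which is what makes the approach attractive under the access constraints the summary mentions.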

  • Randomized Nonnegative Matrix Factorizations
    Benjamin Erichson
  • A randomized blocked algorithm for computing a rank-revealing UTV matrix decomposition
    Nathan Heavner
  • Randomized algorithms for imaging applications
    Sergey Voronin
  • Stabilized streaming algorithms for computing approximate matrix factorizations via randomized projections
    Abinand Gopal
Rank structured methods for challenging numerical computations (8 talks)
Sabine Le Borne, Jianlin Xia

Summary: Rank-structured methods have demonstrated significant advantages in improving the efficiency and reliability of some large-scale computations and engineering simulations. These methods extend the fundamental ideas of multipole and panel-clustering methods to general non-local solution operators. While there exist various more or less closely related methods, the unifying aim of these methods is to explore efficient structured low-rank approximations, especially those exhibiting hierarchical or nested forms. These help the methods to achieve nearly linear complexity. In this minisymposium, we aim to present and exchange recent new developments on rank structured methods for some challenging numerical problems such as high frequencies, ill conditioning, eigenvalue perturbation, and stability. Studies of structures, algorithm design, and accuracy control will be discussed. The minisymposium will include experts working on a broad range of rank structured methods.

  • H-matrices for stable computations in RBF interpolation problems
    Sabine Le Borne
  • How good are H-Matrices at high frequencies?
    Timo Betcke
  • Local low-rank approximation for the high-frequency Helmholtz equation
    Steffen Boerm
  • Efficiency and Accuracy of Parallel Accumulator-based H-Arithmetic
    Ronald Kriemann
  • Randomized techniques for fast eigenvalue solution
    Jianlin Xia
  • The perfect shift and the fast computation of roots of polynomials
    Nicola Mastronardi
  • Structured matrices in polynomial system solving
    Simon Telen
  • Preserving positive definiteness in HSS approximation and its application in preconditioning
    Xin Xing
Rational Krylov Methods and Applications (8 talks)
Stefan Güttel, Patrick Kürschner

Rational Krylov methods have become an indispensable tool of scientific computing. Invented by Axel Ruhe for the solution of large sparse eigenvalue problems, these methods have seen an increasing number of other applications over the last two decades or so. Applications of rational Krylov methods are connected with model order reduction, matrix function approximation, matrix equations, nonlinear eigenvalue problems, and nonlinear rational least squares fitting, to name a few. This minisymposium aims to bring together experts to discuss recent progress on theoretical and numerical aspects of these methods as well as novel applications.

  • The block rational Arnoldi algorithm
    Steven Elsworth
  • Rational Krylov Subspaces for Wavefield Applications
    Jörn Zimmerling
  • Krylov methods for Hermitian nonlinear eigenvalue problems
    Giampaolo Mele
  • Compressing variable-coefficient Helmholtz problems via RKFIT
    Stefan Güttel
  • Rational Krylov methods in discrete inverse problems
    Volker Grimm
  • Inexact Rational Krylov methods applied to Lyapunov equations
    Melina Freitag
  • Numerical methods for Lyapunov matrix equations with banded symmetric data
    Davide Palitta
  • A comparison of rational Krylov and related low-rank methods for large Riccati equations
    Patrick Kürschner
Recent advances in linear algebra for uncertainty quantification (7 talks)
Zhiwen Zhang, Bin Zheng

The aim of this mini-symposium is to present recent developments in advanced linear algebra techniques for uncertainty quantification, including, but not limited to, preconditioning techniques and multigrid methods for stochastic partial differential equations, multi-fidelity methods in uncertainty quantification, hierarchical matrices and low-rank tensor approximations, compressive sensing and sparse approximations, model reduction methods for PDEs with stochastic and/or multiscale features, random matrix models, etc.

  • Asymptotic analysis and numerical method for singularly perturbed eigenvalue problems
    Zhongyi Huang
  • An Adaptive Reduced Basis ANOVA Method for High-Dimensional Bayesian Inverse Problems
    Qifeng Liao
  • Randomized Kaczmarz method for linear inverse problems
    Xiliang Lu
  • TBA
    Ju Ming
  • Sequential data assimilation with multiple nonlinear models and applications to subsurface flow
    Peng Wang
  • A new model reduction technique for convection-dominated PDEs with random velocity fields
    Guannan Zhang
  • Gamblet based multilevel decomposition/preconditioner for stochastic multiscale PDE
    Lei Zhang
Recent Advances in Tensor Based Data Processing (Part I) (4 talks)
Chuan Chen, Yi Chang, Yao Wang, Xi-Le Zhao

Summary: As a natural representation for high-dimensional data (e.g., hyperspectral images and heterogeneous information networks), tensors (i.e., multidimensional arrays) have recently become ubiquitous in data analytics at the confluence of statistics, image processing, and machine learning. The related advances in applied mathematics motivate us to gradually move from classical matrix-based methods to tensor-based methods for data processing problems, which could offer new tools to exploit the intrinsic multilinear structure. In this interdisciplinary research field, works on tensor-based theory, models, numerical algorithms, and applications in data processing are fast emerging. This minisymposium aims at promoting discussions among researchers investigating innovative tensor-based approaches to data processing problems, in both theoretical and practical aspects.

  • Block Term Decomposition for Multilayer Networks Clustering
    Zi-Tai Chen
  • Hyperspectral Image Restoration via Tensor-based Priors: From Low-rank to Deep Model
    Yi Chang
  • Compressive Tensor Principal Component Pursuit
    Yao Wang
  • Low-rank tensor completion using parallel matrix factorization with factor priors
    Xi-Le Zhao
Recent Advances in Tensor Based Data Processing (Part II) (4 talks)
Chuan Chen, Yi Chang, Yao Wang, Xi-Le Zhao

Summary: As a natural representation for high-dimensional data (e.g., hyperspectral images and heterogeneous information networks), tensors (i.e., multidimensional arrays) have recently become ubiquitous in data analytics at the confluence of statistics, image processing, and machine learning. The related advances in applied mathematics motivate us to gradually move from classical matrix-based methods to tensor-based methods for data processing problems, which could offer new tools to exploit the intrinsic multilinear structure. In this interdisciplinary research field, works on tensor-based theory, models, numerical algorithms, and applications in data processing are fast emerging. This minisymposium aims at promoting discussions among researchers investigating innovative tensor-based approaches to data processing problems, in both theoretical and practical aspects.

  • Data Mining with Tensor based Methods
    Xu-Tao Li
  • Low-rank Tensor Analysis with Noise Modeling
    Zhi Han
  • Hyperspectral and Multispectral Image Fusion Via Total Variation Regularized Nonlocal Tensor Train Decomposition
    Kai-Dong Wang
  • A Novel Tensor-based Video Rain Streaks Removal Approach via Utilizing Discriminatively Intrinsic Priors
    Liang-Jian Deng
Recent applications of rank structures in matrix analysis (8 talks)
Thomas Mach, Stefano Massei, Leonardo Robol

The development of applied science and engineering has drawn attention to large-scale problems, generating increasing demand for computational effort. In many practical situations, the only way to meet this demand is to exploit obvious and hidden structures in the data. In this context, rank structures constitute a powerful tool for reaching this goal. Many real-world problems are analyzed by means of algebraic techniques that exploit low-rank structures: fast multipole methods, discretization of PDEs and integral equations, efficient solution of matrix equations, and computation of matrix functions. The representation and theoretical analysis of these algebraic objects is of fundamental importance for devising fast algorithms. Several representations have been proposed in the literature: $\mathcal{H}$, $\mathcal{H}^{2}$, and HSS matrices, and quasiseparable and semiseparable structures. The design of fast methods relying on these representations is currently an active branch of numerical linear algebra. The talks in this minisymposium present some recent advances in this field.

  • Low rank updates and a divide and conquer method for matrix equations
    Stefano Massei
  • $RQZ$: A rational $QZ$ algorithm for the generalized eigenvalue problem
    Daan Camps
  • Fast direct algorithms for least squares and least norm solutions for hierarchical matrices
    Abhay Gupta
  • Computing the inverse matrix $\phi_1$-function for a quasiseparable matrix
    Luca Gemignani
  • Superfast direct solvers for 3D MSN PDE solvers
    Shiv Chandrasekaran
  • Fast direct solvers for boundary value problems on globally evolving geometries
    Adrianna Gillman
  • Matrix Aspects of Fast Multipole Method
    Xiaofeng Ou
  • Adaptive Cross Approximation for Ill-Posed Problems
    Thomas Mach
Some recent applications of Krylov subspace methods (10 talks)
Yunfeng Cai, Lei-Hong Zhang

Summary: Krylov subspace methods are generally accepted as among the most efficient methods for solving large sparse linear systems of equations and eigenvalue problems. Many famous Krylov subspace methods, such as PCG, MINRES, and GMRES for linear systems, and the Lanczos and Arnoldi methods (and their variants) for eigenvalue problems, have been developed and have successfully solved numerous crucially important problems in science and engineering. One recent trend is to extend the power of Krylov subspace methods to other important real-world applications. Along this line, we propose this mini-symposium with a careful selection of talks on recent and new applications of Krylov subspace methods. These talks cover applications of Krylov methods in optimization, tensor analysis, data mining, linear systems, eigenvalue problems, and preconditioning. Through this mini-symposium, we hope to reveal the power of Krylov subspace methods in these applications and to stimulate further important developments.
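The common core of the methods named in the summary (GMRES, Arnoldi, and, in the symmetric case, Lanczos) is the Arnoldi process, which builds an orthonormal basis of the Krylov subspace together with a small Hessenberg matrix. A minimal textbook-style sketch, not any speaker's algorithm (breakdown is not handled):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: orthonormal columns V spanning the Krylov
    subspace span{b, Ab, ..., A^(m-1) b} and the (m+1) x m Hessenberg
    matrix H satisfying A @ V[:, :m] == V @ H (no breakdown assumed)."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
b = rng.standard_normal(8)
V, H = arnoldi(A, b, 4)
```

Eigenvalue methods extract Ritz values from `H`, while GMRES solves a small least-squares problem with it; this shared structure is what makes the family so adaptable to the applications listed above.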

  • Preconditioning for Accurate Solutions of Linear Systems and Eigenvalue Problems
    Qiang Ye
  • A block term decomposition of high order tensors
    Yunfeng Cai
  • A Fast Implementation On The Exponential Marginal Fisher Analysis For High Dimensionality Reduction
    Gang Wu
  • Deflated block Krylov subspace methods for large scale eigenvalue problems
    Qiang Niu
  • Lanczos type methods for the linear response eigenvalue problem
    Zhongming Teng
  • Sparse frequent direction algorithm for low rank approximation
    Delin Chu
  • On the Generalized Lanczos Trust-Region Method
    Lei-Hong Zhang
  • A Block Lanczos Method for the Extended Trust-Region Subproblems
    Weihong Yang
  • Parametrized quasi-soft thresholding operator for compressed sensing and matrix completion
    An-Bao Xu
  • Two-level RAS preconditioners of Krylov subspace methods for large sparse linear systems
    Xin Lu
Tensor Analysis, Computation, and Applications I (8 talks)
Weiyang Ding

Summary: The term tensor has both the meaning of a geometric object and of a multi-way array. Applications of tensors span various disciplines in science and engineering, such as mechanics, quantum information, signal and image processing, optimization, numerical PDEs, and hypergraph theory. There are several hot research topics on tensors, such as tensor decomposition and low-rank approximation, tensor spectral theory, tensor completion, tensor-related systems of equations, and tensor complementarity problems. Researchers in all these areas will give presentations to broaden our perspective on tensor research. This is one of a series of minisymposia and focuses more on applications of tensors and structured tensors.

  • Applications of tensor eigenvalues in physics
    Liqun Qi
  • The rank of $W\otimes W$ is eight
    Shmuel Friedland
  • Optimization methods using matrix and tensor structures
    Eugene Tyrtyshnikov
  • Exploitation of structure in large-scale tensor decompositions
    Lieven De Lathauwer
  • An Adaptive Correction Approach for Tensor Completion
    Minru Bai
  • Generalized polynomial complementarity problems with structured tensors
    Chen Ling
  • Copositive Tensor Detection and Its Applications in Physics and Hypergraphs
    Haibin Chen
  • The bound of H-eigenvalue of some structure tensors with entries in an interval
    Lubin Cui
Tensor Analysis, Computation, and Applications II (8 talks)
Shenglong Hu

Summary: The term tensor has both the meaning of a geometric object and of a multi-way array. Applications of tensors span various disciplines in science and engineering, such as mechanics, quantum information, signal and image processing, optimization, numerical PDEs, and hypergraph theory. There are several hot research topics on tensors, such as tensor decomposition and low-rank approximation, tensor spectral theory, tensor completion, tensor-related systems of equations, and tensor complementarity problems. Researchers in all these areas will give presentations to broaden our perspective on tensor research. This is one of a series of minisymposia and focuses more on tensor analysis and algorithm design.

  • The Fiedler vector of a Laplacian tensor for hypergraph partitioning
    Yannan Chen
  • Solving tensor problems via continuation methods
    Lixing Han
  • Local convergence rate analysis for the higher-order power method in best rank one approximations of tensors
    Guoyin Li
  • Tensor splitting methods for solving the multi-linear system
    Wen Li
  • Sparse Tucker decomposition completion for 3D facial expression recognition
    Ziyan Luo
  • Tensor ranks and secant varieties
    Yang Qi
  • S-lemma of the fourth order tensor systems
    Qingzhi Yang
  • Hankel Tensor Decompositions and Ranks
    Ke Ye
Tensor-Based Modelling (4 talks)
Lieven De Lathauwer

Summary: An important trend is the extension of applied linear algebra to applied multilinear algebra. These developments gradually allow a transition from classical vector- and matrix-based methods to methods that involve tensors of arbitrary order. Tensor decompositions open up various new avenues beyond the realm of matrix methods. This minisymposium presents tensor decompositions as new modelling tools. A range of applications in signal processing, data analysis, system modelling, and computing is discussed.

  • Prewhitening under channel-dependent signal-to-noise ratios
    Chuan Chen
  • Tensor decompositions in reduced order models
    Youngsoo Choi
  • Block tensor train decomposition for singular value decomposition and missing data estimation
    Namgil Lee
  • Nonlinear system identification with tensor methods
    Kim Batselier
Tensors and multilinear algebra (8 talks)
Anna Seigal, André Uschmajew, Bart Vandereycken

Tensors in the form of multidimensional arrays have seen increasing interest in recent years in the context of modern data analysis and high-dimensional equations in numerical analysis. Higher-order tensors are a natural generalization of matrices and, just as for matrices, their low-rank decompositions and spectral properties are important for applications. In the multilinear setting of tensors, however, analyzing such structures is challenging and requires conceptually new tools. Many techniques investigate and manipulate unfoldings (flattenings) of tensors into matrices, where linear algebra operations can be applied. In this respect, the subject of tensors and multilinear algebra fits a conference on applied linear algebra in two ways: it occurs in many modern applications, and it requires linear algebra for its treatment. In this minisymposium, we wish to bring the latest developments in this area to attention, and promote it as an active and attractive research field to people interested in linear algebra. Contrary to the other sessions on tensors, this session will focus on algebraic foundations and spectral properties of tensors that are important in understanding their low-rank approximations.
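The unfoldings mentioned in the summary are easy to make concrete: a mode-n unfolding rearranges a tensor so that its mode-n fibers become the columns of a matrix, to which ordinary linear algebra applies. A minimal sketch with an invented helper name; the rank-1 example is illustrative only.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (flattening) of a tensor into a matrix:
    rows are indexed by `mode`, columns by the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A rank-1 tensor a (x) b (x) c: every unfolding is a rank-1 matrix,
# a basic link between tensor rank and matrix rank via flattening.
a = np.array([1.0, 2.0])
b = np.array([1.0, -1.0, 3.0])
c = np.array([2.0, 5.0])
T = np.einsum('i,j,k->ijk', a, b, c)           # shape (2, 3, 2)
ranks = [np.linalg.matrix_rank(unfold(T, m)) for m in range(3)]
```

For higher tensor ranks the unfolding ranks only bound the tensor rank from below, which is one reason the multilinear setting needs the conceptually new tools the summary refers to.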

  • The condition number of the canonical polyadic tensor decomposition
    Paul Breiding
  • Geometrical description of feasible singular values in tree tensor formats
    Sebastian Kraemer
  • The positive cone to algebraic varieties of hierarchical tensors
    Benjamin Kutschan
  • Nuclear decomposition of higher-order tensors
    Lek-Heng Lim
  • Low-rank Riemannian optimization for high-dimensional eigenvalue problems
    Maksim Rakhuba
  • Decomposing Tensors into Equiangular Tight Frames
    Elina Robeva
  • Duality of graphical models and tensor networks
    Anna Seigal
  • Orthogonal tensors and rank-one approximation ratio
    Andre Uschmajew
The Perturbation Theory and Structure-Preserving Algorithms (8 talks)
Zheng-Jian Bai, Tiexiang Li, Hanyu Li, Zhi-Gang Jia

Summary: Perturbation theory provides reliability and stability analysis of scientific systems and algorithms, and has been one of the most important topics in numerical analysis. Recently, perturbation theory has been involved in various fields, including the nonlinear eigenvalue/eigenvector problem, the generalized least squares problem, tensor analysis, randomized methods for big data analysis, etc. For example, one crucial subject is to analyze the backward and forward errors of the eigenvector-dependent eigenvalue problem arising from solving the discrete Kohn-Sham equations. With a rigorous selection, we propose this mini-symposium containing eight presentations on recent developments in perturbation theory and related works. These presentations include the forward and backward errors of nonlinear eigenvectors, random perturbation intervals for the symmetric eigenvalue problem, statistical condition estimation, and structure-preserving algorithms. The final aim of this mini-symposium is to reveal new tools in perturbation theory and to advance research on new methods and subjects in this important field.

  • Perturbation Analysis of an Eigenvector Dependent Nonlinear Eigenvalue Problem with Applications
    Zhi-Gang Jia
  • Improved random perturbation intervals of symmetric eigenvalue problem
    Hanyu Li
  • Error Bounds for Approximate Deflating Subspaces of Linear Response Eigenvalue Problems
    Wei-Guo Wang
  • Relative Perturbation Bounds for Eigenpairs of the Diagonalizable Matrices
    Yanmei Chen
  • Small Sample Statistical Condition Estimation for the Total Least Squares Problem
    Huaian Diao
  • Some perturbation results for Joint Block Diagonalization problems
    Decai Shi
  • A Structure-Preserving $\Gamma$-Lanczos Algorithm for Bethe-Salpeter Eigenvalue Problems
    Tiexiang Li
  • A Structure-Preserving Jacobi Algorithm for Quaternion Hermitian Eigenvalue Problems
    Ru-Ru Ma
The Spectrum of Hypergraphs via Tensors (8 talks)
Xiying Yuan

Summary: Many graph problems have been successfully solved with linear-algebraic methods by employing the matrices associated with graphs. As generalizations of graphs, hypergraphs are now studied through their representations by tensors, an extension of the concept of matrices. This minisymposium focuses mainly on recent results on the spectra of uniform hypergraphs via tensors, some relevant algorithms, and their possible applications in the study of hypernetworks.

  • On the analytic connectivity of uniform hypergraphs
    Changjiang Bu
  • Some recent results on the tensor spectrum of hypergraphs
    An Chang
  • The spectral symmetry and stabilizing property of tensors and hypergraphs
    Yizheng Fan
  • Tensor and the spectral radius of uniform hypergraphs
    Liying Kang
  • Some results on spectrum of graphs
    Mei Lu
  • Spectral Radius of $\{0, 1\}$-Tensors with Prescribed Number of Ones
    Linyuan Lu
  • Some results in spectral (hyper)graph theory
    Xiaodong Zhang
  • On distance Laplacian spectral radius of graphs
    Bo Zhou
Tridiagonal matrices and their applications in physics and mathematics (8 talks)
Natalia Bebiano, Mikhail Tyaglov

Summary: Tridiagonal matrices emerge in a wealth of applications in science and engineering, where they are used to solve a variety of problems in disparate contexts. Beyond these applications, which are seldom discussed together, the methods, techniques, and theoretical frameworks used in this research field make it very interesting and challenging. This minisymposium brings together people from different areas of mathematics who use tridiagonal matrices in their work, to discuss recent developments, new approaches and perspectives, as well as new applications of tridiagonal matrices.

  • On von Neumann and Rényi entropies of rings and paths
    Natália Bebiano
  • Tridiagonal matrices with only one eigenvalue and their relations to polynomials orthogonal with respect to non-Hermitian weights
    Mikhail Tyaglov
  • Positivity and Recursion formula of the linearization coefficients of Bessel polynomials
    M. J. Atia
  • Ultra-discrete analogue of the qd algorithm for Min-Plus tridiagonal matrix
    Akiko Fukuda
  • A generalized eigenvalue problem with two tridiagonal matrices
    Alagacone Sri Ranga
  • Eigenvalue problems of structured band matrices related to discrete integrable systems
    Masato Shinjo
  • On instability of the absolutely continuous spectrum of dissipative Schrödinger operators and Jacobi matrices
    Roman Romanov
  • Block-tridiagonal linearizations of matrix polynomials
    Susana Furtado