Colloquium/Seminar
Events in January 2015
- Monday, 5th January, 2015
Title: Enhanced Optimality Conditions and New Constraint Qualifications for Nonsmooth Optimization Problem
Speaker: Dr. ZHANG Jin, Department of Mathematics & Statistics, University of Victoria, Canada
Time/Place: 10:30 - 11:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: We investigate necessary optimality conditions for a class of very general nonsmooth optimization problems called mathematical programs with geometric constraints (MPGC). The geometric constraint means that the image of a certain mapping is included in a nonempty closed set. We first study the conventional nonlinear program with equality, inequality and abstract set constraints as a special case of MPGC. We derive the enhanced Fritz John condition, from which we obtain the enhanced Karush-Kuhn-Tucker (KKT) condition, and introduce the associated pseudonormality and quasinormality conditions. We prove that either pseudonormality or quasinormality with regularity implies the existence of a local error bound. We also give a tighter upper estimate for the Fréchet subdifferential and the limiting subdifferential of the value function in terms of quasinormal multipliers. We then consider a more general MPGC where the image of a mapping from a Banach space is included in a nonempty closed subset of a finite-dimensional space. We obtain the enhanced Fritz John necessary optimality conditions in terms of the approximate subdifferential. We then apply our results to the study of exact penalties and sensitivity analysis. We also study a special class of MPGC, namely mathematical programs with equilibrium constraints (MPECs). We give upper estimates for the subdifferential of the value function in terms of the enhanced M- and C-multipliers, respectively.
- Tuesday, 6th January, 2015
Title: The Signless Laplacian Spectral Radius of Graphs
Speaker: Mr. HUANG Peng, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 14:30 - 15:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Spectral graph theory has long been an important research field because of its wide range of applications in chemistry, physics, networks and other subjects. In this talk, we consider the signless Laplacian spectrum of graphs. The talk contains two parts. Firstly, by estimating the number of semi-edge walks of a graph, we obtain a new upper bound on the signless Laplacian spectral radius of the graph. Moreover, combining this with the result of D. Goncalves on the edge decomposition of a planar graph, a new upper bound on the signless Laplacian spectral radius of a planar graph is presented. Secondly, we take the vertex connectivity into account and study the gap between 2∆ and q1 when G is irregular, where ∆ is the maximum degree of the graph and q1 is its signless Laplacian spectral radius. We establish a lower bound on 2∆ - q1.
- Wednesday, 7th January, 2015
Title: Optimal tests for weak and sparse signal detection and the calculation of their distributions
Speaker: Dr. WU Zheyang, Department of Mathematics, Worcester Polytechnic Institute, USA
Time/Place: 15:30 - 16:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: The statistical literature on feature selection usually attempts to separate the signals exactly from the noise. However, in big data studies signals are often relatively sparse and weak, so such consistent parameter estimation is not possible. At the same time, in many applications the exact retrieval of signals is not needed; it is sufficient to infer the existence of signals somewhere in a group of candidate features. For this purpose, it is interesting to develop so-called optimal tests that can reliably detect the minimal signals detectable by any statistic. In this talk, two research works will be introduced. First, we motivate the statistical problem in the scenario of genome-wide association studies (GWAS) and next-generation sequencing (NGS) studies. The theory of the detection boundary is established to clarify the minimal genetic effects required for detection by any association test. An optimal test based on Higher Criticism (HC) is developed. Meanwhile, we show that many commonly applied association tests are suboptimal. Second, it is critical to understand the behavior of optimal tests in the finite-sample case. We study a large family of goodness-of-fit tests, which covers many optimal tests such as Higher Criticism, Berk-Jones, and the Phi-divergence family. We develop an analytical method to approximate their distributions under both the null and the alternative hypotheses. We compare the performance of the tests under various alternative hypotheses of signal patterns. For the typical alternative of normal mixtures, HC shows the best power when signals are rare. When signals are relatively common, some Phi-divergence statistics can be better.
- Thursday, 8th January, 2015
Title: Robust Recovery and Detection of Structured Signals
Speaker: Prof. T. Tony Cai, University of Pennsylvania, USA
Time/Place: 17:00 - 18:00 (Preceded by Reception at 4:30pm)
1/F SPH, HSH Campus, Hong Kong Baptist University
Abstract: A large collection of statistical methods has been developed for the estimation and detection of structured signals in Gaussian and sub-Gaussian settings. In this talk, we present a general approach to robust recovery and detection of structured signals for a wide range of noise distributions. We illustrate the technique with nonparametric regression and with detecting and identifying sparse short segments hidden in an ultra-long linear sequence of data. A key step is the development of a quantile coupling theorem that is used to connect our problem with a more familiar Gaussian setting. An application to copy number variation (CNV) analysis based on next-generation sequencing (NGS) data is also discussed.
- Tuesday, 13th January, 2015
Title: Finite difference methods for fractional Laplacian
Speaker: Dr. HUANG Yanghong, Chapman Fellow in Mathematics, Imperial College, UK
Time/Place: 11:00 - 12:00
SCT909, Cha Chi-ming Science Tower, HSH Campus, Hong Kong Baptist University
Abstract: The fractional Laplacian is the prototypical example of non-local diffusion and has been employed in many models with long-range interactions. In this talk, a general form of the finite difference scheme is proposed, and several existing methods in one dimension are reviewed or derived. The general form of the scheme has many discrete counterparts of its continuous definition: discrete convolution, random walk, and multiplier (or symbol) in the semi-discrete Fourier transform. Despite the non-locality, the accuracy of different schemes can be obtained from the symbol, and is verified numerically. The schemes are also compared under different criteria and can be chosen according to the application. This is joint work with Adam Oberman at McGill University.
- Thursday, 15th January, 2015
Title: Asymptotics for change-point models under varying degrees of mis-specification
Speaker: Dr. Rui Song, Department of Statistics, North Carolina State University, USA
Time/Place: 10:30 - 11:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate ($n$) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to $n^{1/3}$ and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios.
- Tuesday, 20th January, 2015
Title: Latent Graphical Model for Mixed Data
Speaker: Prof. Jianqing Fan, Princeton University, USA
Time/Place: 11:30 - 12:30 (Preceded by Reception at 11:00am)
SPH, Ho Sin Hang Campus, Hong Kong Baptist University
Abstract: Graphical models are commonly used tools for modeling multivariate random variables. While there exist many convenient multivariate distributions, such as the Gaussian distribution for continuous data, mixed data containing discrete variables or a combination of both continuous and discrete variables poses new challenges in statistical modeling. In this paper, we propose a semiparametric model named the latent Gaussian copula model for binary and mixed data. The observed binary data are assumed to be obtained by dichotomizing a latent variable satisfying the Gaussian copula distribution, or the nonparanormal distribution. The latent Gaussian model, with the assumption that the latent variables are multivariate Gaussian, is a special case of the proposed model. A novel rank-based approach is proposed for both latent graph estimation and latent principal component analysis. Theoretically, the proposed methods achieve the same rates of convergence for both precision matrix estimation and eigenvector estimation as if the latent variables were observed. Under similar conditions, the consistency of graph structure recovery and feature selection for leading eigenvectors is established. The performance of the proposed methods is numerically assessed through simulation studies, and the usage of our methods is illustrated by a genetic dataset. This is joint work with Han Liu, Yang Ning, and Hui Zou.
- Wednesday, 21st January, 2015
Title: Some Problems for Nonlinear PDE in the Theory of Fields
Speaker: Prof. Boling Guo, Beijing Institute of Applied Physics and Computational Mathematics, China
Time/Place: 17:00 - 18:00 (Preceded by Reception at 4:30pm)
1/F SPH, Ho Sin Hang Campus, Hong Kong Baptist University
Abstract: As is well known, nonlinear PDEs in the theory of fields have very important theoretical and applied significance in many fundamental areas of physics and geometry. In this talk, recent developments on the Yang-Mills-Higgs and Maxwell-Chern-Simons models will be introduced, and some recent results will also be presented.
- Thursday, 22nd January, 2015
Title: Computational and Optimization Methods for Quadratic Inverse Eigenvalue Problems Arising in Mechanical Vibration and Structural Dynamics: Linking Mathematics to Industry
Speaker: Prof. Biswa Nath Datta, Northern Illinois University, USA
Time/Place: 17:00 - 18:00 (Preceded by Reception at 4:30pm)
SCT909, Science Tower, HSH Campus, Hong Kong Baptist University
Abstract: The quadratic eigenvalue problem is to find the eigenvalues and eigenvectors of a quadratic matrix pencil of the form P(lambda) = M lambda^2 + C lambda + K, where M, C, and K are square matrices. Unfortunately, the problem has not been widely studied because of the intrinsic difficulties of solving it in a numerically effective way. Indeed, state-of-the-art computational techniques are capable of computing only a few extremal eigenvalues and eigenvectors, especially if the matrices are large and sparse, which is often the case in practical applications. The inverse quadratic eigenvalue problem, on the other hand, refers to constructing the matrices M, C, and K, given the complete or partial spectrum and the associated eigenvectors. It is equally important and arises in a wide variety of engineering applications, including mechanical vibrations, aerospace engineering, design of space structures, structural dynamics, etc. Of special practical importance is constructing the coefficient matrices from the knowledge of only a partial spectrum and the associated eigenvectors. The greatest computational challenge is solving the partial quadratic inverse eigenvalue problem using the small number of eigenvalues and eigenvectors that are computable with state-of-the-art techniques. Furthermore, computational techniques must be able to take advantage of exploitable physical properties, such as symmetry, positive definiteness and sparsity, which are computational assets for the solution of large and sparse problems.
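For readers unfamiliar with the pencil P(lambda) = M lambda^2 + C lambda + K mentioned above, the standard way to compute all its eigenpairs numerically is companion linearization, which converts the quadratic problem into a generalized linear eigenproblem of twice the size. The sketch below is not from the talk; it is a generic illustration, and the matrices M, C, K are made-up toy values:

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eigenvalues(M, C, K):
    """Solve (M*lambda^2 + C*lambda + K) x = 0 via the companion
    linearization A z = lambda B z with z = [x; lambda*x]."""
    n = M.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    vals, vecs = eig(A, B)          # generalized eigenproblem
    return vals, vecs[:n, :]        # top block of z recovers x

# Toy damped mass-spring system (illustrative values only)
M = np.eye(2)
C = 0.1 * np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
vals, vecs = quadratic_eigenvalues(M, C, K)

# Each computed pair satisfies the quadratic pencil equation
lam, x = vals[0], vecs[:, 0]
residual = (lam**2 * M + lam * C + K) @ x
```

An n-by-n quadratic pencil has 2n eigenvalues, which is why the linearized problem is of size 2n; the residual check above confirms the recovered pair solves the original quadratic equation.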
This talk will deal with two special quadratic inverse eigenvalue problems that arise in mechanical vibration and structural dynamics. The first, the Quadratic Partial Eigenvalue Assignment Problem (QPEVAP), arises in controlling dangerous vibrations in mechanical structures. Mathematically, the problem is to find two control feedback matrices such that the small number of eigenvalues of the associated quadratic eigenvalue problem that are responsible for dangerous vibrations are reassigned to suitably chosen ones, while the remaining large number of eigenvalues and eigenvectors are kept unchanged. Additionally, for robust and economic control design, these feedback matrices must be found in such a way that their norms are as small as possible and the condition number of the modified quadratic inverse problem is minimized. These considerations give rise to two nonlinear unconstrained optimization problems, known respectively as the Robust Quadratic Partial Eigenvalue Assignment Problem (RQPEVAP) and the Minimum Norm Quadratic Partial Eigenvalue Assignment Problem (MNQPEVAP). The second, the Finite Element Model Updating Problem (FEMUP), arising in the design and analysis of structural dynamics, refers to updating an analytical finite element model so that a set of measured eigenvalues and eigenvectors from a real-life structure is reproduced and the physical and structural properties of the original model are preserved. A properly updated model can be used with confidence in future designs and constructions. Another major application of FEMUP is damage detection in structures. The solution of FEMUP also gives rise to several constrained nonlinear optimization problems. I will give an overview of recent developments on computational methods for these difficult nonlinear optimization problems and discuss directions for future research, with some open problems.
The talk is interdisciplinary in nature and will be of interest to computational and applied mathematicians, control and vibration engineers, and optimization experts.
- Saturday, 24th January, 2015
Title: Risk Management in Banking Industry
Speaker: Dr. Eric Fung, Credit Risk Manager, BOCI, Hong Kong
Time/Place: 15:00 - 17:00
AAB613, Academic and Administration Building, Baptist University Road Campus, Hong Kong Baptist University
Abstract: Dr. Eric Fung is currently a credit risk manager in credit risk management at BOCI. Before that, he worked as a risk manager in asset and liability management at BEA. Dr. Fung is experienced in various risk management areas, including, but not limited to, credit risk, market risk, interest rate risk, ICAAP, stress testing, budgeting, pricing model validation, policy review, and EAD/LGD model review and validation.
- Monday, 26th January, 2015
Title: Semiparametric Z-Estimation for Bundled Parameters
Speaker: Prof. Bin NAN, Professor of Biostatistics, University of Michigan, USA
Time/Place: 11:00 - 12:00
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Many semiparametric models can be parameterized by two types of parameters: a Euclidean parameter of interest and an infinite-dimensional nuisance parameter, where the two parameters are bundled together, i.e., the nuisance parameter is an unknown function that contains the parameter of interest as part of its argument. In this talk, I will present a general semiparametric Z-estimation theory for a class of problems where the estimating function for the Euclidean parameter of interest is constructed by replacing the infinite-dimensional nuisance parameter with a reasonable estimator in some random map. This is motivated by the increasingly used outcome-dependent sampling designs for censored survival data, namely case-cohort studies, for which the commonly used counting process stochastic integrals approach lacks theoretical justification for outcome-dependent weighted methods due to non-predictability.
- Tuesday, 27th January, 2015
Title: Shape Optimization for Navier-Stokes Boundary and a Dimensional Splitting Method
Speaker: Dr. LI Kaitai, Professor of Mathematics and Statistics, Xi'an Jiaotong University, China
Time/Place: 11:00 - 12:00
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: The drag functional (the hydrodynamical force acting on the boundary) is chosen as the objective functional for shape optimization of the Navier-Stokes boundary. Conjugate gradient methods for computing the optimum ordinarily require numerical differentiation of the 3D stress tensor and of the Gateaux derivative of solutions of the Navier-Stokes equations with respect to the shape of the boundary, which is a difficult and inefficient problem. Our contribution is that the conjugate gradient computation for this kind of optimization does not need numerical differentiation of the stress tensor or of the Gateaux derivative of the solutions of the Navier-Stokes equations with respect to the shape of the boundary; it is only necessary to solve the two-dimensional boundary layer equations I, III, IV.
- Tuesday, 27th January, 2015
Title: Color Image Segmentation by Minimal Surface Smoothing
Speaker: Mr. LI Zhi, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 15:00 - 16:00
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: We propose a two-stage approach for multi-channel image segmentation, inspired by minimal surface theory. The first stage finds a smooth solution to a convex variational model related to minimal surface smoothing, and different data fidelity terms are considered. The classical primal-dual algorithm can be applied to solve the minimization problem efficiently. Once the smoothed image u is obtained, in the second stage the segmentation is done by thresholding. Here, instead of using the classical K-means to find the thresholds, we propose a hill-climbing procedure to find the peaks of the histogram of u, which can be used to determine the required thresholds. The benefit of such an approach is that it is more stable and can find the number of segments automatically. Finally, experimental results illustrate that the proposed algorithm is very robust to noise and exhibits superior performance for color image segmentation.
- Tuesday, 27th January, 2015
Title: Improvements on Ensemble Kalman Filter of Chemical Data Assimilation to predict air quality
Speaker: Mr. ZHU Zhaochen, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 16:30 - 17:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Air quality prediction has become more and more important for the management of our environment in recent years. At present, computational power and efficiency have reached the point that chemical transport models can predict urban air quality at resolutions below 1 km and cover the globe at resolutions below 50 km. Predicting air quality is a challenge because of the complexity of the processes involved and the strong coupling across scales. Air quality prediction is closely related to weather prediction, but there are large differences between them, including the role of pollution emissions and their uncertainties. Improving air quality prediction requires a close integration of models and measurements. Data assimilation is a process of integrating observational data and model predictions to obtain an optimal representation of the state of the atmosphere. As more chemical observations in the troposphere become available, chemical data assimilation is expected to play an essential role in air quality forecasting. The ensemble Kalman filter is a good approach to data assimilation, especially suited to extremely high-order and nonlinear problems. In this work, since the problem contains highly uncertain initial states and a large number of measurements, we choose to improve the ensemble Kalman filter for better air quality prediction. After analyzing the algorithm of the ensemble Kalman filter, we find that in some special situations we can reduce the sizes of the matrices H, D, R and X, which reduces the amount of calculation while giving the same results, saving storage, time and resources. We then add this new method to the CMAQ system to improve the air quality prediction for Hong Kong, using data obtained from the Hong Kong Environmental Protection Department.
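For context on the algorithm the abstract refers to, the analysis step of a generic stochastic ensemble Kalman filter with perturbed observations can be sketched as follows. This is a textbook-style illustration, not the speaker's CMAQ implementation, and all matrix sizes and values below are made-up toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, H, y, R):
    """Stochastic EnKF analysis step with perturbed observations.
    X : (n, N) forecast ensemble of N state vectors
    H : (m, n) observation operator
    y : (m,)   observed values
    R : (m, m) observation-error covariance
    Returns the analysis ensemble, shape (n, N)."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
    P = A @ A.T / (N - 1)                   # sample forecast covariance
    S = H @ P @ H.T + R                     # innovation covariance
    Kg = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    # Perturb observations so the analysis spread stays statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + Kg @ (Y - H @ X)

# Toy example: 3-component state, observing only the first component
X = rng.normal(size=(3, 50)) + 5.0   # forecast ensemble centered near 5
H = np.array([[1.0, 0.0, 0.0]])
y = np.array([4.0])                  # accurate observation near 4
R = np.array([[0.01]])
Xa = enkf_update(X, H, y, R)         # analysis mean is pulled toward y
```

The matrix-size reduction described in the abstract would act on the H, R and X arrays in such an update; the toy above only shows the baseline analysis step they enter.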
This new method has been tested successfully on an example and will be applied to a real model to evaluate its efficiency.
- Friday, 30th January, 2015
Title: Robust Asymmetric Nonnegative Matrix Factorization
Speaker: Dr. Hyenkyun Woo, School of Computational Sciences, Korea Institute for Advanced Study, Korea
Time/Place: 14:00 - 15:00
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Problems that involve the separation of grouped outliers and a low-rank part in a given data matrix have attracted great attention in recent years in image analysis, such as background modeling and face recognition. In this talk, we introduce a new formulation called Linf-norm based robust asymmetric nonnegative matrix factorization (RANMF) for the grouped outliers and low nonnegative rank separation problem. The main advantage of the Linf-norm in RANMF is that we can control the denseness of the low nonnegative rank factor matrices. However, we also need to control the distinguishability of the column vectors in the low nonnegative rank factor matrices for a stable basis. For this, we impose asymmetric constraints, i.e., a denseness condition on the coefficient factor matrix only. As a byproduct, we obtain a well-conditioned basis factor matrix. One of the advantages of the RANMF model, compared to nuclear norm based low-rank enforcing models, is that it is not sensitive to the nonnegative rank constraint parameter, due to the proposed soft regularization method. This has a significant practical implication, since the rank or nonnegative rank is difficult to compute and many existing methods are sensitive to the estimated rank. Numerical results show that the proposed RANMF outperforms the state-of-the-art robust principal component analysis (PCA) and other robust NMF models in many image analysis applications.
- Friday, 30th January, 2015
Title: Asymptotic normality of nonparametric $Z$-estimators with applications to hypothesis testing for panel count data
Speaker: Dr. ZHAO Xingqiu, Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong
Time/Place: 15:30 - 16:30
FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: In semiparametric and nonparametric statistical inference, the weak convergence theory on the asymptotic distribution of estimators has been widely used to establish the asymptotic normality of estimators when they are $\sqrt{n}$-consistent. In many applications, nonparametric estimators are not able to achieve this rate. We propose a general theorem on the asymptotic normality of nonparametric $Z$-estimators which can be used whether the rate of convergence of an estimator is $n^{-1/2}$ or slower. We apply the proposed theory to study the asymptotic distribution of sieve estimators of functionals of a mean function from a counting process, and develop nonparametric tests for the problem of treatment comparison with panel count data. The test statistics are constructed using statistically and computationally efficient spline likelihood estimators instead of nonparametric likelihood estimators. The new tests have a more general and simpler structure and are thus easy to implement. Simulation studies show that the proposed tests perform well even for small sample sizes. Moreover, we find that the new test is powerful in all the situations considered, and thus robust.