Colloquium/Seminar
Events in February 2016
- Monday, 1st February, 2016
  Title: Inverse elastic surface scattering with near-field data
  Speaker: Dr. Wang Yuliang, Department of Mathematics, Hong Kong Baptist University, Hong Kong
  Time/Place: 15:00 - 16:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: Consider the scattering of a time-harmonic plane wave by a one-dimensional periodic surface. A novel computational method is proposed for solving the inverse elastic surface scattering problem using near-field data. Above the surface, the space is filled with a homogeneous and isotropic elastic medium, while the space below the surface is assumed to be elastically rigid. Given an incident field, the inverse problem is to reconstruct the surface from the displacement of the wave field on a horizontal line above the surface. This work is a nontrivial extension of the authors' recent work on near-field imaging for the Helmholtz and Maxwell equations to the more complicated Navier equation, owing to the coexistence of compressional and shear waves that propagate at different speeds. Based on the Helmholtz decomposition, the wave field is decomposed into its compressional and shear parts using two scalar potential functions. The transformed field expansion is then applied to each component, and a coupled recurrence relation is obtained for their power series expansions. By solving the coupled system in the frequency domain, simple and explicit reconstruction formulas are derived for two types of measurement data. The method requires only a single illumination with a fixed frequency and incident angle. Numerical experiments show that it is simple, effective, and efficient for reconstructing scattering surfaces with subwavelength resolution.

- Wednesday, 3rd February, 2016
  Title: What is Non-Linear Preconditioning?
  Speaker: Prof. Martin Gander, Section de Mathématiques, University of Geneva, Switzerland
  Time/Place: 15:00 - 16:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: The idea of preconditioning iterative methods for the solution of linear systems goes back to Jacobi (1845), who used rotations to obtain a system with more diagonal dominance before applying what is now called Jacobi's method. The preconditioning of linear systems for their solution by Krylov methods has become a major field of research over the past decades, and there are two main approaches for constructing preconditioners: either one has very good intuition and can directly propose a preconditioner that leads to a favorable spectrum of the preconditioned system, or one uses the splitting matrix of an effective stationary iterative method, such as multigrid or domain decomposition, as the preconditioner. Much less is known about the preconditioning of non-linear systems of equations. The standard iterative solver in that case is Newton's method (1671) or a variant thereof, but what would it mean to precondition the non-linear problem? An important contribution in this field is ASPIN (Additive Schwarz Preconditioned Inexact Newton) by Cai and Keyes (2002), where the authors use their intuition about domain decomposition methods to propose a transformation of the non-linear equations before solving them by an inexact Newton method. Using the relation between stationary iterative methods and preconditioning for linear systems, we show in this presentation how one can systematically obtain a non-linear preconditioner from classical fixed-point iterations, and present as an example a new two-level non-linear preconditioner called RASPEN (Restricted Additive Schwarz Preconditioned Exact Newton), with substantially improved convergence properties compared to ASPIN.

- Thursday, 4th February, 2016
  Title: Bilevel programming associated with second-order cone and robust generalized Nash equilibrium
  Speaker: Dr. ZHANG Jin, Department of Mathematics, Hong Kong Baptist University, Hong Kong
  Time/Place: 15:00 - 16:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: Bilevel Programming (BLPP) and Generalized Nash Equilibrium (GNEP) are two fundamental equilibrium programming models; their modeling, analysis, and computation are important for many applications. Firstly, to study the mathematical program with second-order cone complementarity constraints (SOCMPCC), which is a BLPP with special structure, we characterize the first-order and second-order tangent sets to the second-order cone complementarity set. Based on these, the regular tangent cone is described explicitly, and a representation of the second-order tangent set is established. We also consider a stochastic Nash equilibrium problem in which players must make a decision before realization of the underlying extraneous uncertainty, and do so by minimizing their expected disutility. In contrast to existing stochastic equilibrium models, we consider the situation where players lack complete information on the true probability distribution of the underlying uncertainty, but can use available information, such as historical data, sample information, or subjective judgements, to construct a set of distributions that contains the true distribution. The optimal decision is consequently chosen on the basis of the worst distribution, to hedge the risk arising from ambiguity of the true probability distribution. We investigate the existence of robust equilibria in this distributionally robust game framework and develop effective numerical schemes for identifying such an equilibrium.

- Thursday, 25th February, 2016
  Title: Model checking for generalized linear models: a dimension-reduction innovation process approach
  Speaker: Mr. TAN Falong, Department of Mathematics, Hong Kong Baptist University, Hong Kong
  Time/Place: 13:00 - 13:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: Global smoothing tests based on empirical processes are convenient and widely used for model checking in the literature. Stute and Zhu (2002) suggested an innovation martingale approach based on one-dimensional projected covariates for checking generalized linear models. However, a generalization of this method to higher dimensions is very complicated. In this work, a projection pursuit methodology is proposed to avoid this difficulty. When combined with a dimension-reduction model-adaptive approach, the resulting test turns out to be omnibus and distribution-free. The new method applies directly to designs in which the predictors follow a dimension-reduction model. Simulation studies show that the proposed test controls the empirical size and is sensitive to the alternatives in small to moderate sample sizes.

- Thursday, 25th February, 2016
  Title: An Adaptive-to-Model Test for Parametric Single-Index Errors-in-Variables Models
  Speaker: Mr. XIE Chuanlong, Department of Mathematics, Hong Kong Baptist University, Hong Kong
  Time/Place: 14:30 - 16:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: This work provides useful tests for fitting a parametric single-index regression model when covariates are measured with error and validation data are available. We propose two tests whose consistency rates do not depend on the dimension of the covariate vector when an adaptive-to-model strategy is applied. One of these tests has a bias term that becomes arbitrarily large with increasing sample size but a smaller asymptotic variance; the other is asymptotically unbiased with a larger asymptotic variance. Compared with existing local smoothing tests, the new tests behave like a classical local smoothing test with only one covariate, yet remain omnibus against general alternatives. This avoids the difficulty associated with the curse of dimensionality. Further, a systematic study is conducted to give insight into the effect of the ratio between the sample size and the size of the validation data on the asymptotic behavior of these tests. Simulations are conducted to examine the performance in finite-sample scenarios.

- Friday, 26th February, 2016
  Title: Testing Markov Processes by Distance Correlation and Sure Screening for Interaction Effect in Ultra-high Dimensional Generalized Linear Model
  Speaker: Mr. ZHOU Min, Department of Mathematics, Hong Kong Baptist University, Hong Kong
  Time/Place: 11:00 - 12:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: In time series analysis, the Markov property plays an important role; however, little existing research focuses on how to test the Markov property for time series processes. In the first work, we propose a new test procedure to check whether a beta-mixing time series has the Markov property. Our test is based on the Conditional Distance Correlation (CDC). We investigate the theoretical properties of the proposed method: the asymptotic distribution of the test statistic under the null hypothesis is obtained, and the power of the test procedure under local alternative hypotheses is studied. For future work, we will investigate the numerical performance of the proposed test statistic and try to suggest a method to simplify the approximation of the null distribution. In the second work, we propose a simple sure screening procedure to detect significant interactions between the predictor variables and the response variable in high- or ultra-high-dimensional generalized linear regression models. Sure screening is a simple but powerful tool for the first step of feature selection or variable selection for ultrahigh-dimensional data. We investigate the sure screening properties of the proposed method theoretically. Furthermore, we show that the proposed method can control the false discovery rate at a reasonable level, so that regularized variable selection methods can easily be applied to obtain more accurate feature selection in the subsequent model selection procedure. For future work, we will suggest a more efficient algorithm to implement the proposed sure screening method in practice, investigate the properties of this algorithm, and apply it to real case studies in bioinformatics.

- Friday, 26th February, 2016
  Title: Image Deblurring by the Ensemble Transform Kalman Method
  Speaker: Mr. SIU Ka Wai, Department of Mathematics, Hong Kong Baptist University, Hong Kong
  Time/Place: 14:30 - 16:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
  Abstract: In image deblurring, we aim to recover the original, sharp image u from the blurred image f = Au + e, where A is a blurring matrix and e is random noise. The matrix A is determined by a point spread function, which describes how each pixel is blurred, and by boundary conditions that we assume outside the scope of the image. A is often ill-conditioned, and its small singular values magnify the random noise, which makes the deblurring process cumbersome. We apply a class of Kalman filter methods, namely the ensemble transform Kalman filter, to the non-blind deblurring problem. The method makes use of the statistical properties of an image: it constructs a span of ensemble members, combines it with the knowledge of the blurred image to produce the best linear combination of the ensemble members, and hence obtains an analysis, the deblurred image. The construction of the background covariance matrix is crucial. In the talk, the theoretical background of the filtering method is provided, followed by numerical examples with different constructions of the covariance matrices. Future work includes alternative uses of the filtering method and edge-preserving regularization with the L1 norm.
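
The Helmholtz decomposition mentioned in the abstract on inverse elastic surface scattering can be written out explicitly. A standard two-dimensional form (for a homogeneous isotropic medium with Lamé constants λ, μ and density ρ; the notation here is ours, not the speaker's):

```latex
% 2D Helmholtz decomposition of the displacement field u into a
% compressional potential \varphi and a shear potential \psi:
u = \nabla\varphi + \mathbf{curl}\,\psi,
\qquad
\Delta\varphi + \kappa_p^2\,\varphi = 0,
\qquad
\Delta\psi + \kappa_s^2\,\psi = 0,
% with compressional and shear wavenumbers
\kappa_p = \omega\sqrt{\rho/(\lambda + 2\mu)},
\qquad
\kappa_s = \omega\sqrt{\rho/\mu}.
```

Since λ + 2μ > μ, the two potentials satisfy Helmholtz equations with different wavenumbers, which is why the compressional and shear waves propagate at different speeds.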
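
To make the idea of non-linear preconditioning in Prof. Gander's abstract concrete, here is a minimal sketch: a fixed-point iteration x ← G(x) is turned into the preconditioned residual P(x) = x − G(x), which is then solved by Newton's method. The toy 2×2 system and the nonlinear Gauss-Seidel sweep used as G are our own illustration (in ASPIN/RASPEN, G is a Schwarz domain-decomposition iteration), not material from the talk.

```python
import numpy as np

def F(x):
    # Toy nonlinear system F(x) = 0 (illustrative only).
    return np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                     x[1] + 0.5 * np.cos(x[0]) - 1.0])

def G(x):
    # One nonlinear Gauss-Seidel sweep: solve equation 0 for x0 with
    # x1 frozen, then equation 1 for x1 using the updated x0.
    # A fixed point of G is exactly a solution of F(x) = 0.
    x0 = 1.0 - 0.5 * np.sin(x[1])
    x1 = 1.0 - 0.5 * np.cos(x0)
    return np.array([x0, x1])

def P(x):
    # Non-linearly preconditioned residual: P(x) = 0 iff F(x) = 0.
    return x - G(x)

def newton(R, x, tol=1e-12, maxit=50, h=1e-7):
    # Newton's method with a finite-difference Jacobian of residual R.
    for _ in range(maxit):
        r = R(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = h
            J[:, j] = (R(x + e) - r) / h
        x = x - np.linalg.solve(J, r)
    return x

# Newton applied to the preconditioned system P instead of F.
x = newton(P, np.zeros(2))
print(x, np.linalg.norm(F(x)))
```

The construction mirrors the linear case: just as the splitting matrix of a stationary iteration becomes a preconditioner for Krylov methods, the fixed-point map G defines a transformed nonlinear system on which Newton is run.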
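
The Kalman approach in the last abstract can be illustrated with a minimal 1D sketch. For simplicity this uses a plain ensemble Kalman analysis step, u_a = u_b + BAᵀ(ABAᵀ + R)⁻¹(f − Au_b) with the background covariance B estimated from ensemble anomalies, rather than the square-root ensemble transform variant discussed in the talk; the signal, point spread function, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 40                                    # signal length, ensemble size
t = np.linspace(0.0, 1.0, n)
u_true = np.exp(-200.0 * (t - 0.5) ** 2)         # sharp "image" (1D signal)

# Blurring matrix A from a Gaussian point spread function.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2.0 * 0.02 ** 2))
A /= A.sum(axis=1, keepdims=True)

sigma = 1e-3
f = A @ u_true + sigma * rng.standard_normal(n)  # blurred, noisy data

# Background ensemble: smooth random perturbations around a flat guess
# (A is reused here merely as a convenient smoother for the noise).
ens = 0.5 + 0.2 * (A @ rng.standard_normal((n, m)))
u_b = ens.mean(axis=1)
U = (ens - u_b[:, None]) / np.sqrt(m - 1)        # anomalies, so B = U @ U.T

# Kalman analysis: u_a = u_b + B A^T (A B A^T + R)^(-1) (f - A u_b)
S = A @ U                                        # observed anomalies
R = sigma ** 2 * np.eye(n)                       # observation noise covariance
innovation = f - A @ u_b
u_a = u_b + U @ (S.T @ np.linalg.solve(S @ S.T + R, innovation))

# The analysis fits the data at least as well as the background mean.
print(np.linalg.norm(f - A @ u_a), np.linalg.norm(f - A @ u_b))
```

Because the update only moves within the span of the ensemble anomalies, the quality of the deblurred analysis depends directly on how the background ensemble, and hence the covariance B, is constructed, which is the point stressed in the abstract.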