Colloquium/Seminar


Events in December 2008


  • Friday, 5th December, 2008

    Title: ICM Lecture Series: Convex Formulation of Image Segmentation Models and Applications (Lecture 1)
    Speaker: Dr. Xavier Bresson, Department of Mathematics, University of California, Los Angeles
    Time/Place: 10:00  -  11:00
    FSC1217, Fong Shu Chuen Library, Ho Sin Hang Campus, Hong Kong Baptist University
    Abstract: I will introduce a convex formulation for a large class of variational segmentation models known as active contour models. Standard approaches use the Level Set Method (LSM) to implement the active contour model. Although the LSM has many desirable properties, such as natural changes of topology and stable numerical schemes, it also suffers from two serious limitations. First, the level set energy is not convex, which makes the choice of the initial condition critical to obtaining a satisfactory solution. Second, standard LSM schemes are slow to converge. We propose a new approach that overcomes these two limitations by computing a global minimizer quickly. Since local minimizers can also be useful in some applications, such as medical imaging, in which we want to extract specific objects, we will also introduce a fast numerical scheme to determine a local minimizer. Applications are given for segmentation and for a free boundary problem. Joint work with Stanley Osher and Tony Chan.
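
    For background, a convex formulation of this type (a sketch in the spirit of the Chan-Esedoglu-Nikolova relaxation that this line of work builds on; the notation is illustrative rather than taken from the lecture) reads:

        % Relax the binary segmentation label to a function u with values in [0,1]:
        \min_{0 \le u \le 1} \; \int_\Omega |\nabla u| \, dx
            + \lambda \int_\Omega r(x) \, u(x) \, dx ,
        % where, e.g., r(x) = (f(x) - c_1)^2 - (f(x) - c_2)^2 for an image f
        % with region means c_1 and c_2. The energy is convex in u, and
        % thresholding a minimizer u* at any level t in (0,1) yields a global
        % minimizer of the original non-convex binary problem.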


  • Friday, 5th December, 2008

    Title: ICM Lecture Series: Fast Numerical Schemes for Geometry Processing (Lecture 2)
    Speaker: Dr. Xavier Bresson, Department of Mathematics, University of California, Los Angeles
    Time/Place: 11:15  -  12:15
    FSC1217, Fong Shu Chuen Library, Ho Sin Hang Campus, Hong Kong Baptist University
    Abstract: Fast algorithms are crucial to developing real-world applications such as object detection in medical images, noise removal, or object tracking in video surveillance. Variational models offer strong mathematical tools for defining well-posed algorithms, but they are not as fast as discrete optimization techniques such as graph cuts. We have recently proposed very fast continuous minimization algorithms whose performance is close to, or better than, that of graph cuts. These algorithms, based on the Bregman iterative scheme, provide fast geometry processing algorithms. Applications to segmentation, surface reconstruction from a set of points, and surface interpolation are presented. Joint work with Tom Goldstein, Stanley Osher and Tony Chan.
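
    A minimal sketch of a Bregman iterative scheme in this spirit, applied to anisotropic TV denoising in the style of Goldstein and Osher's split Bregman method (the periodic-boundary FFT solve, the parameter values, and the function names below are our own illustrative choices, not details from the lecture):

        import numpy as np

        def grad(u):
            # forward differences with periodic boundary conditions
            ux = np.roll(u, -1, axis=1) - u
            uy = np.roll(u, -1, axis=0) - u
            return ux, uy

        def div(px, py):
            # backward differences: the negative adjoint of grad
            return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

        def shrink(x, t):
            # soft-thresholding: closed-form solution of the l1 subproblem
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def split_bregman_tv(f, mu=10.0, lam=5.0, n_iter=50):
            # min_u |grad u|_1 + (mu/2)||u - f||^2 via the splitting d = grad u
            u = f.copy()
            dx, dy = np.zeros_like(f), np.zeros_like(f)
            bx, by = np.zeros_like(f), np.zeros_like(f)
            ny, nx = f.shape
            # Fourier symbol of (mu I - lam * Laplacian) under periodic BCs
            wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
            wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
            L = wy[:, None] + wx[None, :]
            for _ in range(n_iter):
                # u-subproblem: (mu - lam*Laplacian) u = mu f - lam div(d - b)
                rhs = mu * f - lam * div(dx - bx, dy - by)
                u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + lam * L)))
                ux, uy = grad(u)
                # d-subproblem: componentwise shrinkage
                dx = shrink(ux + bx, 1.0 / lam)
                dy = shrink(uy + by, 1.0 / lam)
                # Bregman update of the splitting variable
                bx += ux - dx
                by += uy - dy
            return u

        # usage: denoise a noisy piecewise-constant test image
        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
        denoised = split_bregman_tv(clean + 0.3 * rng.standard_normal(clean.shape))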


  • Friday, 5th December, 2008

    Title: ICM Lecture Series: Color Image Processing and Image Completion (Lecture 3)
    Speaker: Dr. Xavier Bresson, Department of Mathematics, University of California, Los Angeles
    Time/Place: 14:30  -  15:30
    FSC1217, Fong Shu Chuen Library, Ho Sin Hang Campus, Hong Kong Baptist University
    Abstract: In this lecture, I will talk about two topics. The first is a fast and well-posed regularization algorithm for color/vectorial images based on a dual formulation of the vectorial Total Variation (VTV). This model is the vectorial extension of Chambolle's projection algorithm for scalar images. The proposed model minimizes the exact VTV norm, whereas standard approaches use a regularized norm. The numerical scheme is straightforward to implement, and the algorithm is fast. Perhaps more importantly, the proposed VTV minimization scheme can easily be extended to many standard applications such as inpainting, deblurring, image decomposition, etc. The second topic is image completion, which aims at recovering lost information in digital images. Many deterministic and stochastic approaches have been proposed to solve the completion problem. We will define a local variational model to recover the geometry, following the Gestalt principle of good continuation. We will also introduce a non-local variational model to recover the lost textures. Results are presented on synthetic and natural images. Joint work with Tony Chan.
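
    For reference, the vectorial Total Variation of an M-channel image u = (u_1, ..., u_M) and the dual (sup) form that underlies projection-type algorithms can be sketched as follows (standard definitions; the lecture's exact formulation may differ):

        % VTV couples the M channels through a pointwise Euclidean norm:
        \mathrm{VTV}(u) = \int_\Omega \Big( \sum_{i=1}^{M} |\nabla u_i(x)|^2 \Big)^{1/2} dx ,
        % with the dual formulation used by Chambolle-type projection schemes:
        \mathrm{VTV}(u) = \sup_{\|p(x)\| \le 1} \; \sum_{i=1}^{M} \int_\Omega u_i(x) \, \operatorname{div} p_i(x) \, dx ,
        % the supremum running over smooth, compactly supported vector fields
        % p = (p_1, ..., p_M) with pointwise Euclidean norm at most 1.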


  • Monday, 15th December, 2008

    Title: DLS: The Spectrum of the 1-Laplace Operator
    Speaker: Prof. Kung-Ching Chang, School of Mathematical Sciences, Peking University, China
    Time/Place: 11:00  -  12:30
    RRS905, Sir Run Run Shaw Building, Ho Sin Hang Campus, HKBU
    Abstract: An eigenfunction of the 1-Laplace operator is defined to be a critical point, in the sense of the strong slope, of a nonsmooth constrained variational problem. We give a complete description of all such eigenfunctions for the 1-Laplace operator on intervals.
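
    In standard notation (our summary of the setting, not the speaker's), the problem can be written formally as follows:

        % The nonsmooth constrained variational problem behind the spectrum:
        \min \Big\{ \int_\Omega |\nabla u| \, dx \; : \; \int_\Omega |u| \, dx = 1 \Big\} ,
        % whose critical points formally satisfy the multivalued
        % Euler--Lagrange ("eigenvalue") inclusion
        -\operatorname{div}\!\Big( \frac{\nabla u}{|\nabla u|} \Big) \in \mu \, \mathrm{Sgn}(u) ,
        % where Sgn is the set-valued sign, mu plays the role of the
        % eigenvalue, and the quotient is understood in a suitable weak sense.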


  • Thursday, 18th December, 2008

    Title: DLS: Risk Assessment and Asset Allocation with Gross Exposure Constraints for Vast Portfolios
    Speaker: Prof. Jianqing Fan, Department of Operations Research and Financial Engineering, Princeton University, USA
    Time/Place: 11:00  -  12:30
    LT2, Ho Sin Hang Campus, Hong Kong Baptist University
    Abstract: Markowitz (1952, 1959) laid down the ground-breaking work on mean-variance analysis. Under his framework, the theoretical optimal allocation vector can be very different from the estimated one for large portfolios, due to the intrinsic difficulty of estimating a vast covariance matrix and return vector. This can result in adverse performance of portfolios selected on the basis of empirical data, owing to the accumulation of estimation errors. We address this problem by introducing gross-exposure constrained mean-variance portfolio selection. We show that with a gross-exposure constraint, the theoretical optimal portfolios have similar performance to the empirically selected ones based on estimated covariance matrices, and there is no error accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio is not diversified enough and can be improved by allowing some short positions. As the constraint on short sales is relaxed, the number of selected assets varies from a small number to the total number of stocks, when tracking portfolios or selecting assets. This achieves optimal sparse portfolio selection, whose performance is close to that of the theoretical optimum. Among 1000 stocks, for example, we are able to identify all optimal subsets of portfolios of different sizes, their associated allocation vectors, and their estimated risks. The utility of our new approach is illustrated by simulation and by empirical studies on the 100 Fama-French industrial portfolios and on 400 stocks randomly selected from the Russell 3000.
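
    The gross-exposure constrained problem described in the abstract can be sketched in standard notation as follows (our paraphrase; Sigma denotes the covariance matrix of the p asset returns and c the gross-exposure bound):

        % Mean-variance risk minimization with a gross-exposure (L1) bound:
        \min_{w \in \mathbb{R}^p} \; w^\top \Sigma \, w
        \quad \text{subject to} \quad w^\top \mathbf{1} = 1, \quad \|w\|_1 \le c ,
        % where c = 1 recovers the no-short-sale portfolio and letting c grow
        % relaxes the short-sale constraint toward the Markowitz optimum.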


  • Monday, 22nd December, 2008

    Title: Convergence Analysis of Adaptive Non-Standard Finite Element Methods
    Speaker: Prof. Ronald H.W. Hoppe, Department of Mathematics, University of Houston, Houston, TX, USA, and Institute of Mathematics, University of Augsburg, Augsburg, Germany
    Time/Place: 11:30  -  12:30
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Adaptive finite element methods have become powerful tools for the efficient and reliable numerical solution of partial differential equations and systems thereof. They consist of successive loops of the cycle SOLVE ==> ESTIMATE ==> MARK ==> REFINE. Here, SOLVE stands for the solution of the finite element discretized problem with respect to a given triangulation of the computational domain using, e.g., advanced iterative solvers based on multilevel and/or domain decomposition methods. The following step, ESTIMATE, provides a cheaply computable, localizable a posteriori error estimator for the global discretization error or some other problem-specific quantity of interest. The subsequent step, MARK, deals with the selection of elements, faces and/or edges of the triangulation for refinement and/or coarsening, whereas the final step, REFINE, takes care of the technical realization of the refinement/coarsening process. An important issue is the convergence analysis of the adaptive loop in the sense of a guaranteed reduction of the underlying error functional. During the past decade, such a convergence analysis has been successfully established mainly for standard conforming finite element discretizations of second order elliptic boundary value problems. In this contribution, we focus on recent results for non-standard discretizations such as mixed and mixed-hybrid methods, non-conforming techniques including Discontinuous Galerkin methods, and edge element approximations of Maxwell's equations.
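
    As a concrete, deliberately simple illustration of the SOLVE ==> ESTIMATE ==> MARK ==> REFINE cycle, here is a sketch of an adaptive P1 finite element loop for -u'' = f on (0,1) with Doerfler (bulk) marking; the 1D setting, the residual-type indicator, and all parameter choices are our own simplifications, not taken from the talk:

        import numpy as np

        def solve_p1(nodes, f):
            # SOLVE: P1 FEM for -u'' = f on (0,1) with u(0) = u(1) = 0
            n = len(nodes)
            h = np.diff(nodes)
            # tridiagonal stiffness matrix over the interior nodes
            main = 1.0 / h[:-1] + 1.0 / h[1:]
            off = -1.0 / h[1:-1]
            A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            # load vector by midpoint quadrature on each element
            mid = 0.5 * (nodes[:-1] + nodes[1:])
            fm = f(mid) * h
            b = 0.5 * (fm[:-1] + fm[1:])
            u = np.zeros(n)
            u[1:-1] = np.linalg.solve(A, b)
            return u

        def estimate(nodes, f):
            # ESTIMATE: residual-type indicator eta_K ~ h_K ||f||_{L2(K)}
            h = np.diff(nodes)
            mid = 0.5 * (nodes[:-1] + nodes[1:])
            return h ** 1.5 * np.abs(f(mid))

        def mark(eta, theta=0.5):
            # MARK: Doerfler (bulk) criterion -- smallest set of elements
            # carrying a theta fraction of the total squared estimated error
            order = np.argsort(eta)[::-1]
            cum = np.cumsum(eta[order] ** 2)
            k = np.searchsorted(cum, theta * cum[-1]) + 1
            return order[:k]

        def refine(nodes, marked):
            # REFINE: bisect every marked element
            new = 0.5 * (nodes[marked] + nodes[marked + 1])
            return np.sort(np.concatenate([nodes, new]))

        # the adaptive cycle, driven by a sharply peaked source term
        f = lambda x: 100.0 * np.exp(-100.0 * (x - 0.5) ** 2)
        nodes = np.linspace(0.0, 1.0, 5)
        for _ in range(10):
            u = solve_p1(nodes, f)
            eta = estimate(nodes, f)
            nodes = refine(nodes, mark(eta))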