Colloquium/Seminar


Event(s) in January 2018


  • Monday, 8th January, 2018

    Title: Principal Graph and Structure Learning based on Reversed Graph Embedding
    Speaker: Dr. Li Wang, Department of Mathematics, University of Texas at Arlington
    Time/Place: 11:00  -  12:00
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Many scientific datasets are high-dimensional, and their analysis usually requires retaining the most important structures of the data. Many existing methods work only for data whose structures can be mathematically formulated as curves, which is quite restrictive for real applications. To obtain more general graph structures, we develop a novel principal graph and structure learning framework that captures the local information of the underlying graph structure based on reversed graph embedding. A new learning algorithm is developed that simultaneously learns a set of principal points and a graph structure from the data. Experimental results on various synthetic and real-world datasets show that the proposed method can uncover the underlying structure correctly.
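The reversed-graph-embedding objective itself is not reproduced in the abstract, but the two ingredients the talk couples, a set of principal points and a graph over them, can be illustrated with a much simpler stand-in: k-means centroids connected by a minimum spanning tree. All names and parameters below are illustrative, not the speaker's method:

```python
import numpy as np

def principal_points(X, k, iters=50, seed=0):
    """Toy principal-point fit: plain k-means, a crude stand-in for the
    reversed-graph-embedding objective described in the talk."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def mst_edges(centers):
    """Connect the principal points by a minimum spanning tree (Prim's
    algorithm), one simple way to extract a graph structure."""
    k = len(centers)
    d = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    in_tree, edges = {0}, []
    while len(in_tree) < k:
        _, i, j = min((d[i, j], i, j) for i in in_tree
                      for j in range(k) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Noisy samples along the curve y = sin(x)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 3.0, 200)
X = np.column_stack([x, np.sin(x)]) + 0.02 * rng.standard_normal((200, 2))
C = principal_points(X, 6)
E = mst_edges(C)
print(len(E))  # a spanning tree on 6 principal points has 5 edges
```

On curve-like data the recovered tree is a path tracing the curve, which is the simplest instance of the "underlying graph structure" the framework targets.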


  • Wednesday, 10th January, 2018

    Title: Functional and Moonlighting Studies of Proteins and RNA based on Network Organization and Cellular Localization Diversity
    Speaker: Dr. Eason Cheng, Department of Computer Science and Engineering, The Chinese University of Hong Kong, HKSAR
    Time/Place: 10:00  -  11:00
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Protein interactome and subcellular localization resources are invaluable for network-related analysis and have provided great insights for the mechanistic understanding of human diseases and for drug design. In this talk, (i) I will first provide a comprehensive view of protein localization in the human protein interactome data and improve the understanding of the localization diversity of protein-protein interactions. (ii) I will introduce a procedure, Subcellular Module Identification with Localization Expansion (SMILE), for identifying functional protein modules. (iii) I will present the ncTALENT model, which calculates the ncRNA target localization coefficient (TLC), a measure of how diversely an ncRNA's targets are distributed among the subcellular locations. (iv) I will introduce a novel methodology, MoonFinder, for the identification of moonlighting lncRNAs. Overall, my studies provide deeper insights into the principles of molecular localization and the organization of the cellular regulatory and interaction networks, and they are well suited for network reorganization, functional module identification, localization diversity quantification, and moonlighting macromolecule determination.
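The TLC formula itself is not given in the abstract. As a purely hypothetical stand-in, a normalized Shannon entropy over an RNA's target-location distribution captures the same idea of localization diversity; the function name and the 0-to-1 scaling below are assumptions, not the speaker's definition:

```python
import math
from collections import Counter

def localization_diversity(target_locations):
    """Hypothetical diversity score in the spirit of a target
    localization coefficient: normalized Shannon entropy of how targets
    are spread across subcellular locations (0 = one location,
    1 = uniform over the observed locations)."""
    counts = Counter(target_locations)
    n = sum(counts.values())
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    hmax = math.log(len(counts)) if len(counts) > 1 else 1.0
    return h / hmax

print(localization_diversity(["nucleus"] * 5))         # 0.0
print(localization_diversity(["nucleus", "cytosol"]))  # 1.0
```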


  • Thursday, 25th January, 2018

    Title: Robin-Type RASPEN Method for Nonlinear Steady-State Diffusion Equations
    Speaker: Mr. GU Yaguang, Department of Mathematics, Hong Kong Baptist University, HKSAR
    Time/Place: 14:30  -  16:00
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Domain decomposition methods became popular with the advent of parallel computing. Their advantages are, on the one hand, that one can cut a huge global problem into small pieces, solve the easier subdomain problems in parallel, and finally glue the solutions together; on the other hand, a domain decomposition method can be understood either as an iterative solver or as a preconditioner for the linear system derived from the global problem. For linear problems, domain decomposition methods have shown their strengths with both Dirichlet and Robin transmission conditions. Recently, the so-called RASPEN method (restricted additive Schwarz preconditioned exact Newton) was proposed for nonlinear problems, where the Dirichlet transmission condition was considered. RASPEN is robust because all components involved in Newton's method are computed exactly in advance, which is also the key reason that RASPEN converges faster than another existing solver, ASPIN (additive Schwarz preconditioned inexact Newton). In this talk we present our further study of a Robin-type RASPEN method. Nonlinear steady-state diffusion equations, including the Forchheimer equation, will be tested to illustrate that a better preconditioner can be obtained with the Robin transmission condition than with the Dirichlet one. Finally, we discuss the Robin-type RASPEN algorithm, which inspires our future work.
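The Robin-type RASPEN solver is beyond a short sketch, but the basic Schwarz idea the abstract describes, cut the domain into overlapping pieces, solve the easier subdomain problems, and glue the results, can be illustrated on a linear 1D model problem with Dirichlet transmission conditions. The grid size, overlap, and iteration count below are arbitrary choices, not values from the talk:

```python
import numpy as np

def solve_dirichlet(f, ua, ub, h):
    """Solve -u'' = f on an interior grid with Dirichlet values ua, ub,
    using the standard 3-point finite-difference scheme."""
    m = len(f)
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

n = 41                       # grid points on [0, 1], incl. boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)               # -u'' = 1, u(0) = u(1) = 0
u = np.zeros(n)
L, R = 25, 15                # overlapping subdomains [0, x_L] and [x_R, 1]

for _ in range(30):          # alternating Schwarz sweeps
    # left solve: interior nodes 1..L-1, right BC taken from current u[L]
    u[1:L] = solve_dirichlet(f[1:L], 0.0, u[L], h)
    # right solve: interior nodes R+1..n-2, left BC from the updated u[R]
    u[R + 1:n - 1] = solve_dirichlet(f[R + 1:n - 1], u[R], 0.0, h)

err = np.abs(u - x * (1 - x) / 2).max()  # exact solution u = x(1-x)/2
print(err)
```

With this overlap the iteration contracts the error geometrically, so after a few dozen sweeps the glued solution matches the exact one to machine precision; RASPEN builds a Newton method on top of exactly this kind of subdomain fixed-point map.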


  • Tuesday, 30th January, 2018

    Title: Computationally Efficient Tensor Completion with Statistical Optimality
    Speaker: Dr. Dong Xia, Department of Statistics, Columbia University, U.S.A.
    Time/Place: 11:30  -  12:30
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: We propose two frameworks for estimating a low-rank tensor from a subset of its entries, with a focus on both statistical and computational efficiency. In the noiseless setting, we show that a gradient descent algorithm, with an initial value obtained from a novel spectral method, can reconstruct the tensor under a sharp sample size requirement. Unlike earlier approaches to tensor completion, our method is efficient to compute, easy to implement, and does not impose extra structure on the tensor. If the observations are noisy, we show that an even simpler algorithm, combining spectral thresholding and power iterations, achieves the optimal rates of convergence, filling a gap in the statistical theory of noisy tensor completion. Even under weak conditions, our algorithm significantly outperforms the existing approaches in the literature.
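The tensor algorithms themselves are not specified in the abstract, but the flavor of the spectral step can be sketched on the simpler matrix-completion analogue: rescale the zero-filled observations so their expectation matches the target, then truncate the SVD. The dimensions, rank, and sampling rate below are illustrative choices, not the talk's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 100, 2, 0.7                 # size, rank, sampling probability
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p         # which entries are observed

# Spectral step: dividing the zero-filled observations by p makes the
# matrix an unbiased estimate of M; keep only the top-r SVD components.
Y = np.where(mask, M, 0.0) / p
U, s, Vt = np.linalg.svd(Y)
Mhat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

rel = np.linalg.norm(Mhat - M) / np.linalg.norm(M)
print(rel)  # relative error well below 1 from partial observations
```

In the frameworks described in the talk, an estimate of this kind serves as the initial value that gradient descent or power iterations then refine to the stated optimal rates.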