Colloquium/Seminar


Events in June 2013


  • Friday, 7th June, 2013

    Title: Parallel Randomized Algorithms in Optimization
    Speaker: Prof. Stephen Wright, University of Wisconsin-Madison, USA
    Time/Place: 16:30  -  17:30
    LMC509, Lui Ming Choi Centre, HSH Campus, Hong Kong Baptist University
    Abstract: Modern optimization algorithms make increasing use of randomization as a means of reducing the amount of information needed at each step, while eventually accessing enough information about the problem to identify a good approximate solution. Stochastic gradient methods, which obtain gradient estimates from small samples of a full data set, are one example of such methods. Related techniques include the Kaczmarz method for linear algebraic systems and coordinate descent methods in optimization. In this talk, we focus on parallel versions of these methods that are suited to asynchronous implementation on multicore processors. Convergence theory for these methods is described, along with some computational experience.
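As a concrete illustration of one technique named in the abstract, here is a minimal sketch of the basic (sequential) randomized Kaczmarz iteration for a consistent linear system Ax = b; the function name and parameters are illustrative, and the asynchronous parallel variants discussed in the talk build on this elementary step.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10000, seed=0):
    """Randomized Kaczmarz iteration for a consistent system Ax = b.

    Each step samples a row i with probability proportional to ||a_i||^2
    and projects the iterate onto the hyperplane a_i^T x = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)        # squared row norms ||a_i||^2
    probs = row_norms / row_norms.sum()      # row-sampling distribution
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection onto the i-th hyperplane.
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Note that each step touches only a single row of A, which is exactly the small-information-per-step property the abstract emphasizes.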


  • Monday, 10th June, 2013

    Title: A simple approach to sample size calculation for count data in matched cohort studies
    Speaker: Dr. GAO Dexiang, Department of Pediatrics, University of Colorado Denver, USA
    Time/Place: 11:00  -  12:00
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: In matched cohort studies, treated and untreated (or exposed and unexposed) individuals are matched on certain characteristics to form clusters (strata) in order to reduce potential confounding effects. Data in these studies are clustered, and thus dependent, due to matching. When the outcome is a count, specialized methods are needed for analysis and sample size estimation. We propose a simple approach for calculating statistical power and sample size for clustered Poisson data in matched cohort studies when the exposure ratio (the ratio of exposed to unexposed subjects in a cluster) is constant across clusters. We extend the approach to clustered count data with overdispersion (subject heterogeneity). We evaluated these approaches with simulation studies and applied them to a matched cohort study examining the association of parental depression with pediatric health care utilization. Simulation results showed that the methods for power and sample size performed well under the scenarios examined and were robust in the presence of mixed exposure ratios up to 30%.
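For orientation, the sketch below shows a standard two-sample (unmatched, 1:1) Poisson sample-size calculation based on the asymptotic variance of the log rate ratio. It deliberately ignores the matching-induced clustering and the overdispersion that the talk's proposed method accounts for, and the function name is illustrative, not the speakers' implementation.

```python
import math
from statistics import NormalDist

def poisson_sample_size(rate_exposed, rate_unexposed, alpha=0.05, power=0.8):
    """Pairs per group needed to detect a given Poisson rate ratio.

    Simplified unmatched 1:1 calculation: the variance of the estimated
    log rate ratio per pair is 1/rate_exposed + 1/rate_unexposed.
    Clustering and overdispersion are NOT accounted for here.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    log_rr = math.log(rate_exposed / rate_unexposed)
    var = 1 / rate_exposed + 1 / rate_unexposed
    return math.ceil((z_a + z_b) ** 2 * var / log_rr ** 2)
```

Overdispersion and within-cluster correlation would inflate the required sample size relative to this baseline, which is the gap the proposed method addresses.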


  • Monday, 17th June, 2013

    Title: Model-Averaged Confidence Intervals
    Speaker: Dr. David James Fletcher, Department of Mathematics and Statistics, University of Otago, New Zealand
    Time/Place: 11:00  -  12:00
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Model-averaging provides a means of allowing for model uncertainty when estimating a parameter or making a prediction. In the Bayesian approach to inference, model averaging arises naturally and a model-averaged posterior is clearly defined. In the frequentist setting, a model-averaged estimate is usually a weighted mean of the estimates from the individual models, with the weight being a measure of how well that model fits the data. It is less obvious how one should calculate a model-averaged confidence interval. I will provide an overview of this area, including suggestions for further research.
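The frequentist weighted mean described in the abstract can be sketched using Akaike weights, one common choice of model weight (the function names here are illustrative, and this covers only the point estimate, not the confidence-interval question the talk focuses on):

```python
import math

def akaike_weights(aics):
    """Convert AIC scores to model weights (smaller AIC -> larger weight)."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]   # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged_estimate(estimates, aics):
    """Weighted mean of per-model parameter estimates using Akaike weights."""
    weights = akaike_weights(aics)
    return sum(w * e for w, e in zip(weights, estimates))
```

Constructing a confidence interval around such a weighted estimate is the non-obvious step the talk surveys, since the weights themselves are random.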


  • Tuesday, 18th June, 2013

    Title: Data-driven tight frame construction and image denoising
    Speaker: Dr. Jian-Feng Cai, Department of Mathematics, University of Iowa, USA
    Time/Place: 11:00  -  12:00
    FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
    Abstract: Sparsity based regularization methods for image restoration assume that the underlying image has a good sparse approximation under a certain system. Such a system can be a basis, a frame, or a general over-complete dictionary. One widely used class of such systems in image restoration is wavelet tight frames. There have been enduring efforts on seeking wavelet tight frames under which a certain class of functions or images can have a good sparse approximation. However, the structure of images varies greatly in practice, and a system working well for one type of image may not work for another. I will present a method that derives a discrete tight frame system from the input image itself to provide a better sparse approximation to the input image. Such an adaptive tight frame construction scheme is applied to image denoising by constructing a tight frame tailored to the given noisy data. The experiments showed that the proposed approach performs better in image denoising than those wavelet tight frames designed for a class of images. Moreover, by ensuring the system derived from our approach is always a tight frame, our approach also runs much faster than some other adaptive over-complete dictionary based approaches with comparable PSNR performance.
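To illustrate the general idea of learning a sparsifying system from the noisy image itself, here is a simplified stand-in: an orthonormal patch basis learned by PCA from the input image, followed by hard thresholding of patch coefficients. The talk's method constructs a genuine data-driven tight frame; PCA yields only an orthonormal basis (a special case of a tight frame), and the function name and parameters are illustrative.

```python
import numpy as np

def patch_pca_denoise(img, patch=8, thresh=30.0):
    """Simplified adaptive sparse denoising (NOT the talk's algorithm):
    learn an orthonormal patch basis by PCA from the noisy image, hard-
    threshold the patch coefficients, and average overlapping patches."""
    H, W = img.shape
    # Collect all overlapping patch x patch windows as rows.
    windows = np.lib.stride_tricks.sliding_window_view(img, (patch, patch))
    P = windows.reshape(-1, patch * patch)
    mean = P.mean(axis=0)
    # PCA: right singular vectors of centered patches form the adaptive basis.
    _, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
    coeffs = (P - mean) @ Vt.T
    coeffs[np.abs(coeffs) < thresh] = 0.0      # hard thresholding -> sparsity
    P_hat = coeffs @ Vt + mean
    # Re-assemble: average the overlapping patch estimates at each pixel.
    out = np.zeros((H, W))
    cnt = np.zeros((H, W))
    k = 0
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            out[i:i + patch, j:j + patch] += P_hat[k].reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1
            k += 1
    return out / cnt
```

Because the basis is estimated from the given noisy data rather than fixed in advance, it adapts to the image's structure, which is the key point of the data-driven construction described in the abstract.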