Event(s) in August 2020
- Tuesday, 4th August, 2020
Title: Estimation of Individual Treatment Effect via Gaussian Mixture Model
Speaker: Ms WANG Juan, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 14:00 - 16:00
Zoom, Meeting ID: 998 0921 8771 Password: 934782
Abstract: In this thesis, we investigate the estimation of treatment effects from a Bayesian perspective: one first obtains the posterior distribution of the unobserved potential outcome from observed data, and then the posterior distribution of the treatment effect. We mainly consider how to represent the joint distribution of the two potential outcomes - one from the treated group and the other from the control group - which gives an indirect impression of their correlation, since the estimation of the treatment effect depends on the correlation between the two potential outcomes. The first part of this thesis illustrates the effectiveness of adopting Gaussian mixture models for the treatment effect problem. We apply two mixture models - Gaussian Mixture Regression (GMR) and Gaussian Mixture Linear Regression (GMLR) - as potentially simple and powerful tools to investigate the joint distribution of the two potential outcomes. For GMR, we consider a joint distribution of the covariates and the two potential outcomes. For GMLR, we consider a joint distribution of the two potential outcomes, which depend linearly on the covariates. Through developing an EM algorithm for GMLR, we find that GMR and GMLR are effective in estimating means and variances, but not in capturing the correlation between the two potential outcomes. In the second part of this thesis, GMLR is modified to capture an unobserved covariance structure (the correlation between outcomes) that can be explained by latent variables introduced through an important model assumption. We propose a much more efficient Pre-Post EM algorithm to implement the proposed GMLR model with unobserved covariance structure in practice. Simulation studies show that the Pre-Post EM algorithm performs well not only in estimating means and variances, but also in estimating covariances.
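The EM machinery underlying GMR/GMLR can be illustrated with a minimal one-dimensional Gaussian mixture fit. The data, initial values and two-component setup below are purely illustrative and are not the thesis's treatment-effect model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian components (an illustrative stand-in for the
# outcome mixtures discussed in the abstract, not the thesis's GMLR).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# Initial guesses for weights, means, variances.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: responsibilities (posterior component probabilities).
    dens = w * normal_pdf(x[:, None], mu, var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, variances from responsibilities.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

# mu now holds the two recovered component means (near -2 and 3).
```

The same E/M alternation carries over to the regression variants, where the M-step fits per-component regression coefficients instead of plain means.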
- Monday, 10th August, 2020
Title: Investigations on Models and Algorithms in Variational Approaches for Image Restoration
Speaker: Ms FANG Yingying, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 10:30 - 12:30
Zoom, Meeting ID: 974 2874 8031 Password: 665989
Abstract: Variational methods, which have proven very useful for solving ill-posed inverse problems, have generated a lot of research interest in image restoration. They transform the restoration problem into the optimization of a well-designed variational model. When the designed model is convex, the recovered image is the global solution found by an appropriate numerical algorithm, and the quality of the restored image depends on the accuracy of the model. Thus, much effort has been devoted to proposing more precise models that produce results with more pleasing visual quality. Besides, due to the high dimension and non-smoothness of imaging models, efficient algorithms for finding the exact solution of a variational model are also of research interest, since they influence the efficiency of restoration techniques in practical applications. This thesis centers on two objectives in image restoration. The first objective is to improve two models for image denoising. For multiplicative noise removal, we design a regularizer based on the statistical properties of speckle noise, which transforms the traditional model (known as the AA model) into a convex one. Therefore, a global solution can be found independently of the initialization of the numerical algorithm. Moreover, the regularization term added to the AA model helps produce a sharper result. The second model improves on the traditional ROF model by adding an edge regularization that incorporates an edge prior obtained from the observed image. Extensive experiments show that the designed edge regularization is superior in recovering the texture of the result while removing staircase artifacts. The edge regularization can also be easily adapted to other restoration tasks, such as image deblurring.
The second objective of this thesis is to study numerical algorithms for a general nonsmooth image restoration model. As imaging models are usually high-dimensional, existing algorithms usually use only first-order information of the image. In contrast, a novel numerical algorithm based on an inexact Lagrangian function is proposed in this thesis, which exploits second-order information to reach a superlinear convergence rate. Experiments show that the proposed algorithm efficiently reaches the solution with higher accuracy than state-of-the-art algorithms.
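ROF-type models of the kind discussed above minimize a data-fidelity term plus a total variation penalty. A minimal sketch for a 1-D signal, using a smoothed TV term and plain gradient descent, is given below; the signal, noise level and parameters are illustrative and not taken from the thesis:

```python
import numpy as np

# Smoothed-TV denoising of a 1-D piecewise-constant signal: minimize
#   0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
# by gradient descent. A toy stand-in for ROF-type models.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + rng.normal(0, 0.1, 100)

lam, eps, step = 0.3, 1e-2, 0.05
u = noisy.copy()
for _ in range(1000):
    du = np.diff(u)
    # Gradient of the smoothed TV term: each difference du[i] contributes
    # to both of its endpoints, with opposite signs.
    g = du / np.sqrt(du ** 2 + eps)
    tv_grad = np.zeros_like(u)
    tv_grad[:-1] -= g
    tv_grad[1:] += g
    u -= step * ((u - noisy) + lam * tv_grad)

# u is a denoised signal: flat segments smoothed, the jump largely preserved.
```

The non-smooth (eps = 0) model requires the proximal or primal-dual machinery alluded to in the abstract; the smoothing here merely keeps the sketch to plain gradient descent.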
- Thursday, 13th August, 2020
Title: On the Construction of Uniform Designs and the Uniformity Property of Fractional Factorial Designs
Speaker: Mr KE Xiao, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 10:00 - 12:00
Zoom, Meeting ID: 959 5675 8172 Password: 522127
Abstract: Uniform design has found successful applications in manufacturing, systems engineering, pharmaceutics and the natural sciences since it appeared in the 1980s. Recently, research related to uniform design has been growing, focusing mainly on the construction and theoretical properties of uniform designs. On one hand, new construction methods can help researchers search for uniform designs more efficiently and effectively. On the other hand, since uniformity has been accepted as an essential criterion for comparing fractional factorial designs, it is interesting to explore its relationship with other criteria, such as aberration, orthogonality and confounding. The first goal of this thesis is to propose new uniform design construction methods and recommend designs with good uniformity. A novel stochastic heuristic, the adjusted threshold accepting algorithm, is proposed for searching for uniform designs. This algorithm has generated a number of uniform designs that outperform the existing uniform design tables on the website "http://uic.edu.hk/itsc/uniformdesign". In addition, designs with good uniformity are recommended for screening either qualitative or quantitative factors via a comprehensive study of symmetric orthogonal designs with 27 runs, 3 levels and 13 factors. These designs are also outstanding under other traditional criteria. The second goal of this thesis is an in-depth study of the uniformity property of fractional factorial designs. Close connections between different criteria, and lower bounds on the average uniformity, are revealed, which can be used as benchmarks for selecting the best designs. Moreover, we find that non-isomorphic designs have different combinatorial and geometric properties in their projected and level-permuted designs. Two new non-isomorphism detection methods are proposed and used to classify fractional factorial designs.
The new methods have advantages over existing ones in terms of computational efficiency and classification capability. Finally, the relationship between uniformity and isomorphism of fractional factorial designs is discussed in detail. We find that isomorphic designs may have different geometric structures, and we propose a new isomorphism identification method that significantly reduces the computational complexity of the procedure. A new uniformity criterion, the uniformity pattern, is proposed to evaluate the overall uniformity performance of an isomorphic design set.
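Comparing designs by uniformity requires a computable discrepancy measure; a common choice is the centered L2-discrepancy (Hickernell's formula). The sketch below evaluates it for a design in [0,1]^s and compares an evenly spread 1-D design against a clumped one; the designs are illustrative and are not the thesis's constructions:

```python
import numpy as np

def cd2(x):
    """Squared centered L2-discrepancy of an n x s design x in [0,1]^s."""
    n, s = x.shape
    a = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** s
    term2 = (2.0 / n) * np.prod(1 + 0.5 * a - 0.5 * a ** 2, axis=1).sum()
    prod = np.ones((n, n))
    for k in range(s):
        xi = x[:, k][:, None]
        xj = x[:, k][None, :]
        prod *= (1 + 0.5 * np.abs(xi - 0.5) + 0.5 * np.abs(xj - 0.5)
                 - 0.5 * np.abs(xi - xj))
    term3 = prod.sum() / n ** 2
    return term1 - term2 + term3

# An evenly spread 1-D design (8 midpoints) versus a clumped one.
even = (np.arange(8)[:, None] + 0.5) / 8.0
clump = np.full((8, 1), 0.1)
# cd2(even) is much smaller than cd2(clump): the even design is more uniform.
```

A threshold-accepting search of the kind described in the abstract would use such a discrepancy as its objective, accepting candidate designs whose discrepancy increase stays below a slowly tightening threshold.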
- Friday, 21st August, 2020
Title: Statistical Methods for Blood Pressure Prediction
Speaker: Mr HUANG Zijian, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 10:00 - 12:00
Zoom, Meeting ID: 918 4242 0507 Passcode: 622311
Abstract: Blood pressure is one of the most important indicators of human health. The symptoms of many cardiovascular diseases, such as stroke, atrial fibrillation and acute myocardial infarction, usually include abnormal variation of blood pressure, and severe symptoms of diseases such as coronary syndrome, rheumatic heart disease, arterial aneurysm and endocarditis also usually appear along with variation of blood pressure. Most current blood pressure measurements rely on the Korotkoff sounds method, which takes a one-time measurement but cannot monitor blood pressure continuously, and so cannot effectively detect diseases or alert patients. Previous research indicating a relationship between the photoplethysmogram (PPG) signal and blood pressure opened up a new direction for blood pressure measurement. Ideally, with continuous monitoring of the PPG signal, a subject's blood pressure could be measured longitudinally, which better matches the current requirements for blood pressure measurement as an indicator of human health. However, the relationship between blood pressure and the PPG signal is complex, depending on personal and environmental status, which has posed a challenge for many previous works that tried to map the PPG signal to blood pressure without considering other factors. In this thesis, we propose two statistical methods that model the relationships among blood pressure, PPG signals and other factors, and predict blood pressure. We also describe the modeling and prediction process on a real data set and provide accurate prediction results that meet the international blood pressure measurement standard. In the first part, we propose the Independent Variance Components Mixed-model (IVCM), which introduces variance components to describe the relationships among observations.
The relationship indicators are collected as information to divide observations into groups, and the latent effects of group membership are estimated and used to predict the multiple responses. The Stochastic Approximation Minorization-maximization (SAM) algorithm is used for IVCM parameter estimation. As an extension of the Minorization-maximization (MM) algorithm, the SAM algorithm provides estimates comparable to those of the MM algorithm, but with faster computation and lower computational cost. We also provide a subsampling prediction method for the IVCM model, which predicts multiple response variables via the conditional expectation of the model's random effects. The prediction speed of the subsampling method is as fast as the SAM algorithm's parameter estimation, with very little loss of accuracy. Because the SAM algorithm and the subsampling prediction method require hyperparameters, extensive simulation results are provided to guide hyperparameter selection. In the second part, we propose the Groupwise Reweighted Mixed-model (GRM) to describe the variation of random effects as well as the potential components of mixture distributions. The model combines the properties of mixed models and mixture models to capture the relationships among observations as well as between the predictor variables and the response variables. We propose the Groupwise Expectation Minorization-maximization (GEM) algorithm for parameter estimation. Developed from the MM algorithm and the Expectation Maximization (EM) algorithm, it estimates parameters quickly and accurately by exploiting the properties of block-diagonal matrices. A corresponding prediction method for the GRM model is provided, along with simulations for selecting the number of components. In the third part, we apply the IVCM and GRM models to real data for blood pressure prediction.
We establish a database for modeling blood pressure with PPG signals and personal characteristics, extract PPG features from the PPG signal waves, and analyze the relationship between the PPG signal and blood pressure with the IVCM and GRM models. Blood pressure prediction results from the different models are provided and compared. The best prediction results not only meet the international blood pressure measurement standard but also show strong performance in predicting high blood pressure.
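Predicting responses via the conditional expectation of random effects, as in the subsampling method above, can be illustrated on the simplest random-intercept model, where that conditional expectation reduces to a shrunken group mean. The variances are treated as known here for illustration (the thesis estimates them, via the SAM/GEM algorithms), and this toy model is far simpler than IVCM or GRM:

```python
import numpy as np

# Random-intercept model y_ij = mu + b_i + e_ij, with b_i ~ N(0, sigma_b^2)
# and e_ij ~ N(0, sigma_e^2). Predict each b_i by its conditional
# expectation given the data (a BLUP-style shrunken group mean).
rng = np.random.default_rng(2)
mu, sigma_b, sigma_e = 5.0, 2.0, 2.0
n_groups, n_per = 50, 5
b = rng.normal(0, sigma_b, n_groups)
y = mu + b[:, None] + rng.normal(0, sigma_e, (n_groups, n_per))

mu_hat = y.mean()
# Shrinkage factor: how much of a group's deviation is signal, not noise.
shrink = n_per * sigma_b**2 / (n_per * sigma_b**2 + sigma_e**2)
b_hat = shrink * (y.mean(axis=1) - mu_hat)  # E[b_i | data], variances known

# A new observation from group i is then predicted as mu_hat + b_hat[i].
```

The shrinkage factor goes to 1 as the per-group sample size grows, which is why group-level prediction improves with more observations per group.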
- Thursday, 27th August, 2020
Title: Regularized Neural Networks for Semantic Image Segmentation
Speaker: Mr JIA Fan, Department of Mathematics, Hong Kong Baptist University, Hong Kong
Time/Place: 14:00 - 16:00
Zoom, Meeting ID: 925 0566 3757 Passcode: 465521
Abstract: Image processing consists of a series of tasks that appear widely in many areas. Among these, image segmentation, which plays an important role in many applications, is a fundamental task: it aims to classify the pixels of a given image into several classes. Variational methods have shown their strength in all kinds of image processing problems, such as image denoising, image deblurring and image segmentation, and they preserve image structures well. In recent decades, it has become more and more popular to reformulate an image processing problem as an energy minimization problem, which is then solved by optimization-based methods. Meanwhile, convolutional neural networks (CNNs) have achieved outstanding results in a wide range of fields such as image processing, natural language processing and video recognition. CNNs are data-driven techniques that often need large training datasets compared to other methods, such as variational ones. When handling image processing tasks with large-scale datasets, CNNs are the first choice due to their superior performance. However, in semantic segmentation tasks, which are dense classification problems, the class of each pixel is predicted independently, so spatial regularity of the segmented objects remains a problem for these methods. Especially when given little training data, CNNs do not perform well on details: isolated and scattered small regions often appear in all kinds of CNN segmentation results. In this thesis, we successfully add spatial regularization to the segmented objects. In our methods, spatial regularization such as total variation (TV) can be easily integrated into CNNs, producing smooth edges and eliminating isolated points. Spatial dependency is a very important prior for many image segmentation tasks.
Generally, convolutional operations are building blocks that process one local neighborhood at a time, which means CNNs usually do not explicitly make use of this spatial prior in image segmentation tasks. Empirical evaluation of the regularized neural networks on a series of image segmentation datasets shows their good performance and ability to improve many image segmentation CNNs. We also design a recurrent structure composed of multiple TV blocks; applying this structure to a popular segmentation CNN further improves the segmentation results. This is an end-to-end framework for regularizing segmentation results, which gives smooth edges and eliminates isolated points. Compared to other post-processing methods, our method needs little extra computation and is thus effective and efficient. Since long-range dependency is also very important for semantic segmentation, we further present a non-local regularized softmax activation function for semantic image segmentation, introducing graph operators into CNNs by integrating a nonlocal total variation regularizer into the softmax activation function, computed via the primal-dual hybrid gradient method. Experiments show that the non-local regularized softmax activation function brings a regularization effect while preserving object details.
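The effect of regularizing per-pixel class probabilities can be illustrated with a toy example: an isolated misclassified pixel disappears once the softmax map is smoothed. Plain neighborhood averaging is used below as a crude surrogate for the TV block; the thesis's actual primal-dual TV scheme is not reproduced here:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Per-pixel logits for 2 classes on an 8x8 grid: background strongly
# favored everywhere, except one isolated "foreground" pixel.
logits = np.zeros((8, 8, 2))
logits[..., 0] = 2.0
logits[4, 4] = [0.0, 3.0]

p = softmax(logits)
for _ in range(3):
    # Average each pixel's class probabilities with its 4-neighborhood
    # (a crude smoothing surrogate for a TV regularization step).
    padded = np.pad(p, ((1, 1), (1, 1), (0, 0)), mode="edge")
    p = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
         padded[1:-1, :-2] + padded[1:-1, 2:] + p) / 5.0

seg = p.argmax(axis=-1)  # the isolated pixel is now labeled background
```

A TV regularizer achieves the same suppression of isolated points while, unlike plain averaging, preserving genuine object boundaries.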