Title: | Estimation and Inference for Generalized Geoadditive Models |
Speaker: | Prof Lijian Yang, Center for Statistical Sciences, Tsinghua University, Beijing, China |
Time/Place: | 09:00 - 10:00 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | In many application areas, data are collected on a count or binary response together with spatial covariate information. In this paper, we introduce a new class of generalized geoadditive models (GGAMs) for spatial data distributed over complex domains. Through a link function, the proposed GGAM assumes that the mean of the discrete response variable depends on additive univariate functions of the explanatory variables and a bivariate function that adjusts for the spatial effect. We propose a two-stage approach for estimating and drawing inferences about the components of the GGAM. In the first stage, the univariate components and the geographical component of the model are approximated via univariate polynomial splines and bivariate penalized splines over triangulation, respectively. In the second stage, local polynomial smoothing is applied to the cleaned univariate data to average out the variation of the first-stage estimators. We investigate the consistency of the proposed estimators and the asymptotic normality of the univariate components. We also establish a simultaneous confidence band for each of the univariate components. The performance of the proposed method is evaluated in two simulation studies. We apply the method to analyze crash count data from the Tampa-St. Petersburg urbanized area in Florida. |
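In symbols, a model of the class described above can be sketched as follows (the notation is illustrative, not taken from the paper):

```latex
% Generalized geoadditive model (illustrative notation):
% g is a known link (e.g., log for counts, logit for a binary response),
% the f_j are unknown univariate functions of the explanatory variables,
% and b is an unknown bivariate function capturing the spatial effect
% over the complex domain.
g\bigl(\mathbb{E}[Y \mid X, s]\bigr)
  = \beta_0 + \sum_{j=1}^{p} f_j(X_j) + b(s_1, s_2)
```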
Title: | Neyman-Pearson classification |
Speaker: | Dr TONG Xin, Department of Data Sciences and Operations, University of Southern California, Los Angeles, CA
Time/Place: | 10:00 - 11:00 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, alpha, on the type I error. Although the NP paradigm has a century-long history in hypothesis testing, it has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than alpha do not satisfy the type I error control objective because the resulting classifiers are still likely to have type I errors much larger than alpha. This talk introduces the speaker and coauthors' work on NP classification algorithms and their applications and raises current challenges under the NP paradigm. |
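The gap between empirical and true type I error control can be seen in a small sketch. All numbers below are hypothetical, and the order-statistic rule is a simplified variant of the umbrella-algorithm idea rather than the speakers' exact procedure: on a held-out class-0 sample, the naive (1 - alpha) quantile only bounds the empirical type I error, while choosing a higher order statistic bounds the probability that the true type I error exceeds alpha.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy scores: a larger score is stronger evidence for class 1.
scores0 = rng.normal(0.0, 1.0, size=500)   # held-out class-0 scores
scores1 = rng.normal(2.0, 1.0, size=500)   # class-1 scores

alpha, delta = 0.05, 0.05   # type I error bound and violation tolerance
n = scores0.size
sorted0 = np.sort(scores0)

# Naive threshold: empirical (1 - alpha) quantile of the class-0 scores.
# This controls only the *empirical* type I error; the true type I error
# still exceeds alpha with probability close to 1/2.
naive_t = np.quantile(sorted0, 1.0 - alpha)

# NP-style threshold: if the k-th smallest class-0 score is used as the
# threshold, the probability that the true type I error exceeds alpha
# equals the binomial tail P(Bin(n, 1 - alpha) >= k).  Pick the smallest
# k that drives this violation probability below delta.
pmf = np.array([math.comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                for j in range(n + 1)])
tail = np.cumsum(pmf[::-1])[::-1]          # tail[k] = P(Bin >= k)
k_star = int(np.argmax(tail <= delta))     # first k where tail <= delta
np_t = sorted0[k_star - 1]

type2 = float(np.mean(scores1 <= np_t))    # empirical type II error
print(f"naive t = {naive_t:.3f}, NP t = {np_t:.3f}, "
      f"k* = {k_star}, empirical type II error = {type2:.3f}")
```

The NP threshold sits above the naive quantile: the price of a high-probability guarantee on type I error is a somewhat larger type II error.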
Title: | Mathematical Study on Plasmon Materials and Their Applications |
Speaker: | Mr Hongjie Li, Department of Mathematics, HKBU, HKSAR |
Time/Place: | 14:30 - 16:30 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Title: | Why are spectral methods preferred in PDE eigenvalue computations in some cases? |
Speaker: | Prof Zhang Zhimin, Beijing Computational Science Research Center, Beijing, China |
Time/Place: | 15:30 - 16:30 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | When approximating PDE eigenvalue problems by numerical methods such as finite differences and finite elements, it is common knowledge that only a small portion of the numerical eigenvalues are reliable. By comparison, spectral methods may perform extremely well in some situations, especially for 1-D problems. In addition, we demonstrate that spectral methods can outperform traditional methods and the state-of-the-art method for 2-D problems, even in the presence of singularities. |
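The 1-D claim can be illustrated with a hedged sketch (not the speaker's experiments; the model problem, sizes, and tolerance are chosen for illustration): for -u'' = lam*u on (-1, 1) with Dirichlet boundary conditions, exact eigenvalues (k*pi/2)^2, we count how many eigenvalues a Chebyshev collocation method and a second-order finite difference scheme each compute to within 1%, using the same number of interior points.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on N+1 points in [-1, 1]
    (a translation of Trefethen's cheb.m)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))            # diagonal via negative row sums
    return D, x

N = 40
D, x = cheb(N)
L_cheb = -(D @ D)[1:-1, 1:-1]              # Dirichlet BCs by row/col deletion
lam_cheb = np.sort(np.linalg.eigvals(L_cheb).real)

n = N - 1                                  # same number of interior points
h = 2.0 / (n + 1)
L_fd = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
        - np.diag(np.ones(n - 1), -1)) / h**2
lam_fd = np.sort(np.linalg.eigvalsh(L_fd))

k = np.arange(1, n + 1)
exact = (k * np.pi / 2.0) ** 2             # exact eigenvalues (k*pi/2)^2
tol = 1e-2                                 # "reliable" = within 1% relative error
good_cheb = int(np.sum(np.abs(lam_cheb - exact) / exact < tol))
good_fd = int(np.sum(np.abs(lam_fd - exact) / exact < tol))
print(f"{n} interior points: {good_cheb} reliable Chebyshev eigenvalues, "
      f"{good_fd} reliable finite-difference eigenvalues")
```

Only a handful of the finite-difference eigenvalues meet the tolerance, while a large fraction of the Chebyshev eigenvalues do, which is the phenomenon the title alludes to.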
Title: | Making molecular movies with X-ray Lasers |
Speaker: | Prof Liu Haiguang, Beijing Computational Science Research Center, Beijing, China |
Time/Place: | 16:30 - 17:30 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | “Seeing is believing!” That is mostly true. Using X-rays, people are seeing the microscopic world and engineering the ordering of atoms to make life better. In this talk, I hope to convey the message that we are going to see a 4D microscopic world (space + time), and that it is time for molecular movies to be staged. The talk has two parts: imaging the ultrasmall and capturing the superfast. X-rays have been used to study the fine structure of molecules in atomic detail by measuring the diffraction signals from molecular crystals. The advancement of X-ray free electron lasers (XFELs) makes it possible to study structures from single molecules without growing crystals. I will describe this single-particle imaging method and present the challenges in processing the images that result from single particle/molecule imaging using XFELs. Due to the ultra-high brilliance of XFEL pulses, each measurement results in the complete destruction of the sample; the bright side of femtosecond X-ray pulses is that each measurement outruns the X-ray damage. Although this ‘diffract-before-damage’ mode is successful, the image processing remains challenging, mostly because the orientation of each molecule is unknown. Furthermore, the coexistence of multiple molecular conformations complicates the orientation determination. I will explain the data analysis pipeline and the problems awaiting (better) solutions. In the second part, I will show how we can make molecular movies. XFELs offer femtosecond laser pulses, enabling temporal resolution down to sub-picosecond scales using a pump-probe approach, and I will show our progress in video-taping an ion-pumping protein. |
Title: | Statistical analysis in meta-analysis of observational studies (cohort studies) |
Speaker: | Dr Yinghui JIN, Department of Evidence-Based Medicine and Clinical Epidemiology, Wuhan University, Wuhan, Hubei, China |
Time/Place: | 16:00 - 17:00 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | The cohort study design is the best available scientific method for measuring the effects of a suspected risk factor. In a prospective cohort study, researchers raise a question and form a hypothesis about what might cause a disease. Some cohort studies have been very large and have continued for a long time, producing a good deal of data that serves researchers in different fields. Statistical analyses include calculating the incidence of the outcome and examining the association between exposure factors and outcomes. The relative risk (RR) is used in cohort studies to estimate the strength of the association between risk factors/exposure and the outcome. Meta-analysis is a statistical method for combining the results from two or more studies, and it is an optional part of a systematic review. When conducting a meta-analysis, we extract data, estimate the effect size in each study separately, and then combine the estimators from the different studies into a pooled result. In practice, however, the included studies report different data forms and statistical methods. For example, mixed model analysis, multivariate analysis of variance, generalized estimating equations, and Cox regression are used to examine the association between risk factors/exposure and the outcome, so the corresponding OR, RR, or β may be reported for the same outcome in different primary studies. How, then, can we perform a meta-analysis in such complicated data situations, and what else is worth investigating? |
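When each cohort study does report a 2x2 table, one standard pooling route is fixed-effect inverse-variance weighting of log relative risks. The sketch below uses hypothetical counts, not data from the talk, and is only the simplest case of the pooling problem the abstract raises:

```python
import math

# Hypothetical 2x2 data from three cohort studies:
# (exposed events, exposed total, unexposed events, unexposed total)
studies = [(30, 200, 20, 210), (45, 320, 28, 300), (12, 150, 10, 160)]

log_rrs, weights = [], []
for a, n1, c, n0 in studies:
    rr = (a / n1) / (c / n0)                  # relative risk in one cohort
    var = 1 / a - 1 / n1 + 1 / c - 1 / n0     # delta-method variance of log(RR)
    log_rrs.append(math.log(rr))
    weights.append(1.0 / var)

# Fixed-effect pooled estimate: inverse-variance weighted mean of log(RR),
# back-transformed to the RR scale with a 95% confidence interval.
pooled_log_rr = sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
lo, hi = pooled_log_rr - 1.96 * se, pooled_log_rr + 1.96 * se
print(f"pooled RR = {math.exp(pooled_log_rr):.2f}, "
      f"95% CI [{math.exp(lo):.2f}, {math.exp(hi):.2f}]")
```

The harder situations the abstract describes arise when studies report adjusted ORs, RRs, or regression coefficients instead of raw counts, so the per-study effect and variance must be recovered from whatever was published.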
Title: | Extremal curves on Stiefel manifolds |
Speaker: | Prof Irina Markina, Department of Mathematics, University of Bergen, Norway |
Time/Place: | 16:00 - 17:00 FSC1111, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | We will start by reviewing the role of nonlinear manifolds, such as the n-dimensional sphere, Grassmann and Stiefel manifolds, and Lie groups, in computer vision and image understanding. We will explain how their geometric structures can be used to produce geodesics and curves of constant geodesic curvature. More precisely, we describe three left-invariant non-holonomic systems on the Lie group of orthogonal transformations. We show that curves of constant geodesic curvature on Stiefel manifolds are the projections of non-holonomic geodesics generated by the left-invariant distributions on the Lie group that acts transitively on the Stiefel manifold. These curves of constant geodesic curvature are also useful in spline theory. This is joint work with V. Jurdjevic, University of Toronto, Canada, and F. Silva Leite, University of Coimbra, Portugal. |
Title: | Competing Risk Model with Bivariate Random Effects for Clustered Survival Data |
Speaker: | Dr Xin LAI, School of Management, Xi’an Jiaotong University, Shaanxi, China |
Time/Place: | 17:00 - 18:00 FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University |
Abstract: | Competing risks are often observed in clinical trial studies. As exemplified by two data sets, the bone marrow transplantation study for leukemia patients and the primary biliary cirrhosis study, patients can experience two competing events that may be correlated due to shared unobservable factors within the same cluster. In the presence of random hospital/cluster effects, a cause-specific hazard model with bivariate random effects is proposed to analyze clustered survival data with two competing events. This model extends earlier work by allowing the random effects in the two hazard functions to follow a bivariate normal distribution, giving a generalized model with a correlation parameter that governs the relationship between the two events due to the hospital/cluster effects. By adopting the GLMM formulation, random effects are incorporated into the model via the linear predictor terms. Estimation of the parameters is achieved via an iterative algorithm. A simulation study is conducted to assess the performance of the estimators under the proposed numerical estimation scheme. Application to the two data sets illustrates the usefulness of the proposed model. |
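A minimal sketch of such a model, with illustrative notation not taken from the talk: for event type k in {1, 2} and cluster i, the cause-specific hazard with a cluster-level random effect can be written as

```latex
% Cause-specific hazard for event k, subject j in cluster i (illustrative):
% lambda_{0k} is a baseline hazard, beta_k are fixed effects, and the
% cluster-level random effects (u_{i1}, u_{i2}) are bivariate normal,
% with rho linking the two competing events within a cluster.
\lambda_{k}(t \mid x_{ij}, u_{ik})
  = \lambda_{0k}(t)\, \exp\bigl(x_{ij}^{\top}\beta_k + u_{ik}\bigr),
\qquad
(u_{i1}, u_{i2})^{\top} \sim N_2\!\left(\mathbf{0},
  \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\
                  \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}\right)
```

Setting rho = 0 recovers independent frailties for the two events, so the correlation parameter is what lets shared hospital/cluster factors tie the competing risks together.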
We organize conferences and workshops every year. We hope to see you at one in the future.
Prof. M. Cheng, Dr. Y. S. Hon, Dr. K. F. Lam, Prof. L. Ling, Dr. T. Tong and Prof. L. Zhu have been awarded research grants by the Hong Kong Research Grants Council (RGC). Congratulations!