Title: Online Tensor Learning: Computational and Statistical Trade-offs, Adaptivity and Optimal Regret
Speaker: Dr. Jingyang LI, Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong
Time/Place: 11:00 - 12:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: We investigate a generalized framework for estimating latent low-rank tensors in an online setting, encompassing both linear and generalized linear models. This framework offers a flexible approach for handling continuous or categorical variables. Additionally, we investigate two specific applications: online tensor completion and online binary tensor learning. To address these challenges, we propose the online Riemannian gradient descent algorithm, which demonstrates linear convergence and the ability to recover the low-rank component under appropriate conditions in all applications. Furthermore, we establish a precise entry-wise error bound for online tensor completion. Notably, our work represents the first attempt to incorporate noise in the online low-rank tensor recovery task. Intriguingly, we observe a surprising trade-off between computational and statistical aspects in the presence of noise. Increasing the step size accelerates convergence but leads to higher statistical error, whereas a smaller step size yields a statistically optimal estimator at the expense of slower convergence. Moreover, we conduct regret analysis for online tensor regression. Under the fixed step size regime, a fascinating trilemma concerning the convergence rate, statistical error rate, and regret is observed. With an optimal choice of step size, we achieve an optimal regret of O(√T). Furthermore, we extend our analysis to the adaptive setting where the horizon T is unknown. In this case, we demonstrate that by employing different step sizes, we can attain a statistically optimal error rate along with a regret of O(log T). To validate our theoretical claims, we provide numerical results that corroborate our findings.
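The computational/statistical step-size trade-off described in the abstract can be seen even in a toy problem. The sketch below is a minimal online gradient recursion on streaming noisy linear observations (a hypothetical linear-regression setup, not the speaker's Riemannian tensor algorithm; the dimension, horizon, and noise level are illustrative): a large step size converges quickly but settles at a noisier estimate, while a small step size converges slowly to a more accurate one.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, sigma = 20, 5000, 0.5                 # dimension, horizon, noise level (all hypothetical)
theta_star = rng.standard_normal(d)         # unknown parameter to be tracked online

def run(eta):
    """Online gradient descent on streaming noisy linear observations."""
    theta = np.zeros(d)
    for _ in range(T):
        x = rng.standard_normal(d)                          # streaming design vector
        y = x @ theta_star + sigma * rng.standard_normal()  # noisy response
        theta -= eta * (x @ theta - y) * x                  # squared-loss gradient step
    return np.linalg.norm(theta - theta_star)

print(f"large step (eta=0.05):  final error {run(0.05):.3f}")   # fast, but noisier
print(f"small step (eta=0.005): final error {run(0.005):.3f}")  # slower, more accurate
```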
Title: The upper-crossing/solution (US) algorithm for root-finding with strongly stable convergence
Speaker: Professor Guoliang Tian, Department of Statistics and Data Science, Southern University of Science and Technology, China
Time/Place: 15:30 - 16:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: In this paper, we propose a new and broadly applicable root-finding method, called the upper-crossing/solution (US) algorithm, which belongs to the category of non-bracketing (or open-domain) methods. The US algorithm is a general principle for iteratively seeking the unique root θ* of a non-linear equation g(θ) = 0, and each iteration consists of two steps: an upper-crossing step (U-step) and a solution step (S-step). The U-step finds an upper-crossing function, or U-function, U(θ|θ^(t)) [whose form depends on θ^(t), the t-th iterate of θ*], based on a new notion called the changing-direction inequality, and the S-step solves the simple U-equation U(θ|θ^(t)) = 0 to obtain its explicit solution θ^(t+1). The US algorithm has two major advantages: (i) it converges strongly stably to the root θ*; and (ii) it does not depend on any initial values, in contrast to Newton's method. The key step in applying the US algorithm is to construct a simple U-function U(θ|θ^(t)) such that an explicit solution to the U-equation U(θ|θ^(t)) = 0 is available. Based on the first, second and third derivatives of g(θ), three methods are given for constructing such U-functions. We show various applications of the US algorithm in calculating quantiles of continuous distributions, calculating exact p-values for skew null distributions, and finding maximum likelihood estimates of parameters in a class of continuous/discrete distributions. An analysis of the convergence rate of the US algorithm and some numerical experiments are also provided. In particular, because of its strongly stable convergence, the US algorithm can be a powerful tool for solving an equation with multiple roots.
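As a concrete illustration, one simple way to realize the U-step/S-step pattern (an assumed reading based on the abstract, not the paper's exact construction) uses a first-derivative bound: if 0 < g'(θ) ≤ b for all θ, then U(θ|θ^(t)) = g(θ^(t)) + b(θ - θ^(t)) lies below g to the left of θ^(t) and above g to the right (an upper-crossing function), and the U-equation U = 0 has the explicit solution θ^(t+1) = θ^(t) - g(θ^(t))/b, which converges from any starting point.

```python
import math
from scipy.stats import norm

def us_iteration(g, b, theta0=0.0, tol=1e-10, max_iter=500):
    """Iterate theta_{t+1} = theta_t - g(theta_t)/b, the S-step for the
    linear U-function U(theta|theta_t) = g(theta_t) + b*(theta - theta_t)."""
    theta = theta0
    for _ in range(max_iter):
        step = g(theta) / b          # S-step: explicit solution of U = 0
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Application mentioned in the abstract: quantiles of a continuous distribution.
# For the standard normal, g(theta) = Phi(theta) - p satisfies 0 < g'(theta) <= b
# with b = 1/sqrt(2*pi), the maximum of the density.
p = 0.975
b = 1.0 / math.sqrt(2.0 * math.pi)
root = us_iteration(lambda t: norm.cdf(t) - p, b)
print(root, norm.ppf(p))             # both approximately 1.95996
```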
Title: Factor-Augmented Transformation Models for Interval-Censored Failure Time Data
Speaker: Dr. Shuwei Li, School of Economics and Statistics, Guangzhou University, China
Time/Place: 16:30 - 17:30, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: Interval-censored failure time data frequently arise in various scientific studies where each subject undergoes periodic examinations for the occurrence of the failure event of interest, and the failure time is only known to lie in a specific time interval. In addition, collected data may include multiple observed variables with a certain degree of correlation, leading to severe multicollinearity issues. This study proposes a factor-augmented transformation model to analyze interval-censored failure time data while reducing model dimensionality and avoiding the multicollinearity elicited by multiple correlated covariates. We provide a joint modeling framework that combines a factor analysis model, which groups the multiple observed variables into a few latent factors, with a class of semiparametric transformation models in which the augmented factors and other covariates act on the failure event. Furthermore, we propose a nonparametric maximum likelihood estimation approach and develop a computationally stable and reliable expectation-maximization algorithm for its implementation. We establish the asymptotic properties of the proposed estimators and conduct simulation studies to assess the empirical performance of the proposed method. An application to the Alzheimer's Disease Neuroimaging Initiative study is provided. An R package, ICTransCFA, is also available for practitioners.
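To make the dimension-reduction idea concrete, here is a minimal two-stage sketch of the factor-augmentation step on simulated data (illustrative only; the proposed method estimates the factor model and the transformation model jointly by nonparametric maximum likelihood, via the ICTransCFA package): correlated covariates are compressed into a few factor scores, which would then replace the collinear covariates in the failure-time model.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n, q, p = 500, 2, 10                      # subjects, latent factors, observed covariates
F = rng.standard_normal((n, q))           # latent factors
Lambda = rng.standard_normal((q, p))      # factor loadings
X = F @ Lambda + 0.3 * rng.standard_normal((n, p))  # strongly correlated observed covariates

fa = FactorAnalysis(n_components=q, random_state=0)
F_hat = fa.fit_transform(X)               # estimated low-dimensional factor scores

# F_hat (plus any extra covariates) would enter the semiparametric
# transformation model in place of the collinear columns of X.
print(F_hat.shape)                        # (500, 2)
corr = np.corrcoef(X, rowvar=False)
print(f"max |off-diagonal correlation| in X: {np.max(np.abs(corr - np.eye(p))):.2f}")
```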
Title: Affine-invariant WENO weights and their applications in solving hyperbolic conservation laws
Speaker: Dr. Bao-Shan Wang, School of Mathematical Sciences, Ocean University of China, China
Time/Place: 11:00 - 12:00, FSC1217, Fong Shu Chuen Library, HSH Campus, Hong Kong Baptist University
Abstract: The user-defined sensitivity parameter, responsible for avoiding division by zero in the WENO nonlinear weights, has plagued the schemes' performance in resolving smooth functions with high-order critical points (CP-property) and capturing discontinuities in an essentially non-oscillatory manner (ENO-property). In this talk, novel, simple yet effective WENO weights (Ai-weights) are devised for the (affine-invariant) Ai-WENO operator to handle the case when the function being reconstructed undergoes an affine transformation (Ai-operator) with a constant scaling and translation (Ai-coefficients) within a global stencil. The Ai-weights effectively decouple the inter-dependence of the Ai-coefficients and the sensitivity parameter. For any given sensitivity parameter, the Ai-WENO operator guarantees that the WENO operator and the affine-transformation operator commute, as proven theoretically and validated numerically. In the presence of discontinuities, the Ai-WENO scheme satisfies the ENO-property even when the classical WENO-JS and improved WENO-Z schemes might not. Examples from the shallow water wave equations, the Euler equations under gravitational fields, and hydrostatic reconstruction for hyperbolic chemotaxis models are given, in which the characteristic-wise Ai-WENO scheme is intrinsically well-balanced (WB-property). In summary, any Ai-weights-based WENO reconstruction/interpolation operator enhances the robustness and reliability of the WENO scheme for solving hyperbolic conservation laws.
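To see why a fixed sensitivity parameter breaks affine invariance, note that the Jiang-Shu smoothness indicators scale quadratically under u → au + b while the parameter ε does not. The sketch below rescales ε by a translation-invariant measure of the data on the global stencil, one plausible way to realize the decoupling described above (the talk's exact Ai-weights may differ): the resulting weights are identical for any affine copy of the data.

```python
import numpy as np

def weno5_weights(u, eps=1e-6, affine_invariant=True):
    """Nonlinear weights at the right cell interface from five cell values u[0..4]."""
    # Jiang-Shu smoothness indicators of the three 3-point substencils.
    b0 = 13/12*(u[0] - 2*u[1] + u[2])**2 + 1/4*(u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12*(u[1] - 2*u[2] + u[3])**2 + 1/4*(u[1] - u[3])**2
    b2 = 13/12*(u[2] - 2*u[3] + u[4])**2 + 1/4*(3*u[2] - 4*u[3] + u[4])**2
    beta = np.array([b0, b1, b2])
    d = np.array([0.1, 0.6, 0.3])               # ideal (linear) weights
    if affine_invariant:
        scale = np.max(u) - np.min(u)           # translation-invariant, scales like |a|
        if scale > 0:
            eps = eps * scale**2                # eps now transforms like beta under u -> a*u + b
    alpha = d / (eps + beta)**2
    return alpha / alpha.sum()

u = np.array([1.000, 1.001, 1.003, 1.007, 1.013])   # smooth, nearly flat data
for a, b in [(1.0, 0.0), (100.0, -5.0)]:            # two affine copies of the same data
    w_js = weno5_weights(a*u + b, affine_invariant=False)
    w_ai = weno5_weights(a*u + b, affine_invariant=True)
    print(f"a={a:5.0f}:  JS {np.round(w_js, 4)}  Ai {np.round(w_ai, 4)}")
# The JS weights drift with the scaling a (fixed eps against beta ~ a^2);
# the Ai-weights are identical for both affine copies.
```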
We organize conferences and workshops every year. We hope to see you at a future event.
Congratulations to Prof. M. Cheng, Dr. Y. S. Hon, Dr. K. F. Lam, Prof. L. Ling, Dr. T. Tong and Prof. L. Zhu, who have been awarded research grants by the Hong Kong Research Grants Council (RGC)!