Title: | New Developments in Meta-analysis with Five-number Summary |
Speaker: | Mr SHI Jiandong, Department of Mathematics, Hong Kong Baptist University, Hong Kong |
Time/Place: | 13:00 - 15:00 Zoom, Meeting ID: 971 1392 8467 Password: 584537 |
Abstract: | Meta-analysis is a statistical method for synthesizing multiple studies to achieve more comprehensive and reliable conclusions. This thesis mainly focuses on meta-analysis with continuous outcomes, where some studies report all or part of the five-number summary, namely the minimum and maximum values, the first and third quartiles, and the median. To handle five-number summary data in meta-analysis, this thesis covers a series of works, including skewness and/or normality tests, data transformation, and effect size estimation. In addition, a new paradox is identified in random-effects meta-analysis, which reminds practitioners to be cautious when using the random-effects model.
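To make the conversion concrete, the following is a minimal sketch, assuming approximately normal data, of how a study's sample mean and standard deviation can be estimated from its five-number summary using the widely cited estimators of Wan et al. (2014); the function name and toy numbers are illustrative and are not taken from the thesis itself.

```python
from scipy.stats import norm

def summary_to_mean_sd(a, q1, m, q3, b, n):
    """Estimate the sample mean and SD from a five-number summary
    (min a, Q1 q1, median m, Q3 q3, max b) of a sample of size n,
    assuming normality (scenario C3 of Wan et al., 2014)."""
    # Mean: a weighted average of the five summary statistics.
    mean = (a + 2 * q1 + 2 * m + 2 * q3 + b) / 8
    # SD: average the range-based and IQR-based estimators, each
    # scaled by its expected width (in SD units) under normality.
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))          # E[max - min] / sigma
    eta = 2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25))  # E[IQR] / sigma
    sd = 0.5 * ((b - a) / xi + (q3 - q1) / eta)
    return mean, sd

# Example: a study reporting {2.1, 4.0, 5.2, 6.5, 9.8} with n = 50.
print(summary_to_mean_sd(2.1, 4.0, 5.2, 6.5, 9.8, 50))
```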
Title: | New Advances in Fixed-effects Meta-analysis and a New Measure for Heterogeneity |
Speaker: | Ms YANG Ke, Department of Mathematics, Hong Kong Baptist University, Hong Kong |
Time/Place: | 15:00 - 17:00 Zoom, Meeting ID: 957 1320 3735 Password: 184058 |
Abstract: | Meta-analysis is a statistical tool for evidence-based practice, which aims to synthesize multiple studies and produce a summary conclusion for the whole body of research. In the literature, there are three main statistical models for meta-analysis: the common-effect model, the random-effects model, and the fixed-effects model. Chapter 1 gives a brief introduction to these three models, followed by several real data examples. Among the three models, the common-effect model and the random-effects model are more commonly used in the literature. To choose a proper model between them, the Q statistic and the I^2 statistic are frequently employed as criteria. Recently, it has been recognized that the fixed-effects model is also essential for meta-analysis. With this new model, the existing methods are no longer sufficient for model selection in meta-analysis. In view of this demand, we propose a novel method for model selection between the fixed-effects model and the random-effects model in Chapter 2. In Chapter 3, we propose a new measure for quantifying the heterogeneity in meta-analysis. To motivate it, we first show that the I^2 statistic depends heavily on the study sample sizes and, more seriously, may yield contradictory results for the amount of heterogeneity. Inspired by this, we propose an Intrinsic measure for Quantifying the heterogeneity in meta-analysis, referred to as the IQ statistic, to overcome the limitations of the I^2 statistic. Combining p-values is an important statistical approach for fixed-effects meta-analysis. Existing methods for combining p-values, and their statistical properties, rely on the assumption that the individual p-values are independent of each other. In Chapter 4, we propose new methods that can combine p-values derived from dependent tests. In the spirit of Birnbaum's admissibility for methods of combining independent p-values, we propose a criterion of admissibility for combining dependent p-values and derive the set of methods for combining bivariate p-values that satisfy this criterion. The theoretical results for bivariate p-values are further generalized to multivariate p-values.
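For readers unfamiliar with the Q and I^2 statistics mentioned above, the following is a minimal sketch of how they are computed for a toy set of study estimates, together with Fisher's classical method for combining independent p-values; the toy data and function names are illustrative, not from the thesis.

```python
import numpy as np
from scipy.stats import chi2

def q_and_i2(y, v):
    """Cochran's Q and Higgins' I^2 for study estimates y with variances v."""
    y, w = np.asarray(y), 1.0 / np.asarray(v)   # inverse-variance weights
    y_bar = np.sum(w * y) / np.sum(w)           # common-effect pooled estimate
    q = np.sum(w * (y - y_bar) ** 2)            # ~ chi2(k - 1) under homogeneity
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q)            # variation beyond chance, in [0, 1)
    return q, i2

def fisher_combine(pvals):
    """Fisher's method: -2 * sum(log p_i) ~ chi2(2k) for independent p-values."""
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))     # combined p-value

print(q_and_i2(y=[0.30, 0.10, 0.45, 0.25], v=[0.01, 0.02, 0.015, 0.01]))
print(fisher_combine([0.04, 0.20, 0.01]))
```

Note that Fisher's method, like the other classical combination rules, assumes independence; relaxing that assumption for dependent p-values is precisely the problem addressed in Chapter 4.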
Title: | Understanding Neural Networks from Geometrical Viewpoints |
Speaker: | Mr LAI Jianfa, Department of Mathematics, Hong Kong Baptist University, Hong Kong |
Time/Place: | 10:30 - 12:30 Zoom, Meeting ID: 961 6316 3621 Password: 047857 |
Abstract: | Deep neural networks, a core Artificial Intelligence (AI) methodology, are an effective and universal model capable of solving a wide variety of tasks. Even though neural networks are the state-of-the-art technology for fitting or approximating complex data, they are often criticized as a black-box model: their complex and nonlinear structure offers little insight into the structure of the data being approximated. To understand this complex structure, this thesis uses two geometrical viewpoints to examine how neural networks actually work. The first viewpoint is called ‘Space Division’: the neurons of all layers except the last divide the data space into many pieces, within each of which the data are easy to classify or fit, and the neurons of the last layer then classify or fit the data within each piece. The second viewpoint is called ‘Layerwise Mapping’: in a multilayer neural network, each layer maps the data into a new space in which they are easier to classify or fit, until the data can be linearly classified or fitted in the last layer. These two geometrical viewpoints reveal two key properties of neural networks, called Concentration and Monotonicity. Concentration shows that networks with larger width fit the data more easily; Monotonicity shows that the layerwise loss is decreasing. These two properties provide important evidence for why neural networks are able to learn by increasing width and depth, and give a better understanding of the roles of width and depth in neural networks.
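As a concrete illustration of the ‘Space Division’ viewpoint, the following minimal sketch counts the distinct ReLU activation patterns that a small, randomly initialized network induces on a grid of 2-D inputs; each distinct pattern corresponds to one piece on which the network is linear. The architecture and numbers are illustrative only, not the thesis's own experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny ReLU network: 2 inputs -> 8 hidden units -> 8 hidden units.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def activation_pattern(x):
    """Return the on/off pattern of every ReLU unit at input x; inputs
    sharing a pattern lie in the same linear piece of the network."""
    h1 = np.maximum(W1 @ x + b1, 0.0)
    h2 = np.maximum(W2 @ h1 + b2, 0.0)
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

# Probe the square [-3, 3]^2 on a dense grid and count distinct pieces.
grid = np.linspace(-3.0, 3.0, 200)
patterns = {activation_pattern(np.array([x, y])) for x in grid for y in grid}
print(f"distinct linear pieces found on the grid: {len(patterns)}")
```

Increasing the width (say, from 8 to 32 hidden units) and rerunning typically yields many more pieces, which matches the intuition behind the Concentration property above.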
We organize conferences and workshops every year. We hope to see you in the future.
Prof. M. Cheng, Dr. Y. S. Hon, Dr. K. F. Lam, Prof. L. Ling, Dr. T. Tong and Prof. L. Zhu have been awarded research grants by the Hong Kong Research Grants Council (RGC). Congratulations!