Title: A Geometric Understanding of Deep Learning
Speaker: Prof David Xianfeng Gu, Department of Computer Science, Stony Brook University, USA
Time/Place: 10:00 - 11:00, Zoom (Meeting ID: 965 6623 4311)
Abstract: This work introduces an optimal transportation (OT) view of generative adversarial networks (GANs). Natural datasets have intrinsic patterns, which can be summarized as the manifold distribution principle: the distribution of a class of data is close to a low-dimensional manifold. GANs mainly accomplish two tasks: manifold learning and probability distribution transformation. The latter can be carried out using the classical OT method. From the OT perspective, the generator computes the OT map, while the discriminator computes the Wasserstein distance between the generated data distribution and the real data distribution; both can be reduced to a convex geometric optimization process. Furthermore, OT theory reveals the intrinsically collaborative, rather than competitive, relation between the generator and the discriminator, and the fundamental reason for mode collapse. We also propose a novel generative model, which uses an autoencoder (AE) for manifold learning and an OT map for probability distribution transformation. This AE-OT model improves theoretical rigor and transparency, as well as computational stability and efficiency; in particular, it eliminates mode collapse. The experimental results validate our hypothesis and demonstrate the advantages of the proposed model.
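To make the probability-distribution-transformation step concrete, here is a minimal sketch (ours, not the speaker's implementation) of classical discrete optimal transport between two small point sets, solved with entropically regularized Sinkhorn iterations rather than the convex-geometric semi-discrete solver used in the AE-OT model; all function names, parameters, and the toy data are assumptions made purely for illustration.

```python
# Minimal illustration: entropically regularized discrete OT via Sinkhorn
# iterations, using only NumPy. Not the AE-OT solver described in the talk.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=500):
    """Approximate the OT plan between histograms a and b for cost matrix C."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)              # scale rows to match marginal a
        v = b / (K.T @ u)            # scale columns to match marginal b
    P = np.diag(u) @ K @ np.diag(v)  # transport plan with marginals ~ (a, b)
    cost = np.sum(P * C)             # regularized approximation of the OT cost
    return P, cost

# Toy example: move mass from "generated" samples onto "real" samples.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))                                # generated points
y = rng.normal(loc=1.0, size=(5, 2))                       # real points
C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared distances
C /= C.max()                                               # normalize for stability
a = np.full(5, 1 / 5)
b = np.full(5, 1 / 5)
P, cost = sinkhorn(a, b, C)
print("approximate OT cost:", cost)
```

In this toy picture the transport plan P plays the role of pushing the generated distribution onto the data distribution, and the (regularized) cost is a stand-in for the Wasserstein distance the discriminator estimates.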
Title: The Limits of Matrix Computations at Extreme Scale and Low Precisions
Speaker: Prof Nicholas J. Higham, Department of Mathematics, University of Manchester, United Kingdom
Time/Place: 16:00 - 17:00, Zoom (Meeting ID: 921 4525 8273)
Abstract: As computer architectures evolve and the exascale era approaches, we are solving larger and larger problems. At the same time, much modern hardware provides floating-point arithmetic in half, single, and double precision formats, and to make the most of the hardware we need to exploit the different precisions. How large can we take the dimension n in matrix computations and still obtain solutions of acceptable accuracy? Standard rounding error bounds are proportional to p(n)u, where u is the unit roundoff and p grows at least linearly with n. We are at the stage where these rounding error bounds cannot guarantee any accuracy or stability in the computed results for extreme-scale or low-accuracy computations. We explain how rounding error bounds with much smaller constants can be obtained. The key ideas are to exploit blocked algorithms, which break the data into blocks of size b and lead to a reduction in the error constants by a factor b or more; to take account of architectural features such as extended precision registers and fused multiply-add operations; and to carry out probabilistic rounding error analysis, which provides error constants that are the square roots of those of the worst-case bounds. Combining these considerations provides new understanding of the limits of what we can compute at extreme scale and low precision in numerical linear algebra.
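As a rough back-of-the-envelope illustration of the scale issue (our own example, not taken from the talk), the snippet below compares a worst-case-style constant n*u with the square-root-style constant sqrt(n)*u suggested by probabilistic analysis, for the standard IEEE half, single, and double precision unit roundoffs; the true constants p(n) depend on the algorithm, so these are stylized stand-ins.

```python
# Illustrative calculation: linear-in-n vs square-root-in-n error constants
# for the three IEEE precisions mentioned in the abstract.
import math

unit_roundoff = {
    "half (fp16)":   2.0 ** -11,   # u ~ 4.9e-4
    "single (fp32)": 2.0 ** -24,   # u ~ 6.0e-8
    "double (fp64)": 2.0 ** -53,   # u ~ 1.1e-16
}

for name, u in unit_roundoff.items():
    for n in (10**4, 10**6, 10**8):
        worst = n * u                # worst-case-style constant, linear in n
        prob = math.sqrt(n) * u      # probabilistic-style constant, sqrt growth
        print(f"{name:14s} n={n:>9d}  n*u={worst:9.2e}  sqrt(n)*u={prob:9.2e}")
```

At half precision, n*u already exceeds 1 for n in the low thousands, so a bound of that form guarantees nothing, while sqrt(n)*u remains well below 1 for the same n.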
We organize conferences and workshops every year. We hope to see you at a future event.
Prof. M. Cheng, Dr. Y. S. Hon, Dr. K. F. Lam, Prof. L. Ling, Dr. T. Tong and Prof. L. Zhu have been awarded research grants by the Hong Kong Research Grants Council (RGC). Congratulations!