Graph Neural Networks (GNNs) generalize conventional neural networks to graph-structured data and have received considerable attention owing to their impressive performance. In spite of these notable successes, the performance of Euclidean models is inherently limited by the representation ability of Euclidean geometry, especially for datasets with a highly non-Euclidean latent geometry. Recently, hyperbolic spaces have emerged as a promising alternative for processing graph data with tree-like structure or power-law distributions, and a surge of work on both methods and novel applications has followed. Unlike Euclidean space, whose volume expands polynomially, hyperbolic space grows exponentially with its radius, making it more suitable for modeling complex real-world data. Hence, it gains natural advantages in abstracting tree-like graphs with a hierarchical organization or power-law distribution.
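As a quick numerical illustration of this growth gap, the sketch below (plain Python; the function names are ours) compares the circumference of a circle of radius r in the Euclidean plane, 2πr, which grows linearly, with its counterpart in the hyperbolic plane of curvature −1, 2π·sinh(r), which grows exponentially:

```python
import math

def euclidean_circumference(r):
    # Circumference of a circle of radius r in the Euclidean plane: 2*pi*r
    return 2 * math.pi * r

def hyperbolic_circumference(r):
    # Circumference of a circle of radius r in the hyperbolic plane
    # of curvature -1: 2*pi*sinh(r), which grows exponentially in r
    return 2 * math.pi * math.sinh(r)

for r in [1.0, 5.0, 10.0]:
    e, h = euclidean_circumference(r), hyperbolic_circumference(r)
    print(f"r={r:5.1f}  euclidean={e:10.1f}  hyperbolic={h:14.1f}  ratio={h/e:10.1f}")
```

The rapidly increasing ratio mirrors how the number of nodes in a tree grows exponentially with depth, which is why hierarchies embed with low distortion in hyperbolic space.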
This tutorial aims to give a systematic review of the methods, applications, and challenges in this fast-growing and vibrant area, with the express purpose of being accessible to all audiences. More specifically, we will first give a brief introduction to graph neural networks as well as some preliminaries on Riemannian manifolds and hyperbolic geometry. We will then comprehensively revisit the technical details of the developed HGNNs, unifying them into a general framework and summarizing the variants of each component. We will also introduce applications deployed in a variety of fields. Finally, we will discuss several challenges and present potential solutions to address them, including some initial attempts of our own, paving the path for the further flourishing of the research community. A related GitHub repository of this tutorial can be found at Awesome-Hyperbolic-Graph-Representation-Learning.
The topics of this tutorial include (but are not limited to) the following:
- Introduction (Min, 09:00 - 09:30)
- 1.1 An overview of graph neural networks
- 1.2 Brief Introduction of Riemannian Geometry
- 1.3 Motivation of Hyperbolic Graph Representation Learning (HGRL)
- Hyperbolic graph neural networks (Menglin, 10:00 - 10:30)
- 2.1 Hyperbolic feature transformation
- 2.2 Hyperbolic neighborhood aggregation
- 2.3 Hyperbolic non-linear activation
- 2.4 A unified view of hyperbolic graph neural networks
- Applications (Menglin, 10:30 - 11:00)
- 3.1 HGNNs for recommender systems
- 3.2 HGNNs for knowledge graphs
- 3.3 HGRL for other applications
- Advanced Topics (Bo, 11:00 - 12:00)
- 4.1 Complex Structures
- 4.2 Evolving Interactions
- 4.3 Geometry-aware Learning
- 4.4 Trustworthiness and Scalability
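To make the components in Sections 2.1-2.4 concrete, here is a minimal NumPy sketch of one HGNN layer on the Poincaré ball of curvature −1. It is a simplified assumption in the spirit of tangent-space HGNN designs, not any single published architecture: features are lifted to the tangent space at the origin with the logarithmic map, transformed and aggregated there with ordinary Euclidean operations, and projected back onto the ball with the exponential map. All function names (`exp0`, `log0`, `hgnn_layer`) are ours.

```python
import numpy as np

def exp0(v, eps=1e-7):
    # Exponential map at the origin of the Poincare ball (curvature -1):
    # maps a tangent vector v to a point inside the unit ball.
    n = np.linalg.norm(v, axis=-1, keepdims=True).clip(eps)
    return np.tanh(n) * v / n

def log0(x, eps=1e-7):
    # Logarithmic map at the origin: maps a point on the ball back to
    # the tangent space; inverse of exp0.
    n = np.linalg.norm(x, axis=-1, keepdims=True).clip(eps, 1 - eps)
    return np.arctanh(n) * x / n

def hgnn_layer(X, A, W):
    # One tangent-space HGNN layer:
    # 1) hyperbolic feature transformation: lift to the tangent space
    #    at the origin and apply a Euclidean linear map,
    # 2) neighborhood aggregation with a (row-normalized) adjacency A,
    # 3) non-linear activation in the tangent space, then map back.
    H = log0(X) @ W
    H = A @ H
    return exp0(np.tanh(H))

# Toy usage: 4 nodes with 3-dimensional hyperbolic features.
rng = np.random.default_rng(0)
X = exp0(0.1 * rng.standard_normal((4, 3)))  # points on the ball
A = np.full((4, 4), 0.25)                    # toy row-normalized adjacency
W = 0.1 * rng.standard_normal((3, 3))
Y = hgnn_layer(X, A, W)                      # outputs stay inside the unit ball
```

The tangent-space trick is what unifies most HGNN variants: they differ mainly in where the maps are taken (origin vs. per-point), which model of hyperbolic space is used (Poincaré ball vs. Lorentz/hyperboloid), and how curvature is treated (fixed vs. learned).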
Dr. Min Zhou is currently a Research Scientist at Huawei Cloud, Shenzhen, China. She received her B.S. degree in Automation from the University of Science and Technology of China in 2012 and her Ph.D. degree from the Industrial Systems Engineering and Management Department, National University of Singapore, in 2016. She has published several works related to hyperbolic graph representation learning at top conferences. She co-organized the 1st Workshop on Machine Learning in Software Engineering (MLiSE) @ ECML-PKDD 2021 and a tutorial on Hyperbolic Graph Representation Learning @ ECML-PKDD 2022.
Mr. Menglin Yang is currently a PhD student in the Department of Computer Science and Engineering, The Chinese University of Hong Kong (CUHK). His research interests include graph representation learning, non-Euclidean geometric learning, recommender systems, and protein docking. Several of his works related to hyperbolic graph representation learning have been accepted at recent top conferences, including ICML, KDD, WSDM, WWW, and SIGIR.
Mr. Bo Xiong is currently a Marie Sklodowska-Curie Early Stage Researcher at the Department of Computer Science, University of Stuttgart, Germany, and a PhD student at the International Max Planck Research School for Intelligent Systems (IMPRS-IS). His research centres on geometric representation learning on graph and relational data, including hyperbolic representation learning. His research has been published at top conferences, including NeurIPS, KDD, ACL, WWW, and SIGIR, and received the ISWC 2022 Best Student Paper Award.
Prof. Hui Xiong is the Chair Professor and Thrust Head of the Artificial Intelligence Thrust, HKUST (Guangzhou). He is also a Distinguished Professor at Rutgers, the State University of New Jersey, and a Distinguished Guest Professor (Grand Master Chair Professor) at the University of Science and Technology of China (USTC). Dr. Xiong's general area of research is data and knowledge engineering, with a focus on developing effective and efficient data analysis techniques for emerging data-intensive applications. He was elected an ACM Distinguished Scientist in 2014, and an IEEE Fellow and an AAAS Fellow in 2020. He has served regularly on the organization and program committees of numerous conferences, including as a Program Co-Chair of the Industrial and Government Track for KDD 2012, a Program Co-Chair for IEEE ICDM 2013, a General Co-Chair for IEEE ICDM 2015, and a Program Co-Chair of the Research Track for KDD 2018. Dr. Hui Xiong received his Ph.D. in Computer Science from the University of Minnesota - Twin Cities, USA, in 2005, his B.E. degree in Automation from the University of Science and Technology of China (USTC), Hefei, China, and his M.S. degree in Computer Science from the National University of Singapore (NUS), Singapore.
Prof. Irwin King is the Chair and Professor of Computer Science & Engineering at The Chinese University of Hong Kong. His research interests include machine learning, social computing, AI, web intelligence, data mining, and multimedia information processing. In these research areas, he has over 300 technical publications in journals and conferences. He is an Associate Editor of the journal Neural Networks (NN). He is an IEEE Fellow, an ACM Distinguished Member, and a Fellow of the Hong Kong Institute of Engineers (HKIE). He has served as the President of the International Neural Network Society (INNS), General Co-chair of The WebConf 2020, ICONIP 2020, WSDM 2011, RecSys 2013, and ACML 2015, and in various capacities in a number of top conferences and societies such as WWW, NIPS, ICML, IJCAI, AAAI, and APNNS. He is the recipient of the ACM CIKM 2019 Test of Time Award, the ACM SIGIR 2020 Test of Time Award, and the 2020 APNNS Outstanding Achievement Award for his contributions to social computing with machine learning. In early 2010, while on leave with AT&T Labs Research, San Francisco, he taught classes as a Visiting Professor at UC Berkeley. He received his B.Sc. degree in Engineering and Applied Science from the California Institute of Technology (Caltech), Pasadena, and his M.Sc. and Ph.D. degrees in Computer Science from the University of Southern California (USC), Los Angeles.