Invited Talks

Hanghang Tong

Title: NetFair: Toward the Why Question of Network Mining

Abstract: Network (i.e., graph) mining plays a pivotal role in many high-impact application domains. The state of the art offers a wealth of sophisticated theories and algorithms, primarily focused on answering who- or what-type questions. On the other hand, the why and how questions of network mining have not been well studied. For example, how can we ensure that network mining is fair? How do mining results relate to the input graph topology? Why does the mining algorithm 'think' a transaction looks suspicious? In this talk, I will present our work on addressing individual fairness in graph mining. First, we present a generic definition of individual fairness for graph mining, which naturally leads to a quantitative measure of the potential bias in graph mining results. Second, we propose three mutually complementary algorithmic frameworks to mitigate the proposed individual bias measure, namely debiasing the input graph, debiasing the mining model, and debiasing the mining results. Each algorithmic framework is formulated from the optimization perspective, with effective and efficient solvers that are applicable to multiple graph mining tasks. Third, since accommodating individual fairness is likely to change the graph mining results obtained without the fairness consideration, we develop an upper bound to characterize the cost (i.e., the difference between the graph mining results with and without the fairness consideration). Toward the end of my talk, I will also introduce some other recent work on addressing the why and how questions of network mining, and share my thoughts on future work.
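The abstract does not spell out the bias measure itself. As a rough illustration of the flavor of such a quantitative measure, here is a minimal sketch, assuming individual fairness is operationalized as smoothness of the mining results over a node-node similarity matrix, so that similar nodes should receive similar results (in the spirit of the InFoRM line of work); the function name, the Laplacian-smoothness form, and the toy data are illustrative assumptions, not the speaker's exact formulation:

```python
import numpy as np

def individual_bias(Y, S):
    """Individual bias of mining results Y (n x d) with respect to a
    node-node similarity matrix S (n x n): similar nodes should receive
    similar results. Measured as Tr(Y^T L_S Y), the smoothness of Y over
    the Laplacian of S; zero means similar nodes are treated identically."""
    L = np.diag(S.sum(axis=1)) - S        # graph Laplacian of S
    return float(np.trace(Y.T @ L @ Y))   # = 0.5 * sum_ij S_ij * ||Y_i - Y_j||^2

# Toy check: nodes 0 and 1 are highly similar.
S = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
Y_consistent = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # similar nodes, same result
Y_biased     = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])  # similar nodes, different result
print(individual_bias(Y_consistent, S) < individual_bias(Y_biased, S))  # True
```

A measure of this form is what the three debiasing frameworks in the talk could then minimize, whether by modifying the input graph, the mining model, or the mining results.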

Bio: Hanghang Tong is currently an associate professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Before that, he was an associate professor in the School of Computing, Informatics, and Decision Systems Engineering (CIDSE) at Arizona State University. He received his M.Sc. and Ph.D. degrees from Carnegie Mellon University in 2008 and 2009, respectively, both in Machine Learning. His research interest is in large-scale data mining for graphs and multimedia. He has received several awards, including ACM Distinguished Member (2020), the ICDM Tao Li Award (2019), the SDM/IBM Early Career Data Mining Research Award (2018), the NSF CAREER Award (2017), the ICDM 10-Year Highest Impact Paper Award (2022 & 2015), and several best paper awards. He was the Editor-in-Chief of SIGKDD Explorations and is an associate editor of ACM Computing Surveys and Knowledge and Information Systems (Springer); he has served as a program committee member in data mining, database, and artificial intelligence venues (e.g., SIGKDD, SIGMOD, AAAI, WWW, and CIKM). He is an IEEE Fellow.

Jundong Li

Title: Empower Graph Machine Learning for Trustworthy Decision Making

Abstract: Graph machine learning (GML) models, such as graph neural networks, have proven to be highly effective in modeling graph-structured data and achieving remarkable predictive performance in various high-stakes applications, including credit risk scoring, crime prediction, and medical diagnosis. However, concerns have been raised regarding the trustworthiness of GML models in decision-making scenarios where fairness, transparency, and accountability are lacking. To address these concerns, I will present our recent work on empowering GML for trustworthy decision making, focusing on three key aspects: fairness, explanation, and causality. First, I will discuss how to improve the fairness of GML from a data-debiasing perspective. In particular, I will show how to measure data biases across different modalities of graph data and how to mitigate those biases in a model-agnostic manner that can benefit different GML models. Second, I will show that explanation, as an effective debugging tool, not only helps us understand how decisions are made but also serves as a useful tool for diagnosing how biases and discrimination are introduced into GML. Toward this goal, I will present a post-hoc structural explanation framework for understanding the unfairness issues of GML. Third, I will argue for the emerging need to introduce causality into trustworthy decision making on graphs, as traditional GML can rely heavily on spurious correlations when making decisions. To bridge the gap, I will present a GML-based causal inference framework that aims to unleash the power of graph information for causal effect estimation.
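To make the data-debiasing idea concrete: one simple, model-agnostic way to probe bias in the two main modalities of attributed graph data (node attributes and graph structure) is to compare feature statistics across sensitive groups, both on the raw attributes and after neighborhood aggregation, so that bias carried by the topology also surfaces. The sketch below is illustrative only; the function name and the group-mean-gap statistic are assumptions, and the actual measurement in the speaker's work may differ (e.g., using distributional distances):

```python
import numpy as np

def group_feature_gaps(X, A, s, hops=1):
    """Probe data bias in an attributed graph by comparing sensitive
    groups. X: node features (n x d); A: adjacency matrix (n x n);
    s: binary sensitive attribute (n,). Returns (attribute_gap,
    structural_gap): the distance between group means of the raw
    features, and of neighborhood-aggregated features, through which
    structural bias can leak in via message passing."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    X_hop = X.astype(float)
    for _ in range(hops):
        X_hop = (A @ X_hop) / deg                   # mean aggregation per hop
    def gap(F):
        return float(np.linalg.norm(F[s == 0].mean(axis=0) - F[s == 1].mean(axis=0)))
    return gap(X), gap(X_hop)
```

A large structural gap relative to the attribute gap would suggest that the graph topology itself, not just the node features, encodes the sensitive attribute, which is exactly the situation where model-agnostic data debiasing can help any downstream GML model.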

Bio: Jundong Li is an Assistant Professor at the University of Virginia, with appointments in the Department of Electrical and Computer Engineering, the Department of Computer Science, and the School of Data Science. He received his Ph.D. degree in Computer Science from Arizona State University in 2019. His research interests are broadly in data mining and machine learning, with a particular focus on graph mining, causal inference, and trustworthy AI, and their applications in cybersecurity, healthcare, biology, and social science. He has published over 120 papers in high-impact venues such as KDD, WWW, NeurIPS, IJCAI, AAAI, WSDM, EMNLP, CIKM, ICDM, SDM, ECML-PKDD, CSUR, TPAMI, TKDE, TKDD, and TIST, accumulating over 7,000 citations. He has won several prestigious awards, including the KDD Best Research Paper Award (2022), the NSF CAREER Award (2022), the JP Morgan Chase Faculty Research Award (2021 & 2022), the Cisco Faculty Research Award (2021), and AAAI New Faculty Highlights (2021).

James Foulds and Shimei Pan

Title: Human-Centered Approaches to AI Fairness

Abstract: As AI technologies are increasingly used to make decisions that impact people's lives, it becomes correspondingly important to consider the people who are impacted when designing AI systems. While substantial work has begun to investigate issues of fairness and bias in AI and machine learning, much remains to be done to connect this work with the human context in which AI systems are embedded. For example, disciplines such as the humanities, law, and philosophy have engaged with the concept of fairness far longer than the discipline of machine learning has. Similarly, the fields of human-computer interaction (HCI) and science and technology studies (STS), which study the interactions between humans and technology, are well equipped to inform technical algorithmic research on AI fairness. In this presentation, we will discuss our work toward bridging the gaps between disciplines to develop AI fairness solutions that are informed by human-centered considerations. We will begin by describing differential fairness, an AI fairness criterion informed by intersectionality, a critical framework from the humanities and legal literature that emphasizes how systems of power and privilege in our society impact individuals at the intersection of multiple protected dimensions. We will then demonstrate how these ideas can be leveraged to develop a fair career recommendation system. Finally, we will share some insights from a user study exploring how human bias may undermine the effectiveness of a fair AI system.
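For readers unfamiliar with the criterion: differential fairness requires that, for every outcome and every pair of intersectional groups, the ratio of outcome probabilities is bounded between e^(-epsilon) and e^(epsilon). A minimal sketch of an empirical estimator follows; the Laplace smoothing, function name, and toy data are practical assumptions for illustration, not the authors' reference implementation:

```python
import numpy as np

def empirical_differential_fairness(y_pred, groups):
    """Smallest epsilon such that, for every outcome y and every pair of
    intersectional groups (s_i, s_j),
        exp(-eps) <= P(y | s_i) / P(y | s_j) <= exp(eps).
    y_pred: binary decisions (n,); groups: intersectional group id (n,).
    Laplace smoothing keeps the estimated rates away from 0 and 1."""
    p = np.array([(y_pred[groups == g].sum() + 1.0) /
                  ((groups == g).sum() + 2.0)
                  for g in np.unique(groups)])   # smoothed P(y = 1 | g)
    eps = 0.0
    for rates in (p, 1.0 - p):                   # outcomes y = 1 and y = 0
        eps = max(eps, float(np.log(rates).max() - np.log(rates).min()))
    return eps

# Toy example: two intersectional groups with different positive rates.
y = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
g = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(empirical_differential_fairness(y, g))  # ~0.29 here; 0 would be perfectly fair
```

An epsilon of zero means every intersectional group receives each outcome at the same rate; intersectionality enters through the choice of groups, which are combinations of protected attributes rather than single attributes in isolation.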

Bio: Dr. James Foulds is an Assistant Professor in the Department of Information Systems at the University of Maryland, Baltimore County (UMBC). His research aims to improve the role of artificial intelligence in society regarding fairness and privacy, and to promote the practice of computational social science. He received the NSF CAREER and CRII awards for his work on fairness and bias in artificial intelligence. He earned his Ph.D. in Computer Science at the University of California, Irvine. He was a postdoctoral scholar at the University of California, Santa Cruz, and then at the University of California, San Diego. His master's and bachelor's degrees were earned with first-class honours at the University of Waikato, New Zealand, where he also contributed to the Weka data mining system.

Bio: Dr. Shimei Pan is an Associate Professor in the Information Systems Department and the director of the text mining and social media analytics lab at the University of Maryland, Baltimore County (UMBC). Previously, she was a research scientist at the IBM Watson Research Center in New York. Her research focuses on Natural Language Processing (NLP), fair AI, and Human-AI Interaction. She was the program co-chair of the 2015 and the conference co-chair of the 2019 ACM International Conference on Intelligent User Interfaces (IUI), a premier venue for reporting research and development on Human-AI Interaction. Her current Fulbright Award in Germany is on cross-cultural analysis of social biases with large pre-trained language models. Dr. Pan received her Ph.D. in Computer Science from Columbia University.
