Algorithmic Fairness in Artificial intelligence, Machine learning and Decision making (AFair-AMLD)

Content

The wide application of Artificial Intelligence (AI), Machine Learning (ML) and Decision Making (DM) in sensitive domains such as employment, sentencing, and resource allocation has created the need to develop “fair” AI/ML/DM methods. These methods are considered “fair” if they do not discriminate based on sensitive attributes with respect to a positive or negative outcome (e.g., the gender or race of a job applicant should not influence candidate selection or salary level). Building “fair” algorithms and models is highly challenging, given that the objective functions of traditional algorithms are designed to discriminate between instances based on outcomes (while disregarding sensitive attributes) by optimizing algorithm performance. Given the importance of AI/ML/DM applications and the increasing legal constraints imposed on them, many “fair” algorithms are being developed. However, the multi-objective nature of the problem, as well as the potential social implications of deploying fair models, makes algorithm development a challenging process that must address the trade-off between algorithmic performance and fairness. This often prevents, or reduces the level of, adoption of fair models in industry. In addition to the technical definition of fairness, the adoption of fair algorithms and models depends on the moral and ethical arguments underlying current fairness metrics, their compliance with European and US legal frameworks, their limitations and risks, and so on.
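As a concrete illustration of the group-fairness notion described above, the following minimal sketch computes the statistical parity difference: the gap in positive-outcome rates between two groups defined by a binary sensitive attribute. The function names and toy data are our own, not part of the workshop description; this is one common metric among the many discussed in the fairness literature.

```python
def positive_rate(outcomes, group, value):
    """Fraction of positive outcomes among instances belonging to one group."""
    members = [y for y, g in zip(outcomes, group) if g == value]
    return sum(members) / len(members)

def statistical_parity_difference(outcomes, group):
    """Difference in positive-outcome rates between the two groups.
    0 indicates parity; values far from 0 indicate disparity."""
    return positive_rate(outcomes, group, 1) - positive_rate(outcomes, group, 0)

# Toy hiring decisions (1 = hired) with a binary sensitive attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical demographic groups

print(statistical_parity_difference(decisions, sensitive))  # 0.75 - 0.25 = 0.5
```

A value of 0.5 here means one group is hired at a 50-percentage-point higher rate than the other, even though the classifier never looked at the sensitive attribute directly.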

Objectives

The main objective of the workshop is to motivate and enable collaboration within the wider community interested in fair AI/ML/DM modeling. Further, we plan to stimulate interdisciplinary discussion of cutting-edge AI/ML/DM algorithms, their compliance with legal and social definitions of fairness, and the implications of fair (and unfair) AI/ML/DM modeling in real-world settings. By accomplishing these objectives, we hope to produce a synergistic effect between the legal/social and technical aspects of fairness that leads to the development of novel methods as well as their faster adoption in industry. These objectives will be accomplished through invited talks and direct outreach to both technical and social/legal researchers working on AI/ML/DM fairness, making sure that both groups are adequately represented.

Topics of interests include (but are not limited to):

  • Fair classification, regression and clustering algorithms
  • Envy-free classification, regression and clustering algorithms
  • Pre-processing, in-processing, post-processing techniques in fair AI/ML/DM
  • Fair ranking algorithms
  • Fairness in recommendations and recommender systems
  • Fair classification and regression on graphs
  • Fair deep learning algorithms
  • Novel measures of group and individual fairness
  • Fairness and causal inference
  • Novel mathematical formulations of fairness concepts
  • Trade-offs between fairness metrics
  • Trade-offs between algorithmic performance and fairness metrics
  • Fair embeddings
  • Fair data imputation
  • Fair algorithm applications
  • Fairness-sensitive algorithms in practice
  • Benchmark datasets for fair AI/ML/DM
  • Applications and case studies of fair AI/ML/DM models in different domains (marketing, healthcare, law, banking etc.)
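The trade-off between algorithmic performance and fairness metrics listed above can be made concrete with a small sketch: two hypothetical classifiers score the same toy data, where the more accurate one exhibits a larger disparity between groups. All names, predictions, and labels here are invented for illustration.

```python
def accuracy(pred, truth):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def parity_gap(pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(v):
        return (sum(p for p, g in zip(pred, group) if g == v)
                / sum(1 for g in group if g == v))
    return abs(rate(1) - rate(0))

truth = [1, 0, 1, 1, 0, 1, 0, 0]   # toy ground-truth outcomes
group = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical sensitive attribute
clf_a = [1, 0, 1, 1, 0, 0, 0, 0]   # higher accuracy, larger disparity
clf_b = [1, 0, 1, 0, 0, 1, 0, 1]   # lower accuracy, equal rates across groups

print(accuracy(clf_a, truth), parity_gap(clf_a, group))  # 0.875 0.75
print(accuracy(clf_b, truth), parity_gap(clf_b, group))  # 0.75 0.0
```

Neither classifier dominates the other: improving the parity gap from 0.75 to 0.0 costs one additional misclassification, which is precisely the kind of multi-objective tension fair-algorithm design must navigate.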