First International Workshop on
Responsible Pattern Recognition and Machine Intelligence
(Responsible PR&MI 2021)

to be held as part of the 18th International Conference on Computer Vision (ICCV 2021)

October 17, 2021 07:00am to 11:45am EDT - ONLINE EVENT

Workshop Aims and Scope

The consideration of ethical and beyond-accuracy aspects is of increasing importance in both industry and academia, as AI-empowered systems influence all facets of our daily life. Evidence of the harmful impacts of current AI systems deployed in real-world, high-stakes environments is now commonplace. Pattern recognition and machine intelligence applications that leverage computer vision are among the domains exposed to such ethical risks, and recent work emphasizes that the methodologies and countermeasures for addressing these challenges are highly domain-specific. Despite this recent attention, important aspects like fairness, accountability, transparency, and ethics are still under-explored in the computer vision domain. To extend domain-generic studies in the literature and enhance our understanding of these aspects, it hence becomes essential to explore what fairness, accountability, transparency, ethics, and other beyond-accuracy aspects really mean in computer vision applications. Responsible PR&MI 2021 will be an ICCV workshop aimed at collecting high-quality, high-impact, original research in this emerging field and at providing a common ground for all interested researchers and practitioners. Given the growing interest of the community in these topics, this workshop aims to generate a strong outcome and a wide community dialog.

Workshop Topics

The workshop welcomes contributions on all topics related to fairness, accountability, transparency, ethics, and other beyond-accuracy aspects in pattern recognition and machine intelligence applications, with special attention to computer vision, including (but not limited to):

  • Data Collection and Problem Modelling:
    • Modelling fairness and/or other ethical aspects of pattern recognition and machine intelligence models (e.g., auditing, fairness concepts, definition of fairness, representative data collection)
    • Modelling accountability of pattern recognition and machine intelligence models (e.g., accountability for different user groups, accountability-aware model design)
    • Modelling transparency of pattern recognition and machine intelligence models (e.g., participatory studies to identify explanatory needs, explainable prediction schemas)
    • Modelling privacy and security in pattern recognition and machine intelligence models (e.g., privacy-preserving models, attack and threat modelling, requirements for protecting user representations)
  • Design and Development:
    • Methodologies to improve fairness and/or other ethical aspects in pattern recognition and machine intelligence (e.g., multi-task learning and trade-offs, unfairness mitigation and countermeasures)
    • Methodologies to improve accountability of pattern recognition and machine intelligence (e.g., methods for describing the system, data usage and integrity)
    • Methodologies to improve transparency of pattern recognition and machine intelligence (e.g., explainable user interfaces, taxonomies for explanations)
    • Methodologies to improve privacy and security of pattern recognition and machine intelligence (e.g., methods that enable user control of shared sensitive attributes, multi-task learning for trade-offs between privacy and accuracy)
  • Evaluation:
    • Methods to assess fairness and/or other ethical aspects in pattern recognition and machine intelligence (e.g., metrics for fairness assessment, evaluation protocols, assessing stakeholder unfairness at group or individual level; see the illustrative sketch after this list)
    • Methods to assess accountability in pattern recognition and machine intelligence (e.g., metrics, protocols, and field studies to validate accountability strategies, studies to assess accountability of existing systems)
    • Methods to assess transparency in pattern recognition and machine intelligence (e.g., metrics, protocols, and evaluation frameworks for assessing the impact of explainable strategies and interfaces)
    • Methods to assess privacy and security in pattern recognition and machine intelligence (e.g., metrics, protocols, and evaluation frameworks for assessing privacy and robustness)
  • Applications:
    • Action and behavior recognition
    • Biometric recognition
    • Computational photography
    • Image and video retrieval
    • Medical, biological, and cell microscopy
    • Scene analysis and understanding
    • Vision for robotics and autonomous vehicles
    • ... and other related applications
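
As an illustration of the "Evaluation" topics above (e.g., metrics for fairness assessment), the sketch below shows one minimal way a group-level fairness metric could be computed for binary recognition decisions. It is a hypothetical example under assumed inputs (synthetic labels, predictions, and a made-up demographic attribute); it is not a prescribed metric nor a required implementation for submissions.

    import numpy as np

    def group_rates(y_true, y_pred, groups):
        # Per-group true positive rate (TPR) and false positive rate (FPR).
        # y_true : binary ground-truth labels (e.g., genuine vs. impostor pairs)
        # y_pred : binary decisions produced by a recognition model
        # groups : demographic group label per sample (hypothetical attribute)
        rates = {}
        for g in np.unique(groups):
            t, p = y_true[groups == g], y_pred[groups == g]
            tpr = p[t == 1].mean() if (t == 1).any() else float("nan")
            fpr = p[t == 0].mean() if (t == 0).any() else float("nan")
            rates[g] = {"TPR": tpr, "FPR": fpr}
        return rates

    def equalized_odds_gaps(rates):
        # Largest between-group gaps in TPR and FPR (0 means parity).
        tprs = [r["TPR"] for r in rates.values()]
        fprs = [r["FPR"] for r in rates.values()]
        return max(tprs) - min(tprs), max(fprs) - min(fprs)

    # Toy usage on synthetic data, for illustration only.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)
    y_pred = rng.integers(0, 2, 1000)
    groups = rng.choice(["A", "B"], 1000)
    print(equalized_odds_gaps(group_rates(y_true, y_pred, groups)))

Analogous gap-based measures, such as differences in false match rates across demographic groups, are commonly reported in fairness audits of biometric systems.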

Important Dates

  • Submissions: July 7, 2021 (extended from June 19, 2021)
  • Notifications: July 28, 2021 (extended from July 22, 2021)
  • Camera-Ready: August 7, 2021
  • Workshop: October 17, 2021 07:00am to 11:45am EDT - ONLINE EVENT

Submission Details

We invite authors to submit 8-page unpublished original papers; additional pages containing only cited references are allowed. Submitted papers should not have been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, such as a journal, conference, or workshop.

All submissions will go through a double-blind review process and will be reviewed by at least three reviewers on the basis of relevance to the workshop, novelty/originality, significance, technical quality and correctness, quality and clarity of presentation, quality of references, and reproducibility. Submitted papers must be formatted according to the LaTeX template of the workshop. Authors should consult the workshop paper guidelines when preparing their papers. Both the template and the guidelines are identical to those of the ICCV 2021 main conference. All contributions must be submitted as PDF files to https://easychair.org/my/conference?conf=rprmi2021.

Submitted papers will be rejected without review if they are not properly anonymized, do not comply with the template, or do not follow the above guidelines.

Accepted papers will be published in the ICCV 2021 workshop proceedings.

We expect authors, the program committee, and the organizing committee to adhere to the same policies as the ICCV 2021 main conference.

Keynote Speakers


Prof. Dr. Iyad Rahwan
Max Planck Institute for Human Development (Germany)

Title: Adventures in Machine Behavior

Abstract: Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. This talk presents a broad scientific research agenda to study machine behavior. It then summarizes a number of studies of human-machine behavioral dynamics, as well as human perception and expectations of machine behavior.

Short Bio: Iyad Rahwan is the managing director of the Max Planck Institute for Human Development in Berlin, where he founded and directs the Center for Humans & Machines. He is also an honorary professor of Electrical Engineering and Computer Science at the Technical University of Berlin. Until June 2020, he was an Associate Professor of Media Arts & Sciences at the Massachusetts Institute of Technology (MIT). He is the creator of the Moral Machine and Evil AI Cartoons.





Prof. Dr. Arun Ross
Michigan State University (US)

Title: Altered Biometric Data: The Good and the Bad

Abstract: Biometrics refers to the use of physical and behavioral traits such as fingerprints, face, iris, voice and gait to recognize an individual. The biometric data (e.g., a face image) acquired from an individual may be modified for several reasons. While some modifications are intended to improve the performance of a biometric system (e.g., face alignment and image enhancement), others may be intentionally adversarial (e.g., spoofing or obfuscating an identity). Furthermore, biometric data may be subjected to a sequence of alterations resulting in a set of near-duplicate data (e.g., applying a sequence of image filters to an input face image). In this talk, we will discuss methods for (a) detecting altered biometric data; (b) determining the relationship between near-duplicate biometric data and constructing a phylogeny tree denoting the sequence in which they were transformed; and (c) using altered biometric data to enhance privacy. The goal of the talk is to convey the dangers and, at the same time, the benefits of deliberately altered biometric data.

Short Bio: Arun Ross is the John and Eva Cillag Endowed Chair in the College of Engineering and a Professor in the Department of Computer Science and Engineering at Michigan State University. He also serves as the Site Director of the NSF Center for Identification Technology Research (CITeR). He received the B.E. (Hons.) degree in Computer Science from BITS Pilani, India, and the M.S. and PhD degrees in Computer Science and Engineering from Michigan State University. He was on the faculty of West Virginia University between 2003 and 2012, where he received the Benedum Distinguished Scholar Award for excellence in creative research and the WVU Foundation Outstanding Teaching Award. His expertise is in the areas of biometrics, computer vision, and machine learning. He has advocated for the responsible use of biometrics in multiple forums, including the NATO Advanced Research Workshop on Identity and Security in Switzerland in 2018. He testified as an expert panelist in an event organized by the United Nations Counter-Terrorism Committee at the UN Headquarters in 2013. Ross serves as Associate Editor-in-Chief of the Pattern Recognition Journal, Area Editor of the Computer Vision and Image Understanding Journal, and Associate Editor of IEEE Transactions on Biometrics, Behavior, and Identity Science. He has served as Associate Editor of IEEE Transactions on Information Forensics and Security, IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, ACM Computing Surveys, and the Image & Vision Computing Journal. He has also served as Senior Area Editor of IEEE Transactions on Image Processing. Ross is a recipient of the NSF CAREER Award. He was designated a Kavli Fellow by the US National Academy of Sciences by virtue of his presentation at the 2006 Kavli Frontiers of Science Symposia. In recognition of his contributions to the field of pattern recognition and biometrics, he received the JK Aggarwal Prize in 2014 and the Young Biometrics Investigator Award in 2013 from the International Association for Pattern Recognition (IAPR).

Program

This workshop will take place online on October 17, 2021, 07:00am - 11:45am EDT. To participate, you need to register for the ICCV conference. Once registered, you will receive further details by e-mail on how to join the workshop.

Timing  Content
07:00 - 07:05  Welcome Message
07:05 - 08:00  Keynote Talk by Prof. Dr. Arun Ross (Michigan State University, US)
08:00 - 09:00  Paper Session I: Bias and Security
  • 08:00 - 08:20 (15 mins + 5 mins Q&A)
    The Watchlist Imbalance Effect in Biometric Face Identification: Comparing Theoretical Estimates and Empiric Measurements
    Pawel Drozdowski, Christian Rathgeb and Christoph Busch
  • 08:20 - 08:40 (15 mins + 5 mins Q&A)
    Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models
    Puspita Majumdar, Surbhi Mittal, Richa Singh and Mayank Vatsa
  • 08:40 - 09:00 (15 mins + 5 mins Q&A)
    Towards Solving the DeepFake Problem: An Analysis on Improving DeepFake Detection using Dynamic Face Augmentation
    Sowmen Das, Selim Seferbekov, Arup Datta, Md. Saiful Islam and Md. Ruhul Amin
09:00 - 09:30  Coffee Break
09:30 - 10:25  Keynote Talk by Prof. Dr. Iyad Rahwan (Max Planck Institute for Human Development, Germany)
10:25 - 11:25  Paper Session II: Explainability and Privacy
  • 10:25 - 10:45 (15 mins + 5 mins Q&A)
    XAI Handbook: Towards a Unified Framework for Explainable AI
    Sebastian Palacio, Adriano Lucieri, Mohsin Munir, Sheraz Ahmed, Jörn Hees and Andreas Dengel
  • 10:45 - 11:05 (15 mins + 5 mins Q&A)
    Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions
    Luke Guerdan, Alex Raymond and Hatice Gunes
  • 11:05 - 11:25 (15 mins + 5 mins Q&A)
    Bridging the gap between debiasing and privacy for deep learning
    Carlo Alberto Barbano, Enzo Tartaglione and Marco Grangetto
11:25 - 11:45  Conclusions and Final Remarks

Attending

Registration for the workshop is managed by the ICCV 2021 Main Conference at https://iccv2021.thecvf.com/node/47.

Committee

Workshop Chairs

Program Committee

  • Andrea F. Abate, University of Salerno
  • Fernando Alonso-Fernandez, Halmstad University
  • Paola Barra, University of Salerno
  • Carmen Bisogni, University of Salerno
  • Modesto Castrillon Santana, Universidad de Las Palmas de Gran Canaria
  • Jacqueline Cavazos, University of California Irvine
  • Celia Cintas, IBM
  • Naser Damer, Fraunhofer Institute for Computer Graphics Research IGD
  • Maria De Marsico, Sapienza University of Rome
  • Julian Fierrez, Universidad Autonoma de Madrid
  • David Freire-Obregón, Universidad de Las Palmas de Gran Canaria
  • Chiara Galdi, EURECOM
  • Marta Gomez-Barrero, Hochschule Ansbach
  • Hatice Gunes, University of Cambridge
  • Jungseock Joo, University of California Los Angeles
  • Sinan Kalkan, Middle East Technical University
  • Pawel Korus, New York University
  • Emanuela Marasco, George Mason University
  • Shruti Nagpal, IIIT-Delhi
  • Michele Nappi, University of Salerno
  • Fabio Narducci, University of Salerno
  • Tempestt Neal, University of South Florida
  • Javier Ortega-Garcia, Universidad Autonoma de Madrid
  • Jonathon Phillips, National Institute of Standards and Technology
  • Alessandro Sebastian Podda, University of Cagliari
  • Florin Pop, University Politehnica of Bucharest
  • Hugo Proença, University of Beira Interior
  • Kiran Raja, Norwegian University of Science and Technology
  • Christian Rathgeb, Hochschule Darmstadt
  • Ajita Rattani, Wichita State University
  • Daniel Riccio, University of Naples Federico II
  • Thomas Swearingen, Michigan State University
  • Massimo Tistarelli, University of Sassari
  • Ruben Tolosana, Universidad Autonoma de Madrid
  • Ruben Vera-Rodriguez, Universidad Autonoma de Madrid

Contacts

For general enquiries about the workshop, please send an email to silvio.barra@unina.it, mirko.marras@acm.org, aythami.morales@uam.es, and vpatel36@jhu.edu (with all addresses in copy).