Learning models are equipped with hyperparameters (HPs) that control their bias-variance trade-off and, consequently, their generalization performance. Thus, carefully tuning these HPs is of utmost importance for learning "good" models. The supervised ML community has focused on AutoML for effective algorithm selection and hyperparameter optimization (HPO), especially in high dimensions. Yet automating unsupervised learning remains significantly under-studied. In this talk, I will present vignettes of our recent research toward unsupervised model selection, specifically in the context of anomaly detection. Especially with the advent of end-to-end trainable deep learning based models that exhibit a long list of HPs, and the attractiveness of self-supervised learning objectives for unsupervised anomaly detection, I will demonstrate that effective model selection becomes ever more critical, opening up challenges as well as opportunities.

Biography:
Leman Akoglu is the Heinz College Dean's Associate Professor of Information Systems at Carnegie Mellon University. She holds courtesy appointments in the Computer Science Department (CSD) and the Machine Learning Department (MLD) of the School of Computer Science (SCS). She received her Ph.D. from CSD/SCS at Carnegie Mellon University in 2012. Dr. Akoglu's research interests broadly span machine learning and data mining, and specifically graph mining, pattern discovery, and anomaly detection, with applications to fraud and event detection in diverse real-world domains. At Heinz, Dr. Akoglu directs the Data Analytics Techniques Algorithms (DATA) Lab.
Explanations are necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. They are even more important when the AI system makes decisions in multi-agent environments, where humans do not know the system's goals, since these may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. We will discuss three cases of Explainable decisions in Multi-Agent Environments (xMASE): explanations for multi-agent reinforcement learning, advice explanations in complex repeated decision-making environments, and explanations for preference- and constraint-driven optimization problems. For each case, we will present an algorithm for generating explanations and report on human experiments that demonstrate the benefits of the resulting explanations for increasing human satisfaction with the AI system.

Biography:
Sarit Kraus (Ph.D. in Computer Science, Hebrew University, 1989) is a Professor of Computer Science at Bar-Ilan University. Her research focuses on intelligent agents and multi-agent systems, integrating machine-learning techniques with optimization and game-theoretic methods. For her work, she has received many prestigious awards. She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research Award, and the EMET Prize, was named ACM Athena Lecturer, and was twice the winner of the IFAAMAS Influential Paper Award. She is an ACM, AAAI, and EurAI fellow and a recipient of an ERC Advanced Grant. She also received a special commendation from the city of Los Angeles. She is an elected member of the Israel Academy of Sciences and Humanities.
Heiko Paulheim is a full professor of data science at the University of Mannheim. He holds a PhD from the Technical University of Darmstadt and, prior to his current position, conducted research at the University of Applied Sciences of Darmstadt, SAP Research, and the Technical University of Darmstadt. His group conducts various projects around knowledge graphs, yielding, among others, the public knowledge graphs WebIsALOD, CaLiGraph, and DBkWik. Moreover, his group works on using knowledge graphs in machine learning, which has led to the development of the widely used RDF2vec method for knowledge graph embeddings. More recently, Heiko Paulheim has also led projects concerned with ethical, societal, and legal aspects of AI, including KareKoKI, which deals with the impact of price-setting AIs on antitrust legislation, and the ReNewRS project on ethical news recommenders.
Human cognition remains our best example of a decision-making mechanism in an open, dynamic, and uncertain environment. It is therefore worthwhile to draw inspiration from it to make artificial systems better able to operate autonomously in this kind of environment, one of the goals of artificial intelligence. Moreover, the entanglement of emotions and cognition is now well established. This is why we study the functions that emotions fulfill in our cognition, in order to endow so-called situated artificial systems, that is, systems interacting with an open, dynamic, and uncertain environment under limited resources, with these functions. Integrating these functions into decision-making architectures remains an active research topic, drawing on knowledge from neurology and psychology for individual functions, and also from the social sciences for collective functions.
Clément Raïevsky has been an Associate Professor (Maître de Conférences) in Computer Science at the Université de Grenoble since 2015; he obtained his Ph.D. from the Université de Sherbrooke (QC, Canada) in 2009. His main research interests are autonomous decision-making, adaptive multi-agent systems, and the self-organization and resilience of such systems. He approaches these problems by drawing on knowledge from the psychology of emotions.
A key question in the discussion of the ethics of AI is how to go beyond avoiding ethical problems and instead direct AI development and use in directions that are responsible and desirable. This challenge of giving AI a positive direction raises the question of criteria for what counts as morally good or socially desirable. In pluralist modern society there are few generally agreed responses to this question. However, one widely accepted approach is to use the United Nations' Sustainable Development Goals (SDGs), as internationally agreed values, as a benchmark of social desirability to steer AI development. In my presentation I will draw from the work undertaken in the SHERPA project (www.project-sherpa.eu) to describe how the SDGs can be used to assess the ethical qualities of AI. The SHERPA consortium undertook 10 case studies of organisational practice with regard to AI use and mapped the findings onto the SDGs. I will furthermore discuss broader challenges that arise when development-oriented metrics such as those associated with the SDGs are applied to a different domain such as AI development. I will use these considerations to contextualise the overall findings and recommendations of the SHERPA project and show how an ecosystems-based approach to the ethics of AI may help us find ways of addressing at least some of its ethical challenges.

Biography:
Bernd Carsten Stahl is Professor of Critical Research in Technology at the School of Computer Science of the University of Nottingham. His interests cover philosophical issues arising at the intersections of business, technology, and information. This includes ethical questions of current and emerging ICTs, critical approaches to information systems, and issues related to responsible research and innovation.
We live in an age full of data. In all areas of society, digital data is now abundant, but also unstructured and largely unexploited. Environmental science is no exception, and recent years have seen an increased use of digital sensing to observe and understand the natural realm and the impacts of human activities. In this talk, I will present some success stories at the interface of machine learning and the geosciences, where satellite and drone data were used to support mapping over land and sea (and even below the surface). I will then sketch a number of points of synergetic action necessary to strengthen this interface, a necessary step toward jointly tackling the climate and biodiversity crises.

Biography:
Devis completed his PhD at the University of Lausanne, Switzerland, where he studied kernel methods for hyperspectral satellite data. He then traveled the world as a postdoc, first at the University of València, then at CU Boulder, and finally back at EPFL. In 2014, he became an assistant professor at the University of Zurich, and in 2017 he moved to Wageningen University in the Netherlands, where he chaired the Geo-Information Science and Remote Sensing Laboratory. Since September 2020, he has been back at EPFL, where he leads the Environmental Computational Science and Earth Observation laboratory (ECEO) in Sion. There, he studies the Earth from above with machine learning and computer vision.