Learning models are equipped with hyperparameters (HPs) that control their bias-variance trade-off and, consequently, their generalization performance. Carefully tuning these HPs is therefore of utmost importance to learning "good" models. The supervised ML community has focused on AutoML for effective algorithm selection and hyperparameter optimization (HPO), especially in high dimensions. Yet automating unsupervised learning remains significantly under-studied. In this talk, I will present vignettes of our recent research toward unsupervised model selection, specifically in the context of anomaly detection. Especially with the advent of end-to-end trainable deep learning models that exhibit a long list of HPs, and the attractiveness of self-supervised learning objectives for unsupervised anomaly detection, I will demonstrate that effective model selection becomes ever more critical, opening up challenges as well as opportunities.
Biography:
Leman Akoglu is the Heinz College Dean's Associate Professor of Information Systems at Carnegie Mellon University. She holds courtesy appointments in the Computer Science Department (CSD) and the Machine Learning Department (MLD) of the School of Computer Science (SCS), from which she received her Ph.D. in 2012. Dr. Akoglu's research interests broadly span machine learning and data mining, specifically graph mining, pattern discovery, and anomaly detection, with applications to fraud and event detection in diverse real-world domains. At Heinz, Dr. Akoglu directs the Data Analytics Techniques Algorithms (DATA) Lab.
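As a minimal illustration of why unsupervised model selection is hard (my own sketch, not material from the talk): below, a toy k-NN distance anomaly scorer changes its verdict on the same data depending on its hyperparameter k, and with no labels there is no obvious way to choose between the settings.

```python
# Illustrative sketch: a k-NN distance anomaly scorer whose verdict depends
# strongly on its hyperparameter k -- with no labels, picking k is the hard part.

def knn_scores(points, k):
    """Anomaly score of each point = distance to its k-th nearest neighbour."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

# A large cluster, a small cluster, and a moderate outlier (1-D for simplicity).
data = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 2.0, 10.0, 10.1]

top_by_k = {}
for k in (1, 3):
    scores = knn_scores(data, k)
    top_by_k[k] = data[max(range(len(data)), key=lambda i: scores[i])]

# With k=1 the isolated point 2.0 looks most anomalous; with k=3 the small
# cluster around 10 does -- same data, different "anomaly".
print(top_by_k)
```

Both answers are defensible, which is exactly why principled unsupervised model selection, rather than an arbitrary default, matters.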
Statisticians are often keen to analyze the statistical aspects of the so-called "replication crisis in science". They condemn fishing expeditions and publication bias across empirical scientific fields that apply statistical methods, such as the health sciences. But what about good-practice issues in their own, methodological, research, i.e. research that treats statistical (or, more generally, computational) methods as research objects? When developing and evaluating new statistical methods and data analysis tools, do statisticians and data scientists adhere to the good-practice principles they promote in fields that apply statistics and data science? I argue that methodological researchers should make substantial efforts to address what may be called the replication crisis in the context of methodological research in statistics and data science, in particular by trying to avoid bias in comparison studies based on simulated or real data. I discuss topics such as publication bias, cherry-picking, and the design and necessity of neutral comparison studies, and review recent positive developments towards more reliable empirical evidence in the context of methodological computational research.
Biography:
Anne-Laure Boulesteix obtained a diploma in engineering from the École Centrale Paris, a diploma in mathematics from the University of Stuttgart (2001), and a PhD in statistics (2005) from the Ludwig Maximilian University (LMU) of Munich. After a postdoc phase in medical statistics, she joined the Medical School of the University of Munich as a junior professor (2009) and professor (2012). She works at the interface between biostatistics, machine learning, and medicine, with a particular focus on metascience and the evaluation of methods. She is a steering committee member of the STRATOS initiative and a founding member of the LMU Open Science Center.
The research area of computational social choice deals with the application of techniques from computer science and AI to the design and analysis of mechanisms for democratic decision making. In this talk, I will report on one of the most exciting recent developments in the field, namely the use of automated reasoning tools, notably SAT solvers, to support scientists in their quest to obtain a deeper understanding of what is and what is not possible when it comes to designing fair and efficient mechanisms for decision making. No special technical background will be required to follow the exposition.
Biography:
Ulle Endriss is Professor of AI and Collective Decision Making at the University of Amsterdam, where he is based at the interdisciplinary Institute for Logic, Language and Computation (ILLC). Much of his research is concerned with the application of ideas originating in computer science to problems arising in economics and politics. He is an editor of the Handbook of Computational Social Choice (Cambridge University Press, 2016) and served as Associate Editor of the two leading journals publishing research across the full spectrum of AI, namely Artificial Intelligence and the Journal of Artificial Intelligence Research.
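To give a flavour of the idea (a toy example of my own, far simpler than the actual SAT-solver work described in the talk): one can encode a social-choice impossibility as a propositional formula and check it mechanically. Below, the classical Condorcet paradox is verified by brute force: no transitive collective ranking of three alternatives can agree with pairwise majority voting on the cyclic profile {a>b>c, b>c>a, c>a>b}.

```python
# Toy "SAT check" of the Condorcet paradox by exhaustive search.
# Variables beats[(x, y)] mean "x is ranked above y in the collective outcome".

from itertools import permutations, product

ALTS = "abc"
PAIRS = [(x, y) for x in ALTS for y in ALTS if x != y]

def consistent(beats):
    # Completeness & asymmetry: exactly one of (x,y), (y,x) holds.
    for x, y in PAIRS:
        if beats[(x, y)] == beats[(y, x)]:
            return False
    # Transitivity: x>y and y>z imply x>z.
    for x, y, z in permutations(ALTS, 3):
        if beats[(x, y)] and beats[(y, z)] and not beats[(x, z)]:
            return False
    # Pairwise majority of the cyclic profile {a>b>c, b>c>a, c>a>b}:
    # a beats b, b beats c, and c beats a, each by two votes to one.
    return beats[("a", "b")] and beats[("b", "c")] and beats[("c", "a")]

# Exhaustive search over all 2^6 truth assignments to the six pair variables.
satisfiable = any(
    consistent(dict(zip(PAIRS, vals)))
    for vals in product([False, True], repeat=len(PAIRS))
)
print("transitive outcome agreeing with majority exists:", satisfiable)  # False
```

A real SAT solver replaces the exhaustive loop and scales this style of argument to encodings with thousands of variables, which is what makes computer-aided impossibility proofs feasible.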
Title: Industrial applications of constraint programming (CP) at Renault: past, present, and future
To manage the diversity of the Renault product range, a technology was designed in the late 1990s, then developed and maintained by a dedicated team within the Renault group. This technology relies on a set of logical constraints to represent exactly the set of all possible vehicles: it is a constraint satisfaction solver. Since entering operational use, it has continually been applied to new needs, in every part of the company that deals with the variability of the product range. After a brief presentation of the technology's principles, three aspects will be covered: its use in industrial applications, the continuous improvement efforts, and the challenges that remain. It is from this angle that we will discuss this in-house software.
Biography:
A graduate engineer in mathematical modelling and mechanics (ENSEIRB-MATMECA), Siham Essodaigui has headed the Applied Artificial Intelligence (IAA) department in Renault Digital's Technology division since 2018. The IAA department covers three areas of activity: Operations Research, Natural Language Processing, and an in-house Knowledge Representation and Reasoning technology. As a cross-functional department, it addresses problems across the Renault group's various business domains.
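A minimal sketch of the modelling style described above (my own toy example; the option names and rules are invented, and Renault's actual solver is far richer): a product range is captured by Boolean options plus logical constraints, and the valid configurations are exactly the models of those constraints.

```python
# Toy product-configuration model: Boolean options + logical constraints,
# solved here by brute-force enumeration instead of a real CP solver.

from itertools import product

OPTIONS = ["diesel", "electric", "towbar", "large_battery"]

def valid(cfg):
    # Exactly one powertrain.
    if cfg["diesel"] == cfg["electric"]:
        return False
    # A large battery requires the electric powertrain (hypothetical rule).
    if cfg["large_battery"] and not cfg["electric"]:
        return False
    # The towbar is not offered on electric vehicles (hypothetical rule).
    if cfg["towbar"] and cfg["electric"]:
        return False
    return True

configs = [dict(zip(OPTIONS, vals)) for vals in product([False, True], repeat=4)]
vehicles = [c for c in configs if valid(c)]
print(len(vehicles), "buildable vehicles out of", len(configs), "combinations")
```

With real ranges (hundreds of options, thousands of constraints), enumeration is hopeless; representing the constraint set exactly and answering queries against it is precisely what the in-house solver provides.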
Explanations are necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. They are even more important when the AI system makes decisions in multi-agent environments, where humans do not know the system's goals, since these may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. We will discuss three cases of Explainable decisions in Multi-Agent Environments (xMASE): explanations for multi-agent reinforcement learning, advice explanations in complex repeated decision-making environments, and explaining preference- and constraint-driven optimization problems. For each case, we will present an algorithm for generating explanations and report on human experiments that demonstrate the benefits of the resulting explanations for increasing human satisfaction with the AI system.
Biography:
Sarit Kraus (Ph.D. Computer Science, Hebrew University, 1989) is a Professor of Computer Science at Bar-Ilan University. Her research is focused on intelligent agents and multi-agent systems, integrating machine-learning techniques with optimization and game theory methods. For her work, she has received many prestigious awards: she was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research Award, and the EMET Prize; was named ACM Athena Lecturer; and was twice the winner of the IFAAMAS influential paper award. She is an ACM, AAAI, and EurAI fellow and a recipient of an advanced ERC grant. She also received a special commendation from the city of Los Angeles. She is an elected member of the Israel Academy of Sciences and Humanities.
Knowledge graphs are large-scale collections of knowledge about one or more domains, which can be consumed both by humans and by computers. To exploit knowledge graphs in systems using machine learning, they typically need to be transformed into a propositional, i.e., vector-shaped, representation of entities. RDF2vec is one method for generating such vectors from knowledge graphs: it relies on random walks to extract pseudo-sentences from a graph and utilizes word2vec to create embedding vectors from those pseudo-sentences. In this talk, I will give insights into the idea behind RDF2vec, possible application areas, and recently developed variants incorporating different walk strategies and training variations. Moreover, I will step away from purely quantitative evaluations and take a deeper look at what knowledge graph embedding methods like RDF2vec are generally capable of learning.
Biography:
Heiko Paulheim is a full professor for data science at the University of Mannheim. He holds a PhD from the Technical University of Darmstadt and, prior to this position, conducted research at the University of Applied Sciences of Darmstadt, SAP Research, and the Technical University of Darmstadt. His group runs various projects around knowledge graphs, yielding, among others, the public knowledge graphs WebIsALOD, CaLiGraph, and DBkWik. Moreover, his group is concerned with using knowledge graphs in machine learning, which has led to the development of the widely used RDF2vec method for knowledge graph embeddings. Recently, Heiko Paulheim has also led projects concerned with the ethical, societal, and legal aspects of AI, including KareKoKI, which deals with the impact of price-setting AIs on antitrust legislation, and the ReNewRS project on ethical news recommenders.
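The first stage of RDF2vec described in the abstract, extracting random walks as pseudo-sentences, can be sketched in a few lines (a minimal simplification of my own; the toy triples are invented for illustration). In the full method, the resulting sentences are then fed to word2vec to obtain one embedding vector per entity.

```python
# Minimal sketch of RDF2vec's walk-extraction stage on a toy knowledge graph.

import random

# Toy graph: (subject, predicate, object) triples.
TRIPLES = [
    ("Mannheim", "locatedIn", "Germany"),
    ("Germany", "partOf", "Europe"),
    ("Mannheim", "hasUniversity", "UniMannheim"),
    ("UniMannheim", "employs", "HeikoPaulheim"),
]

# Adjacency: entity -> list of outgoing (predicate, object) edges.
edges = {}
for s, p, o in TRIPLES:
    edges.setdefault(s, []).append((p, o))

def random_walk(start, depth, rng):
    """One walk of up to `depth` hops; tokens alternate entities and predicates."""
    walk, node = [start], start
    for _ in range(depth):
        if node not in edges:  # dead end: stop early
            break
        pred, nxt = rng.choice(edges[node])
        walk += [pred, nxt]
        node = nxt
    return walk

rng = random.Random(42)  # fixed seed for reproducibility
sentences = [random_walk("Mannheim", 2, rng) for _ in range(3)]
for s in sentences:
    print(" ".join(s))
```

Each printed pseudo-sentence, e.g. a path through `locatedIn` and `partOf`, then plays the role of a natural-language sentence for word2vec, so entities appearing in similar graph contexts end up with similar vectors.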
Human cognition remains our best example of a decision-making mechanism in an open, dynamic, and uncertain environment. It is therefore worth drawing on it to make artificial systems better able to operate autonomously in this kind of environment, one of the goals of artificial intelligence. Moreover, the entanglement of emotions and cognition is now well established. This is why we study the functions that emotions fulfil in our cognition, in order to endow so-called situated artificial systems with them, that is, systems interacting with an open, dynamic, and uncertain environment under limited resources. Integrating these functions into decision-making architectures remains an active research topic, drawing on knowledge from neurology and psychology for the individual functions, and also from the social sciences for the collective ones.
Biography:
Clément Raïevsky has been an Associate Professor (Maître de Conférences) in Computer Science at the University of Grenoble since 2015; he obtained his PhD from the Université de Sherbrooke (Québec, Canada) in 2009. His main areas of interest are autonomous decision making, adaptive multi-agent systems, and the self-organization and resilience of such systems. He approaches these problems by drawing on knowledge from the psychology of emotions.
A key question in the discussion of the ethics of AI is how to go beyond avoiding ethical problems and instead direct AI development and use in directions that are responsible and desirable. This challenge of giving AI a positive direction raises the question of criteria for what counts as morally good or socially desirable. In pluralist modern society, there are few generally agreed responses to this question. However, one widely accepted approach is to use the United Nations' Sustainable Development Goals (SDGs), as internationally agreed values, as a benchmark of social desirability to steer AI development. In my presentation, I will draw on the work undertaken in the SHERPA project (www.project-sherpa.eu) to describe how the SDGs can be used to assess the ethical qualities of AI. The SHERPA consortium undertook 10 case studies of organisational practice with regard to AI use and mapped the findings onto the SDGs. I will furthermore discuss broader challenges that arise when development-oriented metrics, such as those associated with the SDGs, are applied to a different domain such as AI development. I will use these considerations to contextualise the overall findings and recommendations of the SHERPA project and show how an ecosystems-based approach to the ethics of AI may help us find ways of addressing at least some of its ethical challenges.
Biography:
Bernd Carsten Stahl is Professor of Critical Research in Technology at the School of Computer Science of the University of Nottingham. His interests cover philosophical issues arising at the intersections of business, technology, and information. This includes ethical questions concerning current and emerging ICTs, critical approaches to information systems, and issues related to responsible research and innovation.
We live in an age full of data. In all areas of society, digital data is now abundant, but also unstructured and largely unexploited. Environmental science is no exception, and recent years have seen an increase in the use of digital sensing to observe and understand the natural realm and the impacts of human activities. In this talk, I will present some success stories at the interface of machine learning and the geosciences, where satellite and drone data were used to support mapping over land and sea (and even below the surface). I will then sketch a number of points of synergetic action necessary to strengthen this interface, a necessary step toward jointly tackling the climate and biodiversity crises.
Biography:
Devis Tuia completed his PhD at the University of Lausanne, Switzerland, where he studied kernel methods for hyperspectral satellite data. He then traveled the world as a postdoc, first at the University of València, then at CU Boulder, and finally back at EPFL. In 2014, he became an assistant professor at the University of Zurich, and in 2017 he moved to Wageningen University in the Netherlands, where he chaired the Geo-Information Science and Remote Sensing Laboratory. Since September 2020, he has been back at EPFL, where he leads the Environmental Computational Science and Earth Observation laboratory (ECEO) in Sion. There, he studies the Earth from above with machine learning and computer vision.