2020


Thursday, February 6
Time: 12:15 - 13:30
Location: Room B107, Building B, Université de Villetaneuse
Title: What (many kinds of) graphs can contribute to explainable machine learning
Description: Tiphaine Viard. AI and machine learning models are commonly described as "black boxes" that are efficient but opaque. While complete opacity would be an exaggeration, it is true that many explainability methods rely on forms of reverse engineering: we try to infer the model from its (partial, intermediary, or final) results. These methods are typically based on large-scale, efficient matrix manipulation.

Graphs and their extensions have been shown to be visualisable and interpretable, even at large scales. In their classical formulation they are very similar to matrices, yet also versatile: they can be directed, weighted, multilayered, temporal, and so on, and each of these extensions gives rise to interesting algorithmic and data-driven questions.
To date, few machine learning methods harness the expressive power of graphs, in part because graph algorithms are complex and typically have polynomial running times, which is incompatible with the scale of the data at hand.
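As a minimal illustration of the versatility mentioned above (the data and helper function are my own, not from the talk), the same set of links can be encoded under three of the extensions listed: directed, weighted, and temporal graphs.

```python
# Directed graph: node -> list of out-neighbours.
directed = {"a": ["b", "c"], "b": ["c"]}

# Weighted graph: edge -> weight.
weighted = {("a", "b"): 2.0, ("a", "c"): 0.5, ("b", "c"): 1.0}

# Temporal graph (link stream): list of (source, target, time) triples.
temporal = [("a", "b", 1), ("a", "c", 3), ("b", "c", 4)]

def neighbors_at(links, node, t_max):
    """Nodes reachable from `node` by a single link occurring at time <= t_max."""
    return sorted({v for u, v, t in links if u == node and t <= t_max})
```

The temporal representation is the one that raises new algorithmic questions: even a basic notion such as "neighbourhood" becomes time-dependent.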

However, the situation has changed: (i) the impact of AI on society makes it no longer acceptable to favour efficiency at the expense of explainability, and (ii) recent advances in graph algorithms demonstrate that, owing to the structure of real-world graphs, even some NP-hard problems become tractable in practice.

The aim of this talk is to explore this avenue of research. We will discuss the state of the art as well as some past results in real-world (temporal) graph modeling and in explainability, and then discuss some recent results on pattern mining in temporal graphs.
Monday, February 10
Time: 14:00 - 15:00
Location: Room B107, Building B, Université de Villetaneuse
Title: Attention is all I need
Description: José Angel Gonzalez-Barba. The use of attention mechanisms has become widespread across natural language processing tasks. These mechanisms increase the capacity of deep learning models by allowing them to focus explicitly on the most discriminant relationships and properties for a given task. Recently, the Transformer model has replaced convolutional and recurrent neural networks in many NLP tasks, mainly because it models sequences using only attention mechanisms, avoiding sequential processing altogether. In this talk I will discuss the application of Transformer encoders to text classification in social media (sentiment analysis and irony detection on Twitter) and their application in a novel framework for extractive summarization.
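The attention mechanism at the core of the Transformer can be sketched as scaled dot-product attention (the pure-Python implementation below is illustrative, not the speaker's code): each query is compared to every key, the scaled scores are normalised with a softmax, and the result weights a sum over the value vectors.

```python
import math

def scaled_dot_product_attention(queries, keys, values):
    """Sketch of scaled dot-product attention over lists of vectors."""
    d_k = len(keys[0])
    # Scores: dot product of each query with each key, scaled by sqrt(d_k).
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
               for k in keys] for q in queries]
    # Softmax over each row turns scores into attention weights.
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights.append([e / total for e in exps])
    # Output: for each query, the attention-weighted sum of the value vectors.
    return [[sum(w * v[j] for w, v in zip(row, values))
             for j in range(len(values[0]))] for row in weights]
```

Because every query attends to every key in parallel, no recurrence over the sequence is needed, which is what lets the Transformer dispense with sequential processing.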

About the author:
José Angel just finished his PhD and is about to start a postdoc in the group of Yoshua Bengio at the University of Montreal. His work on Spanish NLP has been very promising, and he has developed state-of-the-art systems for sentiment analysis and summarization.