16 March - 22 March


Monday 16 March
Time: 12:15 - 13:30
Location: Room B107, Building B, Université de Villetaneuse
Summary: What (many kinds of) graphs can contribute to explainable machine learning
Description: Tiphaine Viard

AI and machine learning are commonly described as "black boxes" that are efficient but opaque. While complete opacity would be an exaggeration, it is true that many explainability methods rely on a form of reverse-engineering: we try to infer the model from its (partial, intermediate, or final) results. These methods are typically based on large-scale, efficient matrix manipulation.

Graphs and their extensions have been shown to be visualisable and interpretable, even at large scales. In their classical formulation they are very close to matrices, yet remain versatile: they can be directed, weighted, multilayered, temporal, and so on, and each of these extensions gives rise to interesting algorithmic and data-driven questions.
To date, few machine learning methods harness the expressive power of graphs, in part because graph algorithms typically have polynomial running times, which is incompatible with the scale of the data at hand.
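
To fix the vocabulary used above, here is a minimal Python sketch (purely illustrative, not material from the talk) of how a temporal graph can be stored as timestamped edges and projected onto the adjacency matrix that matrix-based learning pipelines expect; all node names and timestamps are invented:

    from collections import defaultdict

    # Toy temporal graph: each interaction is (source, target, timestamp).
    # The node names and timestamps are invented for illustration only.
    temporal_edges = [
        ("a", "b", 1),
        ("b", "c", 2),
        ("a", "b", 3),   # the same pair can interact at several times
        ("c", "a", 5),
    ]

    # Project onto a static weighted graph: the weight of (u, v) counts
    # how many times u interacted with v over the observation window.
    weights = defaultdict(int)
    for u, v, t in temporal_edges:
        weights[(u, v)] += 1

    # The same information seen as an adjacency matrix, the form most
    # matrix-based machine learning methods manipulate.
    nodes = sorted({n for u, v, t in temporal_edges for n in (u, v)})
    index = {n: i for i, n in enumerate(nodes)}
    adjacency = [[weights.get((u, v), 0) for v in nodes] for u in nodes]

    for u in nodes:
        print(u, adjacency[index[u]])

Note that the projection deliberately discards the timestamps: keeping them is precisely what the temporal extensions mentioned above are about.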

However, the situation has changed: (i) the impact of AI on society makes it no longer acceptable to favour efficiency at the expense of explainability, and (ii) recent advances in graph algorithms demonstrate that, due to the nature of real-world graphs, even some NP-hard problems become tractable in practice.

The aim of this talk is to explore this avenue of research. We will discuss the state of the art as well as some past results on real-world (temporal) graph modeling and on explainability, and then present some recent results on pattern mining in temporal graphs.
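
To give a concrete (and again purely hypothetical, not taken from the talk) idea of what a temporal pattern can be, the sketch below counts one elementary pattern by brute force: a contact (u, v) followed within delta time units by a contact (v, w). The toy edge list and the window delta are invented:

    # Hypothetical sketch of one elementary temporal pattern: a contact
    # (u, v) at time t1 followed by a contact (v, w) at time t2, with
    # 0 < t2 - t1 <= delta. Edges and delta are invented toy data.
    temporal_edges = [
        ("a", "b", 1),
        ("b", "c", 2),
        ("b", "d", 4),
        ("c", "a", 3),
    ]
    delta = 2

    def count_two_step_contacts(edges, delta):
        """Count ordered edge pairs forming a two-step path within delta time units."""
        count = 0
        for u, v, t1 in edges:
            for x, w, t2 in edges:
                if x == v and w != u and 0 < t2 - t1 <= delta:
                    count += 1
        return count

    print(count_two_step_contacts(temporal_edges, delta))  # prints 2 on this toy data

The double loop is only meant to make the pattern definition concrete; it scales quadratically with the number of contacts, which is exactly the kind of cost that motivates exploiting the structure of real-world temporal graphs.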