Scientific Program

Here are the titles and descriptions/abstracts for the mini-courses and research talks given at the workshop.

The workshop started at 10:45 on Monday, May 30 and ended in the early afternoon on Friday, June 10. There were no talks on June 4-6 (Pentecost).

  • Mini-courses

    Patrick Flandrin : Seven roads to time-frequency

    Time-frequency analysis can be viewed as a time-dependent extension of spectral analysis, the reference tool for stationary signals and processes. However, there is no single method to realize such a program, due to the many possible forms of non-stationarity as well as the intrinsic limitations that arise when considering time and frequency together. In this mini-course, we will explore seven approaches motivated by considerations of a priori very different natures: atomic decompositions, measurement systems, covariance principles, correlations, probability, quantum operators, and geometry. We will see that, while offering complementary interpretations, most of them highlight the central role played by the few key definitions that are most common. A brief historical perspective will also be provided, focusing on the interplay between signal processing, mathematics and physics in the specific context of time-frequency analysis.

    Subhro Ghosh : Gaussian analytic functions, their zeros and applications

    Gaussian analytic functions (abbrv. GAF) are holomorphic functions with random Gaussian coefficients. They originated in quantum and statistical physics, but have since become models of fundamental interest in probability and its applications. In particular, three families of GAF exhibit elegant invariance properties of their zeros with respect to the group of isometries of the ambient space — planar, spherical and hyperbolic; the latter model at unit intensity also exhibits an intriguing determinantal structure. Recently, such Gaussian zeros have emerged as objects of interest in signal processing, in particular in the time-frequency analysis of spectrogram-based data, with wide applications in acoustics and other scientific domains. This mini-course will cover the mathematical foundations of Gaussian analytic functions and their applications to time-frequency analysis.

    Günther Koliander : Point Processes in Time-Frequency Analysis

    In this mini-course, we will study point processes that emerge in time-frequency analysis. More specifically, a random signal can be transformed by a time-frequency transform into a two-dimensional random field. The two-dimensional domain can be interpreted as the complex plane, and for the most basic transform (the short-time Fourier transform with Gaussian window) and white Gaussian noise as a signal, the resulting random complex function is actually a Gaussian analytic function up to a normalization factor. In particular, these random functions are continuous with probability one, and thus we can define their random set of zeros. While the zeros of Gaussian analytic functions have been studied in detail, the interpretation as the time-frequency transform of white noise suggests many generalizations. We will consider windows other than the Gaussian, add a signal to the Gaussian noise, and consider some other transforms. Alongside theoretical results such as the calculation of moment measures, we will also discuss methods for simulating the random field and algorithms for detecting its zeros.
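
    As a quick numerical illustration of the kind of simulation the course alludes to (a sketch of our own, not course material; the signal length, window width and grid choices below are arbitrary), one can compute the spectrogram of discrete complex white noise with a Gaussian window and locate its strict local minima, which serve as numerical proxies for the zeros of the random field:

```python
import numpy as np

def spectrogram_minima(n=256, win_len=64, seed=0):
    # Spectrogram of discrete complex white Gaussian noise with a Gaussian
    # window; its strict local minima serve as numerical proxies for the
    # zeros of the underlying random field.
    rng = np.random.default_rng(seed)
    x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    t = np.arange(win_len) - win_len / 2
    g = np.exp(-np.pi * (t / (win_len / 4)) ** 2)     # Gaussian window
    frames = [np.fft.fft(g * x[k:k + win_len]) for k in range(n - win_len)]
    S = np.abs(np.array(frames)) ** 2                 # squared modulus of the STFT
    # strict local minima over the 8 neighbours, boundary excluded
    inner = S[1:-1, 1:-1]
    mins = np.ones_like(inner, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            mins &= inner < S[1 + di:S.shape[0] - 1 + di, 1 + dj:S.shape[1] - 1 + dj]
    return S, mins
```

    Refinements discussed in the literature include sub-grid localization of the minima and checking that putative zeros are stable under grid refinement.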

    Gaultier Lambert : Universality for free fermions

    There has been significant recent progress in understanding the universality of local statistics for the eigenvalues of many random matrix models. This progress was initiated by the study of Hermitian (unitary-invariant) random matrices, which give rise to a class of determinantal point processes called orthogonal polynomial ensembles. In this course, I will consider another family of determinantal point processes, introduced by Macchi in 1975 and called free fermions, and report on recent results obtained with A. Deleporte on the universality of local statistics for the ground state. I will review classical tools from semiclassical analysis and explain how to apply these methods to prove universality of local statistics, both in the bulk and around regular boundary points. Then, if time permits, I will also mention a few connections with random matrix theory.

    Mylène Maïda : Introduction to determinantal point processes

    This mini-course will give the mathematical background one needs to follow the other, more advanced, courses of the first week. It is meant to be very accessible, in particular to graduate students. I will gently introduce determinantal point processes (DPP), insisting on the models that you will encounter in the other courses (matrix models, orthogonal polynomial ensembles, Gaussian analytic functions, etc.). During the course, you will get a first glimpse of what a DPP looks like, in which contexts DPPs arise, what properties we expect from them, and what the basic techniques are to study their mathematical properties.
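
    To make the definition concrete, here is a short sketch (ours, not part of the course material) of the standard spectral sampling algorithm of Hough, Krishnapur, Peres and Virág for a finite DPP with a symmetric kernel K whose eigenvalues lie in [0, 1]:

```python
import numpy as np

def sample_dpp(K, rng):
    # Spectral sampling algorithm (Hough-Krishnapur-Peres-Virag) for a
    # finite DPP with symmetric kernel K, eigenvalues in [0, 1].
    lam, vecs = np.linalg.eigh(K)
    keep = rng.random(len(lam)) < np.clip(lam, 0.0, 1.0)
    V = vecs[:, keep]                       # eigenvector i kept with prob lambda_i
    sample = []
    while V.shape[1] > 0:
        # pick item i with probability ||row_i(V)||^2 / (number of columns)
        p = (V ** 2).sum(axis=1)
        p /= p.sum()
        i = int(rng.choice(len(p), p=p))
        sample.append(i)
        # eliminate one column so that row i becomes identically zero,
        # then re-orthonormalize: item i can never be selected again
        j = int(np.argmax(np.abs(V[i, :])))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)
```

    For a projection kernel the sample size is deterministic and equals the rank of K, one hallmark of determinantal repulsion.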

    Satya Majumdar : Noninteracting Trapped Fermions as a DPP: Application to cold atoms

    There has been spectacular progress in cold atom experiments in recent years, which has led to new and interesting theoretical challenges. As a simple example of a system of noninteracting cold atoms, I'll discuss the case of N noninteracting fermions trapped in a harmonic well. We will see that even without interactions, this is an interesting many-body system with nontrivial quantum fluctuations arising purely from the Pauli exclusion principle. This system is particularly appealing as it provides a physically realizable determinantal point process (DPP). For example, in one dimension and at zero temperature, the quantum fluctuations of the positions of the fermions can be exactly mapped to the distribution of eigenvalues of a Gaussian Hermitian random matrix. A lot of nice exact results for the fermions can be extracted using this correspondence. In particular, this connection to random matrix theory predicts exact results at the edges of the fermion density profile, where fluctuations dominate and traditional theories of quantum many-body systems do not work. One example of such exact results at the edges is that the position of the rightmost fermion in 1-d, at T=0, is described by the celebrated Tracy-Widom distribution for the top eigenvalue of a random matrix. I'll then discuss how these results can be generalized to finite temperature. Remarkably, at finite T, the position of the rightmost fermion is closely related in distribution to the height at finite time of the (1+1)-dimensional interfaces described by the Kardar-Parisi-Zhang equation. Interesting results at finite temperature can be derived by exploiting this connection as well. If time permits, I'll also discuss the generalisations to higher dimensions.
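
    The random-matrix side of the T=0 mapping is easy to simulate numerically (our own illustration, not part of the course): sample a GUE matrix, and its eigenvalues play the role of the suitably rescaled fermion positions, with the largest one lying near the soft edge at 2*sqrt(N):

```python
import numpy as np

def gue_eigenvalues(n, rng):
    # Sample the spectrum of an n x n GUE matrix (E|H_ij|^2 = 1 off the
    # diagonal); under the fermion/random-matrix correspondence these
    # eigenvalues match the T=0 positions of n trapped fermions after
    # a suitable rescaling.
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    m = (a + 1j * b) / np.sqrt(2)      # i.i.d. standard complex Gaussians
    h = (m + m.conj().T) / np.sqrt(2)  # Hermitization
    return np.linalg.eigvalsh(h)
```

    With this normalization the spectrum fills the interval [-2 sqrt(n), 2 sqrt(n)] (semicircle law), and the fluctuations of the top eigenvalue around the edge are of order n^(-1/6), the Tracy-Widom scale.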

    Benjamin Roussel : Introduction to quantum optics of electrons and photons

    Recent progress in nanotechnology has made it possible to generate, manipulate and probe electric currents down to the single-electron level. This has opened a new field of mesoscopic physics called electron quantum optics, in which electronic transport is analyzed in the most fundamental way, electron by electron.

    In this minicourse, I will introduce the field of electron quantum optics. Starting from the experimental toolbox, I will introduce the tools and techniques that are used on the theoretical level. The first question that I will address is the reconstruction of the quantum state of the electronic system from experimental signals, and in particular how to extract the single-electron and single-hole wavefunctions from current noise measurements. A second important challenge of the field is the omnipresence of Coulomb interactions. I will introduce the bosonization technique, which allows one to map the interacting electron problem onto the scattering of bosons. Finally, if time allows, I will discuss possible extensions of the field, such as the addition of correlations between different charge sectors, which are allowed by superconductivity.

    Mattia Walschaers : Quantum computational advantages with light

    Bosonic systems, and notably light, have a lot of potential for the development of certain quantum technologies. Advanced LIGO and VIRGO have shown that light is particularly suited for quantum sensing. The compatibility with telecommunication infrastructure also makes light crucial for many quantum communication protocols. However, when it comes to quantum computing, light is sometimes thought to be “too linear” for designing universal fault-tolerant quantum computers. Yet, in 2020 photonics became the second platform to claim a quantum computational advantage by implementing a Gaussian boson sampling protocol. The goal of this mini-course is to explore the different physical resources that are required to achieve such a quantum computational advantage with bosons, using quantum optics as a case study.

    The course will be divided into two parts. First, a mathematical framework for describing multimode quantum light will be introduced. The study of such light has two different approaches: the discrete-variable (DV) approach based on photons, and the continuous-variable (CV) approach based on field quadratures. In the first part of the course, we will focus on the DV setting, where many-particle interference will be introduced as the physical phenomenon that underlies boson sampling. In the second part, a phase-space framework based on Wigner functions will be introduced to study light in the CV approach. Some fundamental properties of Wigner functions will be unveiled, and we will use them to explore the physical requirements for making some sampling problems hard to simulate.

    This mini-course will combine elements of the tutorials "Signatures of many-particle interference", J. Phys. B: At. Mol. Opt. Phys. 53, 043001 (2020), and "Non-Gaussian Quantum States and Where to Find Them", PRX Quantum 2, 030204 (2021).

  • Research talks

    Luis Daniel Abreu : Local maxima of white noise spectrograms and Gaussian Entire Functions

    We show that spectrograms of complex white noise with Gaussian windows (or, equivalently, the squared modulus of weighted Gaussian Entire Functions), normalized such that their expected density of zeros is 1, have an expected density of critical points of 5/3. Among these, local maxima have density 1/3 and saddle points 4/3. We will also compute the distributions of ordinate values (heights) for spectrogram values and their local extrema.
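
    As a back-of-the-envelope numerical check (our own sketch; the truncation degree and disk radius are arbitrary choices), one can count the zeros of a truncated planar Gaussian Entire Function inside a disk. In the unnormalized convention f(z) = sum_n a_n z^n / sqrt(n!), the expected density of zeros is 1/pi per unit Lebesgue area, so a disk of radius R carries about R^2 zeros:

```python
import numpy as np

def gaf_zero_count(radius=4.0, degree=60, trials=8, seed=2):
    # Count zeros of the truncated planar GAF
    #   f(z) = sum_{n=0}^{degree} a_n z^n / sqrt(n!)
    # inside the disk |z| <= radius. With this convention the expected
    # zero density is 1/pi per unit area, so a disk of radius R holds
    # about R^2 zeros on average (valid while R << sqrt(degree)).
    rng = np.random.default_rng(seed)
    # 1/sqrt(n!) computed in log-space to avoid overflow
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, degree + 1)))))
    inv_sqrt_fact = np.exp(-0.5 * log_fact)
    counts = []
    for _ in range(trials):
        a = (rng.standard_normal(degree + 1)
             + 1j * rng.standard_normal(degree + 1)) / np.sqrt(2)
        coeffs = (a * inv_sqrt_fact)[::-1]   # np.roots wants the leading coefficient first
        roots = np.roots(coeffs)
        counts.append(int(np.sum(np.abs(roots) <= radius)))
    return float(np.mean(counts))
```

    With radius 4 the disk should hold about 16 zeros on average; the variance of the count grows only like the perimeter (hyperuniformity), so even a few trials give a stable estimate.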

    Mathias Albert : Correlations in one dimensional quantum gases

    Indistinguishability of identical particles has strong consequences for quantum systems. It is, for instance, responsible for Bose-Einstein condensation and the Pauli exclusion principle through the mere fact that the many-body wave function has to be symmetric or anti-symmetric with respect to the exchange of coordinates.

    In this talk, I will discuss various properties of free fermions and strongly interacting bosons in one dimension, both in the context of electronic transport and ultracold atom experiments. I will in particular make use of the determinantal or permanental structure of the ground state wave function to discuss correlations in the system. I will also discuss mixtures of non-identical bosons or fermions that can be described by combinations of determinants at low energy.

    Alexander Bufetov : Normal approximation, the Gaussian multiplicative chaos, and excess one for the sine-process

    The Soshnikov Central Limit Theorem states that scaled additive statistics of the sine-process converge to the normal law. The first main result of this talk gives a detailed comparison between the law of an additive, sufficiently Sobolev regular, statistic under the sine-process and the normal law. The comparison for low frequencies is obtained by taking the scaling limit in the Borodin-Okounkov-Geronimo-Case formula. The exponential decay for the high frequencies is obtained, under an additional assumption of holomorphicity in a horizontal strip, with the use of an analogue of the Johansson change of variable formula; quasi-invariance of the sine-process under compactly supported diffeomorphisms plays a key rôle in the proof. The corollaries of the normal approximation theorem include the convergence of the random entire function, the infinite product with zeros at the particles, to Gaussian multiplicative chaos. A complementary estimate to the Ghosh completeness theorem follows in turn: indeed, Ghosh proved that reproducing sine-kernels along almost every configuration of the sine-process form a complete set; it is proved in the talk that if one particle is removed, then the set is still complete; whereas if two particles are removed from the configuration, then the resulting set is the zero set for the Paley-Wiener space. The talk extends the results of the preprint

    Subhro Ghosh : The unreasonable effectiveness of determinantal processes

    In 1960, Wigner published an article famously titled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". In this talk we will, in a small way, follow the spirit of Wigner's coinage and explore the unreasonable effectiveness of determinantal processes (a.k.a. DPPs) far beyond their context of origin. DPPs originated in quantum and statistical physics, but have emerged in recent years as a powerful toolbox for many fundamental learning problems. In this talk, we aim to explore the breadth and depth of these applications. On one hand, we will explore a class of Gaussian DPPs and the novel stochastic geometry of their parameter modulation, and their applications to the study of directionality in data and dimension reduction. At the other end, we will consider the fundamental paradigm of stochastic gradient descent, where we leverage connections with orthogonal polynomials to design a minibatch sampling technique based on data-sensitive DPPs, with provable guarantees for a faster convergence exponent compared to traditional sampling. Based on the following works.

    • Gaussian determinantal processes: A new model for directionality in data, with P. Rigollet, Proceedings of the National Academy of Sciences, vol. 117, no. 24 (2020), pp. 13207--13213 (PNAS Direct Submission)
    • Determinantal point processes based on orthogonal polynomials for sampling minibatches in SGD, with R. Bardenet and M. Lin, Advances in Neural Information Processing Systems 34 (Spotlight at NeurIPS 2021)

    Antti Haimi : Zeros of Gaussian Weyl-Heisenberg functions

    We study Gaussian random functions on the complex plane whose stochastics are invariant under the Weyl-Heisenberg group (twisted stationarity). The theory is modeled on translation invariant Gaussian entire functions, but allows for non-analytic examples, in which case winding numbers can be either positive or negative.

    We calculate the first intensity of the zero sets of such functions, both when considered as points on the plane and as charges according to their phase winding. In the latter case, charges are shown to be in a certain average equilibrium independently of the particular covariance structure (universal screening). We investigate the corresponding fluctuations, and show that in many cases they are suppressed at large scales (hyperuniformity). This means that universal screening is empirically observable at large scales. We also derive an asymptotic expression for the charge variance.

    As a main application, we obtain statistics for the zero sets of the short-time Fourier transform of complex white noise with general windows, and also prove the following uncertainty principle: the expected number of zeros per unit area is minimized, among all window functions, exactly by generalized Gaussians. Further applications include poly-entire functions such as covariant derivatives of Gaussian entire functions.

    Joint work with G. Koliander and J. L. Romero.

    Adrien Kassel : Combinatorial geometries and determinantal measures on graphs

    I will start by explaining the link between determinantal measures on a finite set and combinatorial geometries (or matroids) on that set. I will then consider the case where this set is the collection of edges of a finite connected graph and I will describe a family of such measured combinatorial geometries, the central element of which is the uniform measure on spanning trees, and that involve subgraphs with arbitrary topology. I will finally explain how this point of view sheds light on the study of a Grassmannian random field on graphs which we introduced with Thierry Lévy in a discrete setup inspired by the differential geometry of vector bundles.

    Meixia Lin : Signal Analysis via the Stochastic Geometry of Spectrogram Level Sets

    Spectrograms are fundamental tools in time-frequency analysis, being the squared magnitude of the so-called short-time Fourier transform (STFT). Signal analysis via spectrograms has traditionally explored their peaks, i.e. their maxima. This is complemented by a recent interest in their zeros or minima, following seminal work by Flandrin and others, which exploits connections with Gaussian analytic functions (GAFs). However, the zero sets (or extrema) of GAFs have a complicated stochastic structure, complicating any direct theoretical analysis. Standard techniques largely rely on statistical observables from the analysis of spatial data, whose distributional properties for spectrograms are mostly understood only at an empirical level. In this work, we investigate spectrogram analysis via an examination of the stochastic geometric properties of their level sets. We obtain rigorous theorems demonstrating the efficacy of an approach based on spectrogram level sets for the detection and estimation of signals, framed in a concrete inferential set-up. Exploiting these ideas as theoretical underpinnings, we propose a level-set-based algorithm for signal analysis that is intrinsic to given spectrogram data, and substantiate its effectiveness via extensive empirical studies. Our results also have theoretical implications for spectrogram-zero-based approaches to signal analysis. To our knowledge, these results are arguably among the first to provide a rigorous statistical understanding of signal detection and reconstruction in this setup, complemented with provable guarantees on detection thresholds and rates of convergence.

    Barbara Pascal : The Kravchuk transform: a novel covariant representation for discrete signals amenable to zero-based detection tests

    Abstract: Recent works in time-frequency analysis proposed to switch the focus from the maxima of the spectrogram toward its zeros, which form a random point pattern with a very stable structure. Several signal processing tasks, such as component disentanglement and signal detection procedures, have already been renewed by using modern spatial statistics on the pattern of zeros. However, these methods require a careful choice of both the discretization strategy and the observation window in the time-frequency plane. To overcome these limitations, we propose a generalized time-frequency representation: the Kravchuk transform, specially designed for the analysis of discrete signals, whose phase space is the unit sphere, particularly amenable to spatial statistics. We show that it has all the desired properties for signal processing, including covariance, invertibility and symmetry, and that the point process of the zeros of the Kravchuk transform of complex white Gaussian noise coincides with the zeros of the spherical Gaussian Analytic Function. Elaborating on this theorem, we finally develop a Monte Carlo envelope test procedure for signal detection based on the spatial statistics of the zeros of the Kravchuk spectrogram.

    Outline: After reviewing the unorthodox path focusing on the zeros of the standard spectrogram and the associated theoretical results on the distribution of zeros in the case of white noise, I will introduce the Kravchuk transform and study the random point process of its zeros from a spatial statistics perspective. I will then present the designed Monte Carlo envelope test, illustrate its numerical performance in adversarial settings, with both a low signal-to-noise ratio and a small number of samples, and compare it to state-of-the-art zero-based detection procedures.

    Leonid Pastur : Large-size Behavior of the Entanglement Entropy of Free Disordered Fermions

    We consider a macroscopic system of free lattice fermions, and we are interested in the entanglement entropy (EE) of a large block of size L, viewing the rest of the system as the macroscopic environment of the block. The entropy is a widely used quantifier of quantum correlations between the block and its environment. We begin with the known results (mostly one-dimensional) on the large-L asymptotics of the EE for translation-invariant systems, where for any value of the Fermi energy there are basically two asymptotics, known as the area law and the enhanced (violated) area law. We then show that in the disordered case, when the Fermi energy belongs to the localized spectrum of the one-body Hamiltonian, the EE follows the area law for all typical realizations of the disorder and in any dimension. As for the enhanced area law, it proves to be possible for certain special values of the Fermi energy in the one-dimensional case.

    Arnaud Poinas : Distribution of zeros of the spectrogram of noisy signals

    Abstract: Recent works in signal processing have studied the distribution of the zeros of the spectrogram, with Gaussian window, of complex white noise. It was shown that these zeros behave as a repulsive point process, meaning that they tend to spread out and avoid being close to each other. However, this behaviour changes for the zeros of the spectrogram of a noisy signal. The possibility of using this change of behaviour for signal detection, or even signal reconstruction, has been investigated, but the distribution of the zeros in the presence of a signal had not been.

    In this talk, I present joint work with Rémi Bardenet whose aim is to study the distribution of the zeros of the spectrogram of noisy signals. We give a formula for the local density of zeros that showcases how the presence of a signal affects their spatial distribution. We then study the particular case of three specific signals: a Hermite function, a linear chirp, and two parallel linear chirps, for which we can derive additional information. We finish with a discussion of the use of the location of zeros for signal detection and signal reconstruction, and of the numerous open problems remaining in the study of the distribution of zeros (and maxima) of spectrograms of noisy signals.

    José Luis Romero : Sampling, interpolation, and the planar Coulomb gas at low temperatures

    The problems of sampling and interpolation concern the relation between functions in a given class and their values on a distinguished set (samples). The two main questions are: Is every function determined by its samples? Can a function with prescribed samples be found? I will present classical and recent results on
    sampling and interpolation.

    The Coulomb gas consists of a large number of repelling point charges confined by an external potential. At very low temperatures a certain almost deterministic pattern is expected to emerge (freezing regime). I will present results for the planar case consistent with this intuition (asymptotic separation and equidistribution at the microscopic scale).

    The two topics are closely connected: to study the geometry of a realization of the Coulomb gas we study the extent to which it solves a sampling or interpolation problem for certain weighted polynomials. (Joint work with Yacin Ameur and Felipe Marceca).

    Simona Rota-Nodari : Renormalized Energy Equidistribution and Local Charge Balance in Coulomb Systems

    We consider a classical system of n charged particles confined by an external potential in any dimension greater than or equal to 2. The particles interact via pairwise repulsive Coulomb forces, and the pair-interaction strength scales as the inverse of n (mean-field regime). The goal is to investigate the microscopic structure of the minimizers.

    It has been proved by Sandier-Serfaty (d=2) and Rougerie-Serfaty (d>2) that the distribution of particles at the microscopic scale, i.e. after blow-up at the scale corresponding to the interparticle distance, is governed by a renormalized energy which corresponds to the total Coulomb interaction of point charges in a uniform neutralizing background.

    In this talk, I will present some results which show that for minimizers and in any large enough microscopic set, the renormalized energy concentration and the number of points are completely determined by the macroscopic density of points. In other words, points and energy are “equidistributed”.

    Works in collaboration with S. Serfaty and M. Petrache.

    Nicolas Rougerie : On quantum statistics transmutation via flux attachment

    We consider a model for two types (bath and tracers) of 2D quantum particles in a perpendicular (artificial) magnetic field. We assume that the bath particles are fermions, all lying in the lowest Landau level of the magnetic field. Heuristic arguments then indicate that, if the tracers are strongly coupled to the bath, they effectively change their quantum statistics, from bosonic to fermionic or vice-versa. We rigorously compute the energy of a natural trial state, indeed exhibiting this phenomenon of statistics transmutation.

    The proof is based on estimates for the characteristic polynomial of the Ginibre ensemble of random matrices, which appears via the state of the bath, chosen to be a Slater determinant. A (wide open) conjecture is that if the bath particles are also strongly interacting, they form a Laughlin state instead, and the tracer particles then turn into anyons. Technically, this would correspond to very fine estimates for non-determinantal beta-ensembles.

    Joint work with Douglas Lundholm and Gaultier Lambert.

    Christophe Salomon : Imaging strongly correlated Fermi gases

    Jean-Marie Stéphan : Edge modes and counting statistics in the 2d and 4d Quantum Hall effect

    I discuss simple Slater determinant wave functions generalizing the two-dimensional Integer Quantum Hall effect to four dimensions. The corresponding gapless modes at the edge of the droplet are known to be anisotropic: they only propagate in one direction, foliating the 3d boundary into independent 1d conduction channels. In this talk, I focus on 4d droplets confined by harmonic traps, and show that the nature of the 1d conduction channels depends strongly on the commensurability of the trapping frequencies. In particular, incommensurable frequencies produce quasi-periodic, ergodic trajectories; the corresponding correlation functions of edge modes also exhibit fractal-like features. I explore the consequences for counting statistics of particles in a region of space, for which I will present results in 4d but also back in 2d.

    Joint works with B. Estienne, B. Oblak, and W. Witczak-Krempa.

    Nicolas Tremblay : Random Spanning Forests on Graphs for Fast Laplacian-Based Computations

    Graphs are ubiquitous tools to represent networks, be they networks modeling data from neuroscience, sociology, molecular biology, chemistry, etc. A cornerstone of the analysis of graphs is the Laplacian matrix L, which encodes their structure. From a linear algebra point of view, the analysis of L offers fundamental insights into key properties of the graph: the diffusion speed of information or of a disease on a network, the vulnerability of a network to targeted or random attacks, the redundancy of certain parts of the network, and the network's structure in more or less independent modules are all examples of characteristics of a network one may extract from the Laplacian matrix.

    In this work, we concentrate on two specific problems that often arise in the context of graph-based data: i/ computing inverse traces of the form Tr( (L+qI)^(-1) ), ii/ computing smoothing operations of the form (L+qI)^(-1) y, where q>0 and y is some vector defined over the nodes of the graph. These two problems arise in many well-known graph-based algorithms, such as semi-supervised learning, label propagation, graph Tikhonov regularization, graph inpainting, etc.

    In the context of large graphs, the required inverse, which scales as O(n^3) in the worst case, is often too expensive in practice. Many approaches have been developed in the state of the art to circumvent this problem: polynomial approximation and (preconditioned) conjugate gradient are the two most well known.

    In this work, we develop a new class of techniques based on random spanning forests. We show that these forests are natural candidates to provide original, efficient, and easy-to-implement estimators.
    This is joint work with Pierre-Olivier Amblard, Luca Avena, Simon Barthelmé, Alexandre Gaudillière and Yusuf Yigit Pilavci.
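
    The root-counting idea behind such estimators can be sketched in a few lines (our own illustration with hypothetical function names; the actual estimators of this line of work are more refined). Sampling a random spanning forest with killing rate q via Wilson-style loop-erased random walks yields an unbiased estimator of Tr((L+qI)^(-1)), because the roots of the forest form a DPP with kernel q(L+qI)^(-1):

```python
import numpy as np

def forest_trace_estimate(adj, q, n_samples, rng):
    # Monte Carlo estimate of Tr((L + q I)^{-1}) via random spanning
    # forests: the roots of the random forest form a DPP with kernel
    # q (L + q I)^{-1}, so E[#roots] = q * Tr((L + q I)^{-1}).
    # adj: adjacency lists of an undirected graph.
    n = len(adj)
    deg = np.array([len(nb) for nb in adj], dtype=float)
    total_roots = 0
    for _ in range(n_samples):
        in_forest = np.zeros(n, dtype=bool)
        nxt = np.full(n, -1)                    # successor pointers
        for start in range(n):
            u = start
            # random walk killed with prob q/(q+deg); overwriting the
            # successor pointers performs the loop erasure (Wilson's trick)
            while not in_forest[u]:
                if rng.random() < q / (q + deg[u]):
                    nxt[u] = -1                 # u is killed: it becomes a root
                    break
                v = adj[u][rng.integers(len(adj[u]))]
                nxt[u] = v
                u = v
            # freeze the loop-erased path into the forest
            u = start
            while not in_forest[u]:
                in_forest[u] = True
                if nxt[u] == -1:
                    break
                u = nxt[u]
        total_roots += int(np.sum(nxt == -1))
    return total_roots / (q * n_samples)
```

    On a cycle graph the estimate can be checked against the exact trace computed from the Laplacian eigenvalues 2 - 2 cos(2 pi k / n).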