Talk
One, Two or Many Frequencies: Synchrosqueezing, EMD and Multicomponent Signal Analysis
Seminar of TeSA, Toulouse, November 23, 2021.
Many signals from the physical world, e.g. speech or physiological records, can be modelled as a sum of amplitude- and frequency-modulated (AM/FM) waves often called modes. In the last few decades, there has been an increasing interest in designing new accurate representations and processing methods for this type of signal. Consequently, the retrieval of the components (or modes) of a multicomponent signal is a central issue in many audio processing problems. The most commonly used techniques to carry out this retrieval are time-frequency or time-scale signal representations. For the former, spectrogram reassignment techniques, reconstruction based on minimization of the ambiguity function associated with the Wigner-Ville distribution, synchrosqueezing using the short-time Fourier transform and Fourier ridges have all been used successfully. For the latter, i.e., time-scale representations, wavelet ridges have also proven to be very efficient; in that case, the emphasis is on the importance of the wavelet choice with regard to the ridge representation. Synchrosqueezing techniques have also been developed within the wavelet framework. In this talk, I will discuss empirical mode decomposition (EMD) and synchrosqueezing methods and their use in the analysis of multicomponent signals.
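As a rough illustration of the STFT-based synchrosqueezing idea discussed in the talk, the sketch below builds a synthetic two-component AM/FM signal, estimates the instantaneous frequency from the phase derivative of the short-time Fourier transform, and reassigns the spectrogram energy accordingly; the signal, window length and reassignment rule are illustrative choices, not taken from the talk.

```python
# Minimal STFT-based synchrosqueezing sketch on a synthetic two-component AM/FM signal.
# Window length, signal frequencies and the phase-difference IF estimator are illustrative choices.
import numpy as np
from scipy.signal import stft

fs = 1024                                 # sampling frequency (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (60 * t + 10 * t ** 2))                        # linear chirp (mode 1)
x += 0.8 * np.cos(2 * np.pi * 200 * t + 2 * np.sin(2 * np.pi * t))    # FM tone (mode 2)

f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=224)

# Instantaneous-frequency estimate from the phase derivative of the STFT
phase = np.unwrap(np.angle(Z), axis=1)
dt = tau[1] - tau[0]
inst_f = np.gradient(phase, dt, axis=1) / (2 * np.pi)

# Synchrosqueezing: reassign STFT energy to the estimated instantaneous frequency
T = np.zeros_like(np.abs(Z))
df = f[1] - f[0]
for i in range(Z.shape[0]):
    for j in range(Z.shape[1]):
        if np.abs(Z[i, j]) > 1e-8:
            k = int(round(inst_f[i, j] / df))
            if 0 <= k < len(f):
                T[k, j] += np.abs(Z[i, j])

# Ridges of T concentrate around the two instantaneous frequencies of the modes.
```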
Signal and image processing / Other
Journal Paper
How to Introduce Expert Feedback in One-Class Support Vector Machines for Anomaly Detection?
Signal Processing, vol. 188, Art. no 108197, November 2021.
Anomaly detection consists of detecting elements of a database that differ from the majority of normal data. Most anomaly detection algorithms consider unlabeled datasets. However, in some applications, labels associated with a subset of the database (coming for instance from expert feedback) are available and provide useful information to design the anomaly detector. This paper studies a semi-supervised anomaly detector based on support vector machines, which takes the best of existing supervised and unsupervised support vector machine algorithms. The proposed algorithm allows the maximum proportion of vectors detected as anomalies and the maximum proportion of errors in the supervised data to be controlled through two hyperparameters defining these proportions. Simulations conducted on various benchmark datasets show the benefits of the proposed semi-supervised anomaly detection method.
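The paper's semi-supervised detector is not reproduced here, but the sketch below shows the closest off-the-shelf building block: an unsupervised one-class SVM whose nu hyperparameter upper-bounds the fraction of points flagged as anomalies, i.e. the role played by one of the two proportion hyperparameters mentioned in the abstract; the data and parameter values are made up.

```python
# Baseline unsupervised one-class SVM (not the paper's semi-supervised algorithm):
# the nu hyperparameter upper-bounds the fraction of training points flagged as anomalies.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 2))         # nominal data
outliers = rng.uniform(-6.0, 6.0, size=(25, 2))      # scattered anomalies
X = np.vstack([normal, outliers])

clf = OneClassSVM(kernel="rbf", gamma=0.2, nu=0.05)  # nu ~ target anomaly proportion
clf.fit(X)
labels = clf.predict(X)                              # +1 = normal, -1 = anomaly
print("fraction flagged as anomalies:", np.mean(labels == -1))
```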
Signal and image processing / Space communication systems
A marginalised particle filter with variational inference for non-linear state-space models with Gaussian mixture noise
IET Radar, Sonar and Navigation, vol. 16, issue 2, pp. 238-248, 2021.
This work proposes a marginalised particle filter with variational inference for non-linear state-space models (SSMs) with Gaussian mixture noise. A latent variable indicating the component of the Gaussian mixture considered at each time instant is introduced to specify the measurement mode of the SSM. The resulting joint posterior distribution of the state vector, the mode variable and the parameters of the Gaussian mixture noise is marginalised with respect to the noise variables. The marginalised posterior distribution of the state and mode is then approximated using an appropriate marginalised particle filter. The noise parameters, conditioned on each particle system of the state and mode variables, are finally updated using variational Bayesian inference. A simulation study is conducted to compare the proposed method with state-of-the-art approaches in the context of positioning in urban canyons using global navigation satellite systems.
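For illustration only, the sketch below runs a basic bootstrap particle filter on a toy scalar SSM with two-component Gaussian mixture measurement noise, marginalising the likelihood over the mode indicator; unlike in the paper, the mixture parameters are assumed known rather than learned by variational inference, and the model is a placeholder.

```python
# Toy bootstrap particle filter for a scalar non-linear SSM with two-component
# Gaussian mixture measurement noise (mixture parameters assumed known here).
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 500                          # time steps, particles
w_mix = np.array([0.8, 0.2])             # mixture weights (nominal / heavy-tailed mode)
sig = np.array([0.5, 3.0])               # mixture standard deviations

# Simulate data: x_t = 0.9 x_{t-1} + u_t, y_t = x_t^2 / 20 + v_t, v_t ~ Gaussian mixture
x_true = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 1)
    m = rng.choice(2, p=w_mix)
    y[t] = x_true[t] ** 2 / 20 + rng.normal(0, sig[m])

particles = rng.normal(0, 1, N)
weights = np.full(N, 1 / N)
est = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, 1, N)     # propagate
    pred = particles ** 2 / 20
    # Likelihood marginalised over the mode indicator (sum over mixture components)
    lik = sum(w_mix[k] * np.exp(-0.5 * ((y[t] - pred) / sig[k]) ** 2) / sig[k] for k in range(2))
    weights *= lik
    weights /= weights.sum()
    est[t] = np.sum(weights * particles)                  # posterior mean estimate
    idx = rng.choice(N, N, p=weights)                     # resample
    particles, weights = particles[idx], np.full(N, 1 / N)
```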
Signal and image processing and Networking / Localization and navigation
Generalized Isolation Forest for Anomaly Detection
Pattern Recognition Letters, vol. 149, pp. 109-119, September, 2021.
This letter introduces a generalization of Isolation Forest (IF) based on the existing Extended IF (EIF). EIF has shown some advantages over IF, being for instance more robust to some artefacts. However, some information can be lost when computing the EIF trees, since the sampled threshold might lead to empty branches. To overcome these issues, this letter introduces a generalized isolation forest algorithm called Generalized IF (GIF). GIF is faster than EIF with similar performance, as shown in several simulation results obtained on reference databases used for anomaly detection.
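The following sketch shows a single extended-isolation-tree split with a random hyperplane, in the spirit of EIF; the intercept is sampled inside the range of the projected data, which is one way to avoid the empty branches mentioned above, though it is not necessarily the exact sampling rule used by GIF.

```python
# One extended-isolation-forest-style split: a random hyperplane defined by a random
# normal vector and an intercept drawn inside the range of the projected data, so
# both children can be non-empty (an illustration, not necessarily GIF's exact rule).
import numpy as np

def random_hyperplane_split(X, rng):
    """Split the rows of X with a random hyperplane; return the two child index sets."""
    normal = rng.normal(size=X.shape[1])             # random slope (EIF-style)
    proj = X @ normal
    intercept = rng.uniform(proj.min(), proj.max())  # threshold inside the projected range
    left = np.where(proj < intercept)[0]
    right = np.where(proj >= intercept)[0]
    return left, right

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
left, right = random_hyperplane_split(X, rng)
print(len(left), len(right))
```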
Signal and image processing / Space communication systems
Randomized rounding algorithms for large scale unsplittable flow problems
Journal of Heuristics, vol. 27, pp. 1081-1110, September, 2021.
Unsplittable flow problems cover a wide range of telecommunication and transportation problems and their efficient resolution is key to a number of applications. In this work, we study algorithms that can scale up to large graphs and large numbers of commodities. We present and analyze in detail a heuristic based on the linear relaxation of the problem and randomized rounding. We provide empirical evidence that this approach is competitive with state-of-the-art resolution methods, either through its scaling performance or through the quality of its solutions. We provide a variation of the heuristic which has the same approximation factor as the state-of-the-art approximation algorithm. We also derive a tighter analysis for the approximation factor of both the variation and the state-of-the-art algorithm. We introduce a new objective function for the unsplittable flow problem and discuss how it differs from the classical congestion objective function. Finally, we discuss the gap in practical performance and theoretical guarantees between all the aforementioned algorithms.
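As a hedged illustration of the randomized rounding step only (the LP relaxation is not solved here), the sketch below routes each commodity on a single path drawn with probability proportional to a hypothetical fractional flow, then accumulates the resulting edge loads; paths, flows and demands are made-up toy values.

```python
# Randomized rounding step for an unsplittable flow instance: each commodity is routed
# on one path chosen with probability proportional to its fractional flow from the LP.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LP output: for each commodity, candidate paths (edge lists) and fractional flows
fractional = {
    "c1": ([["AB", "BC"], ["AD", "DC"]], [0.7, 0.3]),
    "c2": ([["AB", "BD"], ["AD"]],       [0.4, 0.6]),
}
demand = {"c1": 5.0, "c2": 3.0}

load = {}                                   # resulting edge loads after rounding
for com, (paths, flows) in fractional.items():
    probs = np.array(flows) / np.sum(flows)
    chosen = paths[rng.choice(len(paths), p=probs)]
    for edge in chosen:
        load[edge] = load.get(edge, 0.0) + demand[com]

print(load)   # compare against edge capacities to measure congestion / overflow
```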
Networking / Space communication systems
Technical Note
Multipactor Effect
This is the English version of a CNES technical note from 10 October 1983 (see DOI: 10.13140/RG.2.1.2100.8880). The goal of this note is to define and study the multipactor effect, which may be responsible for the failure or even destruction of power radiofrequency equipment in vacuum, particularly satellite transmitters, output circuits and antennas. Knowledge of the conditions for the multipactor effect is essential for satellite design, particularly for satellites transmitting high power such as direct television and synthetic aperture radars (SAR). This study has been done in part to support the EOPO (Earth Observation Program Office) for the SAR satellite ERS1. The theoretical study is based on previous simple and empirical studies. It shows the limitations of these theories and of the classical definition of the multipactor effect. A more rigorous analytical study is proposed. By applying simple physical criteria (stability or instability, limit conditions, …), more interesting results are obtained without requiring empirical values. Then, after comparing the theoretical results with published experimental results and with those obtained in this work, this note proposes directions for further work and a better understanding of the phenomenon.
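As a purely numerical illustration of the classical resonance picture (not of the more rigorous analysis announced in the note), the sketch below integrates the motion of an electron in a parallel-plate RF gap and reports its transit time in RF half-periods; the gap size, field amplitude and emission phase are placeholder values.

```python
# Classical parallel-plate multipactor resonance check: an electron emitted from one plate
# should cross the gap in close to an odd number of RF half-periods. Values are illustrative.
import numpy as np

E_CHARGE, E_MASS = 1.602e-19, 9.109e-31   # electron charge (C) and mass (kg)
f_rf = 1.0e9                              # RF frequency (Hz)
omega = 2 * np.pi * f_rf
E0 = 2.0e5                                # field amplitude (V/m), illustrative
d = 1.0e-3                                # gap between plates (m), illustrative
phi0 = 0.3                                # emission phase (rad), illustrative

# Integrate m dv/dt = e E0 sin(omega t + phi0) from x = 0 until the electron reaches x = d
dt = 1.0 / (f_rf * 2000)
t, x, v = 0.0, 0.0, 0.0
while x < d and t < 20 / f_rf:
    a = (E_CHARGE / E_MASS) * E0 * np.sin(omega * t + phi0)
    v += a * dt
    x += v * dt
    t += dt

half_periods = t / (0.5 / f_rf)
print(f"transit time = {half_periods:.2f} RF half-periods")
# Resonant multipactor requires this to be close to an odd integer (1, 3, 5, ...) and the
# impact energy to lie in the range where the secondary emission yield exceeds one.
```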
Signal and image processing / Space communication systems
PhD Thesis
Flow distribution in content networks, with application to a satellite context.
Defended on September 2, 2021.
With the emergence of video-on-demand services such as Netflix, the use of streaming has exploded in recent years. The large volume of data generated forces network operators to define and use new solutions. These solutions, even if they remain based on the IP stack, try to bypass the point-to-point communication between two hosts (CDN, P2P, ...). In this thesis, we are interested in a new approach, Information Centric Networking, which seeks to deconstruct the IP model by focusing on the desired content. The user indicates to the network the data they wish to obtain, and the network takes care of retrieving this content. Among the many architectures proposed in the literature, Named Data Networking (NDN) seems to us to be the most mature. For NDN to be a real opportunity for the Internet, it must offer a better Quality of Experience (QoE) to users while efficiently using network capacities. This is the core of this thesis: proposing a solution for NDN to manage user satisfaction. For content such as video, throughput is crucial, which is why we have decided to maximize the throughput in order to maximize the QoE. The new opportunities offered by NDN, such as multipathing and caching, have allowed us to redefine the notion of flow in this paradigm. With this definition, and the ability to perform processing on every node in the network, we decided to view the classic congestion control problem as finding a fair distribution of flows. In order for the users' QoE to be optimal, this distribution has to meet the demands as well as possible. However, since network resources are not infinite, tradeoffs must be made. For this purpose, we decided to use the Max-Min fairness criterion, which allows us to obtain a Pareto equilibrium where the rate of a flow can only be increased at the expense of another, less privileged flow. The objective of this thesis was then to propose a solution to the newly formulated problem. We thus designed Cooperative Congestion Control (CCC), a distributed solution aiming at distributing the flows fairly over the network. It is based on cooperation between nodes, where the users' needs are transmitted to the content providers and the network constraints are re-evaluated locally and transmitted back to the users. The architecture of our solution is generic and is composed of several algorithms. We propose some implementations of these and show that, even if a Pareto equilibrium is obtained, only local fairness is achieved. Indeed, due to a lack of information, the decisions made by the nodes are limited. We also tested our solution on topologies including satellite links (and thus high delays). Because our solution regulates the emission of Interests, we show that, contrary to state-of-the-art solutions, these high delays have very little impact on the performance of CCC.
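The fairness criterion targeted by the thesis can be illustrated with the classical progressive-filling computation of a max-min fair allocation on a toy set of flows and link capacities; this is not the distributed Cooperative Congestion Control algorithm itself, and the flows, links and capacities below are made up.

```python
# Progressive-filling (water-filling) computation of a max-min fair allocation:
# all flow rates grow together until a link saturates, then the flows crossing that
# link are frozen. Toy data only; not the thesis's distributed CCC algorithm.
def max_min_fair(flows, capacity):
    """flows: dict flow -> set of links it crosses; capacity: dict link -> capacity."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    residual = dict(capacity)
    while len(frozen) < len(flows):
        # Each link is shared by its active (non-frozen) flows
        active_per_link = {l: [f for f in flows if l in flows[f] and f not in frozen]
                           for l in residual}
        delta = min(residual[l] / len(fs) for l, fs in active_per_link.items() if fs)
        for f in flows:
            if f not in frozen:
                rate[f] += delta
        for l, fs in active_per_link.items():
            residual[l] -= delta * len(fs)
            if fs and residual[l] <= 1e-12:
                frozen.update(fs)          # flows on a saturated link stop growing
    return rate

flows = {"f1": {"L1", "L2"}, "f2": {"L2"}, "f3": {"L1", "L3"}}
capacity = {"L1": 10.0, "L2": 6.0, "L3": 8.0}
print(max_min_fair(flows, capacity))   # expected f1=3, f2=3, f3=7: f3 absorbs L1's spare capacity
```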
Networking / Space communication systems
PhD Defense Slides
Flow distribution in content networks, with application to a satellite context.
Defended on September 2, 2021.
With the emergence of video-on-demand services such as Netflix, the use of streaming has exploded in recent years. The large volume of data generated forces network operators to define and use new solutions. These solutions, even if they remain based on the IP stack, try to bypass the point-to-point communication between two hosts (CDN, P2P, ...). In this thesis, we are interested in a new approach, Information Centric Networking, which seeks to deconstruct the IP model by focusing on the desired content. The user indicates to the network the data they wish to obtain, and the network takes care of retrieving this content. Among the many architectures proposed in the literature, Named Data Networking (NDN) seems to us to be the most mature. For NDN to be a real opportunity for the Internet, it must offer a better Quality of Experience (QoE) to users while efficiently using network capacities. This is the core of this thesis: proposing a solution for NDN to manage user satisfaction. For content such as video, throughput is crucial, which is why we have decided to maximize the throughput in order to maximize the QoE. The new opportunities offered by NDN, such as multipathing and caching, have allowed us to redefine the notion of flow in this paradigm. With this definition, and the ability to perform processing on every node in the network, we decided to view the classic congestion control problem as finding a fair distribution of flows. In order for the users' QoE to be optimal, this distribution has to meet the demands as well as possible. However, since network resources are not infinite, tradeoffs must be made. For this purpose, we decided to use the Max-Min fairness criterion, which allows us to obtain a Pareto equilibrium where the rate of a flow can only be increased at the expense of another, less privileged flow. The objective of this thesis was then to propose a solution to the newly formulated problem. We thus designed Cooperative Congestion Control (CCC), a distributed solution aiming at distributing the flows fairly over the network. It is based on cooperation between nodes, where the users' needs are transmitted to the content providers and the network constraints are re-evaluated locally and transmitted back to the users. The architecture of our solution is generic and is composed of several algorithms. We propose some implementations of these and show that, even if a Pareto equilibrium is obtained, only local fairness is achieved. Indeed, due to a lack of information, the decisions made by the nodes are limited. We also tested our solution on topologies including satellite links (and thus high delays). Because our solution regulates the emission of Interests, we show that, contrary to state-of-the-art solutions, these high delays have very little impact on the performance of CCC.
Networking / Space communication systems
Conference Paper
Robust Hypersphere Fitting from Noisy Data Using an EM Algorithm
In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.
This article studies a robust expectation maximization (EM) algorithm to solve the problem of hypersphere fitting. This algorithm relies on the introduction of random latent vectors having independent von Mises-Fisher distributions defined on the hypersphere and random latent vectors indicating the presence of potential outliers. This model leads to an inference problem that can be solved with a simple EM algorithm. The performance of the resulting robust hypersphere fitting algorithm is evaluated for circle and sphere fitting with promising results.
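For comparison purposes only, the sketch below implements a standard algebraic (Kasa-style) least-squares hypersphere fit, a natural non-robust baseline; it is not the paper's EM algorithm with von Mises-Fisher latent vectors, and the simulated data are illustrative.

```python
# Non-robust baseline for hypersphere fitting: an algebraic (Kasa-style) least-squares
# fit solved with one linear system. The paper's robust EM algorithm is not reproduced here.
import numpy as np

def fit_hypersphere(X):
    """Least-squares center and radius of a hypersphere through the rows of X."""
    A = np.hstack([2 * X, np.ones((X.shape[0], 1))])
    b = np.sum(X ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:-1], sol[-1]
    radius = np.sqrt(d + center @ center)        # d = r^2 - ||c||^2
    return center, radius

rng = np.random.default_rng(0)
true_c, true_r = np.array([1.0, -2.0, 0.5]), 3.0
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)               # directions on the unit sphere
X = true_c + true_r * u + 0.05 * rng.normal(size=(200, 3))  # noisy sphere samples
print(fit_hypersphere(X))   # close to ([1, -2, 0.5], 3) in the absence of outliers
```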
Signal and image processing / Earth observation
Drowsiness Detection Using Joint EEG-ECG Data With Deep Learning
In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.
Drowsiness detection is still an open issue, especially when detection is based on physiological signals. In this context, light non-invasive modalities such as electroencephalography (EEG) are usually considered. EEG data provide information about the physiological brain state, directly linked to the drowsy state. Electrocardiogram (ECG) signals can also be considered, since they carry information related to the heart state. In this study, we propose a method for drowsiness detection using joint EEG and ECG data. The proposed method is based on a deep learning architecture involving convolutional neural networks (CNN) and recurrent neural networks (RNN). A high efficiency level is obtained, with accuracy scores up to 97% on the validation set. We also demonstrate that a modification of the proposed architecture by adding autoencoders helps compensate for the performance drop when analysing subjects whose data were not presented during the learning step.
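A minimal sketch of a CNN + RNN classifier operating on windows of joint EEG/ECG channels is given below; the channel count, window length and layer sizes are placeholder choices and do not reproduce the architecture (or the autoencoder variant) evaluated in the paper.

```python
# Illustrative CNN + RNN drowsiness classifier for windows of joint EEG/ECG channels.
# Channel count, window length and layer sizes are placeholders, not the paper's exact model.
import torch
import torch.nn as nn

class EegEcgNet(nn.Module):
    def __init__(self, n_channels=9, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-window feature extraction
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.rnn = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, channels, time)
        z = self.cnn(x)                                # (batch, 64, time/16)
        z = z.transpose(1, 2)                          # (batch, time/16, 64) for the GRU
        _, h = self.rnn(z)                             # last hidden state
        return self.head(h[-1])                        # class logits

model = EegEcgNet()
dummy = torch.randn(8, 9, 512)                         # 8 windows, 9 channels (EEG+ECG), 512 samples
print(model(dummy).shape)                              # torch.Size([8, 2])
```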
Signal and image processing / Other