Publications

Journal papers, Talks, Conference papers, Books, Technical notes

Journal Paper

Randomized rounding algorithms for large scale unsplittable flow problems

Authors: Lamothe François, Rachelson Emmanuel, Hait Alain, Baudoin Cédric and Dupé Jean-Baptiste

Journal of Heuristics, vol. 27, pp. 1081-1110, September, 2021.

Download document

Unsplittable flow problems cover a wide range of telecommunication and transportation problems, and their efficient resolution is key to a number of applications. In this work, we study algorithms that can scale up to large graphs and large numbers of commodities. We present and analyze in detail a heuristic based on the linear relaxation of the problem and randomized rounding. We provide empirical evidence that this approach is competitive with state-of-the-art resolution methods, either in its scaling performance or in the quality of its solutions. We provide a variation of the heuristic that has the same approximation factor as the state-of-the-art approximation algorithm. We also derive a tighter analysis of the approximation factor of both the variation and the state-of-the-art algorithm. We introduce a new objective function for the unsplittable flow problem and discuss its differences with the classical congestion objective function. Finally, we discuss the gap between practical performance and theoretical guarantees for all the aforementioned algorithms.
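The rounding step at the heart of such a heuristic can be sketched in a few lines (a toy illustration under assumed inputs, not the authors' implementation): after solving the linear relaxation, each commodity is assigned a single path, drawn at random with probability proportional to the fractional flow the relaxation routes on that path.

```python
import random

def randomized_rounding(fractional_paths, seed=0):
    """Assign each commodity one path, drawn with probability
    proportional to its flow in the linear relaxation."""
    rng = random.Random(seed)
    assignment = {}
    for commodity, path_flows in fractional_paths.items():
        paths = list(path_flows)
        weights = [path_flows[p] for p in paths]
        assignment[commodity] = rng.choices(paths, weights=weights)[0]
    return assignment

# Toy instance: commodity "c1" is split 70/30 between two s-t paths.
fractional = {"c1": {("s", "a", "t"): 0.7, ("s", "b", "t"): 0.3}}
single_path = randomized_rounding(fractional)
```

The unsplittable solution obtained this way may violate edge capacities; the quality of the heuristic comes from bounding that overflow, which is where the approximation-factor analysis mentioned above enters.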

Networking / Space communication systems

Technical Note

Multipactor Effect

Author: Sombrin Jacques B.

Download document

This is the English version of a CNES technical note from 10 October 1983 (see DOI: 10.13140/RG.2.1.2100.8880). The goal of this note is to define and study the multipactor effect, which may be responsible for the failure or even destruction of radiofrequency power equipment in vacuum, particularly satellite transmitters, output circuits and antennas. Knowledge of the conditions under which the multipactor effect occurs is essential for satellite design, particularly for satellites transmitting high power such as direct-television satellites and synthetic aperture radars (SAR). This study was carried out in part to support the EOPO (Earth Observation Program Office) for the SAR satellite ERS1. The theoretical study is based on previous simple and empirical studies; it shows the limitations of these theories and of the classical definition of the multipactor effect. A more rigorous analytical study is proposed: by applying simple physical criteria (stability or instability, limit conditions, ...), more interesting results are obtained without requiring empirical values. Finally, after comparing theoretical results with published and newly obtained experimental results, this note proposes directions for further work and for a better understanding of the phenomenon.

Signal and image processing / Space communication systems

PhD Thesis

Répartition de flux dans les réseaux de contenu, application à un contexte satellite (Flow distribution in content networks, with application to a satellite context).

Author: Thibaud Adrien

Defended on September 2, 2021.

Download document

With the emergence of video-on-demand services such as Netflix, the use of streaming has exploded in recent years. The large volume of data generated forces network operators to define and use new solutions. These solutions, even if they remain based on the IP stack, try to bypass the point-to-point communication between two hosts (CDN, P2P, ...). In this thesis, we are interested in a new approach, Information Centric Networking, which seeks to deconstruct the IP model by focusing on the desired content: the user indicates to the network the content they wish to obtain, and the network takes care of retrieving it. Among the many architectures proposed in the literature, Named Data Networking (NDN) seems to us to be the most mature. For NDN to be a real opportunity for the Internet, it must offer a better Quality of Experience (QoE) to users while using network capacities efficiently. This is the core of this thesis: proposing a solution for NDN to manage user satisfaction. For content such as video, throughput is crucial, which is why we decided to maximize throughput in order to maximize QoE. The new opportunities offered by NDN, such as multipath forwarding and in-network caching, allowed us to redefine the notion of flow in this paradigm. With this definition, and the ability to perform processing on every node in the network, we decided to view the classic congestion control problem as finding a fair distribution of flows. For the users' QoE to be optimal, this distribution must meet the demands as closely as possible. However, since network resources are not infinite, tradeoffs must be made. For this purpose, we use the Max-Min fairness criterion, which yields a Pareto equilibrium where one flow can only be increased at the expense of another, less privileged flow. The objective of this thesis was then to propose a solution to the newly formulated problem.
We thus designed Cooperative Congestion Control (CCC), a distributed solution aiming at distributing flows fairly over the network. It is based on the cooperation of every node: users' needs are transmitted towards the content providers, while network constraints are re-evaluated locally and transmitted back to the users. The architecture of our solution is generic and composed of several algorithms. We propose implementations of these algorithms and show that even if a Pareto equilibrium is obtained, only local fairness is achieved: for lack of global information, the decisions made by the nodes are limited. We also tested our solution on topologies including satellite links (and thus high delays). Thanks to the emission of Interests being regulated by our solution, we show that, contrary to state-of-the-art solutions, these high delays have very little impact on the performance of CCC.
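The Max-Min fairness criterion invoked above can be illustrated with the classical progressive-filling algorithm (a generic sketch with made-up link and flow names, not the CCC implementation): all rates grow together, and a flow is frozen as soon as one of the links it crosses saturates.

```python
def max_min_fair(capacities, flows):
    """Progressive filling: all rates grow together; a flow is frozen
    as soon as one of the links it crosses saturates.
    capacities: {link: capacity}; flows: {flow: [links it crosses]}."""
    rate = {f: 0.0 for f in flows}
    active = set(flows)
    cap = dict(capacities)
    while active:
        # The bottleneck is the link whose remaining capacity, shared
        # equally by its active flows, runs out first.
        share = min(
            cap[l] / sum(1 for f in active if l in flows[f])
            for l in cap
            if any(l in flows[f] for f in active)
        )
        for f in active:
            rate[f] += share
        # Consume capacity, then freeze flows crossing saturated links.
        saturated = set()
        for l in cap:
            users = sum(1 for f in active if l in flows[f])
            cap[l] -= share * users
            if users and cap[l] <= 1e-9:
                saturated.add(l)
        active -= {f for f in active if any(l in saturated for l in flows[f])}
        for l in saturated:
            del cap[l]
    return rate

# Two flows share link "a" (capacity 10); f2 also crosses bottleneck "b".
rates = max_min_fair({"a": 10.0, "b": 2.0}, {"f1": ["a"], "f2": ["a", "b"]})
```

Here f2 is capped at 2 by its bottleneck link, and the leftover capacity of link "a" goes entirely to f1, giving rates of 8 and 2: increasing f2 further is impossible, and increasing f1 could only be done at f2's expense.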

Networking / Space communication systems

PhD Defense Slides

Répartition de flux dans les réseaux de contenu, application à un contexte satellite (Flow distribution in content networks, with application to a satellite context).

Author: Thibaud Adrien

Defended on September 2, 2021.

Download document


Networking / Space communication systems

Conference Paper

Robust Hypersphere Fitting from Noisy Data Using an EM Algorithm

Authors: Lesouple Julien, Pilastre Barbara, Altmann Yoann and Tourneret Jean-Yves

In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.

Download document

This article studies a robust expectation maximization (EM) algorithm to solve the problem of hypersphere fitting. This algorithm relies on the introduction of random latent vectors having independent von Mises-Fisher distributions defined on the hypersphere and random latent vectors indicating the presence of potential outliers. This model leads to an inference problem that can be solved with a simple EM algorithm. The performance of the resulting robust hypersphere fitting algorithm is evaluated for circle and sphere fitting with promising results.
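The alternation at work in such an algorithm can be illustrated with a simplified robust circle fit (a sketch only: it replaces the paper's von Mises-Fisher latent model with a plain Gaussian-residual inlier model and an algebraic Kasa fit): the E-step scores each point's probability of being an inlier from its distance to the current circle, and the M-step re-fits the circle with those weights.

```python
import numpy as np

def fit_circle_weighted(pts, w):
    """Weighted Kasa fit: solve x^2 + y^2 + D x + E y + F ~ 0 in least squares."""
    x, y = pts[:, 0], pts[:, 1]
    sw = np.sqrt(w)
    A = np.column_stack([x, y, np.ones_like(x)]) * sw[:, None]
    b = -(x**2 + y**2) * sw
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius

def robust_circle_em(pts, n_iter=20, sigma=1.0, outlier_level=0.05):
    """EM-style loop: the E-step weights points by an inlier likelihood,
    the M-step re-fits the circle with those weights."""
    w = np.ones(len(pts))
    for _ in range(n_iter):
        center, radius = fit_circle_weighted(pts, w)                  # M-step
        resid = np.abs(np.linalg.norm(pts - center, axis=1) - radius)
        inlier = np.exp(-0.5 * (resid / sigma) ** 2)                  # E-step
        w = inlier / (inlier + outlier_level)
    return center, radius
```

On points drawn from a circle plus a few gross outliers, the outliers' weights collapse after the first iteration and the fit converges to the clean circle, which is the robustness behavior the article evaluates.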

Signal and image processing / Earth observation

Drowsiness Detection Using Joint EEG-ECG Data With Deep Learning

Authors: Geoffroy Guillaume, Chaari Lotfi, Tourneret Jean-Yves and Wendt Herwig

In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.

Download document

Drowsiness detection is still an open issue, especially when detection is based on physiological signals. In this context, light non-invasive modalities such as electroencephalography (EEG) are usually considered. EEG data provide information about the physiological brain state, directly linked to drowsiness. Electrocardiogram (ECG) signals can also be considered, as they carry information related to the heart state. In this study, we propose a method for drowsiness detection using joint EEG and ECG data. The proposed method is based on a deep learning architecture combining convolutional neural networks (CNN) and recurrent neural networks (RNN). A high efficiency level is obtained, with accuracy scores up to 97% on the validation set. We also demonstrate that modifying the proposed architecture by adding autoencoders helps compensate for the performance drop observed when analysing subjects whose data were not seen during the learning step.

Signal and image processing / Other

Bayesian Estimation for the Parameters of the Bivariate Multifractal Spectrum

Authors: Leon Arencibia Lorena, Wendt Herwig, Tourneret Jean-Yves and Abry Patrice

In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.

Download document

Multifractal analysis is a reference tool for the analysis of data based on local regularity and has proven useful in an increasing number of applications involving univariate data (scalar-valued time series or single-channel images). Recently, the theoretical ground for a multivariate multifractal analysis has been explored, showing its potential for capturing and quantifying transient higher-order dependence beyond correlation among collections of data. Yet, the accurate estimation of the parameters associated with these multivariate multifractal models is challenging. Building on these first formulations of multivariate multifractal analysis, the present work proposes a Bayesian model and studies an estimation framework for the parameters of a quadratic model for the joint multifractal spectrum of bivariate time series. The approach relies on a novel joint Gaussian model for the logarithm of wavelet leaders and leverages a Whittle approximation and data augmentation for the matrix-valued parameters of interest. Monte Carlo simulations demonstrate the benefits of the method with respect to previous formulations. In particular, we obtain significant performance improvements at only moderately larger computational cost, for large ranges of sample size and multifractal parameter values.

Signal and image processing / Other

Journal Paper

Anomaly Detection and Classification in Multispectral Time Series based on Hidden Markov Models

Authors: León-López Kareth, Mouret Florian, Arguello Fuentes Henry and Tourneret Jean-Yves

IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-11, August, 2021.

Download document

Monitoring agriculture from satellite remote sensing data, such as multispectral images, has become a powerful tool since it has demonstrated a great potential for providing timely and accurate knowledge of crops. Detecting anomalies in time series of multispectral remote sensing images for crop monitoring is generally performed using a large sample of historical data at a pixel level. Conversely, this article presents a framework for anomaly detection (AD), localization, and classification that exploits the temporal information contained in a given season at a parcel level to detect and localize outliers using hidden Markov models (HMMs). Specifically, the AD part is based on the learning of HMM parameters associated with unlabeled normal data that are used in a second step to detect abnormal crop parcels referred to as anomalies. The learned HMM can also be used in time segments to temporally localize the anomalies affecting the crop parcels. The detected and localized anomalies are finally classified using a supervised classifier, e.g., based on support vector machines. The proposed framework is applicable to images partially covered by clouds and can handle a set of crop parcels acquired in the same season bypassing problems due to crop rotations. Numerical experiments are conducted on synthetic and real data, where the real data correspond to vegetation indices extracted from several multitemporal Sentinel-2 images of rapeseed crops. The proposed approach is compared to standard AD methods yielding better detection rates with the advantage of allowing anomalies to be localized and characterized.
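The detection principle can be sketched with the forward algorithm of a discrete HMM (a toy illustration with made-up parameters, not the paper's model, which works on real-valued vegetation indices): sequences whose log-likelihood under the model learned on normal parcels is low are flagged as anomalous.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# Made-up 2-state model standing in for parameters learned on normal parcels.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],      # sticky transitions: states persist in time
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],      # emission probabilities for symbols {0, 1}
              [0.3, 0.7]])

normal = [0, 0, 0, 1, 1, 1]    # one gradual transition: plausible
abnormal = [0, 1, 0, 1, 0, 1]  # rapid alternation: poorly explained
is_anomaly = forward_loglik(abnormal, pi, A, B) < forward_loglik(normal, pi, A, B)
```

In practice a parcel would be declared anomalous when its sequence's log-likelihood falls below a threshold calibrated on the normal data; temporal localization then comes from inspecting which time segments drive the likelihood drop.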

Signal and image processing / Earth observation

Conference Paper

Sparse Representations and Dictionary Learning: from Image Fusion to Motion Estimation

Authors: Tourneret Jean-Yves, Basarab Adrian, Ouzir Nora and Wei Qi

In Proc. International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, July 12-16, 2021.

Download document

The first part of this paper presents some works conducted with Jose Bioucas Dias for fusing high spectral resolution images (such as hyperspectral images) and high spatial resolution images (such as panchromatic or multispectral images) in order to build images with improved spectral and spatial resolutions. These works are related to Bayesian fusion strategies exploiting prior information about the target image to be recovered constructed by dictionary learning. Interestingly, these Bayesian image fusion methods can be adapted with limited changes to motion estimation in pairs or sequences of images. The second part of this paper explains how the work of Jose Bioucas Dias has been a source of inspiration for developing new Bayesian motion estimation methods for ultrasound images.

Signal and image processing / Other

Journal Paper

Automated Machine Health Monitoring at an Expert Level

Authors: Martin Nadine, Mailhes Corinne and Laval Xavier

Special issue of Acoustics Australia on Machine Condition Monitoring, vol. 49, pp. 185-197, June, 2021.

Download document

Machine health condition monitoring is evidently a crucial challenge nowadays. Unscheduled breakdowns increase operating costs due to repairs and production losses, while scheduled maintenance implies taking the risk of replacing fully operational components. Human expertise offers outstanding analyses, but at a high cost and for a limited quantity of data only, the analysis being time-consuming. Industry 4.0 and the digital factory offer many alternatives to human monitoring. Time, cost and skills are the real stakes, and the key point is how to automate each part of the process, knowing that each one is valuable. Leaving aside scheduled maintenance, this paper addresses condition-based preventive maintenance and focuses on one fundamental step: signal processing. After a brief overview of this specific area, in which numerous technologies already exist, this paper argues for automated signal processing at an expert level. The objective is to monitor a system over days, weeks or years with as great an accuracy as a human expert, and even better efficiency in data investigation and analysis. After a data validation step that is most often ignored, any multimodal signal (vibration, current, acoustic, ...) is processed over its entire frequency band in order to identify all harmonic families and their sidebands. Sophisticated processing such as filtering and demodulation creates relevant features describing the fine complex structures of each spectrum. Time-frequency feature tracking constructs trends over time, not only to detect a failure but also to characterize and localize it. Such automated expert-level processing is a way to raise alarms with a reduced false-alarm probability.
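One building block of this kind of spectrum analysis, locating a harmonic family, can be sketched as follows (a deliberately crude illustration, not the system described above, which also handles sidebands, demodulation and long-term trend tracking):

```python
import numpy as np

def harmonic_family_energy(freqs, spectrum, fundamental, n_harmonics=5, tol=0.5):
    """Sum the strongest spectral line found near each integer multiple
    of a candidate fundamental frequency (tol in the same units as freqs)."""
    energy = 0.0
    for k in range(1, n_harmonics + 1):
        near = np.abs(freqs - k * fundamental) <= tol
        if near.any():
            energy += spectrum[near].max()
    return energy

# Toy spectrum: flat noise floor plus lines at 10, 20 and 30 Hz.
freqs = np.linspace(0.0, 50.0, 501)          # 0.1 Hz resolution
spectrum = 0.01 * np.ones_like(freqs)
for f in (10.0, 20.0, 30.0):
    spectrum[np.argmin(np.abs(freqs - f))] = 1.0
```

Scanning candidate fundamentals and keeping those whose family energy stands out from the noise floor gives a first, simplistic cut at the "identify all harmonic families" step; tracking that energy over days or weeks is what turns it into a monitoring feature.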

Signal and image processing / Other

ADDRESS

7 boulevard de la Gare
31500 Toulouse
France

CONTACT


CNES
Thales Alenia Space
Collins Aerospace
Toulouse INP
ISAE-SUPAERO
IPSA
ENAC
IMT Atlantique