Research
Technical note
Multipactor Effect
This is the English version of a CNES technical note dated 10 October 1983 (see DOI: 10.13140/RG.2.1.2100.8880). The goal of this note is to define and study the multipactor effect, which may be responsible for the failure or even the destruction of radiofrequency power equipment in vacuum, particularly satellite transmitters, output circuits and antennas. Knowledge of the conditions under which the multipactor effect occurs is essential for satellite design, particularly for satellites transmitting at high power such as direct-broadcast television satellites and synthetic aperture radars (SAR). This study was done in part to support EOPO (Earth Observation Program Office) for the SAR satellite ERS1. The theoretical study is based on previous simple and empirical studies. It shows the limitations of these theories and of the classical definition of the multipactor effect. A more rigorous analytical study is proposed. By applying simple physical criteria (stability or instability, limit conditions, ...), more interesting results are obtained without requiring empirical values. Finally, after comparing the theoretical results with published or newly obtained experimental results, this note proposes directions for further work and a better understanding of the phenomenon.
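For context, the classical first-order resonance picture of the multipactor effect (a textbook summary, not the note's full analysis) reads:

```latex
% Electron of charge e and mass m in a gap of width d driven by
% V(t) = V_0 \sin(\omega t):
m\,\ddot{x}(t) = \frac{e V_0}{d}\,\sin(\omega t + \varphi_0)
% Classical resonance condition: the transit time \tau across the gap
% equals an odd number of RF half-periods, so the electron arrives at
% the opposite wall each time the field has reversed:
\omega \tau = (2N - 1)\,\pi, \qquad N = 1, 2, \dots
% The discharge is sustained only if the secondary emission yield
% \delta(E_{\mathrm{impact}}) of the surfaces exceeds unity.
```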
Signal and Image Processing / Space Communication Systems
Doctoral thesis
Répartition de flux dans les réseaux de contenu, application à un contexte satellite (Flow distribution in content networks, with application to a satellite context).
Defended on September 2, 2021.
With the emergence of video-on-demand services such as Netflix, the use of streaming has exploded in recent years. The large volume of data generated forces network operators to define and use new solutions. These solutions, even if they remain based on the IP stack, try to bypass the point-to-point communication between two hosts (CDN, P2P, ...). In this thesis, we are interested in a new approach, Information Centric Networking (ICN), which seeks to deconstruct the IP model by focusing on the desired content. The user indicates to the network the content they wish to obtain, and the network takes care of retrieving it. Among the many architectures proposed in the literature, Named Data Networking (NDN) seems to us to be the most mature. For NDN to be a real opportunity for the Internet, it must offer a better Quality of Experience (QoE) to users while using network capacities efficiently. This is the core of this thesis: proposing a solution for NDN to manage user satisfaction. For content such as video, throughput is crucial, so we decided to maximize throughput in order to maximize QoE. The new opportunities offered by NDN, such as multipath forwarding and caching, have allowed us to redefine the notion of flow in this paradigm. With this definition, and the ability to perform processing on every node in the network, we decided to view the classic congestion control problem as finding a fair distribution of flows. For the users' QoE to be optimal, this distribution has to meet the demands as closely as possible. However, since network resources are not infinite, tradeoffs must be made. For this purpose, we use the Max-Min fairness criterion, which yields a Pareto equilibrium where the rate of one flow can only be increased at the expense of another, less privileged flow. The objective of this thesis was then to propose a solution to this newly formulated problem.
We thus designed Cooperative Congestion Control (CCC), a distributed solution that aims to distribute flows fairly across the network. It relies on the cooperation of every node: the users' needs are transmitted to the content providers, while the network constraints are re-evaluated locally and transmitted back to the users. The architecture of our solution is generic and is composed of several algorithms. We propose implementations of these algorithms and show that even if a Pareto equilibrium is reached, only local fairness is achieved; due to the lack of global information, the decisions made by the nodes are limited. We also tested our solution on topologies including satellite links (and therefore high delays). Because the emission of Interests is regulated by our solution, we show that, contrary to state-of-the-art solutions, these high delays have very little impact on the performance of CCC.
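The Max-Min criterion used above can be illustrated with the classical progressive-filling (water-filling) algorithm. This is a textbook sketch, independent of the CCC implementation; the link names and capacities are made up:

```python
def max_min_allocation(links, flows):
    """links: {name: capacity}; flows: {flow: [links it crosses]}.
    Returns the Max-Min fair rate of each flow (progressive filling):
    repeatedly saturate the most constrained link, freezing its flows."""
    cap = dict(links)
    active = {f: set(ls) for f, ls in flows.items()}
    rates = {}
    while active:
        # fair share each link could still give to its active flows
        share = {l: cap[l] / n for l in cap
                 if (n := sum(1 for ls in active.values() if l in ls))}
        bottleneck, r = min(share.items(), key=lambda kv: kv[1])
        for f in [f for f, ls in active.items() if bottleneck in ls]:
            rates[f] = r
            for l in active[f]:
                cap[l] -= r            # consume capacity on every crossed link
            del active[f]
        del cap[bottleneck]
    return rates

# "f2" shares the 5-unit satellite link, so it is capped at 2.5;
# "f1" then gets the rest of the 10-unit uplink.
rates = max_min_allocation({"up": 10.0, "sat": 5.0},
                           {"f1": ["up"], "f2": ["up", "sat"], "f3": ["sat"]})
```

Increasing any flow above its returned rate necessarily steals capacity from a flow with an equal or smaller rate, which is exactly the Pareto property mentioned in the abstract.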
Networks / Space Communication Systems
Thesis defense presentation
Répartition de flux dans les réseaux de contenu, application à un contexte satellite (Flow distribution in content networks, with application to a satellite context).
Defended on September 2, 2021.
Networks / Space Communication Systems
Conference paper
Robust Hypersphere Fitting from Noisy Data Using an EM Algorithm
In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.
This article studies a robust expectation maximization (EM) algorithm to solve the problem of hypersphere fitting. This algorithm relies on the introduction of random latent vectors having independent von Mises-Fisher distributions defined on the hypersphere and random latent vectors indicating the presence of potential outliers. This model leads to an inference problem that can be solved with a simple EM algorithm. The performance of the resulting robust hypersphere fitting algorithm is evaluated for circle and sphere fitting with promising results.
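The paper's von Mises-Fisher EM model is not reproduced here, but the underlying problem can be illustrated with a simpler robust alternative: an algebraic (Kåsa) circle fit wrapped in iteratively reweighted least squares, where Cauchy weights play the role of the latent outlier indicators. This is a minimal sketch; the function name and the weight choice are ours, not the paper's:

```python
import numpy as np

def robust_circle_fit(pts, n_iter=20):
    """Algebraic (Kasa) circle fit made robust to outliers by iteratively
    reweighted least squares with Cauchy weights.
    pts: (N, 2) array. Returns (center, radius)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])  # x^2+y^2 = 2ax+2by+c
    b = x**2 + y**2
    w = np.ones_like(x)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        sol, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        cx, cy = sol[0], sol[1]
        r = np.sqrt(sol[2] + cx**2 + cy**2)
        res = np.abs(np.hypot(x - cx, y - cy) - r)        # geometric residuals
        s = max(np.median(res), 1e-6)                     # robust scale
        w = 1.0 / (1.0 + (res / s) ** 2)                  # Cauchy down-weighting
    return np.array([cx, cy]), r

# 60 exact points on a circle of center (1, -2) and radius 3, plus 2 outliers
theta = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([1 + 3 * np.cos(theta), -2 + 3 * np.sin(theta)])
pts = np.vstack([pts, [[10.0, 10.0], [-9.0, 8.0]]])
center, radius = robust_circle_fit(pts)
```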
Signal and Image Processing / Earth Observation
Drowsiness Detection Using Joint EEG-ECG Data With Deep Learning
In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.
Drowsiness detection is still an open issue, especially when detection is based on physiological signals. In this context, light non-invasive modalities such as electroencephalography (EEG) are usually considered. EEG data provide information about the physiological brain state, directly linked to the drowsy state. Electrocardiogram (ECG) signals can also be considered, as they carry information related to the heart state. In this study, we propose a method for drowsiness detection using joint EEG and ECG data. The proposed method is based on a deep learning architecture involving convolutional neural networks (CNN) and recurrent neural networks (RNN). High performance is obtained, with accuracy scores up to 97% on the validation set. We also demonstrate that extending the proposed architecture with autoencoders helps compensate for the performance drop observed when analysing subjects whose data were not seen during the learning step.
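The paper's trained networks cannot be reproduced here, but the CNN-to-RNN data flow can be sketched with an untrained, pure-NumPy forward pass. All dimensions, channel counts and weights below are made up; this only illustrates the shape of such a pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """x: (channels, time); kernels: (out, channels, width). Valid conv + ReLU."""
    out, _, width = kernels.shape
    T = x.shape[1] - width + 1
    y = np.empty((out, T))
    for o in range(out):
        for t in range(T):
            y[o, t] = np.sum(kernels[o] * x[:, t:t + width])
    return np.maximum(y, 0.0)

def rnn(seq, Wx, Wh):
    """Simple tanh RNN over seq: (features, time). Returns last hidden state."""
    h = np.zeros(Wh.shape[0])
    for t in range(seq.shape[1]):
        h = np.tanh(Wx @ seq[:, t] + Wh @ h)
    return h

# Toy 5-second window: 4 EEG channels + 1 ECG channel at 128 Hz (made-up sizes)
x = rng.standard_normal((5, 640))
feats = conv1d(x, rng.standard_normal((8, 5, 16)) * 0.1)           # (8, 625)
h = rnn(feats, rng.standard_normal((16, 8)) * 0.1,
        rng.standard_normal((16, 16)) * 0.1)
p_drowsy = 1.0 / (1.0 + np.exp(-(rng.standard_normal(16) @ h)))    # sigmoid head
```

In the actual method the convolutional front-end learns spectral features from the raw signals, and the recurrent stage models their temporal evolution before the final binary decision.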
Signal and Image Processing / Other
Bayesian Estimation for the Parameters of the Bivariate Multifractal Spectrum
In Proc. 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, August 23-27, 2021.
Multifractal analysis is a reference tool for the analysis of data based on local regularity and has proven useful in an increasing number of applications involving univariate data (scalar-valued time series or single-channel images). Recently, the theoretical ground for a multivariate multifractal analysis has been explored, showing its potential for capturing and quantifying transient higher-order dependence beyond correlation among collections of data. Yet, the accurate estimation of the parameters associated with these multivariate multifractal models is challenging. Building on these first formulations of multivariate multifractal analysis, the present work proposes a Bayesian model and studies an estimation framework for the parameters of a quadratic model for the joint multifractal spectrum of bivariate time series. The approach relies on a novel joint Gaussian model for the logarithm of wavelet leaders and leverages a Whittle approximation and data augmentation for the matrix-valued parameters of interest. Monte Carlo simulations demonstrate the benefits of the method with respect to previous formulations. In particular, we obtain significant performance improvements at only moderately larger computational cost, for large ranges of sample size and multifractal parameter values.
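The Bayesian bivariate machinery is beyond a short example, but the basic object it refines — scaling exponents estimated by log-log regression across scales — can be sketched with classical univariate structure functions (increments rather than wavelet leaders; a toy illustration, not the paper's method):

```python
import numpy as np

def scaling_exponents(x, qs, scales):
    """Structure-function estimate of the scaling exponents zeta(q):
    S(q, s) = mean |x[t+s] - x[t]|^q  ~  s**zeta(q)."""
    logS = np.array([[np.log(np.mean(np.abs(x[s:] - x[:-s]) ** q))
                      for s in scales] for q in qs])
    logs = np.log(scales)
    # slope of log S(q, s) against log s, one least-squares fit per q
    return np.array([np.polyfit(logs, row, 1)[0] for row in logS])

# For an ordinary random walk (Hurst exponent H = 1/2), zeta(q) = q/2,
# i.e. a straight line: monofractal, no multifractality.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(2**16))
zeta = scaling_exponents(walk, qs=[1, 2], scales=[2, 4, 8, 16, 32])
```

Multifractal processes bend this line (a concave zeta(q)); the paper's contribution is a Bayesian estimator of the quadratic coefficients of the joint, bivariate version of this spectrum.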
Signal and Image Processing / Other
Journal article
Anomaly Detection and Classification in Multispectral Time Series based on Hidden Markov Models
IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-11, August, 2021.
Monitoring agriculture from satellite remote sensing data, such as multispectral images, has become a powerful tool since it has demonstrated a great potential for providing timely and accurate knowledge of crops. Detecting anomalies in time series of multispectral remote sensing images for crop monitoring is generally performed using a large sample of historical data at a pixel level. In contrast, this article presents a framework for anomaly detection (AD), localization, and classification that exploits the temporal information contained in a given season at a parcel level to detect and localize outliers using hidden Markov models (HMMs). Specifically, the AD part is based on the learning of HMM parameters associated with unlabeled normal data, which are used in a second step to detect abnormal crop parcels referred to as anomalies. The learned HMM can also be used in time segments to temporally localize the anomalies affecting the crop parcels. The detected and localized anomalies are finally classified using a supervised classifier, e.g., based on support vector machines. The proposed framework is applicable to images partially covered by clouds and can handle a set of crop parcels acquired in the same season, bypassing problems due to crop rotations. Numerical experiments are conducted on synthetic and real data, where the real data correspond to vegetation indices extracted from several multitemporal Sentinel-2 images of rapeseed crops. The proposed approach is compared to standard AD methods, yielding better detection rates with the advantage of allowing anomalies to be localized and characterized.
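The AD step described above hinges on scoring a sequence under an HMM learned from normal data. A minimal discrete-HMM likelihood scorer (scaled forward algorithm) can sketch the idea; all model numbers and the vegetation-index symbols below are made up for illustration:

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. pi: (S,), A: (S, S), B: (S, V)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict then update
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c                      # rescale to avoid underflow
    return ll

# Hypothetical 2-state model of a "normal" vegetation-index profile
# (symbols: 0 = low, 1 = medium, 2 = high index value)
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.1, 0.9]])
B = np.array([[0.70, 0.25, 0.05],       # state 0 mostly emits "low"
              [0.05, 0.25, 0.70]])      # state 1 mostly emits "high"
normal = [0, 0, 1, 1, 2, 2, 2, 2]       # plausible growth profile
odd = [2, 2, 0, 0, 2, 0, 2, 0]          # erratic profile
ll_normal = hmm_loglik(normal, pi, A, B)
ll_odd = hmm_loglik(odd, pi, A, B)
```

A parcel whose sequence scores well below the likelihoods of the normal training set would be flagged as an anomaly; segment-wise scores localize when the deviation occurs.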
Signal and Image Processing / Earth Observation
Conference paper
Sparse Representations and Dictionary Learning: from Image Fusion to Motion Estimation
In Proc. International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, July 12-16, 2021.
The first part of this paper presents some of the work conducted with José Bioucas-Dias on fusing high spectral resolution images (such as hyperspectral images) and high spatial resolution images (such as panchromatic or multispectral images) in order to build images with improved spectral and spatial resolutions. This work is related to Bayesian fusion strategies exploiting prior information about the target image to be recovered, constructed by dictionary learning. Interestingly, these Bayesian image fusion methods can be adapted with limited changes to motion estimation in pairs or sequences of images. The second part of this paper explains how the work of José Bioucas-Dias has been a source of inspiration for developing new Bayesian motion estimation methods for ultrasound images.
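The sparse-representation prior at the heart of these fusion methods can be illustrated with the classical Orthogonal Matching Pursuit algorithm, which expresses a signal as a combination of few dictionary atoms (a generic sketch, not the specific dictionaries or Bayesian estimators of the paper):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of dictionary D
    (columns, unit norm) to approximate y, refitting by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Synthetic check: a signal built from two known atoms is recovered exactly
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
y = 1.5 * D[:, 3] - 0.8 * D[:, 10]
x = omp(D, y, k=2)
```

In the fusion setting, patches of the target image are assumed sparse in a learned dictionary, and this sparsity acts as the prior in the Bayesian inversion.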
Signal and Image Processing / Other
Journal article
Automated Machine Health Monitoring at an Expert Level
Special issue of Acoustics Australia on Machine Condition Monitoring, vol. 49, pp. 185-197, June, 2021.
Machine health condition monitoring is a crucial challenge nowadays. Unscheduled breakdowns increase operating costs due to repairs and production losses, while scheduled maintenance implies taking the risk of replacing fully operational components. Human expertise provides outstanding analysis quality, but at a high cost and only for a limited quantity of data, the analysis being time-consuming. Industry 4.0 and the digital factory offer many alternatives to human monitoring; time, cost and skills are the real stakes. The key point is how to automate each part of the process, knowing that each one is valuable. Leaving aside scheduled maintenance, this paper deals with condition-based preventive maintenance and focuses on one fundamental step: signal processing. After a brief overview of this specific area, in which numerous technologies already exist, this paper argues for automated signal processing at an expert level. The objective is to monitor a system over days, weeks, or years with accuracy as great as that of a human expert, and even better in terms of data investigation and analysis efficiency. After a data validation step that is most often ignored, any multimodal signal (vibration, current, acoustic, ...) is processed over its entire frequency band in order to identify all harmonic families and their sidebands. Sophisticated processing such as filtering and demodulation creates relevant features describing the fine complex structures of each spectrum. Time-frequency feature tracking then constructs trends over time, not only to detect a failure but also to characterize and localize it. Such automated expert-level processing is a way to raise alarms with a reduced false-alarm probability.
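The demodulation step mentioned above is classically performed through the analytic signal. A minimal sketch (toy signal, FFT-based Hilbert transform; all parameters are made up) recovering a 30 Hz modulation riding on a 2 kHz carrier:

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert).
    Assumes len(x) is even."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0     # double positive frequencies
    h[n // 2] = 1.0       # keep Nyquist bin
    return np.abs(np.fft.ifft(X * h))

# Toy bearing-like signal: 2 kHz carrier, amplitude-modulated at 30 Hz
fs = 20000
t = np.arange(fs) / fs
x = (1 + 0.5 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 2000 * t)
env = envelope(x)
# the modulation frequency reappears as the strongest line of the envelope
spec = np.abs(np.fft.rfft(env - env.mean()))
f_mod = np.argmax(spec) * fs / len(x)
```

In rotating machinery, such envelope lines (and their sidebands around harmonic families) are exactly the features that reveal and localize a developing fault.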
Signal and Image Processing / Other
Conference paper
SmartCoop Algorithm : Improving Smartphone Position Accuracy and Reliability via Collaborative Positioning
In Proc. International Conference on Localization and GNSS (ICL-GNSS), Tampere, Finland, June 1-3, 2021.
In recent years, our society has been preparing for a paradigm shift toward the hyper-connectivity of urban areas. This highly anticipated rise of connected smart city centers is led by the development of the low-cost connected smartphones owned by each one of us. In this context, the demand for low-cost, high-precision localization solutions is driven by the development of novel autonomous systems. A collaborative network can take advantage of the large number of connected devices in today's city centers. This paper validates the positioning performance increase of low-cost Android smartphones participating in a collaborative network. The assessment is made on both simulated and collected smartphone GNSS raw measurements. We propose a collaborative method based on the estimation of distances between mobile users of the network, used in a SMARTphone COOPerative Positioning algorithm (SmartCoop). Previous analyses of smartphone data allow us to generate simulated data for testing our cooperative engine in nominal conditions. The evaluation and analysis of this innovative method show a significant increase in the accuracy and reliability of smartphone positioning. Position accuracy improves by more than 3 m, on average, for all smartphones within the collaborative network.
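The principle behind collaborative positioning — tightening single-device fixes with inter-device ranges — can be sketched as a toy least-squares problem (gradient descent on a two-phone, one-range example; all numbers, the cost weights and the function name are ours, not the paper's algorithm):

```python
import numpy as np

def refine(gnss, ranges, lam=10.0, lr=0.01, n_iter=5000):
    """gnss: (N, 2) single-device position fixes; ranges: {(i, j): distance}.
    Minimizes  sum ||p_i - gnss_i||^2 + lam * sum (||p_i - p_j|| - r_ij)^2
    by gradient descent; returns refined positions (N, 2)."""
    p = gnss.astype(float).copy()
    for _ in range(n_iter):
        grad = 2.0 * (p - gnss)                    # pull toward own GNSS fix
        for (i, j), r in ranges.items():
            d = np.linalg.norm(p[i] - p[j])
            u = (p[i] - p[j]) / (d + 1e-12)        # unit vector i -> j
            g = 2.0 * lam * (d - r) * u            # push/pull to match range
            grad[i] += g
            grad[j] -= g
        p -= lr * grad
    return p

# True positions (0,0) and (10,0); GNSS fixes biased inward; exact 10 m range
gnss = np.array([[1.0, 0.0], [8.0, 0.0]])
p = refine(gnss, {(0, 1): 10.0})
err = np.linalg.norm(p[0] - [0.0, 0.0]) + np.linalg.norm(p[1] - [10.0, 0.0])
```

Even this toy example shows the mechanism: the inter-device range contradicts the biased fixes and pushes both estimates back toward their true positions (total error drops from 3 m to about 1 m).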
Digital Communications / Localization and Navigation
ADDRESS
7 boulevard de la Gare
31500 Toulouse
France