Research
Conference paper
Ultrasound and Magnetic Resonance Image Fusion using a Patch-Wise Polynomial Model
In Proc. International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, October 25-28, 2020.
This paper introduces a novel algorithm for the fusion of magnetic resonance and ultrasound images, based on a patch-wise polynomial model relating the gray levels of the two imaging systems (called modalities). Starting from observation models adapted to each modality and exploiting a patch-wise polynomial model, the fusion problem is expressed as the minimization of a cost function including two data fidelity terms and two regularizations. This minimization is performed using a PALM-based algorithm, given its ability to handle nonlinear and possibly non-convex functions. The efficiency of the proposed method is evaluated on phantom data. The resulting fused image is shown to contain complementary information from both magnetic resonance (MR) and ultrasound (US) images, i.e., with a good contrast (as for the MR image) and a good spatial resolution (as for the US image).
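The patch-wise polynomial idea above can be illustrated with a minimal least-squares sketch; the patch size, polynomial degree, and the use of `np.polyfit` are our own illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fit_patch_polynomials(img_a, img_b, patch=8, degree=2):
    """For each non-overlapping patch, least-squares fit a polynomial
    mapping the gray levels of img_a to those of img_b."""
    coeffs = {}
    h, w = img_a.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            a = img_a[i:i + patch, j:j + patch].ravel()
            b = img_b[i:i + patch, j:j + patch].ravel()
            coeffs[(i, j)] = np.polyfit(a, b, degree)
    return coeffs

# Toy check: when img_b is exactly a quadratic function of img_a,
# the per-patch fit recovers the polynomial coefficients.
rng = np.random.default_rng(0)
img_a = rng.uniform(0.0, 1.0, (8, 8))
img_b = 0.5 * img_a**2 + 0.2 * img_a + 0.1
c = fit_patch_polynomials(img_a, img_b)[(0, 0)]
```

In the paper this model enters the fusion cost function as a data-fidelity term; the sketch only shows how a patch-wise gray-level polynomial can be estimated.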
Signal and image processing / Other
QUIC: Opportunities and threats in SATCOM
In Proc. Advanced Satellite Multimedia Systems (ASMS), Graz, Austria, October 20-21, 2020.
This article proposes a discussion of the strengths, weaknesses, opportunities and threats related to the end-to-end deployment of QUIC from a satellite-operator point of view. The deployment of QUIC is an opportunity to improve the quality of experience when exploiting satellite broadband accesses. Indeed, the fast establishment of secure connections reduces the transmission time of short files. Moreover, removing transport-layer performance enhancing proxies reduces the cost of network infrastructures and improves the integration of satellite systems. However, the congestion and flow controls at the end points are not always suitable for satellite communications due to the intrinsically high bandwidth-delay product. Further acceptance of QUIC in satellite systems would be guaranteed if its performance in specific use cases were increased. We propose running code for an IETF document; based on an emulated platform and on open-source software, this paper proposes values for performance metrics as one piece of the puzzle. The final performance objective requires consensus among the different actors. The objective should be challenging enough for satellite operators to allow QUIC traffic, but reasonable enough to keep QUIC deployable on the Internet.
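The bandwidth-delay product mentioned above can be quantified with a toy computation; the 50 Mbit/s and 600 ms GEO figures below are illustrative assumptions, not values from the paper:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: the amount of in-flight data needed
    to keep the link fully utilized."""
    return bandwidth_bps * rtt_s / 8.0

# Assumed figures: a 50 Mbit/s access link with a ~600 ms GEO
# round-trip time versus a 30 ms terrestrial path.
geo_bdp = bdp_bytes(50e6, 0.600)          # ~3.75 MB in flight
terrestrial_bdp = bdp_bytes(50e6, 0.030)  # ~187.5 kB in flight
```

The GEO link needs roughly twenty times more data in flight than the terrestrial one, which is why default congestion and flow-control windows tuned for terrestrial paths can underutilize a satellite link.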
Networks / Space communication systems
Improving the estimation of the sea level anomaly slope
In Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), Hawaii, USA, September 26 - October 2, 2020.
Satellite altimeters provide sea level measurements along the satellite track. A mean profile, based on measurements averaged over a time period, is then subtracted to estimate the sea level anomaly (SLA). In the spectral domain, the SLA is characterized by a power spectral density proportional to the inverse of a power of the frequency, where the exponent (the slope) is a parameter of great interest for ocean monitoring. However, this information lies in a narrow band located at very low frequencies, which calls for specific spectral analysis methods. This paper studies a new parametric method based on an autoregressive model combined with a warping of the frequency scale (denoted ARWARP). A statistical validation is proposed on simulated SLA signals, showing the slope estimation performance of the ARWARP spectral estimator compared to classical Fourier-based methods. Application to Sentinel-3 real data highlights the main advantage of the ARWARP model: it makes SLA slope estimation possible on a short signal segment, i.e., with a high spatial resolution.
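The classical Fourier-based baseline that ARWARP is compared against can be sketched as a log-log fit of the periodogram; the synthetic power-law signal and its parameters below are our own illustrative choices, not the ARWARP method itself:

```python
import numpy as np

def synth_power_law(n, slope, rng):
    """Synthesize a real signal whose spectrum follows 1/f**slope."""
    k = np.arange(1, n // 2 + 1)
    amp = k ** (-slope / 2.0)
    phase = rng.uniform(0.0, 2 * np.pi, k.size)
    phase[-1] = 0.0  # keep the Nyquist bin real so the spectrum is exact
    spec = np.concatenate(([0.0], amp * np.exp(1j * phase)))
    return np.fft.irfft(spec, n)

def spectral_slope(x):
    """Classical Fourier estimate: linear fit of log-PSD vs log-frequency."""
    psd = np.abs(np.fft.rfft(x)[1:]) ** 2
    k = np.arange(1, psd.size + 1)
    return -np.polyfit(np.log(k), np.log(psd), 1)[0]

rng = np.random.default_rng(1)
x = synth_power_law(1024, 2.0, rng)
est = spectral_slope(x)
```

On noisy, short segments this periodogram fit degrades quickly, which is the limitation the parametric ARWARP estimator addresses.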
Signal and image processing / Earth observation
Cooperative Congestion Control in NDN
In Proc. 5ème Rencontres Francophones sur la Conception de Protocoles, l'Evaluation de Performance et l'Expérimentation des Réseaux de Communication (CoRes), Lyon, France, September 28 - October 2, 2020.
Named Data Networking (NDN) is one of the Information Centric Networking (ICN) architectures. To retrieve content, it uses multiple sources, multiple paths and opportunistic caches on routers. These properties provide new opportunities to improve the Quality of Experience (QoE) of end users. However, managing several flows, each of which may use multiple paths, is very complex. The objective of our work is to provide a framework for this problem by defining three principles that a solution for managing these flows should take into account. Nodes should cooperate, supervise their transmission queues and intelligently manage the multi-path capabilities of NDN. These three elements are at the heart of our proposal: Cooperative Congestion Control (CCC). More than a single solution, CCC is a modular framework where each principle can be implemented in multiple ways. The final objective is to distribute flows fairly over the network and to maximize the QoE of users. We evaluate CCC by simulation with ndnSIM and then compare it with the solutions proposed in the state of the art.
Networks / Other
Journal article
New multiplexing method to add a new signal in the Galileo E1 band
IET Radar, Sonar & Navigation, E-First, September 2020 (Print ISSN 1751-8784, Online ISSN 1751-8792).
This work addresses the problem of integrating a new signal in the Galileo E1 band. The question that arises is how the existing multiplexing methods can be efficiently used or modified to integrate a new binary signal in the Galileo E1 band alongside the existing Galileo E1 signals. To this end, in this study, the authors first select three efficient multiplexing methods from the state of the art (i.e. the interplexing, POCET and CEMIC methods) to multiplex a new Galileo signal along with the Galileo E1 legacy signals in a constant envelope modulation, and evaluate their performance, main advantages and drawbacks. Secondly, in order to improve both the performance and the flexibility/adaptability of the multiplexing method, a modified CEMIC method, called ACEMIC, is proposed. This method makes it possible to design modulations that maximise the power efficiency with respect to a given peak-to-average-power-ratio constraint. Finally, the authors compare the previous multiplexing methods in terms of power signal distribution, envelope fluctuation and power efficiency.
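The peak-to-average-power-ratio constraint discussed above can be illustrated with a toy comparison between a constant-envelope signal and a naive linear sum of components; the waveforms below are generic illustrations, not the actual Galileo E1 multiplexes:

```python
import numpy as np

def papr_db(s):
    """Peak-to-average power ratio of complex baseband samples, in dB."""
    p = np.abs(s) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(2)
# Constant-envelope multiplex: all components are carried in the phase,
# so the instantaneous power never fluctuates (PAPR = 0 dB).
const_env = np.exp(1j * rng.uniform(0.0, 2 * np.pi, 1000))
# Naive linear sum of two antipodal components: the envelope fluctuates,
# which is inefficient for a saturated on-board power amplifier.
linear_sum = rng.choice([-1.0, 1.0], 1000) + 0.5 * rng.choice([-1.0, 1.0], 1000)
```

Methods such as POCET, CEMIC and the proposed ACEMIC aim at keeping the multiplex close to the 0 dB case while preserving the power allocated to each useful component.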
Signal and image processing / Localization and navigation and Space communication systems
Conference paper
An Assessment Methodology of Smartphones Positioning Performance for Collaborative Scenarios in Urban Environment
In Proc. ION GNSS+, St Louis, Missouri, USA, September 21-25, 2020.
The release of Android Global Navigation Satellite Systems (GNSS) raw measurements in late 2016 unlocked access to smartphones' embedded positioning chipset capabilities for developers and the scientific community. This groundbreaking announcement was followed by technical innovations, made by smartphone brands and chipset manufacturers, aiming to offer the world's most precise smartphone on the market. In recent years, several studies investigated the development of advanced positioning techniques (e.g. Precise Point Positioning (PPP), Real-Time Kinematic (RTK)) using Android raw data measurements. However, most studies drew their conclusions based on one smartphone brand and model in optimal open-sky conditions, despite the fact that most smartphone-based positioning activities take place in urban and sub-urban areas. In order to overcome urban smartphone-based positioning issues, we aim to develop a collaborative user network taking advantage of the tremendous number of connected Android devices in today's busy city centers. A thorough study was conducted in the city center of Toulouse, France, to characterize smartphone positioning performance in both nominal and urban conditions. Various limiting factors were exposed during our data collection campaign. Nevertheless, the investigation conducted on Android GNSS raw measurements uncovered smartphone positioning potential for navigation applications in constrained environments. An assessment methodology was implemented in order to identify, characterize and compare smartphones' positioning performance. A classification of key parameters was determined focusing on the implementation of collaborative algorithms, revealing the attributes and components for smartphone-based collaborative methods. Thereafter, a comprehensive state-of-the-art review of existing cooperative positioning techniques was carried out.
The feasibility and applicability of those methods in the smartphone domain were evaluated. We present a method based on simple assumptions, without third-party equipment or data, relying only on the combination of smartphones' own data. Our cooperative network can be described as a low-cost embedded structure aiming at providing positioning assistance to its users.
Digital communications / Localization and navigation
Hybrid Navigation Filters Performances Between GPS, Galileo and 5G TOA Measurements in Multipath Environment
In Proc. ION GNSS+, St Louis, Missouri, USA, September 21-25, 2020.
In this paper, the performance of different hybrid navigation filters exploiting GPS, Galileo and 5G Time Of Arrival (TOA) measurements in a multipath environment is studied. For the study to be realistic, realistic propagation channels must be considered and their impact on the processing of the received signals must be accurately modelled. GNSS signal mathematical models in multipath environments have been analyzed for a long time. However, 5G mathematical models in a realistic multipath environment are still at an early stage of analysis. This article is divided into three main parts. The first part is dedicated to the identification of compliant GNSS and 5G signal propagation channel models; SCHUN is selected for GNSS and QuaDRiGa for 5G. Based on this, the correlator output mathematical models for 5G and GNSS signals are derived. The second part tackles the accurate characterization of the pseudo-range errors due to channel shadowing and multipath effects as well as thermal noise. This step is required for the correct derivation of the navigation filters. Indeed, the study focuses on Extended Kalman Filters (EKF) and Unscented Kalman Filters (UKF), both of which assume a Gaussian distribution of the errors. Therefore, by optimally characterizing the errors, the performance of the filters is expected to improve. The last part consists in validating through simulations the theory and mathematical models developed in the first two parts.
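The EKF measurement update for TOA-type observations can be sketched on a 2-D toy problem; the anchor geometry, noise level and the iterated update below are our own simplifying assumptions, not the paper's GPS/Galileo/5G models:

```python
import numpy as np

def ekf_toa_update(x, P, anchors, ranges, sigma):
    """One EKF measurement update of a 2-D position state from
    time-of-arrival ranges to known anchors (Gaussian errors assumed)."""
    pred = np.linalg.norm(anchors - x, axis=1)      # predicted ranges h(x)
    H = (x - anchors) / pred[:, None]               # Jacobian of h at x
    S = H @ P @ H.T + sigma**2 * np.eye(len(anchors))
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ (ranges - pred), (np.eye(2) - K @ H) @ P

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 40.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free for the check
x = np.array([50.0, 50.0])
for _ in range(5):  # re-linearizing with a large prior acts like Gauss-Newton
    x, _ = ekf_toa_update(x, 100.0 * np.eye(2), anchors, ranges, sigma=1.0)
```

Both the EKF and UKF rely on the Gaussian error assumption visible in the `sigma**2 * I` covariance term, which is why the paper's accurate characterization of pseudo-range error statistics matters.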
Signal and image processing / Localization and navigation
On the Time-Delay Estimation Accuracy Limit of GNSS Meta-Signals
In Proc. Intelligent Transportation Systems Conference (IEEE/ITSC), Rhodes, Greece, September 20-23, 2020.
In standard two-step Global Navigation Satellite Systems (GNSS) receiver architectures, the precision of the position, velocity and time estimates is driven by the precision of the intermediate parameters, i.e., delays and Dopplers. The estimation of the time-delay is in turn driven by the baseband signal resolution, that is, by the type of broadcast signals. Among the different GNSS signals available, the so-called AltBOC modulated signal, appearing in the Galileo E5 band and in the new GNSS meta-signal concept, is the one that may provide the best time-delay precision. In order to meet the constraints of safety-critical applications such as Intelligent Transportation Systems or automated aircraft landing, it is fundamental to know the ultimate code-based precision achievable by standalone GNSS receivers. The main goal of this contribution is to assess the time-delay precision of AltBOC-type signals. The analysis is performed by resorting to a new compact closed-form Cramér-Rao bound expression for time-delay estimation which only depends on the signal samples. In addition, the corresponding time-delay maximum likelihood estimate is also provided, to assess the minimum signal-to-noise ratio that allows optimal receiver operation.
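The link between signal bandwidth and time-delay precision can be illustrated with the classical textbook Cramér-Rao bound (var(τ) ≥ 1 / (8π²·SNR·β²), with β the RMS bandwidth); this is the standard expression, not the paper's new compact closed-form, and the pulse shapes and sample rate below are arbitrary illustrative choices:

```python
import numpy as np

def rms_bandwidth(s, fs):
    """Gabor (RMS) bandwidth, in Hz, computed from the signal samples."""
    S2 = np.abs(np.fft.fft(s)) ** 2
    f = np.fft.fftfreq(s.size, 1.0 / fs)
    return np.sqrt(np.sum(f**2 * S2) / np.sum(S2))

def crb_delay(s, fs, snr):
    """Textbook CRB on the time-delay variance of a known signal in AWGN."""
    return 1.0 / (8.0 * np.pi**2 * snr * rms_bandwidth(s, fs) ** 2)

fs = 60e6
t = (np.arange(4096) - 2048) / fs
narrow = np.sinc(1e6 * t)   # ~1 MHz pulse (narrowband GNSS code analogue)
wide = np.sinc(10e6 * t)    # ~10 MHz pulse (wideband AltBOC-like analogue)
```

The tenfold increase in RMS bandwidth yields roughly a hundredfold reduction of the delay variance bound, which is the intuition behind the superior precision of wideband AltBOC/meta-signal processing.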
Signal and image processing / Localization and navigation and Space communication systems
Simplified Entropy Model for Reduced-Complexity End-to-End Variational Autoencoder with Application to On-Board Satellite Image Compression
In Proc. 7th International Workshop on On-Board Payload Data Compression (OBPDC), Online Event, September 21-23, 2020.
In recent years, neural networks have emerged as data-driven tools to solve problems which were previously addressed with model-based methods. In particular, image processing has been largely impacted by convolutional neural networks (CNNs). Recently, CNN-based auto-encoders have been successfully employed for lossy image compression [1,2,3,4]. These end-to-end optimized architectures are able to dramatically outperform traditional compression schemes in terms of rate-distortion trade-off. The auto-encoder is composed of an encoder and a decoder, both learned from the data. The encoder is applied to the input data to produce a latent representation with minimum entropy after quantization. The latent representation, derived through several convolutional layers composed of filters and activation functions, is multi-channel (the output of a particular filter is called a channel or a feature) and non-linear. The representation is then quantized to produce a discrete-valued vector. A standard entropy coding method uses the entropy model inferred from the representation to losslessly compress this discrete-valued vector. A key element of these frameworks is the entropy model. In earlier works [1,2,3], the learned representation was assumed independent and identically distributed within each channel and the channels were assumed independent of each other, resulting in a fully-factorized entropy model. Moreover, a fixed entropy model was learned once, from the training set, preventing any adaptation to the input image during the operational phase. The variational auto-encoder proposed in [4] instead relies on a hyperprior auxiliary network. This network estimates the hyper-parameters of the representation distribution for each input image. Thus, it does not require the assumption of a fully-factorized model, which conflicts with the need for context modeling.
This variational auto-encoder achieves compression performance close to that of BPG (Better Portable Graphics), at the expense of a considerable increase in complexity. However, in the context of on-board compression, a trade-off between compression performance and complexity has to be considered to take into account the strong computational constraints. For this reason, the CCSDS (Consultative Committee for Space Data Systems) lossy compression standard was designed as a highly simplified version of JPEG2000. This work follows the same logic, but in the context of learned image compression. The aim of this paper is to design a simplified version of the variational auto-encoder proposed in [4] in order to meet the on-board constraints in terms of complexity while preserving high rate-distortion performance. Apart from straightforward simplifications of the transform (e.g. a reduction of the number of filters in the convolutional layers), we mainly propose a simplified entropy model that preserves the adaptability to the input image. A preliminary reduction of the number of filters reduces the complexity by 62% in terms of FLOPs with respect to [4]. It also reduces the number of learned parameters, with a positive impact on the memory occupancy. The entropy model simplification exploits a statistical analysis of the learned representation for satellite images, also performed in [5] for natural images. This analysis reveals that most of the features are well fitted by centered Laplacian distributions. The complex hyperprior model based on a non-parametric distribution of [4] can thus be replaced by a simpler parametric centered Laplacian model. The problem then amounts to a classical and simple estimation of a single parameter referred to as the scale. Our simplified entropy model reduces the complexity of the variational auto-encoder coding part by 22% and outperforms the end-to-end model proposed in [1] at high target rates.
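The centered Laplacian entropy model boils down to estimating one scale per feature and evaluating the probability of each integer quantization bin; the sketch below is a minimal illustration with synthetic latents (the scale value and sample count are assumptions, not figures from the paper):

```python
import numpy as np

def laplace_cdf(x, b):
    """CDF of a centered Laplacian distribution with scale b."""
    return np.where(x < 0, 0.5 * np.exp(x / b), 1.0 - 0.5 * np.exp(-x / b))

def rate_bits(q, b):
    """Code length of integer-quantized latents under a centered
    Laplacian entropy model of scale b (probability of each unit bin)."""
    p = laplace_cdf(q + 0.5, b) - laplace_cdf(q - 0.5, b)
    return float(-np.sum(np.log2(np.maximum(p, 1e-12))))

rng = np.random.default_rng(3)
latent = rng.laplace(0.0, 2.0, 10_000)   # stand-in for one learned feature
b_hat = np.abs(latent).mean()            # ML estimate of the scale
bits_per_sample = rate_bits(np.round(latent), b_hat) / latent.size
```

Estimating `b_hat` per feature and per image is what preserves the adaptability of the entropy model while discarding the heavy non-parametric hyperprior.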
Signal and image processing / Earth observation
Journal article
Amplitude and Phase Interaction in Hilbert Demodulation of Vibration Signals: Natural Gear Wear Modeling and Time Tracking for Condition Monitoring
Mechanical Systems and Signal Processing, Elsevier, vol. 150, 2021.
In the context of the automatic and preventive condition monitoring of rotating machines, this paper revisits the demodulation process essential for detecting and localizing cracks in gears and bearings. The objective of the paper is to evaluate the performance of the well-known Hilbert demodulation by providing a quantified assessment in terms of signal processing. For this purpose, vibration test signals are simulated, guided by the analysis of real-world measurements. The database comes from a natural wear experiment on a test bench at an industrial scale, without any fault initiation. In the proposed simulation model, the amplitude modulation is designed following a physical approach, making it possible to set the number of faulty teeth and their locations. The impact of filtering over a limited spectral bandwidth is quantified not only for the amplitude but also for the phase modulation estimates. The interactions between the amplitude and phase estimates are discussed. A focus is placed on the ambiguity of the analytic signal due to the non-uniqueness of the amplitude estimate. This property motivates an original investigation when demodulating the residual generated after time synchronous averaging. Finally, as the objective is the continuous surveillance of a machine, results are given for a sequence of real-world measurements in order to visualize the fault evolution through the demodulation process.
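Hilbert demodulation of an amplitude-modulated vibration can be sketched via the FFT-based analytic signal; the carrier and modulation frequencies below are arbitrary illustrative choices, not values from the paper's test bench:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: keep DC and Nyquist, double the
    positive frequencies, zero the negative ones (even-length input)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(X * h)

fs, n = 1000, 1000
t = np.arange(n) / fs
m = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)   # amplitude modulation (fault signature)
x = m * np.cos(2 * np.pi * 100 * t)         # modulated carrier (mesh-frequency analogue)
envelope = np.abs(analytic_signal(x))       # Hilbert demodulation of the amplitude
phase = np.unwrap(np.angle(analytic_signal(x)))
```

The envelope recovers the modulation exactly here because the modulation is strictly positive and band-limited below the carrier; the non-uniqueness discussed in the paper arises precisely when these conditions fail.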
Signal and image processing / Other