## Research

###### Seminar

## One, Two or Many Frequencies: Synchrosqueezing, EMD and Multicomponent Signal Analysis

**Seminar of TeSA, Toulouse, November 23, 2021.**

Many signals from the physical world, e.g. speech or physiological recordings, can be modelled as a sum of amplitude- and frequency-modulated (AM/FM) waves, often called modes. In the last few decades, there has been increasing interest in designing new, accurate representations and processing methods for this type of signal. Consequently, the retrieval of the components (or modes) of a multicomponent signal is a central issue in many audio processing problems. The most commonly used techniques to carry out this retrieval are time-frequency or time-scale signal representations. For the former, spectrogram reassignment techniques, reconstruction based on minimization of the ambiguity function associated with the Wigner-Ville distribution, synchrosqueezing using the short-time Fourier transform, and Fourier ridges have all been used successfully. For the latter, i.e. time-scale representations, wavelet ridges have also proven very efficient; here the emphasis is on the importance of the wavelet choice with regard to the ridge representation. Synchrosqueezing techniques have also been developed within the wavelet framework. In this talk, I will discuss empirical mode decomposition (EMD) and synchrosqueezing methods and their use in the analysis of multicomponent signals.
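The AM/FM signal model underlying the talk can be made concrete with a small sketch (illustrative only; the mode amplitudes and phases below are made-up examples, not taken from the talk):

```python
import math

def multicomponent_signal(t, modes):
    """Sum of AM/FM modes: each mode is a pair (amplitude a(t), phase phi(t))
    contributing a(t) * cos(2*pi*phi(t))."""
    return sum(a(t) * math.cos(2 * math.pi * phi(t)) for a, phi in modes)

# Two made-up modes: a constant 5 Hz tone and a slowly growing linear chirp.
modes = [
    (lambda t: 1.0, lambda t: 5.0 * t),
    (lambda t: 0.5 + 0.1 * t, lambda t: 20.0 * t + 4.0 * t ** 2),
]
fs = 1000  # sampling rate (Hz)
samples = [multicomponent_signal(n / fs, modes) for n in range(fs)]
```

Mode retrieval methods such as EMD or synchrosqueezing aim to recover the individual `a(t)` and `phi(t)` pairs from `samples` alone.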

Signal and image processing / Other

###### Journal article

## How to Introduce Expert Feedback in One-Class Support Vector Machines for Anomaly Detection?

**Signal Processing, vol. 188, pp. 108197, November 2021.**

Anomaly detection consists of detecting elements of a database that differ from the majority of normal data. Most anomaly detection algorithms consider unlabeled datasets. However, in some applications, labels associated with a subset of the database (coming for instance from expert feedback) are available, providing useful information for designing the anomaly detector. This paper studies a semi-supervised anomaly detector based on support vector machines, which takes the best of existing supervised and unsupervised support vector machine algorithms. The proposed algorithm allows the maximum proportion of vectors detected as anomalies and the maximum proportion of errors in the supervised data to be controlled, through two hyperparameters defining these proportions. Simulations conducted on various benchmark datasets show the benefits of the proposed semi-supervised anomaly detection method.
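As an illustration of how two hyperparameters can cap the two proportions mentioned above, here is a hypothetical toy threshold-selection rule on precomputed anomaly scores (not the paper's one-class SVM formulation; `nu` caps the flagged fraction, `mu` the error rate on the labelled normal points):

```python
def pick_threshold(scores, labels, nu, mu):
    """Pick a decision threshold on anomaly scores (higher = more anomalous)
    so that at most a fraction nu of all points is flagged, and at most a
    fraction mu of the labelled normal points is flagged by mistake.
    labels: dict index -> 0 (normal) or 1 (anomaly), for the supervised subset."""
    n = len(scores)
    normals = [i for i, y in labels.items() if y == 0]
    best = float("inf")  # flag nothing if even the top score violates a cap
    for thr in sorted(scores, reverse=True):
        flagged = sum(1 for s in scores if s >= thr)
        errors = sum(1 for i in normals if scores[i] >= thr)
        if flagged / n > nu or (normals and errors / len(normals) > mu):
            break
        best = thr
    return best

# Toy example: six points, three of them labelled by an expert.
thr = pick_threshold([0.1, 0.2, 0.9, 0.8, 0.15, 0.3],
                     {0: 0, 1: 0, 2: 1}, nu=0.5, mu=0.0)
```

In the paper these proportions are controlled inside the SVM optimization problem itself rather than by post-hoc thresholding.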

Signal and image processing / Space communication systems

## Generalized Isolation Forest for Anomaly Detection

**Pattern Recognition Letters, vol. 149, pp. 109-119, September 2021.**

This letter introduces a generalization of Isolation Forest (IF) based on the existing Extended IF (EIF). EIF has shown advantages over IF, for instance being more robust to certain artefacts. However, some information can be lost when computing the EIF trees, since the sampled threshold might lead to empty branches. This letter introduces a generalized isolation forest algorithm, called Generalized IF (GIF), to overcome these issues. GIF is faster than EIF with similar performance, as shown in several simulation results associated with reference databases used for anomaly detection.
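The isolation principle behind IF and its variants can be sketched in a few lines (a minimal illustration of plain axis-aligned isolation, not the EIF or GIF splitting rules):

```python
import random

def path_length(x, data, rng, depth=0, max_depth=10):
    """Depth at which x is isolated from `data` by random axis-aligned
    splits -- the core idea of isolation forests: anomalies isolate early."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    dim = rng.randrange(len(x))
    lo = min(p[dim] for p in data)
    hi = max(p[dim] for p in data)
    if lo == hi:
        return depth
    thr = rng.uniform(lo, hi)
    same_side = [p for p in data if (p[dim] < thr) == (x[dim] < thr)]
    return path_length(x, same_side, rng, depth + 1, max_depth)

# A tight grid of inliers plus one far-away outlier.
rng = random.Random(0)
data = [(i * 0.1, j * 0.1) for i in range(5) for j in range(5)] + [(10.0, 10.0)]
avg = lambda x: sum(path_length(x, data, rng) for _ in range(200)) / 200
avg_outlier, avg_inlier = avg((10.0, 10.0)), avg((0.2, 0.2))
# The outlier is isolated after far fewer splits on average.
```

EIF replaces the axis-aligned cut by a random hyperplane; GIF addresses the empty branches that the sampled threshold can produce in EIF.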

Signal and image processing / Space communication systems

## Randomized rounding algorithms for large scale unsplittable flow problems

**Journal of Heuristics, Springer, September 2021.**

Unsplittable flow problems cover a wide range of telecommunication and transportation problems, and their efficient resolution is key to a number of applications. In this work, we study algorithms that can scale up to large graphs and large numbers of commodities. We present and analyze in detail a heuristic based on the linear relaxation of the problem and randomized rounding. We provide empirical evidence that this approach is competitive with state-of-the-art resolution methods, either in its scaling performance or in the quality of its solutions. We provide a variation of the heuristic which has the same approximation factor as the state-of-the-art approximation algorithm. We also derive a tighter analysis of the approximation factor of both the variation and the state-of-the-art algorithm. We introduce a new objective function for the unsplittable flow problem and discuss its differences with the classical congestion objective function. Finally, we discuss the gap in practical performance and theoretical guarantees between all the aforementioned algorithms.
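The core randomized-rounding step can be sketched as follows (an illustrative toy, assuming the LP relaxation has already produced fractional flows on a small set of candidate paths per commodity; the graph and names are made up, and this is not the paper's full heuristic):

```python
import random

def randomized_rounding(frac_paths, rng):
    """For each commodity, pick one path with probability proportional to
    the fractional flow that the LP relaxation puts on it."""
    return {k: rng.choices([p for p, _ in paths],
                           weights=[w for _, w in paths])[0]
            for k, paths in frac_paths.items()}

def edge_loads(chosen, demand):
    """Total demand routed through each edge once every commodity is
    committed to a single path."""
    loads = {}
    for k, path in chosen.items():
        for edge in zip(path, path[1:]):
            loads[edge] = loads.get(edge, 0) + demand[k]
    return loads

# Hypothetical fractional solution on a tiny graph.
frac_paths = {
    "c1": [(("s", "a", "t"), 1.0)],
    "c2": [(("s", "a", "t"), 0.7), (("s", "b", "t"), 0.3)],
}
demand = {"c1": 2, "c2": 1}
chosen = randomized_rounding(frac_paths, random.Random(42))
loads = edge_loads(chosen, demand)
```

The resulting edge loads are then compared to capacities; the analysis in the paper bounds the congestion this rounding can cause.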

Networks / Space communication systems

###### Doctoral thesis

## Répartition de flux dans les réseaux de contenu, application à un contexte satellite.

**Defended on September 2, 2021.**

With the emergence of video-on-demand services such as Netflix, the use of streaming has exploded in recent years. The large volume of data generated forces network operators to define and use new solutions. These solutions, even if they remain based on the IP stack, try to bypass the point-to-point communication between two hosts (CDN, P2P, ...). In this thesis, we are interested in a new approach, Information Centric Networking (ICN), which seeks to deconstruct the IP model by focusing on the desired content: the user indicates to the network that they wish to obtain a piece of content, and the network takes care of retrieving it. Among the many architectures proposed in the literature, Named Data Networking (NDN) seems to us to be the most mature. For NDN to be a real opportunity for the Internet, it must offer a better Quality of Experience (QoE) to users while using network capacities efficiently. This is the core of this thesis: proposing a solution for NDN that manages user satisfaction. For content such as video, throughput is crucial, which is why we decided to maximize throughput in order to maximize QoE. The new opportunities offered by NDN, such as multipath forwarding and caching, allowed us to redefine the notion of flow in this paradigm. With this definition, and the ability to perform processing on every node of the network, we decided to view the classic congestion control problem as finding a fair distribution of flows. For the users' QoE to be optimal, this distribution has to meet the demands as well as possible. However, since network resources are not infinite, tradeoffs must be made. For this purpose, we use the Max-Min fairness criterion, which yields a Pareto equilibrium in which the rate of one flow can only be increased at the expense of a less privileged flow. The objective of this thesis was then to propose a solution to the newly formulated problem.
We thus designed Cooperative Congestion Control (CCC), a distributed solution aiming to distribute flows fairly over the network. It is based on the cooperation of all nodes: the users' needs are transmitted to the content providers, and the network constraints are re-evaluated locally and transmitted back to the users. The architecture of our solution is generic and is composed of several algorithms. We propose implementations of these and show that even if a Pareto equilibrium is obtained, only local fairness is achieved; indeed, for lack of information, the decisions made by the nodes are limited. We also tested our solution on topologies including satellite links (which exhibit high delays). Thanks to the Interest emission regulated by our solution, and contrary to state-of-the-art solutions, these high delays have very little impact on the performance of CCC.
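The Max-Min fairness criterion used in the thesis can be illustrated on a single shared resource with the classic progressive-filling scheme (a textbook sketch, not the distributed CCC algorithm itself):

```python
def max_min_share(capacity, demands):
    """Progressive filling: repeatedly hand out equal shares of the
    remaining capacity, capping each flow at its demand, until the
    capacity is exhausted or every demand is met."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for f in sorted(active):
            give = min(share, demands[f] - alloc[f])
            alloc[f] += give
            remaining -= give
        active = {f for f in active if demands[f] - alloc[f] > 1e-12}
    return alloc

# Three flows share a 10-unit bottleneck; the fair level settles at 4:
# the small demands are fully served, the rest is split equally.
alloc = max_min_share(10.0, {"a": 2.0, "b": 4.0, "c": 10.0})
```

In the allocation returned, no flow's rate can be increased without decreasing that of a flow with an equal or smaller rate, which is the Pareto-equilibrium property mentioned in the abstract.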

Networks / Space communication systems

###### Thesis defense presentation

## Répartition de flux dans les réseaux de contenu, application à un contexte satellite.

**Defended on September 2, 2021.**


Networks / Space communication systems

###### Conference article

## Robust Hypersphere Fitting from Noisy Data Using an EM Algorithm

**In Proc. European Conference on Signal Processing (EUSIPCO), Dublin, Ireland, August 23-27, 2021.**

This article studies a robust expectation-maximization (EM) algorithm to solve the problem of hypersphere fitting. This algorithm relies on the introduction of random latent vectors having independent von Mises-Fisher distributions defined on the hypersphere, and of random latent vectors indicating the presence of potential outliers. This model leads to an inference problem that can be solved with a simple EM algorithm. The performance of the resulting robust hypersphere fitting algorithm is evaluated for circle and sphere fitting, with promising results.
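For context, the classical non-robust baseline is the algebraic (Kåsa-style) least-squares circle fit, which approaches like the paper's EM algorithm improve upon in the presence of outliers; a minimal sketch of that baseline (not the paper's method):

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit: solve the linear system arising
    from (x^2 + y^2) + D*x + E*y + F = 0 in the unknowns (D, E, F)."""
    # Accumulate normal equations A^T A z = A^T b, rows (x, y, 1), b = -(x^2+y^2).
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        b = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * b
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting on the 3x4 system.
    M = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    D, E, F = (M[i][3] / M[i][i] for i in range(3))
    cx, cy = -D / 2, -E / 2
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

# Four points on the circle of centre (1, 2) and radius 3 are recovered exactly.
cx, cy, r = fit_circle([(4.0, 2.0), (1.0, 5.0), (-2.0, 2.0), (1.0, -1.0)])
```

A single gross outlier can pull this estimate arbitrarily far, which motivates the latent-variable outlier model of the EM approach.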

Signal and image processing / Earth observation

###### Journal article

## Foldings of Periodic Nonuniform Samplings

**IEEE Transactions on Circuits and Systems II, vol. 69, issue 3, pp. 1862-1868, March 2022.**

Periodic Nonuniform Samplings of order N (PNSN) are interleavings of periodic samplings. For a base period T, simple algorithms can be used to reconstruct functions whose spectrum is included in a union of N intervals δk of length 1/T. In this paper, we study the behavior of these algorithms when applied to arbitrary functions. We prove that they result in N (or fewer) foldings, each interval δk holding at most one folding.
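The folding mechanism at play is the familiar aliasing of periodic sampling, which PNSN interleaves; a minimal illustration for a single periodic sampling of period T (the paper's result concerns the interleaved, order-N case):

```python
def fold(f, T):
    """Frequency to which f aliases under periodic sampling of period T,
    folded into the base interval [-1/(2T), 1/(2T))."""
    w = 1.0 / T
    return (f + w / 2) % w - w / 2

# With T = 1, the frequency 0.7 folds to -0.3 and 1.3 folds back to 0.3.
folded_a = fold(0.7, 1.0)
folded_b = fold(1.3, 1.0)
```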

Signal and image processing / Other

###### Conference article

## SmartCoop Algorithm: Improving Smartphone Position Accuracy and Reliability via Collaborative Positioning

**In Proc. International Conference on Localization and GNSS (ICL-GNSS), Tampere, Finland, June 1-3, 2021.**

In recent years, our society has been preparing for a paradigm shift toward the hyper-connectivity of urban areas. This highly anticipated rise of connected smart city centers is led by the development of low-cost connected smartphone devices owned by each one of us. In this context, the demand for low-cost, high-precision localization solutions is driven by the development of novel autonomous systems. A collaborative network takes advantage of the large number of connected devices in today's city centers. This paper validates the positioning performance increase of low-cost Android smartphones present in a collaborative network. The assessment is made on both simulated and collected smartphone GNSS raw measurements. We propose a collaborative method based on the estimation of distances between the network's mobile users, used in a SMARTphone COOPerative Positioning algorithm (SmartCoop). Previous analyses of smartphone data allowed us to generate simulated data for testing our cooperative engine in nominal conditions. The evaluation and analysis of this innovative method show a significant increase in the accuracy and reliability of smartphone positioning capabilities. Position accuracy improves by more than 3 m on average for all smartphones within the collaborative network.

Digital communications / Localization and navigation

###### Journal article

## Insights on the Estimation Performance of GNSS-R Coherent and Noncoherent Processing Schemes

**IEEE Geoscience and Remote Sensing Letters, Early Access, pp. 1-5, May 27, 2021.**

Parameter estimation is a problem of interest when designing new remote sensing instruments, and the corresponding lower performance bounds are a key tool to assess the performance of new estimators. In global navigation satellite system reflectometry (GNSS-R), noncoherent averaging is applied to reduce speckle and thermal noise, and the parameters of interest are subsequently estimated from the resulting waveform. This approach has long been regarded as suboptimal with respect to the coherent one, which is true in terms of detection capabilities, but no analysis exists on the corresponding parameter estimation performance when exploiting GNSS signals. First, we show that for certain signal models, the coherent and noncoherent Cramér-Rao bounds are equivalent, and therefore any maximum likelihood coherent/noncoherent combination scheme is efficient (optimal) at high signal-to-noise ratios. This is validated on an illustrative GNSS-R estimation problem. In addition, it is shown that for the joint delay/Doppler/phase estimation problem, the noncoherent performance for the delay is still optimal, which is of practical importance, for instance in altimetry applications.

Signal and image processing / Localization and navigation, and Space communication systems

#### ADDRESS

7 boulevard de la Gare

31500 Toulouse

France