2023 LINCS Annual Workshop

Speakers: LINCS researchers + members of the Scientific Committee
Date: 05/07/2023 - 06/07/2023
Time: 9:30 am - 6:00 pm
Location: Amphi Rose Dieng


The LINCS organizes its Annual Workshop with the Scientific Committee.

**PROGRAM**



A 2-day workshop with:

  • LINCS members’ “scientific highlights”
  • Invited talks by Scientific Committee members
  • PhD students’ “elevator pitch” session + posters

We’ll have coffee breaks just outside the Amphi Rose Dieng, and we’ll have lunch at the canteen of Télécom-Paris.

On the evening of Wednesday, July 5th, a shuttle will bring us from Palaiseau to Jussieu (Paris 5e arr.) for a cocktail dinner at the top of the Zamansky Tower.

Confirmed talks by members of the Scientific Committee who will be in Palaiseau:

  • Prof. Marco Ajmone Marsan (Politecnico di Torino)
  • Prof. Nick Bambos (Stanford University)
  • Prof. Roch Guerin (Washington University in Saint Louis)
  • Prof. Patrick Thiran (EPFL)

Confirmed talks by members of the Scientific Committee who will be connected via Zoom:

  • Prof. Leandros Tassiulas (Yale University)

Scientific Committee members’ titles+abstracts:

Prof. Marco Ajmone Marsan (Politecnico di Torino)
Title: Equalizing end user access to edge-based services
Abstract: We consider a portion of a RAN where end-users access services that imply the issue of a request through their associated base station (BS), followed by a computation on one of the available in-network computing facilities, and finally by the return of the result of the computation to the end-user who issued the request. The result must be returned within a specified latency deadline in order to be useful.
Since not all BSs are equipped with a computing facility, some end-users may be disadvantaged, because they are associated with a BS from which the delay for a service request to reach a computing facility and for the results of the computation to come back is longer.
Aiming at uniform end-user satisfaction, network operators should strive, on the one hand, to reduce differences in achieved end-user performance and, on the other, to use network resources efficiently.
With simple analytical models, we investigate the effectiveness of light network management algorithms, which consist in carefully choosing the routing probabilities of service requests toward one of the available computing facilities. We argue that at least some of these light network management algorithms should be compatible with the very stringent European Network Neutrality rules, and we show that they achieve a good trade-off between overall resource utilization and equal performance experienced by end-users.

Prof. Nick Bambos (Stanford University)
Title: Controlling Epidemics in Random Environments via Testing

Abstract: We model epidemic spreading (e.g. of pathogens, rumors/misinformation) in a population within a large region. We allow for mobility of infected individuals, who can infect others while roaming around. We distribute testing centers around the region, where individuals testing positive are quarantined. Infected and recovered individuals cannot be reinfected in the current epidemic wave. The infection transmission rate (infectivity) is a random field over the region, for example, due to various counter-measures (e.g. masking and social distancing in the case of airborne pathogens). We infinitesimally seed the infection in the large region and explore conditions under which it inherently spreads (as opposed to dying out). We observe that the average infectivity is not enough to characterize spreading and fluctuations do matter; indeed, even under subcritical averages, overcritical infectivity fluctuations can cause the infection to spread. We then focus on the impact of testing center density on suppressing an epidemic that has the inherent potential to spread. Finally, we discuss how to optimize testing vs. recovery resources.

*Joint work with Petros Meramveliotakis and Prof. Aris Moustakas (Univ. of Athens, Greece) and Kyriakos Lotidis (Stanford).

Prof. Roch Guerin (Washington University in Saint Louis)

Title:  On the benefits of proactively changing traffic profiles*§

Abstract:  Token buckets are commonly used to specify traffic profiles and there is a growing number of network environments where hard delay bounds are required (e.g., as in the DetNet or TSN standards).  In this talk, we explore the extent to which it may be beneficial to modify up-front the traffic profiles originally specified by users, i.e., “reprofile” them, while still delivering the end-to-end delay bounds they require albeit with fewer network resources (less bandwidth).  The answer depends on the type of schedulers available in the network, and in the degenerate one-hop case, it is known that reprofiling is of no benefit when optimal schedulers (EDF) are available. We explore whether this also holds in a more general network setting and demonstrate that it does not.  However, devising optimal reprofiling solutions, unfortunately, appears intractable.  As a result, we explore and evaluate heuristics for networks with both EDF and FIFO schedulers.  In the latter case, a simple strategy appears to perform well.

* Joint work with Jiaming Qiu (WashU), Henry Sariowan (Google), and Jiayi Song (WashU, now at ByteDance)

§Some of this work is still in progress
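As background for the reprofiling question above: a flow conforming to a token-bucket profile with rate r and burst b may send at most b + r·t bytes over any window of length t, and "reprofiling" amounts to tightening these parameters up front. The sketch below is only an illustration of this standard mechanism; the parameter names and the conformance test are not taken from the talk.

```python
class TokenBucket:
    """Minimal token-bucket conformance check for a traffic profile (rate, burst)."""

    def __init__(self, rate, burst):
        self.rate = rate      # token refill rate (bytes per second)
        self.burst = burst    # bucket depth (bytes), bounds any instantaneous burst
        self.tokens = burst   # bucket starts full
        self.last = 0.0       # timestamp of the last update

    def conforms(self, t, size):
        """Return True if a packet of `size` bytes arriving at time `t` fits the profile."""
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

For example, with rate 100 B/s and burst 200 B, a 200-byte packet at t = 0 conforms and empties the bucket, an immediate extra byte does not, and after one second 100 bytes conform again.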


Prof. Patrick Thiran (EPFL)

Title: Source location in random networks: the noisy case.

Abstract: Last year, I surveyed recent results on the location of the source of a diffusion or of an epidemic in a network, given the infection data gathered at some of the nodes, when the propagation delays along the edges of the network are deterministic. In this follow-up talk, I will survey some early results on the more complex case, when propagation delays are i.i.d. Gaussian random variables. We compare two sensor placement strategies: either all the sensors are placed at once, before the diffusion starts (off-line placement), or they are placed sequentially, as the diffusion unfolds in the network (on-line placement). The on-line placement strategy obviously consumes fewer sensors than an off-line placement, but the difference is very small when the propagation delays are deterministic and the network is an Erdős–Rényi random graph. In contrast, when the propagation delays have a sufficiently large variance, the difference can be huge: from the order of n to log log n, for a linear graph of n nodes. This is joint work with Gergely Odor and Victor Lecomte.

LINCS members’ scientific highlights:

François Baccelli (Inria)

Title: Cox Point Processes for Multi-Altitude LEO Satellite Networks

Abstract: We propose a simple analytical approach to describe the locations of low earth orbit (LEO) satellites based on a Cox point process. We develop a variable-altitude Poisson orbit process by accounting for the fact that satellites are always located on circular orbits and that these orbits may have different altitudes. The satellites on these orbits are then modeled as Poisson point processes, conditionally on the orbit process. For this model, we derive the distribution of the distance to the nearest visible satellite, the outage probability, the Laplace functional of the proposed satellite Cox point process, and the Laplace transform of the interference under general fading. The derived statistics allow one to evaluate the performance of such LEO satellite communication systems as functions of the network parameters.



Title: Quantum networking at LINCS

Abstract: Quantum networking is an emerging scientific domain. Quantum networks are distributed systems of quantum devices that utilize fundamental quantum mechanical phenomena such as superposition, entanglement, and quantum measurement to achieve capabilities beyond what is possible with classical networks. The potential applications of quantum networks include quantum cryptography (Quantum Key Distribution), quantum consensus, privacy-preserving quantum computing, and distributed quantum computing applications. In this talk, we will describe the past, current, and future activities at LINCS related to this prospective research domain of quantum networking.



Title: Inference of network characteristics using non-invasive data exploration

Abstract: Recent years witnessed a trend of “softwarization” of network components. Instead of static, expensive hardware, operators have started to adopt a more flexible approach based on Virtual Network Functions. This paradigm (aka Network Function Virtualization) advocates implementing network middleboxes such as firewalls or NATs as pieces of software to be deployed and executed on commercial off-the-shelf (COTS) hardware. This has boosted the development of several packet processing frameworks and software switches, which show nowadays multi 10-Gbps capabilities in COTS servers. In parallel, network systems are increasingly adopting machine learning (ML) techniques to solve complex networking tasks such as traffic classification or resource allocation.

As ML techniques require a large amount of data to be collected for both training and validation, when done in software, such measurements can strongly affect the measured values, thus biasing the collected data. This effect becomes stronger when measurements are taken close to the data path. Moreover, even after the training phase, complex model calculations may require dedicated hardware such as external GPUs, or custom hardware designed for neural network processing such as TPUs or VPUs.
In this talk, we present a novel approach based on non-invasive data collection relying on pure software.

Our methodology consists in (i) low-impact network measurements with both direct and indirect observations; (ii) inference/predictive modeling of a complete system with ML and/or classical approaches; (iii) deployment of low-resource models for runtime query/action operations and automated recovery. The project (acronym: IONOS-DX) has received an individual grant from the ANR (French Agency of Research).



Title: Predicting network hardware faults through layered treatment of alarms logs

Abstract: Maintaining and managing ever more complex telecommunication networks is an increasingly difficult task, which often challenges the capabilities of human experts. There is a consensus, both in academia and in industry, on the need to enhance human capabilities with sophisticated algorithmic tools for decision-making, with the aim of transitioning towards more autonomous, self-optimizing networks. We aim at contributing to this larger project. We tackle the problem of detecting and predicting the occurrence of faults in hardware components of a radio access network, leveraging the alarm logs produced by the network elements. We design a range of algorithmic solutions, and we test them on real data collected from a major telecommunication operator. We are able to predict the failure of a network component with satisfactory precision and recall.


Title: Towards An Open Edge Cloud

Abstract: Cloud services are moving to the edge. With 5G, it is envisaged that telecom operators will have their own data centers. Services hosted in these data centers will be closer to their customers than they would be if they were hosted in classic large centralized data centers. These services will enjoy: lower latency, higher bandwidth, and no intermediate parties along the path. Quality of service will be higher, and the responsibility for maintaining that quality of service will be clearer. At least, that is the vision. But much of the architecture of this future edge cloud remains to be conceived. We offer one important brick: a container orchestration tool for the edge cloud. Our tool, EdgeNet, is an extension to Kubernetes, the de facto standard for deploying containers to classic large centralized data centers. EdgeNet takes into account the particular nature of the edge cloud: there will be many providers of edge clouds; and there will be many customers that need to share more limited resources in each cloud. We describe the EdgeNet vision, the components that we have built, and those that remain to be built.

This talk presents the work of Berat Senel, which formed the basis of the doctoral dissertation that he defended in June 2023 at Sorbonne Université.


Ke Feng (Inria)

Title: Spatial Network Calculus and Performance Guarantees in Wireless Networks

Abstract: Network calculus was initially a methodology allowing one to provide performance guarantees in queuing networks subject to regulated traffic arrivals and service guarantees. It is a key design tool for latency-critical wireline communication networks, where it allows one to, e.g., guarantee bounds on the end-to-end latency of all transmitted packets. In wireless networks, service guarantees are more intricate, as electromagnetic signals propagate in a heterogeneous medium and interfere with each other. In this work, we present a novel approach toward performance guarantees for all links in arbitrarily large wireless networks. We introduce spatial regulation properties for stationary spatial point processes, and develop the first steps of a calculus for this type of regulation.


Fabien Mathieu (invited, Inria)

Title: Corsort: An anytime sorting algorithm

Abstract: An anytime algorithm is an algorithm that is able to give an estimation of the result after each step of execution. We study the problem of anytime sorting. We consider that each comparison is a step of execution, and we measure the proximity between the estimation and the sorted list with the Kendall tau distance. We present Corsort, a family of anytime sorting algorithms using estimators. By simulation, we show that a well-configured Corsort has a quasi-optimal termination time, and gives better estimations than the other algorithms in our benchmark.
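As a concrete illustration of the proximity measure mentioned in the abstract, here is a minimal Kendall tau distance computation: the number of pairs ordered differently in the current estimate and in the fully sorted list. This sketches only the metric, not the Corsort estimator itself.

```python
def kendall_tau_distance(estimate, target):
    """Count the pairs of elements whose relative order differs
    between `estimate` and the reference ordering `target`."""
    # Position of each element in the reference ordering.
    pos = {v: i for i, v in enumerate(target)}
    n = len(estimate)
    # A pair (i, j) with i < j is discordant if the estimate puts
    # estimate[i] before estimate[j] while the target does the opposite.
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if pos[estimate[i]] > pos[estimate[j]]
    )
```

An anytime sorter's quality at a given step is then the distance between its current estimate and the sorted list: 0 when sorted, up to n(n-1)/2 for a reversed list.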


SWAPNIL DHAMAL (Télécom-SudParis)

Title: Resource Allocation and Pricing for Network Slicing in 5G: A Learning Perspective

Abstract: Network slicing is a critical component in 5G networks, since the intended services such as Ultra Reliable Low Latency Communications (URLLC) and enhanced Mobile BroadBand (eMBB) give rise to very distinct requirements. Each slice can be customized for a given type of service, and a given tenant who is characterized by a stochastic demand and a resource utility function reflecting its Quality-of-Service (QoS) requirements. In this work, we study the techno-economic aspect of the slice market that involves the operator and the tenants (slice owners). In particular, the game that we study is a Stackelberg game, where the operator is the leader who presents a pricing scheme that defines the price corresponding to each bandwidth-level, and the tenants are the followers who decide which bandwidth-level to request. Since the operator has a certain capacity constraint, it takes the requested bandwidth-levels of the tenants into account, and determines the admission control and resource allocation such that its expected profit is maximized while satisfying the capacity constraint. Our framework models the joint admission control, resource allocation, and pricing for network slicing in the above game as an optimization problem aiming to maximize the operator’s expected profit. We show that solving the formulated optimization problem is NP-hard. We also encounter a paradox: the operator’s profit could decrease if a tenant’s resource utility increases. We consider a practical scenario where the utility matrix, comprising the resource utilities of the tenants for the different bandwidth-levels, is not known to the operator. We propose several approaches for learning an optimal pricing scheme, including a neural network-based approach, as well as approaches based on iteratively updating and refining the ambiguity set of utility matrices by observing the tenants’ requested bandwidth-levels corresponding to the presented pricing schemes.
We study the performance of the various approaches and present insights.



Title: Causal Reasoning for Configurable Network Systems

Abstract: With the rapid advancement of B5G, IoT, and network softwarization, modern ICT network systems are becoming increasingly diverse, disaggregated, and complex. Consequently, understanding and managing these systems has become a daunting task. Although AI/ML techniques can provide sound predictive services, they lack robust counterfactual reasoning and decision-making. In this talk, I will present our ongoing work exploring causal research for network diagnosis and optimization. Our study focuses on real-world systems capable of processing network traffic at extremely high speed, e.g., 10-100 Gbps. We take two paths to approach causal reasoning: i) causal discovery from observational/interventional data, and ii) causal inference for insight extraction. The ultimate goal is to implement a generic, robust, production-ready toolset that can effectively uncover performance bottlenecks and guide optimizations for different network systems.