2021 Workshop with LINCS Scientific Committee

Speaker: LINCS Scientific Committee
Politecnico di Torino, Yale University, University of Massachusetts, Universität Paderborn, Texas A&M University, Washington University in Saint Louis
Date: 21/06/2021 - 22/06/2021
Time: All Day
Location: LINCS + Zoom

Abstract


LINCS organizes a rich two-day event with talks by its international Scientific Committee.

Here is the Zoom link to attend these events: https://telecom-paris.zoom.us/j/97120167222?pwd=NS9FQlAzTUlkdzFyMmlmWS9Ld1lMQT09

– Meeting ID: 971 2016 7222
– Passcode: 895458

Monday 21st (Paris time)

15.15 – 16.00 Invited talk by Professor Marco Ajmone-Marsan (Politecnico di Torino) on “Queuing Models of Radio Access Networks Offering Streaming and Elastic Services”

We consider radio access networks (RANs) offering streaming and elastic services. The RAN is modeled with tools from queuing theory, and we examine the cases in which the system performance can be derived with low complexity. In addition, we provide numerical results for common cell configurations, showing that some of the emerging behaviors can be unexpected, and provide insight into the effective deployment of small cells.
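The abstract keeps the models at a high level; as one illustration of the kind of low-complexity computation such queuing analyses rely on, streaming calls that each hold a fixed share of cell capacity are classically evaluated with the Erlang-B loss formula (an assumption for illustration here, not necessarily the speaker's exact model):

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability of an M/M/c/c loss system, computed with
    the standard numerically stable recursion
    B(0) = 1,  B(c) = A*B(c-1) / (c + A*B(c-1))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Example: 2 Erlangs of streaming traffic on a cell with 2 channels.
blocking = erlang_b(2.0, 2)   # -> 0.4
```

The recursion runs in O(servers) time and avoids the overflow of evaluating factorials directly, which is what makes blocking probabilities cheap to compute even for large cell configurations.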

16.10 – 16.55 Invited talk by Professor P.R. Kumar (Texas A&M University) on “Security of Cyberphysical Systems”

The coming decades may see the large-scale deployment of networked cyber-physical systems to address global needs in areas such as energy, water, health care, and transportation. However, as recent events have shown, such systems are vulnerable to cyber attacks. We present a general technique, called “dynamic watermarking,” for detecting any sort of malicious activity in networked systems of sensors and actuators. We present results of tests on automobiles, both in a lab setting and on a test track; on a lab process control system; on a lab model of a helicopter; on a lab version of a solar-powered distribution system; and from a simulation study of defense against an attack on the power system.
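As a rough sketch of the watermarking idea (a toy scalar system invented for illustration, not the authors' actual scheme), the actuator superimposes a private random signal on its commands, and the detector checks that reported measurements stay correlated with it; a replay attack substituting a statistically similar but independent trajectory loses that correlation:

```python
import random

def watermark_score(attack: bool, n: int = 20000, seed: int = 7) -> float:
    """Toy scalar plant x[k+1] = a*x[k] + e[k] + w[k], where e[k] is a
    private Gaussian watermark injected by the actuator and w[k] is
    process noise. The detector correlates the watermark with the
    reported innovation z[k+1] - a*z[k]; a replay attacker reporting an
    independent look-alike trajectory destroys that correlation."""
    rng = random.Random(seed)
    a = 0.5
    x, fake = 0.0, 0.0
    reports, marks = [], []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)                  # secret watermark sample
        w = rng.gauss(0.0, 0.3)                  # process noise
        reports.append(fake if attack else x)    # what the sensor claims
        marks.append(e)
        x = a * x + e + w                        # true plant evolves either way
        fake = a * fake + rng.gauss(0.0, 1.0)    # attacker's fake trajectory
    # Sample covariance of e[k] with the innovation z[k+1] - a*z[k].
    innov = [reports[k + 1] - a * reports[k] for k in range(n - 1)]
    m_e = sum(marks[:-1]) / (n - 1)
    m_i = sum(innov) / (n - 1)
    return sum((marks[k] - m_e) * (innov[k] - m_i)
               for k in range(n - 1)) / (n - 1)
```

Under honest reporting the covariance concentrates near the watermark variance (1.0 here); under replay it concentrates near zero, giving a simple hypothesis test for malicious sensor activity.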

[Joint work with Bharadwaj Satchidanandan, Jaewon Kim, Woo Hyun Ko, Tong Huang, Lantian Shangguan, Kenny Chour, Gopal Kamath, Jorge Ramos, Swaminathan Gopalswamy, Le Xie, and Prasad Enjeti].

17.05 – 17.50 Invited talk by Professor Jim Kurose (University of Massachusetts) on “From artifacts to systems to people: evolving directions in computing research and education”

Computing is now “old enough” as a discipline that we can already detect broad trends in research directions. We discuss these trends, with an emphasis on both current and future directions in computing research, which we see reflected locally (at my own university, UMass Amherst), nationally, and internationally. We’ll discuss recent national computing research and education programs and trends, and the role of computing in the larger R&D enterprise.

Tuesday 22nd (Paris time)

14.25 – 15.10 Invited talk by Professor Holger Karl (Universität Paderborn) on “Machine Learning for Network Management”       

Machine Learning has been applied to a wide range of networking topics. Network management provides ample examples, often in the form of combinatorial optimization problems. In the talk, we look at two examples. One example comes from the context of network softwarization, where we investigate how machine learning approaches can help with scaling and placing virtual network functions, as well as routing traffic between them. The second example comes from wireless sensor networks with acoustic applications in mind, where we need to balance acoustic sensing quality with wireless network quality and use these two metrics to move mobile microphones into good locations.

15.40 – 16.25 Invited talk by Professor Leandros Tassiulas (Yale University) on “Enabling intelligent services via function virtualization at the network edge”

The proliferation of novel mobile applications and the associated AI services necessitates a fresh view of the architecture, algorithms, and services at the network edge in order to meet stringent performance requirements. Some recent work addressing these challenges is presented. To meet low-latency requirements, the execution of computing tasks moves from the cloud to the network edge, closer to the end-users. The joint optimization of service placement and request routing in dense mobile edge computing networks is considered. Multidimensional constraints are introduced to capture the storage requirements of the vast amounts of data needed. An algorithm that achieves close-to-optimal performance using a randomized rounding technique is presented. Recent advances in network virtualization and programmability enable the realization of services as chains, where flows can be steered through a pre-defined sequence of functions deployed at different network locations. The optimal deployment of such service chains, where storage is a stringent constraint in addition to computation and bandwidth, is considered, and an approximation algorithm with provable performance guarantees is proposed and evaluated. Finally, the problem of traffic flow classification as it arises in firewall and intrusion detection applications is presented. An approach for realizing such functions, based on a novel two-stage deep learning method for attack detection, is presented. Leveraging the high level of data plane programmability in modern network hardware, the realization of these mechanisms at the network edge is demonstrated.

16.35 – 17.20 Invited talk by Professor Roch Guerin (Washington University in Saint Louis) on “Edge Classification: Offloading under Token Bucket Constraints”        

We consider an edge-computing setting where machine learning-based algorithms are used for real-time classification of inputs acquired by devices, e.g., cameras. Computational resources on the devices are constrained, and therefore only capable of running machine learning models of limited accuracy. A subset of inputs can be offloaded to the edge for processing by a more accurate but resource-intensive machine learning model. Both models process inputs with low latency, but offloading incurs additional network and compute delays. To manage these delays and meet application deadlines, a token bucket constrains transmissions from the device. We first introduce a Markov Decision Process-based framework to make offload decisions under such constraints when the input process is i.i.d. Decisions are based on the local model’s confidence and the token bucket state, with the goal of minimizing a specified error measure for the application. We then extend the approach to configurations involving multiple devices connected to the same access switch to realize the benefits of a shared token bucket. Next, we explore an approach based on deep Q-networks (DQN) to handle more complex input and classification processes, and demonstrate its ability to learn such features and effectively incorporate them into policy decisions. We evaluate and analyze the policies derived using our framework on the standard ImageNet image classification benchmark [1][2].
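As a sketch of how a token bucket gates offload decisions in this setting (a simple confidence-threshold rule stands in for the MDP/DQN policies of the talk; all names and parameter values here are invented for illustration):

```python
def run_offload(confidences, rate=0.2, bucket_size=3.0, threshold=0.6):
    """Toy offload simulation: the bucket refills by `rate` tokens per
    arriving input, capped at `bucket_size`. An input is offloaded to
    the edge model only when the local model's confidence is below
    `threshold` AND a full token is available; otherwise it is
    classified locally. (This threshold rule is a stand-in for the
    MDP-derived policy described in the abstract.)"""
    tokens = bucket_size
    decisions = []
    for c in confidences:
        tokens = min(bucket_size, tokens + rate)   # token bucket refill
        if c < threshold and tokens >= 1.0:
            tokens -= 1.0                          # spend a token to offload
            decisions.append("offload")
        else:
            decisions.append("local")              # constrained to local model
    return decisions

# A burst of low-confidence inputs drains the bucket, after which the
# refill rate limits how often the device can offload.
burst = run_offload([0.1] * 10)
```

The bucket enforces a long-run offload rate of at most `rate` per input with bursts up to `bucket_size`, which is what lets the edge provision its network and compute capacity against a hard bound.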

1. This is joint work with Ayan Chakrabarti, Chenyang Lu, Jiaming Qiu, Ruiqi Wang, and Jiangnan Liu.
2. Part of this work will be presented at the 2021 ACM/IEEE SEC conference.