|Speaker:||Myriana Rifai, Nokia Bell Labs|
|Time:||2:00 pm - 3:00 pm|
|Location:||LINCS / EIT Digital|
Software Defined Networking (SDN) is gaining momentum with the support of major manufacturers. While it brings flexibility to the management of flows within the data center fabric, this flexibility comes at the cost of smaller routing table capacities. Indeed, the Ternary Content Addressable Memory (TCAM) needed by SDN devices has a smaller capacity than the CAMs used in legacy hardware. To address this problem, we investigated compression techniques to maximize the utility of SDN switches' forwarding tables and created MINNIE. MINNIE dynamically compresses SDN rules without a noticeable impact on quality of service. We validate our solution with intensive simulations on well-known data center topologies, studying its efficiency and compression ratio for a large number of forwarding rules. Our results indicate that MINNIE scales well: it can handle around a million different flows with fewer than 1000 forwarding entries per SDN switch, and requires negligible computation time. To assess the operational viability of MINNIE in real networks, we deployed it on an emulated testbed running a k = 4 fat-tree data center topology. We demonstrate, on the one hand, that even with a small number of clients the rule-capacity limit is reached if no compression is performed, increasing the delay of new incoming flows. MINNIE, on the other hand, drastically reduces the number of rules that need to be stored, with no packet loss and no detectable extra delay when routing lookups are done in ASICs. Hence, both simulation and experimental results suggest that MINNIE can be safely deployed in real networks, providing compression ratios between 70% and 99%.
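The abstract does not detail MINNIE's compression algorithm, but the general idea of forwarding-table compression can be illustrated with a classic technique: replacing the largest group of rules that share the same output port with a single low-priority default (wildcard) rule. The sketch below is a minimal illustration under that assumption; the rule format, port numbers, and helper name are hypothetical, not MINNIE's actual implementation.

```python
# Illustrative sketch only: not MINNIE's actual algorithm.
# Shrinks a forwarding table by routing the most common output
# port through one catch-all rule, keeping explicit rules only
# for the remaining ports.
from collections import Counter

def compress_table(rules):
    """rules: list of (match, out_port) pairs.
    Returns a smaller table where the most frequent output port
    is covered by a single wildcard rule at lowest priority."""
    if not rules:
        return []
    port_counts = Counter(port for _, port in rules)
    default_port, _ = port_counts.most_common(1)[0]
    # Keep only the rules that deviate from the default port.
    compressed = [(m, p) for (m, p) in rules if p != default_port]
    compressed.append(("*", default_port))  # catch-all, matched last
    return compressed

# Four exact-match rules collapse into two entries (50% compression).
table = [("10.0.0.1", 1), ("10.0.0.2", 1), ("10.0.0.3", 1), ("10.0.0.4", 2)]
small = compress_table(table)
# small == [("10.0.0.4", 2), ("*", 1)]
```

In a real SDN switch this trade-off matters because the wildcard rule occupies a single TCAM entry regardless of how many flows it absorbs, which is how compression ratios in the 70-99% range reported above become possible.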