|Speaker:||Virag Shah|
|Time:||2:00 pm - 4:00 pm|
|Location:||LINCS Meeting Room 40|
We consider a centralized content delivery infrastructure in which a large number of storage-intensive files are replicated across several collocated servers. To achieve scalable download delays under stochastic loads, we allow multiple servers to work together as a pooled resource to serve individual download requests. In such systems, important questions include: How and where should files be replicated? How significant are the gains of resource pooling over policies that use a single server per request? What are the tradeoffs among conflicting metrics such as delay, reliability and recovery cost, and power? How robust is performance to heterogeneity and to the choice of fairness criterion?

In this talk we provide a simple performance model for large systems that addresses these basic questions. For large systems in which the overall load is proportional to the number of servers, we establish scaling laws relating delays, system load, number of file replicas, demand heterogeneity, power, and network capacity.
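To get intuition for why pooling helps, the following back-of-envelope sketch (not the speaker's model; an idealized M/M/1 comparison with hypothetical parameters `lam`, `mu`, `k`) contrasts "single server per request" routing, where each of k servers acts as an independent M/M/1 queue, with full pooling, idealized as one M/M/1 server of aggregate rate k·mu serving the combined arrival stream k·lam:

```python
def mean_delay_single(lam: float, mu: float) -> float:
    """Mean sojourn time in one M/M/1 queue (arrival rate lam, service rate mu).

    With requests split evenly across k servers, each server sees its own
    independent M/M/1 queue, so per-request delay does not shrink with k.
    """
    assert lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / (mu - lam)


def mean_delay_pooled(lam: float, mu: float, k: int) -> float:
    """Mean sojourn time when k servers jointly serve one queue.

    Idealized as a single M/M/1 server of rate k*mu facing the aggregate
    arrival rate k*lam; delay is 1/(k*mu - k*lam), a k-fold reduction.
    """
    assert lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / (k * mu - k * lam)


if __name__ == "__main__":
    lam, mu, k = 0.8, 1.0, 10  # illustrative numbers only
    print(mean_delay_single(lam, mu))     # ~5.0
    print(mean_delay_pooled(lam, mu, k))  # ~0.5: pooling cuts delay by k
```

Under this simplification, pooling reduces mean delay by exactly the number of servers; the talk's model refines this picture with replication constraints, heterogeneity, power, and network capacity.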