Abstract: |
In this talk we present new results on stochastic bandit problems with a continuum of arms, where the expected reward is a continuous and unimodal function of the arm. Our setting includes, for instance, the problems considered in (Cope, 2009) and (Yu, 2011). No assumption beyond unimodality is made regarding the smoothness or the structure of the expected reward function. Our first result is an impossibility result: without knowledge of the smoothness of the reward function, there exists no stochastic equivalent of Kiefer’s golden section search (Kiefer, 1953). Further, we propose Stochastic Pentachotomy (SP), an algorithm for which we derive finite-time regret upper bounds. In particular, we show that, for any expected reward function $\mu$ that behaves as $\mu(x)=\mu(x^\star)-C|x-x^\star|^\xi$ locally around its maximizer $x^\star$ for some $\xi, C>0$, the SP algorithm is order-optimal, i.e., its regret scales as $O(\sqrt{T}\log(T))$ when the time horizon $T$ grows large. This regret scaling is achieved without knowledge of $\xi$ and $C$. Our algorithm is based on asymptotically optimal sequential statistical tests used to successively prune an interval that contains the best arm with high probability. To our knowledge, the SP algorithm constitutes the first sequential arm selection rule that achieves a regret scaling as $O(\sqrt{T})$ up to a logarithmic factor for non-smooth expected reward functions, as well as for smooth functions with unknown smoothness. This is joint work with Alexandre Proutière, available at http://arxiv.org/abs/1406.7447
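
The interval-pruning idea can be sketched as follows. This is a minimal illustration only, not the authors' SP algorithm: `sample_reward`, the quartile sampling grid, and the fixed per-point budget `n_per_point` are assumptions standing in for the paper's asymptotically optimal sequential tests.

```python
import numpy as np

def prune_interval(sample_reward, a=0.0, b=1.0, rounds=20, n_per_point=200):
    """Shrink [a, b] while trying to keep the maximizer inside.

    Illustrative sketch only: the paper's SP algorithm instead uses
    sequential statistical tests to decide when a cut is safe.
    """
    for _ in range(rounds):
        # Sample the two outer quartile points of the current interval.
        x1 = a + 0.25 * (b - a)
        x3 = a + 0.75 * (b - a)
        m1 = np.mean([sample_reward(x1) for _ in range(n_per_point)])
        m3 = np.mean([sample_reward(x3) for _ in range(n_per_point)])
        # By unimodality, if mu(x1) < mu(x3) the maximizer lies to the
        # right of x1, so [a, x1] can be discarded; symmetrically otherwise.
        # With noisy estimates this holds only with high probability.
        if m1 < m3:
            a = x1
        else:
            b = x3
    return 0.5 * (a + b)

# Hypothetical usage: noisy unimodal reward with a kink at x* = 0.3 (xi = 1).
rng = np.random.default_rng(0)
reward = lambda x: 1.0 - abs(x - 0.3) + 0.1 * rng.standard_normal()
print(prune_interval(reward))  # approaches 0.3 as the sampling budget grows
```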