BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
BEGIN:VEVENT
UID:849@lincs.fr
DTSTART;TZID=Europe/Paris:20241018T140000
DTEND;TZID=Europe/Paris:20241018T150000
DTSTAMP:20241021T141220Z
URL:https://www.lincs.fr/events/understanding-reinforcement-learning-error
 -in-image-based-environments/
SUMMARY:Understanding Reinforcement Learning error in image-based
 environments
DESCRIPTION:In many Reinforcement Learning (RL) environments\, the state
 is represented by an image. In such cases\, if the RL agent doesn’t
 perform well\, is the problem that the image processing fails to
 recognize the salient features of the image (i.e.\, there’s a
 recognition issue)? Or is the problem that the agent can’t learn the
 correct action\, even though it correctly recognizes the state (i.e.\,
 there’s a decision issue)? Or is it a bit of both?\n\nIn this talk
 we’ll discuss two ways to formalize these questions. In the first\, we
 examine how well an agent can learn the “Q-value” of an image when it
 is given explicit examples in a training set. We then compare an agent
 trained in this way with an agent trained with standard RL techniques.
 In the second\, we decompose the regret of an RL agent into two terms
 that separately capture the recognition error and the decision
 error.\n\nWe illustrate our techniques using standard RL environments
 such as Minigrid and Pong.\n\nJoint work with Alihan Huyuk\, Xueyuan
 She\, Atefeh Mohajeri\, and Ryo Koblitz.
CATEGORIES:Seminars,YouTube
LOCATION:Amphi 6\, 19 Place Marguerite Perey\, Palaiseau\, France
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS="19 Place Marguerite
 Perey, Palaiseau, France";X-APPLE-RADIUS=100;X-TITLE="Amphi 6":geo:0,0
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
DTSTART:20240331T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
END:DAYLIGHT
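BEGIN:STANDARD
DTSTART:20241027T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD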
END:VTIMEZONE
END:VCALENDAR