BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
BEGIN:VEVENT
UID:649@lincs.fr
DTSTART;TZID=Europe/Paris:20210609T140000
DTEND;TZID=Europe/Paris:20210609T150000
DTSTAMP:20210614T121126Z
URL:https://www.lincs.fr/events/replica-mean-field-limits-for-intensity-ba
 sed-neural-networks/
SUMMARY:Replica-mean-field limits for intensity-based neural networks
DESCRIPTION:Computational neuroscience features a variety of stochastic
 network models where both the input and the output of each neuron are point
 processes. In these models\, each neuron has a state which evolves as an
 integral with respect to its input point processes and which resets when
 the neuron spikes. In intensity-based neural networks\, the spikes of a
 given neuron have a stochastic intensity which is a linear function of the
 neuron state. Spiking events define the output point processes of the
 neuron. These\, together with the geometry of the network connections\,
 define in turn the input point processes of other neurons.\n\nDue to the
 inherent complexity of such intensity-based neural models\, relating the
 spiking activity of a network to its structure currently requires
 simplifying assumptions\, such as considering models in the thermodynamic
 mean-field limit. In this limit\, an infinite number of neurons interact
 via vanishingly small interactions\, thereby erasing the finite size
 geometry of interactions.\n\nTo better capture the geometry in question\,
 this paper analyzes the activity of intensity-based neural networks in the
 replica-mean-field limit regime. Such systems are made of infinitely many
 replicas which have the same basic structure as that of the finite network
 of interest and interact through randomized connections.\n\nThe main
 contribution is an analytical characterization of the stationary dynamics
 of intensity-based neural networks excitatory synapses in this
 replica-mean-field limit. Specifically\, the stationary dynamics of these
 limiting networks is functionally characterized via ordinary or partial
 differential equations derived from the Poisson Hypothesis of stochastic
 network theory. This functional characterization is reduced to a system of
 self-consistency equations specifying the stationary neuronal spiking
 rates. The approach combines the rate-conservation principle of Palm
 calculus\, analytical considerations from generating-function methods\, and
 propagation of chaos techniques.\n\nSuch limits can be used for first-order
 models\, whereby elementary replica constituents are single neurons with
 independent Poisson inputs\, and for second-order models\, where these
 constituents are pairs of neurons with exact pairwise interactions. In both
 cases\, these replica-mean-field networks provide tractable versions that
 retain important features of the finite network structure of
 interest.\n\nJoint work with T. Taillefumier.
CATEGORIES:Seminars,Youtube
LOCATION:Zoom + LINCS\, 23 avenue d'Italie\, Paris\, 75013\, France
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=23 avenue d'Italie\,
 Paris\, 75013\, France;X-APPLE-RADIUS=100;X-TITLE=Zoom + LINCS:geo:0,0
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
DTSTART:20210328T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR