BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
BEGIN:VEVENT
UID:453@lincs.fr
DTSTART;TZID=Europe/Paris:20190417T103000
DTEND;TZID=Europe/Paris:20190417T113000
DTSTAMP:20190423T122734Z
URL:https://www.lincs.fr/events/combatting-toxicity-in-on-line-conversatio
 ns/
SUMMARY:Combatting toxicity in on-line conversations
DESCRIPTION:\nAbusive behaviour on online social media platforms
 (a.k.a. hate speech\, toxicity\, cyberbullying) has forced major
 companies to hire hundreds of moderators\, or even buy a whole company
 to deal with the problem. Toxicity detection (i.e.\, classifying a post
 as toxic or not) has recently gained popularity\, with two very
 successful workshops being organized and an international challenge
 attracting thousands of systems. Previous work showed that Recurrent
 Neural Networks (RNNs) achieve state-of-the-art performance on
 automatic toxicity detection (Pavlopoulos et al. 2017a)\, while
 incorporating classification-specific attention mechanisms and user
 embeddings further improved the overall performance of the RNNs
 (Pavlopoulos et al. 2017b). Interestingly\, the classification-specific
 attention mechanism highlights suspicious words for free\, without
 including highlighted words in the training data (Pavlopoulos et al.
 2017c). Despite this recent work\, toxicity detection systems suffer
 from two major shortcomings. First\, although most abusive posts appear
 within conversations (e.g.\, utterances posted as replies to other
 posted utterances)\, the structure of the conversation is currently
 ignored and systems base their decisions solely on the text of each
 single post\, in isolation. Second\, systems only detect abusive
 posts\; that is\, current technology does not help users modify their
 posts themselves to avoid being abusive. Such systems are primarily
 used to censor online conversations on a platform\, which often results
 in abusive users migrating to another platform (Chandrasekharan et al.
 2018). Instead\, it has been argued\, in both academic (Zhang et al.
 2018) and industrial circles\, that a more fruitful approach is to help
 users improve their posts in online conversations (e.g.\, by suggesting
 non-abusive rewrites). To address the first problem\, current research
 investigates a) the compilation of taxonomies of context-aware toxicity
 and b) the creation of context-aware datasets of toxic utterances. To
 address the second problem\, we investigate the creation of datasets
 with word-level annotations\, to study in greater depth the various
 forms of toxicity and the difficulty of rephrasing it.\n
CATEGORIES:Seminars,Youtube
LOCATION:Paris-Rennes Room (EIT Digital)\, 23 avenue d'Italie\, 75013
 Paris\, France
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=23 avenue d'Italie\, 75013
 Paris\, France;X-APPLE-RADIUS=100;X-TITLE=Paris-Rennes Room (EIT
 Digital):geo:0,0
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
DTSTART:20190331T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR