In this task, given a topic and a claim, participants are required to classify the stance of the claim toward the topic as Support, Against, or Neutral.
We collect debating topics from online forums and topic-related articles from Wikipedia (for English) and Baidu Encyclopedia (for Chinese).
Human annotators were recruited to annotate claims from the collected articles and identify their stances with regard to the given topics.
Human translators were recruited to translate the English claims into Chinese.
The format of the data file is as follows:
the data is in TXT format, and each line contains three tab-separated fields:
the topic, the claim, and the label ({support, against, neutral}).
Accuracy is used as the evaluation metric.
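The tab-separated format and the accuracy metric above can be sketched as follows. This is a minimal illustration, not the official evaluation script; the file path and label strings are hypothetical placeholders.

```python
# Sketch: load the tab-separated stance data and score predictions by accuracy.
# The field order (topic, claim, label) follows the description above.

def load_stance_data(path):
    """Yield (topic, claim, label) triples from a tab-separated file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            topic, claim, label = line.split("\t")
            yield topic, claim, label

def accuracy(gold_labels, pred_labels):
    """Fraction of predictions that match the gold labels."""
    correct = sum(g == p for g, p in zip(gold_labels, pred_labels))
    return correct / len(gold_labels)
```

For example, predicting two of three labels correctly yields an accuracy of 2/3.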
Ruidan He, heruidan0830@gmail.com
Interactive argument opposition refers to opposing views expressed by different participants on the same topic in a dialogical argumentation scenario
(such as a debate contest, which involves two or more parties).
This task is to identify argument pairs with an interactive relationship in online forums.
Given an original argument and five candidate arguments,
participants are required to identify the correct reply from the candidates.
For each argument, its context is provided as well.
We collect the original raw data from the ChangeMyView forum on reddit.com and extract all the "quotation-reply" argument pairs to form our experimental dataset. For each sample, q denotes the quotation argument and cq the context of the quotation argument; r1-r5 denote the candidate reply arguments and c1-c5 their respective contexts.
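The sample structure described above (q, cq, r1-r5, c1-c5) can be mirrored in a small container, shown here with a naive token-overlap baseline for picking a candidate. Both the class name and the baseline are illustrative assumptions, not part of the task definition.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArgumentPairSample:
    q: str                # quotation argument
    cq: str               # context of the quotation argument
    replies: List[str]    # candidate reply arguments r1-r5
    contexts: List[str]   # candidate reply contexts c1-c5
    gold_index: int       # index (0-4) of the correct candidate

def token_overlap_baseline(sample: ArgumentPairSample) -> int:
    """Pick the candidate sharing the most tokens with the quotation
    (a naive baseline, not the intended modeling approach)."""
    q_tokens = set(sample.q.lower().split())
    overlaps = [len(q_tokens & set(r.lower().split())) for r in sample.replies]
    return max(range(len(overlaps)), key=overlaps.__getitem__)
```

A real system would also exploit the contexts cq and c1-c5, which this baseline ignores.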
Accuracy is used as the evaluation metric.
Jian Yuan, 19210980107@fudan.edu.cn
Peer review and rebuttal, with rich interactions and argumentative discussions in between, are naturally a good resource for mining arguments. We introduce an argument pair extraction (APE) task on peer review and rebuttal in order to study their contents, their structure, and the connections between them. Participants are required to detect the argument pairs from each passage pair of review and rebuttal.
We collect the peer reviews and rebuttals of ICLR 2013-2020
(except 2015, which is unavailable) from openreview.net.
The data format is as follows:
<review comments / author reply> <BIO tag> - <review/reply> <BIO tag> - <Pairing Index> <review/reply> <Paper ID>.
Each entry is separated by a \t.
Each instance (that is, each pair of review comments and author reply) is separated by a blank line.
The newline character in the data is represented by <SEP>,
which is usually added at the beginning of the next paragraph.
Only B and I tags are followed by <review/reply> and <pairing index>;
O tags are not followed by any other tags.
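The instance layout described above (tab-separated fields per line, blank lines between review-reply pairs, a possibly missing trailing blank line) can be read with a sketch like the following. The exact field contents are left to the dataset; only the line- and instance-splitting conventions stated above are assumed.

```python
# Sketch: group the tab-separated lines of the APE data file into instances.
# An instance is one review-reply pair; a blank line marks its end.

def read_instances(path):
    """Yield one list of tab-split lines per review-reply pair."""
    instance = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:          # blank line: instance boundary
                if instance:
                    yield instance
                    instance = []
            else:
                instance.append(line.split("\t"))
    if instance:                  # file may end without a trailing blank line
        yield instance
```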
F1 score is used as the evaluation metric.
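As a rough sketch of the metric, F1 over argument pairs can be computed from precision and recall on the sets of predicted and gold pairs. Exact-match pairing is assumed here for illustration; the official scorer may define matches differently.

```python
# Sketch: pair-level F1 from sets of predicted vs. gold argument pairs.

def pair_f1(gold_pairs, pred_pairs):
    """F1 = harmonic mean of precision and recall over exact pair matches."""
    gold, pred = set(gold_pairs), set(pred_pairs)
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)         # true positives: pairs in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, with two gold pairs and two predictions of which one is correct, precision and recall are both 0.5, giving F1 = 0.5.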
Liying Cheng, liying.cheng@alibaba-inc.com
The three tracks are ranked and awarded separately.