
Sponsored by:

AICS 2018 Workshop Challenge Problem:
To Believe or Not to Believe, Detecting Unreliable News in the Information Age
In the Information Age, consumers have unprecedented access to news and other
forms of non-fiction information. Never before has a system existed by which a
wide diversity of information could be delivered to so many people so easily and
so quickly. Internet-delivered news has the potential to be an enormous boon to
society, as it allows people from all walks of life equal access to information.
However, it comes with a disturbing downside. While a diversity of views benefits
public dialogue, malicious adversaries and other entities can use the system to
propagate false, misleading, or otherwise unreliable information to advance their
own political, ideological, or financial agendas.

The use of news media for propaganda and disinformation operations is a serious
threat to society. The technology to automatically and reliably detect unreliable news
articles on the Internet does not currently exist. The goal of this challenge is to spur
the development of novel automated methods for classifying Internet news articles as
unreliable or not.[1] At its core, this Challenge is a two-class discrimination problem.
For the purpose of this Challenge, the categorization of an individual article as
reliable or unreliable is based solely on the designations found within Melissa
Zimdars’ (Professor of Communications, Merrimack College) Open Source Unreliable
Media List (http://www.opensources.co/).

Toward this end, we invite researchers and practitioners to submit new unreliable-news
detection solutions that leverage machine learning and/or Big Data analytics.
For purposes of this Challenge, a previously existing corpus of news articles
from the Internet is available.[2] The articles are marked with labels signifying
whether each article is reliable or unreliable, as determined by the Open Source
Unreliable Media List. This dataset can be accessed by sending a request via email to
aics@ll.mit.edu. Participants can use this dataset to construct and test their
solutions and document their efforts in a paper that conforms to the AICS Workshop’s

regular paper format.[3] Submitted challenge papers will be reviewed as full papers,
and accepted challenge papers will be presented in their own session at the AICS 2018
Workshop; the accepted challenge papers will be published as part of the workshop’s
proceedings.

[1] Although the corpus of data for this Challenge consists of previously collected,
randomly selected Internet content, the purpose of this academic Challenge is not
to characterize any of the data as either true or false, accurate or inaccurate. The
designation of a specific article as reliable or unreliable is independent of, and
immaterial to, the core two-class discrimination problem posed by the Challenge.

[2] The pre-existing corpus for this Challenge fits within the Fair Use exception:
it consists of published material that is freely and openly available, it purports
to be factual, it represents a small amount of the content available, and its use
is for non-profit research and educational purposes.

[3] Use of this corpus of articles is for purposes of this Challenge only, and it
may not be duplicated or forwarded.

Task and Dataset


The task in this Challenge is to build automatic media-reliability detectors
using novel features and state-of-the-art machine learning algorithms.
The articles were collected from a variety of news sources on the Internet. As noted,
articles are labeled as reliable or unreliable based on Melissa Zimdars’ Open Source
Unreliable Media List (http://www.opensources.co/).[4]

[4] We make no claims about the accuracy of the Open Source Unreliable Media List,
and use of the List does not constitute validation of its representations about
individual articles or, derivatively, their authors, the web sites on which they are
found, or their publishers.

Training and test datasets are provided as .csv files. The datasets were collected
from what the Open Source Unreliable Media List designates as reliable and unreliable
news sources.

Training (training_df.csv) and test (test_df.csv) data sets:

• uid: Unique identifier of news article
• title: Title of news article
• text: Body text of news article
• normalizedText: Cleansed body text of news article (isolated symbols removed,
  simple possessive-apostrophe normalization performed, and dashes and other word
  connectors removed)

Training label (training_label.csv) and test label (test_label.csv) data sets
(a loading sketch follows the list):

• uid: Unique identifier of news article
• label: Ground truth of news article
  o 0: reliable article
  o 1: unreliable article
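
For orientation, here is a minimal Python sketch of loading and joining these files.
It is a sketch only, assuming pandas is installed and the .csv files are in the
working directory; the file and column names are taken from the description above:

    import pandas as pd

    # Load the article text and the corresponding ground-truth labels.
    train_df = pd.read_csv("training_df.csv")
    train_labels = pd.read_csv("training_label.csv")

    # Join articles to their labels on the shared uid column.
    train = train_df.merge(train_labels, on="uid")

    print(train[["uid", "title", "label"]].head())
    print(train["label"].value_counts())  # 0 = reliable, 1 = unreliable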

Baseline Solutions
We provide several baseline solutions to the problem, computed using publicly
available machine learning toolkits (scikit-learn’s logistic regression, gradient
boosting, and random forest classifiers) with default parameters. Our baseline
systems use the following features (a sketch of the feature extraction follows
the list).


• Sentiment analysis of title and body text of news article
  o Subjectivity of title and body text of news article
    - 0.0 is very objective
    - 1.0 is very subjective
• Number of sentences in body text of news article
• Number of words in body text of news article
• Number of words in title
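
As a concrete illustration, the sketch below computes these five features for one
article. The challenge text does not name a sentiment library, so TextBlob is an
illustrative assumption here; any tool that reports subjectivity on a 0.0–1.0 scale
would serve:

    from textblob import TextBlob

    def extract_features(title, text):
        """Compute the five baseline features for a single article."""
        blob = TextBlob(text)
        return [
            TextBlob(title).sentiment.subjectivity,  # subjectivity of title (0.0-1.0)
            blob.sentiment.subjectivity,             # subjectivity of body text (0.0-1.0)
            len(blob.sentences),                     # number of sentences in body text
            len(text.split()),                       # number of words in body text
            len(title.split()),                      # number of words in title
        ]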

Performance of each baseline is shown in Fig. 1 and Tables 1 and 2. The training
dataset was split into training (67% of all training data) and test (33% of all
training data) subsets using scikit-learn. As the figure and tables show, the random
forest classifier outperforms the other classifiers in terms of AUC, the main
performance evaluation metric for this challenge, as well as in precision and recall.
(A sketch of this split-and-evaluate procedure appears after Table 2.)

Figure 1: Receiver Operating Characteristic

Classifier            Area Under Curve (AUC)
Logistic Regression   0.6808
Gradient Boosting     0.7526
Random Forest         0.8632

Table 1: Area Under Curve

                      Precision           Recall
Classifier            Micro     Macro     Micro     Macro
Logistic Regression   0.7524    0.2878    0.7524    0.2247
Gradient Boosting     0.7663    0.2892    0.7663    0.2427
Random Forest         0.8338    0.4219    0.8338    0.3116

Table 2: Precision and Recall of Baseline Algorithms
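
The sketch below reproduces this split-and-evaluate procedure, reusing the train
frame and the extract_features helper from the earlier sketches; the random_state
value is an illustrative assumption, as the challenge text does not specify one:

    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.metrics import roc_auc_score, precision_score, recall_score

    # Build the feature matrix and label vector from the merged training frame.
    X = [extract_features(t, b) for t, b in zip(train["title"], train["text"])]
    y = train["label"]

    # 67%/33% split of the training data, as described above.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

    classifiers = {
        "Logistic Regression": LogisticRegression(),
        "Gradient Boosting": GradientBoostingClassifier(),
        "Random Forest": RandomForestClassifier(),
    }

    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]  # probability of the unreliable class
        preds = clf.predict(X_te)
        print(name)
        print("  AUC:", roc_auc_score(y_te, scores))
        print("  Precision micro/macro:",
              precision_score(y_te, preds, average="micro"),
              precision_score(y_te, preds, average="macro"))
        print("  Recall micro/macro:",
              recall_score(y_te, preds, average="micro"),
              recall_score(y_te, preds, average="macro"))

Note that for a two-class problem, micro-averaged precision and recall both reduce
to overall accuracy, which is why the Micro columns of Table 2 coincide.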

Submission
Challenge submissions will be judged and selected for inclusion in the AICS workshop
as full published papers. Evaluation criteria such as novelty, practicality/efficiency,
technical quality, significance/impact, and presentation will be considered. Please
note that challenge paper submissions have the same submission deadlines as regular
papers.

AICS 2018 Workshop “To Believe or Not to Believe, Detecting Unreliable News in the
Information Age” Challenge Problem paper important dates:

• Papers due: 3 November 2017
• Accepted papers announced: XX XXX 2017
• Camera-ready papers due: XX XXX 2017

Challenge problem winners will be announced at the workshop.[5] The baseline
solutions described above are intended to serve as a guide that participants can use
to gauge the performance of their techniques.

This challenge is the second challenge offered by the AICS workshop. Workshop
challenges seek to stimulate research into the emerging field of artificial intelligence
applied to cyber security.

[5] Because the challenge dataset was collected by MIT researchers, MIT personnel
are not eligible to participate.

Citing
If you use this dataset in a publication, please cite the following paper: Authors,
“NAME OF PAPER”, 2018.

@InProceedings{unreliablenewsdata2017,
  author    = XXX,
  title     = XXX,
  booktitle = XXX,
  year      = 2018,
  month     = feb,
  publisher = {AAAI}
}