How The Guardian is pioneering data journalism with free
tools
http://www.niemanlab.org/2010/08/how-the-guardian-is-pioneering-data-journalism-with-free-tools/
The Guardian uses public, read-only Google Spreadsheets to
share the data they’ve collected, which require no special tools for viewing
and can be downloaded in just about any desired format. They post massive spreadsheets
and data graphs for all to see and often just let the data speak for itself.
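The "no special tools" part works because any Google Sheet shared publicly (or with "anyone with the link") can be fetched from a predictable export URL in several formats. A minimal sketch of that mechanism, assuming a publicly shared sheet (the sheet ID below would be a placeholder, not one of the Guardian's):

```python
import urllib.request

def sheet_export_url(sheet_id, fmt="csv"):
    """Build the public export URL for a Google Sheet.
    Only works for sheets shared publicly or 'anyone with the link'."""
    return f"https://docs.google.com/spreadsheets/d/{sheet_id}/export?format={fmt}"

def download_sheet(sheet_id, fmt="csv"):
    """Fetch the shared sheet's contents as text (requires network access)."""
    with urllib.request.urlopen(sheet_export_url(sheet_id, fmt)) as resp:
        return resp.read().decode("utf-8")
```

Calling `download_sheet("SHEET_ID")` returns raw CSV text that any spreadsheet app or data library can open directly, which is exactly the low-barrier access the article describes.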
This method and its popularity show that a lot of
people want the raw information to speak for itself; it need not be dressed
up to catch attention. The data displayed has often drawn good traffic, with
the Data Blog logging a million hits a month during the recent election
coverage.
This is an interesting look at how The Guardian displays
data, and at why data journalism is relevant and noteworthy. The Guardian will
likely be a focus of research in the project.
Four crowdsourcing lessons from the Guardian’s (spectacular)
expenses-scandal experiment
http://www.niemanlab.org/2009/06/four-crowdsourcing-lessons-from-the-guardians-spectacular-expenses-scandal-experiment/
The Guardian sifts through the massive amount of data they
deal with via tens of thousands of volunteers who are willing to help them. It’s
a rather interesting case of crowdsourced Data Journalism.
The four points used to keep the system working are: your workers
are unpaid, so make it fun; public attention is fickle, so launch immediately;
speed is mandatory, so use a framework; and participation will come in one big
burst, so have servers ready.
The Guardian clearly has a good system in place, and again
they should be a focus of my future efforts.
Hacks and Hackers talk computational journalism
http://www.stanforddaily.com/2015/10/28/hacks-and-hackers-talk-computational-journalism/
This covers how technology is used to enhance news narratives.
Computational journalism uses data to find interesting trends, both to generate
stories and to complement them, such as through graphics. The meeting “H/H
@Stanford: Computational journalism with CIR, Vocativ and SmartNews” goes over
that subject.
Data visualization uses maps and graphs to display
information about a subject and help the reader understand it, though it can
introduce generalizations and misleading impressions.
The SmartNews app chooses news recommendations for its users
by using an “exploration” mode, choosing articles outside of the user’s
preferences in order to enlarge their knowledge of the world. This type of
model contrasts with the “exploitation” model, which only recommends articles
within the user’s preferences and is the norm for most such systems.
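The exploration/exploitation trade-off described here is the classic bandit framing from recommender systems. A minimal epsilon-greedy sketch of the idea, assuming a toy article structure and an epsilon value of my own choosing (this is an illustration, not SmartNews' actual algorithm):

```python
import random

def recommend(user_prefs, all_articles, epsilon=0.2):
    """Toy epsilon-greedy recommender: with probability epsilon,
    'explore' by picking an article outside the user's known topic
    preferences; otherwise 'exploit' by picking from within them."""
    outside = [a for a in all_articles if a["topic"] not in user_prefs]
    inside = [a for a in all_articles if a["topic"] in user_prefs]
    if outside and (random.random() < epsilon or not inside):
        return random.choice(outside)  # exploration: broaden the user's horizons
    return random.choice(inside)       # exploitation: match known tastes
```

A pure "exploitation" system is this function with `epsilon=0`: it never surfaces anything outside what the user already reads.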
Finally, the Center for Investigative Reporting (CIR) described
work mining the data from the National Missing and Unidentified Persons System
(NamUs) to build a more user-friendly website to help solve cold cases.
This put forth a lot of interesting concepts about what this
tech can and does do. I'm not sure it's all closely related enough to include later
on, but it's all noteworthy nonetheless. The solving of cold cases is especially noteworthy;
I need to look into that and related work later.
Is that a fact? Checking politicians' statements just got a
whole lot easier
http://www.theguardian.com/commentisfree/2016/apr/19/is-that-a-fact-checking-politicians-statements-just-got-a-whole-lot-easier
ClaimBuster is a program that searches sentences for key
words and structures that are commonly found in factual statements. It flagged a
LOT of (Australian) political statements that rated as either untrue or
otherwise disconnected from factual discussion.
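The "key words and structures" idea can be sketched as a toy check-worthiness scorer. The cue list below is invented for illustration, and the real ClaimBuster uses a trained statistical model rather than a hand-written pattern, but the sketch shows the basic shape of the approach:

```python
import re

# Hypothetical cues typical of checkable factual claims:
# numbers, percentages, magnitudes, trend verbs, superlatives.
FACTUAL_CUES = re.compile(
    r"\b(\d[\d,.]*%?|percent|million|billion|increased|decreased|"
    r"doubled|highest|lowest|since \d{4})\b",
    re.IGNORECASE,
)

def check_worthiness(sentence):
    """Score a sentence by counting cues typical of factual claims."""
    return len(FACTUAL_CUES.findall(sentence))

def flag_claims(sentences, threshold=1):
    """Return the sentences worth sending on to human fact-checkers."""
    return [s for s in sentences if check_worthiness(s) >= threshold]
```

Note that this only finds sentences that *look* checkable; verifying whether they are actually true is the separate, harder step that still needs a fact-checker.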
This has interesting implications for the future of
fact-checking and for the relationship between politics and data journalism. I wonder
if we will ever reach the point where politicians can’t bullshit us anymore
because their words will be fact-checked in real time as they say them... I can
dream.