Use of Algorithms to Assess Accuracy of Tweets — Implications

A very timely article in Slate discusses the use of algorithms to assess the accuracy of tweets.  The immediate application is helping law enforcement filter false reports from accurate ones in rapidly developing situations.  The takeaway:

A 2010 paper from Yahoo Research analyzed tweets from that year’s magnitude-8.8 Chile earthquake and found that legitimate news—such as word that the Santiago airport had closed, that a supermarket in Concepcion was being looted, and that a tsunami had hit the coastal town of Iloca—propagated on Twitter differently than falsehoods, like the rumor that singer Ricardo Arjona had died or that a tsunami warning had been issued for Valparaiso. One key difference might sound obvious but is still quite useful: The false rumors were far more likely to be tweeted along with a question mark or some other indication of doubt or denial.
Building on that work, the authors of the 2010 study developed a machine-learning classifier that uses 16 features to assess the credibility of newsworthy tweets. Among the features that make information more credible:
– Tweets about it tend to be longer and include URLs.
– People tweeting it have higher follower counts.
– Tweets about it are negative rather than positive in tone.
– Tweets about it do not include question marks, exclamation marks, or first- or third-person pronouns.
Several of those findings were echoed in another recent study from researchers at India’s Institute of Information Technology, who also found that credible tweets are less likely to contain swear words and significantly more likely to contain frowny emoticons than smiley faces.
In a new paper, to be published in the journal Internet Research next month, the authors of the Chile earthquake study—Carlos Castillo, Marcelo Mendoza, and Barbara Poblete—test out their algorithm on fresh data sets and find that it works pretty well. According to Meier, their machine-learning classifier had an AUC, or “area under the curve,” of 0.86. That means that, when presented with a random false tweet and a random true tweet, it would assess the true tweet as more credible 86 percent of the time. (An AUC of 1 is perfect; an AUC of 0.5 is no better than random chance.)
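
To make the classifier idea concrete, here is a minimal sketch, in Python with scikit-learn, of how a feature-based credibility model and its AUC might be computed. This is not the authors' actual system: the handful of features, the toy training examples, and the choice of a random forest are all illustrative assumptions.

    # Minimal sketch of a feature-based tweet-credibility classifier.
    # Illustrative only: the features, toy data, and model choice are
    # assumptions, not the Castillo/Mendoza/Poblete implementation.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    def extract_features(text, follower_count):
        """Map a tweet to a few of the credibility signals described above."""
        return [
            len(text),                # longer tweets tend to be more credible
            int("http" in text),      # presence of a URL
            follower_count,           # author's follower count
            text.count("?"),          # question marks signal doubt
            text.count("!"),          # exclamation marks
            int(":(" in text),        # frowny emoticon
            int(":)" in text),        # smiley emoticon
        ]

    # Toy labeled examples: (text, follower_count, is_credible).
    # A real training set would hold thousands of hand-labeled tweets.
    examples = [
        ("Santiago airport closed after quake http://example.com", 12000, 1),
        ("Officials confirm tsunami hit Iloca http://example.com", 8000, 1),
        ("Did Ricardo Arjona really die?? anyone know??", 150, 0),
        ("OMG tsunami warning for Valparaiso!!!", 90, 0),
    ]

    X = np.array([extract_features(t, f) for t, f, _ in examples])
    y = np.array([label for _, _, label in examples])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # AUC is the probability that a randomly chosen credible tweet is
    # scored above a randomly chosen false one (1.0 = perfect, 0.5 = chance).
    # The new paper discussed above reports an AUC of 0.86 on fresh data.
    scores = model.predict_proba(X)[:, 1]
    print("Training AUC:", roc_auc_score(y, scores))

Note that a real evaluation would score held-out tweets rather than the training set, as the paper does with its fresh data sets; scoring the training data, as here, only demonstrates the mechanics.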

One obvious implication is the general value (and verifiability) of such algorithms: their accuracy can be measured and reported, not just asserted.

More specifically, this research suggests that it may be possible to gain more information than we realize from crowd-sourced tweets.

  • Do people tweet about their experiences in the court system, and might it be possible to extract information about accessibility of, and public trust and confidence in, different courts?
  • What about their experience with legal aid programs?
  • Could we develop data about rejection rates for public benefit programs?
  • Could we get information about who is applying for public benefit programs?
  • Could we figure out whether some areas see higher eviction rates?

And so on.
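
As a purely hypothetical illustration of the questions above, a first pass might filter a tweet stream by court-related keywords and tally simple doubt or negativity signals per topic, along these lines. The keywords, scoring rule, and sample corpus are all invented for illustration, not drawn from any real program.

    # Hypothetical sketch: group court-related tweets by keyword and
    # tally a crude doubt/negativity score, echoing the signals above.
    # Keywords, scoring rule, and the sample corpus are invented.
    from collections import defaultdict

    COURT_KEYWORDS = ["#smallclaims", "#trafficcourt", "legal aid", "eviction"]

    def doubt_score(text):
        """Crude tally: question marks and frowns up, smiles down."""
        return text.count("?") + text.count(":(") - text.count(":)")

    def mentions_by_keyword(tweets):
        """Bucket tweets under each court-related keyword they mention."""
        buckets = defaultdict(list)
        for text in tweets:
            lower = text.lower()
            for kw in COURT_KEYWORDS:
                if kw in lower:
                    buckets[kw].append(text)
        return buckets

    corpus = [
        "Waited four hours at #trafficcourt and no interpreter :(",
        "Legal aid clinic was quick and actually helpful :)",
        "Is it true #smallclaims filings can be done online now?",
    ]

    for kw, tweets in mentions_by_keyword(corpus).items():
        avg = sum(doubt_score(t) for t in tweets) / len(tweets)
        print(f"{kw}: {len(tweets)} tweets, average doubt score {avg:+.1f}")

Any real deployment would need far more careful sentiment analysis and sampling, but even this crude aggregation shows how the credibility-signal idea could be repurposed for court and legal-aid feedback.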


1 Response to Use of Algorithms to Assess Accuracy of Tweets — Implications

  1. Claudia Johnson says:

    Interesting idea of creating Twitter hashtags to collect feedback on different systems/services/issues. What does the case law look like in terms of mining Twitter and Facebook data? Anyone know?
