11. our work
quantitative: motivations, large-scale quantitative practices, data mining
12. Twitter study (CSCW 2010)
• What type of content do people post on Twitter?
• 350 users, 3,500 of their “tweets”
• Developed content categories, coded messages
13. users clustered (CSCW 2010)
• Informers: (mainly) sharing information (20%)
• Meformers: (mainly) talking about themselves (80%)
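The code-then-cluster step above can be sketched as a simple rule over each user's coded tweets. This is an illustrative assumption, not the study's actual method: the category labels ("information sharing", "me now") and the 50% threshold are hypothetical stand-ins for the paper's codebook and clustering.

```python
from collections import Counter

def classify_user(tweet_labels, threshold=0.5):
    """Label a user as 'Informer' or 'Meformer' from coded tweets.

    tweet_labels: one content-category code per tweet. The category
    names and the threshold are illustrative, not the study's codebook.
    """
    counts = Counter(tweet_labels)
    info_share = counts["information sharing"] / len(tweet_labels)
    return "Informer" if info_share >= threshold else "Meformer"

# A user who mostly tweets about themselves lands in the Meformer cluster.
labels = ["me now", "me now", "information sharing", "me now"]
print(classify_user(labels))  # → Meformer
```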
30. research questions
can Twitter content around broadcast news events inform journalistic inquiry?
what insights and analyses can we enable through visual analytic tools?
[with postdoctoral fellow Nick Diakopoulos]
32. supporting analysis
direct attention to relevant information
automatic content analysis for filtering
– relevance
– uniqueness / novelty
– sentiment
– keyword extraction
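The four filtering signals above can be sketched with toy implementations. Everything here is an illustrative assumption, not the system's actual pipeline: the function names, the bag-of-words cosine similarity for novelty, the tiny sentiment lexicon, and the frequency-based keyword ranking are all stand-ins for the real content-analysis components.

```python
from collections import Counter
import math

STOPWORDS = {"the", "a", "is", "was", "to", "and", "of", "in"}

def tokens(text):
    """Lowercase whitespace tokens with stopwords removed."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

def relevance(msg, event_terms):
    """Fraction of event keywords mentioned in the message."""
    return len(set(tokens(msg)) & set(event_terms)) / len(event_terms)

def novelty(msg, seen_msgs):
    """1 - max cosine similarity to previously seen messages."""
    def cos(a, b):
        ca, cb = Counter(a), Counter(b)
        dot = sum(ca[w] * cb[w] for w in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0
    t = tokens(msg)
    return 1.0 - max((cos(t, tokens(m)) for m in seen_msgs), default=0.0)

def sentiment(msg, pos={"great", "amazing"}, neg={"awful", "boring"}):
    """Crude lexicon score: positive minus negative word counts."""
    t = tokens(msg)
    return sum(w in pos for w in t) - sum(w in neg for w in t)

def keywords(msgs, k=3):
    """Top-k most frequent non-stopword terms across messages."""
    counts = Counter(w for m in msgs for w in tokens(m))
    return [w for w, _ in counts.most_common(k)]
```

In a filtering interface, such scores would be combined to rank or hide messages; real systems would use trained models and curated lexicons rather than these hand-rolled heuristics.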
35. how to evaluate?
directly evaluate the output of the algorithms (quantitative)
deep, extensive evaluation of users’ interaction with the system (qualitative)
read more: Olsen (UIST ’07), Naaman (MTAP ’12)
36. Vox evaluation goals
• How effective is it for generating story ideas?
• What kinds of insights and analyses are supported?
• What are its shortcomings, and how are its features used?
37. Vox evaluation: framing
• “develop two story pitches for the event”
• open-ended questionnaires, content analysis
• 18 participants
39. overview
Vox Civitas (VAST 2010)
SRSR
Multiplayer
40. research question
how can we help journalists identify reliable, knowledgeable sources for remote breaking news events?
[with postdoctoral fellows Nick Diakopoulos, Munmun De Choudhury]
60. experience
Her story allows us to see what was lost, [and] gained, in the political, economic and social transformations of the 18th and 19th centuries.