Narratives in crowdsourced evaluation of visualizations: A double-edged sword?

Evanthia Dimara, Anastasia Bezerianos, Pierre Dragicevic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

We explore the effects of providing task context when evaluating visualization tools using crowdsourcing. We gave crowd-workers i) abstract information visualization tasks without any context, ii) tasks where we added semantics to the dataset, and iii) tasks with two types of backstory narratives: an analytic narrative and a decision-making narrative. Contrary to our expectations, we did not find evidence that adding data semantics increases accuracy, and further found that our backstory narratives can even decrease accuracy. Adding dataset semantics can, however, increase attention and provide subjective benefits in terms of confidence, perceived easiness, task enjoyability, and perceived usefulness of the visualization. Nevertheless, our backstory narratives did not appear to provide additional subjective benefits. These preliminary findings suggest that narratives may have complex and unanticipated effects, calling for more studies in this area.
Original language: English
Title of host publication: CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
Pages: 5475-5484
Number of pages: 10
ISBN (Print): 978-1-4503-4655-9
DOIs
Publication status: Published - 2 May 2017

Keywords

  • Crowdsourcing
  • Decision making
  • Evaluation
  • Information visualization
  • Instructions
  • Narrative
