
Contribution Details

Type Journal Article
Scope Discipline-based scholarship
Title PPLib: toward the automated generation of crowd computing programs using process recombination and auto-experimentation
Authors
  • Patrick De Boer
  • Abraham Bernstein
Item Subtype Original Work
Refereed Yes
Status Published in final form
Language
  • English
Journal Title ACM Transactions on Intelligent Systems and Technology
Publisher Association for Computing Machinery
Geographical Reach international
ISSN 2157-6904
Volume 7
Number 4
Article Number 49
Date 2016
Abstract Text Crowdsourcing is increasingly being adopted to solve simple tasks such as image labeling and object tagging, as well as more complex tasks in which crowd workers collaborate in processes with interdependent steps. Across this whole range of complexity, research has yielded numerous patterns for coordinating crowd workers in order to optimize crowd accuracy, efficiency, and cost. Process designers, however, often do not know which pattern to apply to the problem at hand when designing new crowdsourcing applications. In this article, we propose to solve this problem by systematically exploring the design space of complex crowdsourced tasks via automated recombination and auto-experimentation. Specifically, we propose an approach to finding the optimal process for a given problem: define the deep structure of the problem in terms of its abstract operators, generate all possible alternatives by (re)combining the abstract deep structure with concrete implementations from a Process Repository, and then establish the best alternative via auto-experimentation. To evaluate our approach, we implemented PPLib (pronounced “People Lib”), a program library that allows for the automated recombination of known processes stored in an easily extensible Process Repository. We evaluated our work by generating and running a plethora of process candidates in two scenarios on Amazon Mechanical Turk, followed by a meta-evaluation in which we examined the differences between the two evaluations. Our first scenario addressed text translation, where our automatic recombination produced multiple processes whose performance almost matched the benchmark set by an expert translation. Our second evaluation focused on text shortening: we automatically generated 41 crowd process candidates, among them variations of the well-established Find-Fix-Verify process. While Find-Fix-Verify performed well in this setting, our recombination engine produced five processes that repeatedly yielded better results. We close the article by comparing the two settings in which the Recombinator was used and empirically showing that the individual processes performed differently in the two settings, which led us to contend that there is no unifying formula, underscoring the necessity of recombination.
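The recombine-then-experiment pipeline the abstract describes can be illustrated with a minimal Scala sketch (PPLib itself is a Scala library). This is not PPLib's actual API; every name below (AbstractOperator, Implementation, repository, utility, and so on) is hypothetical and stands in for the corresponding concept in the paper.

```scala
// Hypothetical sketch of recombination plus auto-experimentation.
// None of these names come from PPLib's real API.
object RecombinationSketch extends App {
  // Abstract operators make up the "deep structure" of a problem.
  sealed trait AbstractOperator
  case object Create extends AbstractOperator // produce candidate texts
  case object Decide extends AbstractOperator // choose among candidates

  // A concrete crowd process implementing one abstract operator;
  // `run` stands in for posting tasks to a crowd platform.
  final case class Implementation(name: String,
                                  operator: AbstractOperator,
                                  run: String => String)

  // The Process Repository: all known concrete implementations.
  val repository = List(
    Implementation("collect-shorter-versions", Create, identity),
    Implementation("fix-patch",                Create, identity),
    Implementation("majority-vote",            Decide, identity),
    Implementation("rating-contest",           Decide, identity)
  )

  // Deep structure of, e.g., text shortening: first Create, then Decide.
  val deepStructure = List(Create, Decide)

  // Recombination: the cross product of the implementations matching each
  // slot yields every executable process candidate.
  val candidates: List[List[Implementation]] =
    deepStructure
      .map(op => repository.filter(_.operator == op))
      .foldRight(List(List.empty[Implementation])) { (impls, acc) =>
        for (impl <- impls; rest <- acc) yield impl :: rest
      }

  // Auto-experimentation: run every candidate on a sample input and keep
  // the one scoring highest under a task-specific utility function.
  def utility(result: String): Double = -result.length // shorter is better
  val best = candidates.maxBy { pipeline =>
    utility(pipeline.foldLeft("some input paragraph")((t, i) => i.run(t)))
  }

  println(s"${candidates.size} candidates; best: " +
    best.map(_.name).mkString(" -> "))
}
```

With a richer repository and a longer deep structure, the same cross product grows quickly; presumably this is how the 41 text-shortening candidates mentioned in the abstract arise.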
Digital Object Identifier 10.1145/2897367
Other Identification Number merlin-id:13326
Additional Information Special Issue on Crowd in Intelligent Systems