Almost an expert: The effects of rubrics and expertise on perceived value of crowdsourced design critiques

Title: Almost an expert: The effects of rubrics and expertise on perceived value of crowdsourced design critiques
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Yuan, A., Luther, K., Krause, M., Vennix, S. I., Dow, S. P., & Hartmann, B.
Published in: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing
Date Published: 02/2016
ISBN Number: 978-1-4503-3592-8

Expert feedback is valuable but hard for many designers to obtain. Online crowds can provide fast and affordable feedback, but workers may lack relevant domain knowledge and experience. Can expert rubrics address this issue and help novices provide expert-level feedback? To evaluate this, we conducted an experiment with a 2x2 factorial design. Student designers received feedback on a visual design from both experts and novices, who produced feedback using either an expert rubric or no rubric. We found that rubrics helped novice workers provide feedback that was rated nearly as valuable as expert feedback. A follow-up analysis of writing style showed that student designers found feedback most helpful when it was emotionally positive and specific, and that a rubric increased the occurrence of these characteristics in feedback. The analysis also found that expertise correlated with longer critiques, but not with the other favorable characteristics. An informal evaluation indicates that experts may instead have produced value by providing clearer justifications.
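The 2x2 factorial design described above crosses two factors: feedback-provider expertise (expert vs. novice) and rubric use (rubric vs. none). As an illustration only, the sketch below computes cell means, the two main effects, and the interaction on invented helpfulness ratings; the numbers and the 7-point scale are assumptions, not data from the study.

```python
# Hypothetical sketch of a 2x2 factorial analysis: expertise x rubric.
# All ratings below are invented for illustration (1-7 helpfulness scale).
from statistics import mean

ratings = {
    ("expert", "rubric"): [6, 6, 7, 5, 6],
    ("expert", "none"):   [5, 6, 6, 5, 5],
    ("novice", "rubric"): [5, 6, 5, 6, 5],  # rubric lifts novices near expert level
    ("novice", "none"):   [3, 4, 3, 4, 3],
}

# Mean rating within each of the four cells.
cell_means = {cond: mean(vals) for cond, vals in ratings.items()}

def main_effect(factor_index, level_a, level_b):
    """Difference between marginal means of two levels of one factor."""
    a = mean(m for cond, m in cell_means.items() if cond[factor_index] == level_a)
    b = mean(m for cond, m in cell_means.items() if cond[factor_index] == level_b)
    return a - b

expertise_effect = main_effect(0, "expert", "novice")
rubric_effect = main_effect(1, "rubric", "none")

# Interaction: does the rubric help novices more than it helps experts?
interaction = ((cell_means[("novice", "rubric")] - cell_means[("novice", "none")])
               - (cell_means[("expert", "rubric")] - cell_means[("expert", "none")]))
```

In a full analysis these effects would be tested with a two-way ANOVA; the hand-computed marginals here just show what each factorial effect measures.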

