Thanks to the hard work of my coauthors @x0xMaximus and @andrewsu , I was able to nab the award for the best presentation at the Seventh International Biocuration Conference from the International Society of Biocuration. The slides for the presentation and the poster are available on SlideShare.
I think the presentation garnered the interest it did because many of the people in the audience had heard the term “crowdsourcing” before, but had never seen a real example of a specific application – let alone one in science. I was surprised by the number of people I spoke to who had no idea what Amazon Mechanical Turk was – never mind that it might be applicable to some of the problems they were working on. We had a decent result to talk about, but much more importantly, we taught the audience about a powerful new tool that they might be able to use in their own work.
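For anyone who has never seen what a microtask actually looks like in practice, here is a minimal sketch of posting a single mention-tagging task (a “HIT”) to the Mechanical Turk sandbox with Python’s boto3 library. To be clear, this is purely illustrative and not our pipeline: the sentence, reward, layout, and field names are invented for the example.

```python
# Minimal sketch: post one mention-tagging microtask ("HIT") to the MTurk sandbox.
# Everything task-specific here (sentence, reward, field names) is a placeholder.
import boto3

# Sandbox endpoint: workers are not paid and requesters are not charged.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A toy task: ask workers to list the disease names they see in one sentence.
sentence = "Mutations in BRCA1 are associated with hereditary breast cancer."

question_xml = f"""
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html>
      <body>
        <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
        <crowd-form>
          <p>Sentence: {sentence}</p>
          <p>List any disease names mentioned in the sentence (comma separated):</p>
          <crowd-input name="disease_mentions" required></crowd-input>
        </crowd-form>
      </body>
    </html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Find disease names in a sentence",
    Description="Read one sentence and list the disease names it mentions.",
    Keywords="biocuration, annotation, text",
    Reward="0.05",                     # USD per assignment
    MaxAssignments=3,                  # ask several workers so answers can be aggregated
    LifetimeInSeconds=3600,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)

print("HIT created:", response["HIT"]["HITId"])
```

Posting one HIT is the easy part. The real work is scaling this to thousands of sentences, asking multiple workers per task, and aggregating their (noisy) answers into something a curator can trust – which is exactly where the iteration I describe next comes in.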
For those who do want to try scientific applications of microtask crowdsourcing, I’d like to emphasize that it’s probably not going to be an easy process. The result we presented came from the third iteration of our system and represents several months of developer time. While resources are emerging that should make it much faster to get started (e.g. [1-4]), expect to engage in an iterative cycle to get your system dialed in!
If you do want to give crowdsourcing a try for biocuration or other scientific objectives, (1) we would love to hear about it, and (2) it might be worth taking a quick look at our review of the domain [5]. Microtask systems such as the one we worked with here are just one of many ways that scientific challenges can be opened up to much broader communities.
1. Our code: mark2cure
2. Soltilab mention tagger for CrowdFlower
3. GATE Crowdsourcing plugin
4. Crowd Watson from IBM
5. Good, Benjamin M., and Andrew I. Su. “Crowdsourcing for bioinformatics.” Bioinformatics 29.16 (2013): 1925–1933.