A good platform must always evolve and change to suit the needs of its users. This is why it's refreshing to see Amazon Mechanical Turk, the original online micro-workplace, release its Categorization App to better help its workers and providers work with each other.
Article Contributed by Seth Weinstein from Tiny Work
For those unfamiliar, Mechanical Turk is an online marketplace where job providers (Requesters) post Human Intelligence Tasks (HITs) for users of the platform (Workers) to complete. It sounds complicated, but it just means that Workers are given tasks that computers can't quite do yet, such as identifying humans in a photo. Humans complete these simple tasks and get paid a small amount for doing so.
The new Categorization App is a step forward in that it takes the half decade or more of experience Amazon has had with these tasks and addresses some of the most pressing issues.
Namely, the app makes it easier for Requesters to set up a series of tasks, simplifies the process of exporting the results, and takes the guesswork out of task pricing, worker qualification, and quality assurance. It then wraps everything in a clean interface that displays well on both computer screens and mobile devices.
The end result makes the platform far more accessible for those who may not have dealt with it before, or for those who have but were unimpressed.
It's a great idea, to be sure.
I checked out Mechanical Turk before this app was released, and the apparent learning curve was admittedly intimidating. I don't think of myself as someone at all unfamiliar with computers, but the mTurk interface was enough to make even me have second thoughts about the user-friendliness of the platform.
I gradually obtained a better understanding of the marketplace, but I still see a huge advantage in having a universal app that is both easy to read and easy to program from the get-go. Both Workers and Requesters will be more apt to try the mTurk marketplace if they know that they won't have to go on a reading adventure just to get the results they're looking for.
Additionally, the app places more weight on Master Workers, users who have passed qualification tests showing they are better suited for particular tasks. The app automatically attempts to have at least two Masters working on a given task, since matching answers from independent Workers typically indicate accuracy. This encourages regular Workers to take the tests and get qualified so they can work on these tasks, and the end result is a more talented Worker pool for Requesters to draw from.
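The agreement logic described above can be sketched in a few lines. This is a hypothetical illustration of agreement-based quality assurance, not Amazon's actual implementation; the function name, the two-vote threshold, and the "ask another Worker" fallback are all assumptions for the sake of the example:

```python
from collections import Counter

def judge_answers(answers, min_agreement=2):
    """Accept the answer given by at least `min_agreement` Workers;
    otherwise signal that another judgment is needed.
    (Hypothetical sketch of agreement-based QA, not mTurk's real logic.)"""
    if not answers:
        return None  # no judgments collected yet
    answer, votes = Counter(answers).most_common(1)[0]
    if votes >= min_agreement:
        return answer  # consensus reached; accept this label
    return None  # no consensus; request another Master's judgment
```

So two matching Masters settle the task immediately (`judge_answers(["cat", "cat"])` returns `"cat"`), while a disagreement (`judge_answers(["cat", "dog"])` returns `None`) would trigger a further judgment before the result is accepted.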
And, of course, at the bottom line is the fact that a streamlined HIT process provides more HITs, which is more money for Amazon. Well played.
If this app takes off, and I expect it will, we'll soon see more like it that simplify the process of posting and completing tasks, and the broader microtasking landscape will feel the effects.
If you've had less-than-stellar experiences with MTurk in the past, let us know in the comments below what you think could stand to be improved.