We strive to provide our users with results they can rely on to make data-driven decisions and improve their unique KPIs.

A question we often hear is whether the participants in our experiments will perform their tasks well and return reliable results. Krzysztof Z. Gajos, a Computer Science professor at the Harvard Paulson School of Engineering, addressed this question in a published paper, where he reports:

"All statistically significant results detected in the lab were also observed on MTurk, the effect sizes were similar, and there were no significant differences in the raw task completion times, error rates, measures of performance consistency, or the rates of utilisation of the novel interface mechanisms introduced in each experiment."

Therefore we can say with confidence that we provide reliable data, double-checked with our re-calibration sessions after every experiment.

Did this answer your question?