
I am one of the members managing EvalAI. We started EvalAI to meet the demands of our own research lab (https://github.com/batra-mlp-lab). In 2016 and 2017 we ran the VQA competition on CodaLab, and it was not a pleasant experience: submissions would get stuck, evaluation was slow, and so on. We had the in-house expertise (two grad students) to build something better, manage the entire stack ourselves, and easily add custom features (code upload, custom metrics, private/remote evaluation, human-in-the-loop evaluation).

It turns out a lot of folks in the computer vision community were looking for something similar (130+ challenges, 30+ organizations). As we matured, several companies (eBay, IBM, Mapillary) cloned EvalAI to host their own versions. They often contributed their features back, so having an open-source version has been a net positive overall. It also allows for faster iteration and experimentation (human-in-the-loop evaluation, challenge entries to demos). For instance, PapersWithCode recently collaborated with us to deep-link leaderboard results on EvalAI with their own leaderboard tables (https://twitter.com/paperswithcode/status/134108528597578957...). I agree that the comparison to Kaggle is a bit dated, and we have removed it (https://github.com/Cloud-CV/EvalAI/pull/3502). :-)
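To make the "custom metrics" point concrete, here is a minimal sketch of what an organizer-supplied evaluation script for a challenge can look like: a function that reads the ground-truth annotations and a participant's submission, computes a metric, and returns leaderboard entries. The function name, signature, and return shape here are illustrative assumptions, not EvalAI's exact interface; check the EvalAI documentation for the real contract.

```python
import json

def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """Hypothetical custom-metric script in the spirit of EvalAI's
    organizer-supplied evaluation scripts. The exact interface EvalAI
    expects may differ; this only shows the general pattern of
    "compare submission to ground truth, return metrics".
    """
    with open(test_annotation_file) as f:
        truth = json.load(f)   # e.g. {"img1": "cat", "img2": "dog"}
    with open(user_submission_file) as f:
        preds = json.load(f)   # same key -> predicted label mapping

    # A simple custom metric: per-example accuracy over the ground truth.
    correct = sum(preds.get(key) == label for key, label in truth.items())
    accuracy = 100.0 * correct / max(len(truth), 1)

    # One dict per leaderboard split/phase, keyed by metric name.
    return {"result": [{phase_codename: {"Accuracy": accuracy}}]}
```

Because the metric lives in a plain script the organizer controls, swapping in a domain-specific measure (VQA accuracy, mAP, human judgments) is just a code change rather than a platform feature request.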

