Software run limit on test data set?

Hi all,

I am participating in the shared task Hyperpartisan News Detection (SemEval 2019).
Thanks to help from @kiesel, I managed to run and evaluate my software on the training data set.
When I run the software on the test data set, however, I see that I cannot download the evaluation results.
From a discussion here (https://www.tira.io/t/how-to-submit/354/4), I assume that the run has to be manually reviewed by a task moderator who then enables the evaluation results download.

I’d like to benchmark multiple models on the test data set.
Is there a limit on how many different software submissions (models) I am allowed to run on the test data set?

Best Regards
David

Hi David,

There is no longer a limit on submissions for the task. Just write me a mail and I’ll review your submissions.

I have now done so for the submission you already made. You should be able to see it now.

Regards,
Johannes
