Software run limit on test data set?

Hi all,

I am participating in the shared task Hyperpartisan News Detection (SemEval 2019).
Thanks to help from @johanneskiesel, I managed to run and evaluate my software on the training data set.
When I run the software on the test data set, however, I cannot download the evaluation results.
From a discussion here, I assume that the run has to be manually reviewed by a task moderator, who then enables the download of the evaluation results.

I’d like to benchmark multiple models on the test data set.
Is there a limit on how many different software submissions (models) I am allowed to run on the test data set?

Best Regards

Hi David,

There is no longer a submission limit for the task. Just write me an email and I'll review your submissions.

I have now done so for the submission you already made, so you should be able to see its results.
