Submission CLEF 2024

Can we upload only the prediction files in .tsv form instead of the model or the code? Thank you in advance.

Hi!

Yes, when you click on submit there is also the option to upload the predictions. For reproducibility, it is of course much better if you upload the model or code. But you can also upload the predictions now and upload the model later (we will specifically point out submissions with models in the overview paper).

We have uploaded some submission files. Do they have to be reviewed by the organizers? We would like to learn if our submissions are correctly formatted. Why can’t we view the results for the validation set?

Hi, I checked your submissions, but I could only find submissions to the test set (for which you indeed cannot view the results yet).

I reviewed these submissions and the format is correct.

But did you also submit on the validation set? I cannot see it. Can you try again?

We are encountering an error in the hierocles-of-alexandria upload. The uploaded files do not appear, and it keeps loading endlessly.

Dear all,

I found the problem; it seems to be a rendering bug that occurs while a submission is being evaluated. I will likely be able to fix this in the next hour.

Sorry for the inconvenience.

best regards,

Maik

Hey, for the Human Value Detection 2024 task I don't see any option to just upload a .tsv file of predictions. I have uploaded my .tsv files as runs instead.

Sorry for the confusion, but uploading a run (which is what you did) means uploading the predictions .tsv.

Dear @ch.christodoulou,

The problem is now resolved, sorry for the inconvenience!
The problem was caused by an evaluator that could not be scheduled in the cluster and therefore hung. Along the way I also found and fixed a bug that occurred when an approach already had evaluations on one dataset but a pending evaluation on another dataset. :slight_smile:

Thanks and best regards,

Maik


We have uploaded a submission file for the Ideology and Power Identification tasks. We would like to know if our submission is correctly submitted as we don’t get an automatic formal verification notice.

Dear @jirarn,

Thanks for reaching out!
I reviewed your submission and everything looks good, thank you for your participation!

Best regards,

Maik

Estonia is not provided in the power test set, but it should be in both the orientation and power datasets according to the README.txt file provided with the training data. Is this something we can disregard?

This is a mistake in the README file. We only have orientation data from the Estonian parliament for the shared task. Please disregard it.

Thank you for reporting!

We have uploaded a submission file for the Ideology and Power Identification tasks. We would like to know if our submission is correctly submitted as we don’t get an automatic formal verification notice.

Dear Simhadri,

Thanks for your participation!
I reviewed your submission file and everything looks good.

Best regards,

Maik

Hi! We would like to know whether our (team PolicyParsingPanthers) docker submissions have worked correctly.

When following the guide for working with large models we get an error, so we could not test mounting hf models locally. The issue, from what I can ascertain, is that since I am on Windows, a ":" ends up in the "C:" at the beginning of the file path to the downloaded huggingface model; this breaks the tira code, since LocalExecutionIntegration does not expect the extra ":". I therefore had to --skip-local-test and would like to know if it worked anyway.
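The colon issue described above can be sketched like this (a hypothetical illustration in Python; `split_mount` is an invented helper for this sketch, not the actual `LocalExecutionIntegration` code): a docker-style `host:container` mount spec contains a colon as separator, and on Windows the host path itself starts with a drive-letter colon such as `C:`, so a naive split on `":"` produces too many parts.

```python
# Illustrative sketch only: why splitting a "host:container" mount
# spec on every colon breaks on Windows, and how splitting on the
# LAST colon avoids the drive-letter problem.

def split_mount(spec: str) -> tuple[str, str]:
    """Split a 'host:container' mount spec on the last colon, so a
    Windows drive letter ('C:') in the host path is not mistaken
    for the separator."""
    host, sep, container = spec.rpartition(":")
    if not sep:
        raise ValueError(f"not a valid mount spec: {spec!r}")
    return host, container

spec = "C:\\models\\hf:/root/.cache/huggingface"

# Naive split yields three parts because of the drive-letter colon:
print(spec.split(":"))    # ['C', '\\models\\hf', '/root/.cache/huggingface']
print(split_mount(spec))  # ('C:\\models\\hf', '/root/.cache/huggingface')
```

Splitting on the last colon works for this case because container paths cannot contain colons; a real fix might instead match the drive-letter prefix explicitly.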

I also noticed that uploads fail with a time-out but then complete instantly on retry.

Best regards,
Oscar


Dear Oscar,

Thanks for reaching out!
Indeed, that is a good catch, I created a bug ticket to resolve this.

Besides that, everything seems to be fine so far: the submission has already finished on the spot-check dataset and is in progress on the larger test dataset.

Best regards,

Maik