Submission Format

Good morning,

I would like to know what the submission format is for all labels and for incomplete labels.

Thanks in advance,
Nordin.

Hi Nordin,

We have a description at the task’s web page:

Touché at SemEval 2023 - Human Value Detection

It is the same as for the labels-*.tsv files, but you leave out the columns for value categories that you do not predict.
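For illustration, a minimal sketch of writing such a partial-prediction file. The "Argument ID" column and the category column name here are assumptions based on the labels-*.tsv layout; check the task page for the exact names.

```python
import csv

# Hypothetical predictions for a single value category.
# Argument IDs and the category name are illustrative assumptions.
predictions = {"A01002": 1, "A01005": 0}

with open("predictions.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    # Header: the ID column plus only the categories you actually predict.
    writer.writerow(["Argument ID", "Self-direction: thought"])
    for arg_id, label in predictions.items():
        writer.writerow([arg_id, label])
```

Columns for categories you do not predict are simply omitted rather than left empty.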

Does this answer your question?

Regards,
Johannes

Yes,

Thanks!

Regards,
Nordin

Good afternoon Kiesel,

I am trying to upload a prediction with only one label on the validation.tsv, but I get an error with code 412.
Should all the labels be in the file, even if the rest are empty?
I'm just trying to upload one as an example to see the results.

Thanks in advance,
Nordin

Hi Nordin,
Can you perhaps send me the file by mail or attach it here? I am not sure if I understand what you mean.

Thanks!
Johannes

(and I now also changed my username to include my full name)

I actually now saw a run you did in TIRA with only one label. That one does look fine, and the evaluator gave a result with precision, recall, and F-score for Self-direction: thought. I just reviewed it.

Ok thanks!

Where can I see these results?

When you see the table of your runs, you should see an “inspect” button that will show you the results in JSON format. Sorry that I can’t show screenshots, but it looks a bit different with my organizer’s account.

Just saw it,

Thanks,
Nordin.

Hello again,

Do the “aristotle” results serve as an example of expected results for the given dataset?
Or are these results just random?

Thanks in advance,
Nordin

Hi Nordin, true, I wrote this once in a mail but never mentioned it again afterwards. I have now added the information to “Submission”:

It is easy to see which is which: the 1-baseline always achieves a recall of 1 :wink:

Thank you very much!

Greetings,
Nordin

Hi!

Just want to make sure I understand: after our models predict the labels for each argument, are we expected to save them in a labels-*.tsv file, which is then passed through the evaluator?

Just want to check I’ve got the formatting correct before the final submission deadline :slightly_smiling_face:

Cheers,
Ethan

Hi Ethan,

The output file needs to have the same format as the labels tsv files, though the name can be anything that ends in “.tsv”.

You can check your approach before the final deadline by submitting a run on the validation dataset.

And if you want to use submission by Docker, you can have a look at the code of our baselines (e.g., touche-code/semeval23/human-value-detection/1-baseline at main · touche-webis-de/touche-code · GitHub ) for reference.