Request for baseline model details

Hi, we are trying to reproduce the best-performing baseline specified in the paper (https://aclanthology.org/2022.acl-long.484.pdf). We have a few queries:

  1. Can the hyperparameters and other details of the best-performing model mentioned in the paper be shared?
  2. Is this model available on HuggingFace?
  3. Are the performance metrics for the validation dataset available publicly?

Dear pbagaria,

The best-performing model mentioned in the paper is available on HuggingFace (the debertalarge-all-cbs20-both-checkpoint-1200 branch) and also as a baseline for this shared task on GitHub and DockerHub (instructions on how to deploy it in TIRA are in the repository's README and on clickbait.webis.de).
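For reference, a specific branch of a Hub repository can be loaded via the `revision` argument of `from_pretrained`. A minimal sketch, assuming a placeholder repository id (only the branch name comes from this thread; substitute the actual repository linked from the shared task):

```python
# Minimal sketch: load the baseline from a specific branch of its Hub repo.
# NOTE: the repo id below is a placeholder -- use the repository linked from
# the shared task; only the branch name is taken from this thread.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

REPO_ID = "your-org/your-baseline-repo"  # placeholder, not the real repo id
REVISION = "debertalarge-all-cbs20-both-checkpoint-1200"  # branch named above

tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision=REVISION)
model = AutoModelForQuestionAnswering.from_pretrained(REPO_ID, revision=REVISION)
```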

The model achieves a BLEU score of 0.382 on the validation dataset. On the leaderboard, this baseline currently appears under the name princess-knight; I see now that this is not very clear, so I will rename it accordingly.

I need a bit of time to look up the hyperparameters used during training, and I will write back as soon as I have them.

Thanks for participating!

Best regards,

Maik

We have now figured out the hyperparameters.

We used a batch_size of 8, a sequence_length of 384, and a learn_rate of 3e-5.
The model was first trained for one epoch on SQuAD and then fine-tuned for three epochs on the clickbait spoiling training data.

The remaining hyperparameters of the Hugging Face run_qa.py script were left at their defaults during training.
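To make that concrete, a sketch of how the stated values map onto transformers arguments; the output path is a placeholder, everything not listed above keeps the script's defaults:

```python
# Sketch of the stated hyperparameters as transformers TrainingArguments;
# run_qa.py accepts the same values via its CLI flags
# (--per_device_train_batch_size, --learning_rate, --num_train_epochs).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="spoiling-baseline",   # placeholder output path
    per_device_train_batch_size=8,    # batch_size = 8
    learning_rate=3e-5,               # learn_rate = 3e-5
    num_train_epochs=3,               # three epochs on the spoiling data
)

# sequence_length = 384 corresponds to run_qa.py's --max_seq_length flag,
# which the script parses as a data argument rather than a training argument.
# The one-epoch SQuAD stage would be a separate run_qa.py run with
# num_train_epochs=1 before fine-tuning on the clickbait spoiling data.
```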

Best regards,

Maik