Hi, we are trying to reproduce the best-performing baseline specified in the paper (https://aclanthology.org/2022.acl-long.484.pdf). We had a few queries:
- Can the hyperparameters and other details of the best-performing model mentioned in the paper be shared?
- Is this model available on HuggingFace?
- Are the performance metrics for the validation dataset available publicly?
The best-performing model from the paper is available on HuggingFace (on the `debertalarge-all-cbs20-both-checkpoint-1200` branch). It is also available as a baseline for this shared task on GitHub and DockerHub; instructions for deploying it in TIRA are in the README of the repository and on clickbait.webis.de.
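For anyone who wants to pull the checkpoint directly rather than through TIRA, here is a minimal sketch using the `revision` parameter of `transformers`, which is how branches are addressed on the Hub. The repository id is not stated in this thread, so you must supply it yourself (check the shared-task README); the question-answering head is an assumption based on the paper's extractive QA approach.

```python
# Branch named in this thread; verify against the HuggingFace repo.
REVISION = "debertalarge-all-cbs20-both-checkpoint-1200"


def load_baseline(repo_id: str, revision: str = REVISION):
    """Load the baseline checkpoint from a specific HuggingFace branch.

    `repo_id` is NOT given in this thread -- pass the repository id from
    the shared-task README. The QA head is an assumption based on the
    paper's extractive question-answering setup.
    """
    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
    model = AutoModelForQuestionAnswering.from_pretrained(repo_id, revision=revision)
    return tokenizer, model
```

Calling `load_baseline("<repo-id-from-README>")` should then download the tokenizer and weights from that branch.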
The model achieves a BLEU score of 0.382 on the validation dataset. This baseline currently appears on the leaderboard as princess-knight; I now see that this is not very clear, so I will rename it accordingly.
I need a bit of time to figure out the hyperparameters used during training, and I will write back as soon as I have them.
Thanks for participating!