Inquiry Regarding Deadlines for PAN Competition

Dear organizers,

I am writing to ask for clarification on the upcoming submission deadlines for code and full papers related to PAN 2025. The PAN Events page does not yet show the latest updates, while the conference page lists the following deadlines, which are approaching:

  • Abstract submission of Long, Short, Best of 2024 Labs Papers: 6 May 2025
  • Full paper submission deadline: 13 May 2025

Additionally, I noticed that the Multi-Author Writing Style Analysis task page currently does not provide access to the datasets. Could you please confirm when the datasets will be made available to participants?

Your prompt response and guidance on these matters would be greatly appreciated as I prepare for the competition.
I look forward to your reply.

Dear Abeer,

Thanks for reaching out!
I will add the datasets to TIRA today (I am doing some last spot checks) and will let you know when they are available. (The training and evaluation data are already available on Zenodo.)

The deadline for the participant notebook submission is May 30. I will ask when the software submission deadline is and update you here as well (I expect it will be around May 20).

Thanks and best regards,

Maik


Dear all,

We have now updated the web page. The important submission dates are May 23, 2025 (software submission) and May 30, 2025 (participant notebook submission).

The test dataset for the multi-author writing style task is now also available in TIRA, so you can already make submissions to the test set.

Thanks and best regards,

Maik


Thank you for your response and for providing the datasets for the task.

Datasets:
We have noted the availability of the datasets named multi-author-writing and multi-author-writing-spot-check. However, we were expecting to work with the three datasets mentioned in the main task description: dataset1, dataset2, and dataset3. Could you please clarify how the two provided datasets map to the three datasets expected for the main task?

Submission method:
We are planning to submit our solution using the “code submissions” option. We have followed the outlined steps for setting up the TIRA CLI. However, when running the command tira-cli verify-installation, we encountered an error related to Docker, even though our intention is to submit our code directly without relying on Docker containerization.
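For completeness, these are roughly the steps we followed (the package name reflects our understanding of the setup instructions and may differ from the official guide):

    # install the TIRA client (package name assumed; see the official setup guide)
    pip install tira
    # check the installation; this is the step that reported the Docker-related error for us
    tira-cli verify-installation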

Furthermore, when attempting the code submission with the command tira-cli code-submission --path my-submission --task multi-author-writing-style-analysis-2025, we also received an error, despite having a Dockerfile, requirements.txt, ourcode.py, and model.pt in our submission folder.
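For reference, our submission folder is laid out as follows (the comments describe what each file is for):

    my-submission/
    ├── Dockerfile          # image definition for the submission
    ├── requirements.txt    # Python dependencies of our approach
    ├── ourcode.py          # entry point that produces the predictions
    └── model.pt            # trained model weights loaded by ourcode.py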

We would greatly appreciate your guidance on how we can proceed with our submission without needing to install and use Docker. We have run into difficulties with Docker in the past, which unfortunately led to system issues, and we are hesitant to repeat that experience on our new machine.

Thank you for your time and assistance in clarifying these points.

Sincerely,
Abeer


Dear Abeer,

Thanks for reaching out!

Each dataset in TIRA has all three subdatasets embedded.
The multi-author-writing-spot-check dataset is a tiny dataset that you can use to verify that your solution produces valid outputs; it consists of three instances for each of the three subdatasets.

[Screenshot: the dataset structure]

The multi-author-writing dataset is intended as the test dataset (I have now renamed it to multi-author-writing-test, as the old name was indeed confusing); it has the same structure.

Very nice that you are going for code submissions.

We currently need Docker for code submissions, but a good way to resolve the problem you have would be to use Continuous Integration. Would it be possible for you to upload your code to GitHub (it can be a private repository) and invite me (my GitHub account is mam10eks) to the repository? Then I could set this up; I think this would be a good solution, as this way you would not need Docker on your machine.
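If it helps, the GitHub part can also be done from the command line with the GitHub CLI (the repository name below is just an example, and the web UI works just as well):

    # create a private repository from the existing local git repository and push it
    gh repo create pan25-multi-author --private --source . --push
    # invite me (mam10eks) as a collaborator via the GitHub API
    gh api -X PUT repos/<your-github-user>/pan25-multi-author/collaborators/mam10eks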

Thanks in advance and best regards,

Maik


Dear Maik,

Thank you for the dataset clarification. We also appreciate your collaboration in helping us resolve the previous submission issues.

As requested, I have invited you to our private GitHub repository (abeersaad0/PAN2025).

Additionally, I have added the TIRA_CLIENT_TOKEN as a secret to our GitHub repository, as I understand this is necessary for the setup.
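For anyone following along, such a secret can be added either through the repository's web UI (Settings → Secrets and variables → Actions) or with the GitHub CLI, for example:

    # store the TIRA client token as an Actions secret (the value is prompted for or read from stdin)
    gh secret set TIRA_CLIENT_TOKEN --repo abeersaad0/PAN2025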

We plan to submit our code as the first version. We will continue to work on enhancing our approach in the coming days, and we intend to submit an updated version if we achieve improved results.

Could you please outline the next steps after you have set it up? We would appreciate it if you could inform us which TIRA submission option we should use and which TIRA commands we will need to execute on our end to trigger the submission process. We apologize for our unfamiliarity with the TIRA submission process and appreciate your guidance.

Thank you again for your assistance.

Sincerely,
Abeer

Dear Abeer,

Thanks, I received the invitation and will take a look tomorrow!

Best regards,

Maik


Dear Abeer,

I have now configured the GitHub action in a fork and re-organized the baseline so that it is similar to how you have organized your code (i.e., one submission per subtask).

The GitHub action runs smoothly (I tested it for my own account in my fork of the repo).

I think this should resolve the problem described above, and we can continue the discussion in a private chat.

Best regards,

Maik
