Final Submission question

Sort of a few questions.

#1 how do you actually make the final submission?

I've run

```shell
tira-run \
  --input-dataset generative-ai-authorship-verification-panclef-2025/pan25-generative-ai-detection-smoke-test-20250428-training \
  --image pan-2025-ben-v2 \
  --command 'python main.py --input_file $inputDataset/dataset.jsonl --output_dir $outputDir' \
  --push true
```

and pushed. I'm assuming the software then shows up on the TIRA site and I make the submission there? However, when I check the docker submission instructions page on TIRA and click "next", I don't get to page 3; it sends me back.

#2 When running the command above I tried to push, but got a timeout error. I'm trying to run an offline version of RoBERTa Large and my image is about 7 GB. Is there a size limit, and how do I get around it? I was able to get a prediction.jsonl on the test set.

#3 I tried to use the GPU when running the local test:

```python
device = torch.device("cuda" if torch.cuda.is_available() and args.use_gpu else "cpu")
```

but from the output log it looks like the GPU is not utilized.
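To see why that line falls back to the CPU, the same logic can be wrapped so the run log states the reason. This is just a debugging sketch; `select_device` is a hypothetical helper, not part of the submission code:

```python
def select_device(use_gpu: bool) -> str:
    """Mirror the device-selection line above, logging why cuda is skipped."""
    try:
        import torch
    except ImportError:
        print("torch is not installed; falling back to cpu")
        return "cpu"
    if not use_gpu:
        print("use_gpu flag is off; using cpu")
        return "cpu"
    if not torch.cuda.is_available():
        # Inside a container this usually means the image lacks the CUDA
        # runtime/driver libraries, or the container has no GPU access.
        print("torch.cuda.is_available() is False; using cpu")
        return "cpu"
    print(f"using cuda ({torch.cuda.get_device_name(0)})")
    return "cuda"

print(select_device(True))
```

With a message like this in the output log, it is immediately clear whether the problem is the flag, the image, or the container's GPU access.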

Hi, can you invite me to the repository (my account is mam10eks on GitHub)? I can have a look and help to finalize the submission.

If I recall correctly, the size limit is 15 GB per layer, so 7 GB should be fine.

Is the model publicly available on Hugging Face? Then we could also mount it into the container; I can help to configure this in the repo as well.

It might be that CUDA is not installed in the docker image; in that case torch.cuda.is_available() returns false even when the host has a GPU.

Best regards,

Maik

Thanks so much, Maik!

I've added you to the repo I'm trying to submit; please have a look when you can.

The model is not publicly on Hugging Face; I've just tried to upload a docker submission.

I will look into adding CUDA to the docker image. I currently have a run being pushed that has been processing for 50 minutes; fingers crossed!

Perfect. For the CUDA thing, the easiest option is to start from an image that already has it installed (otherwise, installing CUDA yourself easily gets difficult). For example, for the generative-authorship-verification baseline, we used:

```dockerfile
FROM nvcr.io/nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04
```
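A complete Dockerfile building on that base image might look roughly like this. This is only a sketch: the package list, the `--break-system-packages` pip flag (needed on Ubuntu 24.04), and the file names are assumptions to adjust to the actual repo:

```dockerfile
# Sketch: CUDA base image plus Python and a CUDA-enabled torch build,
# so torch.cuda.is_available() can return true when a GPU is exposed.
FROM nvcr.io/nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Assumed dependencies; pin versions as needed for the submission.
RUN pip3 install --break-system-packages torch transformers

# Assumed entry point file name from the tira-run command in this thread.
COPY main.py /app/main.py
WORKDIR /app
```

The key point is that the CUDA runtime comes from the base image, so only a matching torch build needs to be installed on top.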

I will have a look at the repo.

Best regards,

Maik