Integrating large LLMs

Hello,

I tried to submit using the mounted HF model option, following the tutorial here:

tira-cli code-submission \
    --mount-hf-model tiiuae/falcon-7b google-t5/t5-large \
    --path . \
    --task generative-ai-authorship-verification-panclef-2025 \
    --dataset pan25-generative-ai-detection-smoke-test-20250428-training \
    --command "python /app/evaluate_tira.py --modelA tiiuae/falcon-7b --modelB google-t5/t5-large \$inputDataset/dataset.jsonl \$outputDir"

I have the models locally, but the submission process gets killed when it tries to load the checkpoint shards. I suppose this happens because I am running on a machine without a GPU and with little RAM. Is hardware capable of running the models locally a requirement for the code-submission command?

The program was developed and tested on a cloud server where hardware is not an issue; however, Docker is not available there, so I have to do the TIRA submission on my local machine. Could you please offer any insight into this issue?

Thank you.

Best,
Joy

Hi Joy,

Thanks for reaching out and for all your efforts, we really appreciate this!

I think the best option at the moment is for you to invite me (my account is mam10eks) to the Git repository, and I do the submission from a machine with Docker and a GPU installed.

Does this work for you?

Best regards,

Maik

Alternatively, you could make a submission with a tiny LLM (e.g., TinyLlama/TinyLlama_v1.1), and we could then copy the submission and just mount a different LLM into the pod; this would also work.
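
For reference, an untested sketch of what that could look like, just substituting TinyLlama/TinyLlama_v1.1 for both models in your original command (adjust the --modelA/--modelB arguments to whatever your script expects):

tira-cli code-submission \
    --mount-hf-model TinyLlama/TinyLlama_v1.1 \
    --path . \
    --task generative-ai-authorship-verification-panclef-2025 \
    --dataset pan25-generative-ai-detection-smoke-test-20250428-training \
    --command "python /app/evaluate_tira.py --modelA TinyLlama/TinyLlama_v1.1 --modelB TinyLlama/TinyLlama_v1.1 \$inputDataset/dataset.jsonl \$outputDir"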

Hi Maik,

Okay, thank you for the helpful response! We’ve adjusted our approach and are no longer facing those issues, but I still really appreciate your support!

Best,
Joy
