This repository is publicly accessible, but you have to accept the conditions to access its files and content.

Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow a few business days for us to manually review access requests.

Model description

An xlm-roberta-large model fine-tuned on UK survey questions (in English) labeled with major topic codes from the CLOSER codebook: https://ucldata.atlassian.net/wiki/spaces/CLOS/pages/37323020/Topics.
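A minimal usage sketch, assuming the Hugging Face `transformers` library and an authenticated session (this model is gated, so `huggingface-cli login` with an approved account is required before the weights will download). The helper name is ours; the label set returned follows the CLOSER topic codebook linked above and should be read from the model config rather than hard-coded:

```python
# Hedged sketch: classifying a survey question into a CLOSER major topic
# code with the fine-tuned model. Requires `transformers` and approved,
# authenticated access to the gated repository.

def load_topic_classifier(model_id: str = "poltextlab/xlm-roberta-large_ontolisst_v1"):
    """Return a text-classification pipeline for the fine-tuned model.

    The import is deferred so this sketch can be read without
    `transformers` installed; calling the function triggers the model
    download on first use.
    """
    from transformers import pipeline
    return pipeline("text-classification", model=model_id)

# Usage (downloads the model weights on first call):
#   clf = load_topic_classifier()
#   clf("How many hours a week do you usually work in your main job?")
#   -> [{"label": "<CLOSER topic code>", "score": ...}]
```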

Fine-tuning procedure

This model was fine-tuned with the following key hyperparameters:

  • Number of Training Epochs: 10
  • Batch Size: 8
  • Learning Rate: 5e-06
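Collected in one place, the values above map onto the usual `transformers.TrainingArguments` keyword names (an assumption, since the card does not state the training framework); settings not listed on the card are left at library defaults:

```python
# Key hyperparameters reported on this card, keyed by the standard
# transformers.TrainingArguments names. Only these three values are
# stated on the card; warmup, weight decay, scheduler, etc. are unknown.
TRAINING_KWARGS = {
    "num_train_epochs": 10,
    "per_device_train_batch_size": 8,
    "learning_rate": 5e-6,
}

# Usage sketch: TrainingArguments("output_dir", **TRAINING_KWARGS)
```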

Model performance

The model was evaluated on a test set of 4,216 examples, achieving an F1 score of 0.88.
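The card does not state the averaging scheme behind the 0.88 figure; for a multi-class topic classifier, macro or weighted F1 are the usual choices. A minimal pure-Python sketch of macro F1 on toy labels (illustrative class names, not the actual test set):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Toy example with three hypothetical topic classes:
y_true = ["health", "health", "work", "income", "work"]
y_pred = ["health", "work", "work", "income", "work"]
print(round(macro_f1(y_true, y_pred), 3))  # -> 0.822
```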
