Model Card for IAmFlyingMonkey/Koroberta-base-model-text-tag-classification


Model Details

Model Description

  • Model type: roberta-base
  • Language(s) (NLP): Korean
  • Finetuned from model: klue/roberta-base

How to Get Started with the Model

import torch
from scipy.special import expit
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "IAmFlyingMonkey/Koroberta-base-model-text-tag-classification"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def get_tags(output):
    # Multi-label decoding: sigmoid over the logits, then keep every
    # tag whose probability clears the 0.3 threshold.
    scores = output[0][0].detach().cpu().numpy()
    scores = expit(scores)
    predictions = (scores >= 0.3) * 1
    return [i for i, tag in enumerate(predictions) if tag == 1]
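
As a quick illustration of the thresholding above, here is a minimal sketch (with made-up logits for a hypothetical 5-tag example) of how expit plus the 0.3 cutoff turns raw logits into a tag list:

```python
import numpy as np
from scipy.special import expit

# Hypothetical logits for one example of a 5-tag classifier.
logits = np.array([2.0, -3.0, 0.5, -0.2, -5.0])

# expit is the logistic sigmoid: it maps each logit to a probability in (0, 1).
probs = expit(logits)

# Keep every tag index whose probability is at least 0.3.
tags = [i for i, p in enumerate(probs) if p >= 0.3]
print(tags)  # [0, 2, 3]
```

Note that a logit of -0.2 still passes (sigmoid ≈ 0.45), so the 0.3 cutoff is fairly permissive by design.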

def get_pred(mode):
    # mode 0: single-label prediction via argmax over the logits.
    # mode 1: multi-label tag list via get_tags.
    result = []
    for index, row in test.iterrows():
        tokens = tokenizer(row["content"], truncation=True, max_length=512, return_tensors="pt")
        tokens = {k: v.to(device) for k, v in tokens.items()}
        with torch.no_grad():
            output = model(**tokens)
        if mode == 0:
            pred = torch.argmax(output.logits).item()
        elif mode == 1:
            pred = get_tags(output)
        result.append(pred)
    return result
    
def accuracy(result, target):
    # Fraction of examples whose single predicted class matches the label.
    print(f"Model Pred Size: {len(result)} and Eval Set Size: {len(target)}")
    cnt = sum(1 for i, pred in enumerate(result) if pred == target.loc[i, "label"])
    return cnt / len(result)

def modified_accuracy(result, target):
    # Fraction of examples whose label appears anywhere in the predicted tag list.
    print(f"Model Pred Size: {len(result)} and Eval Set Size: {len(target)}")
    cnt = sum(1 for i, pred in enumerate(result) if target.loc[i, "label"] in pred)
    return cnt / len(result)

result = get_pred(0)
print("accuracy : " + str(accuracy(result, test)))
result = get_pred(1)
print("modified_accuracy : " + str(modified_accuracy(result, test)))
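
The difference between the two metrics can be seen on toy data (hypothetical labels and predictions below, with a small pandas frame standing in for the test set):

```python
import pandas as pd

# Hypothetical eval set with three labeled examples.
target = pd.DataFrame({"label": [2, 0, 1]})

single_preds = [2, 1, 1]             # mode 0: one argmax class per example
multi_preds = [[0, 2], [1], [1, 3]]  # mode 1: thresholded tag lists

# accuracy: exact match of the single predicted class.
acc = sum(p == target.loc[i, "label"] for i, p in enumerate(single_preds)) / len(single_preds)

# modified_accuracy: the label only has to appear in the tag list.
mod = sum(target.loc[i, "label"] in p for i, p in enumerate(multi_preds)) / len(multi_preds)

print(acc, mod)
```

Because a tag list can contain several candidates, modified_accuracy is more forgiving than plain accuracy on the same examples.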

Training Details

Training Data

  • train split: 15,451 examples
  • test split: 3,863 examples (roughly an 80/20 split)

Training Hyperparameters

training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=9,
    weight_decay=0.01,
    logging_dir='./logs', 
    logging_steps=100, 
)
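
With the 15,451 training examples and the settings above, the step count works out as follows (a back-of-the-envelope check, assuming a single device and no gradient accumulation), which is consistent with the step 8500 / epoch 8.8 checkpoint reported in the evaluation below:

```python
import math

# Steps per epoch and total steps, assuming a single device and no
# gradient accumulation (assumptions, not stated in the card).
train_examples = 15451
batch_size = 16
epochs = 9

steps_per_epoch = math.ceil(train_examples / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 966 steps/epoch, 8694 total
```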

Evaluation

Eval set: 1055 examples

| Epoch | Step | Train loss | Train acc | Val acc | Eval accuracy | Eval modified accuracy |
|-------|------|------------|-----------|---------|---------------|------------------------|
| 8.8   | 8500 | 0.0068     | 0.9983    | 0.9053  | 0.8123        | 0.8190                 |