A tiny Llama model (MSey/tiny_BROLLLT_0001.1) trained on the BRO dataset with NER tags, labels, and tokens.
Trained with WCETrainer (r100_O10_f100), run lemon-fog-11, checkpoint-1623.
- EVAL
  - AVGf1 (macro mean of the per-entity F1 scores) = 93%, overall_f1 = 82%
- TEST

| Entity | Precision | Recall | F1     | Support |
|--------|----------:|-------:|-------:|--------:|
| DIAG   | 0.7101    | 0.7674 | 0.7376 | 817     |
| MED    | 0.9379    | 0.9599 | 0.9488 | 299     |
| TREAT  | 0.8543    | 0.8560 | 0.8551 | 500     |

  - overall_precision = 0.7673, overall_recall = 0.8304, overall_f1 = 0.7976, overall_accuracy = 0.9280
  - average_f1 = 0.8472 (macro mean of the three entity F1 scores)
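For reference, the reported average_f1 is the unweighted (macro) mean of the three per-entity F1 scores; a quick Python check using the unrounded values from the raw evaluation output:

```python
# Unrounded per-entity F1 scores from the raw TEST results.
entity_f1 = {
    "DIAG": 0.7376470588235293,
    "MED": 0.9487603305785124,
    "TREAT": 0.8551448551448552,
}

# average_f1 is the unweighted (macro) mean over the entity classes,
# independent of each class's support count.
average_f1 = sum(entity_f1.values()) / len(entity_f1)
print(average_f1)  # 0.8471840815156323, matching the reported value
```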
- Prompt Format (see example):
`### Context\n{Nachricht}\n\n### Answer` ("Nachricht" is German for "message", i.e. the input text)

```python
def context_text(text):
    return f"### Context\n{text}\n\n### Answer"
```
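A minimal inference sketch built on this prompt format. Assumptions not confirmed by this card: the checkpoint loads as a standard causal LM via transformers, and the input sentence and generation settings are purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint loads as a standard causal LM; repo id from this card.
model_id = "MSey/tiny_BROLLLT_0001.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def context_text(text):
    return f"### Context\n{text}\n\n### Answer"

# Hypothetical input sentence; max_new_tokens is an illustrative choice.
prompt = context_text("Patient was prescribed ibuprofen for chronic back pain.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```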