MonoLR_xho_Latn_PR

This is the Xhosa (xho_Latn) monolingual LoRA adapter from *Semantic Evaluation of Multilingual Data-to-Text Generation via NLI Fine-Tuning: Precision, Recall and F1 scores*. It is used to compute the Semantic Precision and Semantic Recall scores for RDF-to-Text generation.

Use

The following is minimal code to compute the Semantic Precision, Semantic Recall, and Semantic F1 of a generated Xhosa text:

```python
import torch
from sentence_transformers import CrossEncoder

# Load the multilingual NLI cross-encoder and switch it to a single
# sigmoid-activated output so predictions are scores in [0, 1].
model = CrossEncoder('MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7')
model.config.num_labels = 1
model.default_activation_function = torch.nn.Sigmoid()

# Load the Xhosa monolingual LoRA adapter.
model.model.load_adapter('WilliamSotoM/MonoLR_xho_Latn_PR')

graph = '[S]Buzz_Aldrin[P]mission[O]Apollo_12[T][S]Buzz_Aldrin[P]birthPlace[O]Glen_Ridge,_New_Jersey'
text = 'UBuzz aldrin wayeyinxalenye yeqela le-Apollo 12.'

# Precision: is the content of the text supported by the graph?
precision = model.predict([(graph, text)])[0]
# Recall: is the content of the graph covered by the text?
recall = model.predict([(text, graph)])[0]

f1 = (2*precision*recall)/(precision+recall)

print(f'Precision: {precision:.4f}')
print(f'Recall: {recall:.4f}')
print(f'F1: {f1:.4f}')
```
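The final step combines the two entailment scores into an F1 via the harmonic mean. A minimal standalone sketch of that combination, with a guard for the degenerate all-zero case (the helper name `semantic_f1` is ours, not part of the released code):

```python
def semantic_f1(precision: float, recall: float) -> float:
    """Harmonic mean of Semantic Precision and Semantic Recall."""
    # Guard against division by zero when both scores are 0.
    if precision + recall == 0:
        return 0.0
    return (2 * precision * recall) / (precision + recall)

# Combining the scores from the example above:
print(f'{semantic_f1(0.9985, 0.4142):.4f}')  # 0.5855
```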

Expected output:

```
Precision: 0.9985
Recall: 0.4142
F1: 0.5855
```

Analysis: The high precision means that all of the content in the text comes from the graph (i.e., no additions or hallucinations). The recall of roughly 0.5 means that about half of the content in the graph is missing from the text (i.e., some omissions).
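The `graph` string in the example linearizes each RDF triple with `[S]`, `[P]`, and `[O]` markers and separates consecutive triples with `[T]`. A minimal sketch of building such a string from a list of triples (the helper `linearize_triples` is hypothetical, for illustration only):

```python
def linearize_triples(triples):
    """Linearize (subject, predicate, object) triples into the
    [S]...[P]...[O]... format, joining triples with [T]."""
    return '[T]'.join(f'[S]{s}[P]{p}[O]{o}' for s, p, o in triples)

graph = linearize_triples([
    ('Buzz_Aldrin', 'mission', 'Apollo_12'),
    ('Buzz_Aldrin', 'birthPlace', 'Glen_Ridge,_New_Jersey'),
])
print(graph)
# [S]Buzz_Aldrin[P]mission[O]Apollo_12[T][S]Buzz_Aldrin[P]birthPlace[O]Glen_Ridge,_New_Jersey
```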
