mongrz committed (verified)
Commit: 1f82468 · Parent: ddc7a7f

Update README.md

Files changed (1)
  1. README.md +70 -6
README.md CHANGED
@@ -8,14 +8,35 @@ metrics:
 - bleu
 model-index:
 - name: neutral_job_title_rephraser_pl
-  results: []
+  results:
+  - task:
+      type: text2text-generation
+      name: Gender-Neutral Job Title Rephrasing
+    dataset:
+      type: ArielUW/jobtitles
+      name: Job Titles Dataset
+      config: default
+      split: test
+    metrics:
+    - type: bleu
+      value: 93.9441
+      name: BLEU
+    - type: precision
+      value: 1.0
+      name: Attempted Noun Neutralisation Precision
+    - type: recall
+      value: 0.892
+      name: Attempted Noun Neutralisation Recall
+    - type: levenshtein
+      value: 0.0395
+      name: Normalized Levenshtein Distance (neutralization needed)
+    - type: levenshtein
+      value: 0.0001
+      name: Normalized Levenshtein Distance (neutralization not needed)
 datasets:
 - ArielUW/jobtitles
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # model_output
 
 This model is a fine-tuned version of [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) on [ArielUW/jobtitles](https://huggingface.co/datasets/ArielUW/jobtitles) dataset.
@@ -37,12 +58,55 @@ Sentences not containing such terms are not expected to change at all, for examp
 
 In terms of actual outcomes and errors in outputs, see our [readme](https://github.com/ArielUW/IMLLA-FinalProject/blob/main/README.md).
 
+# Model usage
+
+To use this model, you will need to install the `transformers` and `sentencepiece` libraries:
+
+```bash
+pip install transformers sentencepiece
+```
+
+You can then use the model directly through the pipeline API, which provides a high-level interface for text generation:
+
+```python
+from transformers import pipeline
+
+pipe = pipeline("text2text-generation", model="mongrz/model_output")
+gender_neutral_text = pipe("Pielęgniarki protestują pod sejmem.")
+print(gender_neutral_text)
+# Expected output: [{'generated_text': 'Osoby pielęgniarskie protestują pod sejmem.'}]
+```
+
+This creates a pipeline object for text-to-text generation using the model. Pass an input sentence to the pipe object to generate its gender-neutral version; the output is a list of dictionaries, each containing the generated text.
+
+Alternatively, you can load the tokenizer and model manually for more fine-grained control:
+
+```python
+from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("mongrz/model_output")
+model = AutoModelForSeq2SeqLM.from_pretrained("mongrz/model_output")
+
+text_to_neutralize = "Pielęgniarki protestują pod sejmem."
+model_inputs = tokenizer(text_to_neutralize, return_tensors="pt")
+
+# Generate gender-neutral text; the underlying M2M100 architecture requires
+# forcing the target language (here Polish) via the BOS token.
+gen_tokens = model.generate(**model_inputs, forced_bos_token_id=tokenizer.get_lang_id("pl"))
+
+# Decode and print the generated text
+print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
+```
+
+This approach gives you direct access to the tokenizer and model and lets you customize the generation process further if needed. Choose the method that best suits your needs.
+
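+If you need to adjust decoding, standard `generate()` arguments can be passed as well. A minimal sketch continuing the snippet above (the beam count and length cap are illustrative values we chose, not settings validated for this model):
+
+```python
+# Beam search with a cap on output length; both are standard transformers generate() options.
+gen_tokens = model.generate(
+    **model_inputs,
+    forced_bos_token_id=tokenizer.get_lang_id("pl"),
+    num_beams=4,
+    max_new_tokens=64,
+)
+print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
+```
+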
 ## Intended uses & limitations
-The model has only been fine-tuned for single-sentence inputs, so other types of inputs can be unstable. So far, it underperforms for low-frequency items, morphosyntactically complex cases and feminine nouns.
+While this model demonstrates promising results in generating gender-neutral job titles in Polish, it has certain limitations:
+
+- **Low-frequency items:** The model may struggle with less common job titles or with words that were rare in the training data, and may produce inaccurate or unexpected outputs in such cases.
+- **Morphosyntactically complex cases:** Items requiring rare or non-typical patterns of forming personatives can pose challenges for the model; the accuracy of the generated output may decrease in such scenarios.
+- **Feminine nouns:** The model has been shown to sometimes underperform on feminine nouns, potentially due to biases or patterns in the training data. Further investigation and fine-tuning are needed to address this limitation.
+- **Single-sentence input:** The model is optimized for single-sentence inputs and might not produce the desired results for single-word items, longer texts, or paragraphs; it may fail to maintain context, coherence, and terminological consistency across multiple sentences (for one workaround, see the sketch after this list). Its performance on single-word items has not been tested.
+- **Domain specificity:** The model was trained on a specific dataset of single sentences with and without job titles. It may not generalize well to other domains or contexts and might need further fine-tuning to adapt to different types of text or specific vocabulary.
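+
+Since the model is tuned for single sentences, a simple workaround for longer texts is to split them into sentences and neutralize each one independently. A minimal sketch (the `neutralize_text` helper and its naive regex splitter are our illustration, not part of the released model):
+
+```python
+import re
+
+from transformers import pipeline
+
+pipe = pipeline("text2text-generation", model="mongrz/model_output")
+
+def neutralize_text(text: str) -> str:
+    # Naive sentence splitter: breaks after ., ! or ? followed by whitespace.
+    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
+    # Neutralize each sentence independently; the model never sees cross-sentence context.
+    neutralized = [pipe(s)[0]["generated_text"] for s in sentences]
+    return " ".join(neutralized)
+
+print(neutralize_text("Pielęgniarki protestują pod sejmem. Nauczycielki je wspierają."))
+```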
 
+For more information on issues, errors, and limitations, see our [readme](https://github.com/ArielUW/IMLLA-FinalProject/blob/main/README.md).
 ## Training and evaluation data
 
-The model's performance has been evaluated in terms of neutralization attempt precision and recall, as well as Levenshtein distance from gold-standard items. More information on the evaluation outcomes can be found in [our readme](https://github.com/ArielUW/IMLLA-FinalProject/blob/main/README.md).
+This model was evaluated using several metrics to assess its performance:
+
+- **BLEU (Bilingual Evaluation Understudy):** a widely used metric for evaluating machine translation quality that measures n-gram overlap between the generated text and the reference text; higher is better. The model achieved a BLEU score of 93.9441 on the evaluation set, indicating high accuracy in generating gender-neutral phrasings.
+- **Attempted noun neutralisation precision:** the proportion of attempted neutralizations (i.e., attempts on items that required neutralization, not necessarily correctly formed neutral items) out of all attempted neutralizations. The model achieved a precision of 1.0, indicating that every attempted neutralization was performed on an item that required it.
+- **Attempted noun neutralisation recall:** the proportion of nouns with a neutralization attempt present in the generated text out of all nouns that should have been neutralized. The model achieved a recall of 0.892, suggesting that it recognized items requiring neutralization in the large majority of cases.
+- **Normalized Levenshtein distance:** the edit distance between the generated text and the reference text, normalized by the length of the reference text, as a measure of similarity (see the sketch after this list). The model achieved a distance of 0.0395 for sentences requiring neutralization and 0.0001 for items that should not have been changed at all, indicating that outputs stay very close to the references.
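+
+To make the distance metric concrete, here is a worked example (our sketch, not the project's exact evaluation script; we assume normalization by the reference length):
+
+```python
+def levenshtein(a: str, b: str) -> int:
+    # Classic dynamic-programming edit distance (insertions, deletions, substitutions).
+    prev = list(range(len(b) + 1))
+    for i, ca in enumerate(a, start=1):
+        curr = [i]
+        for j, cb in enumerate(b, start=1):
+            curr.append(min(prev[j] + 1,                 # deletion
+                            curr[j - 1] + 1,             # insertion
+                            prev[j - 1] + (ca != cb)))   # substitution
+        prev = curr
+    return prev[-1]
+
+hypothesis = "Osoby pielęgniarskie protestują pod sejmem."
+reference = "Osoby pielęgniarskie protestują pod Sejmem."
+# One substitution ('s' vs 'S') over a 43-character reference: 1 / 43 ≈ 0.023
+print(levenshtein(hypothesis, reference) / len(reference))
+```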
+
+More information on the evaluation outcomes can be found in [our readme](https://github.com/ArielUW/IMLLA-FinalProject/blob/main/README.md).
 
 ## Training procedure
 