mongrz committed (verified)
Commit e80b4ea · Parent(s): 1f82468

Update README.md

Files changed (1): README.md (+28 −28)
README.md CHANGED
@@ -37,7 +37,7 @@ datasets:
  - ArielUW/jobtitles
  ---

- # model_output
+ # neutral_job_title_rephraser_pl

This model is a fine-tuned version of [facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) on the [ArielUW/jobtitles](https://huggingface.co/datasets/ArielUW/jobtitles) dataset.
It achieves the following results on the evaluation set:
@@ -65,46 +65,46 @@
To use this model, you will need to install the transformers and sentencepiece libraries:

```bash
pip install transformers sentencepiece
```

You can then use the model directly through the pipeline API, which provides a high-level interface for text generation:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="mongrz/model_output")
gender_neutral_text = pipe("Pielęgniarki protestują pod sejmem.")
print(gender_neutral_text)
# Expected output: [{'generated_text': 'Osoby pielęgniarskie protestują pod sejmem.'}]
```

This creates a pipeline object for text-to-text generation using the model. You can then pass input text to the `pipe` object to generate the gender-neutral version; the output is a list of dictionaries, each containing the generated text.
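
For example, to extract just the generated string from a single-input call:

```python
# The pipeline returns one dict per generated sequence; take the first.
print(gender_neutral_text[0]["generated_text"])
# -> Osoby pielęgniarskie protestują pod sejmem.
```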

Alternatively, you can load the tokenizer and model manually for more fine-grained control:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mongrz/model_output")
model = AutoModelForSeq2SeqLM.from_pretrained("mongrz/model_output")

text_to_translate = "Pielęgniarki protestują pod sejmem."
# Make the Polish source language explicit (the saved tokenizer config may
# already set this, but being explicit is safer with an M2M100 tokenizer).
tokenizer.src_lang = "pl"
model_inputs = tokenizer(text_to_translate, return_tensors="pt")

# Generate gender-neutral text, forcing Polish as the target language
gen_tokens = model.generate(**model_inputs, forced_bos_token_id=tokenizer.get_lang_id("pl"))

# Decode and print the generated text
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
```

This approach gives you direct access to the tokenizer and model and lets you customize the generation process further if needed. Choose the method that best suits your needs.
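
For instance, decoding behaviour can be adjusted through `model.generate` (the parameter values below are illustrative assumptions, not tuned settings):

```python
# Illustrative decoding tweaks; the values are assumptions, not tuned settings.
gen_tokens = model.generate(
    **model_inputs,
    forced_bos_token_id=tokenizer.get_lang_id("pl"),
    num_beams=5,          # beam search over 5 candidate sequences
    max_new_tokens=64,    # cap the length of the generated output
    early_stopping=True,  # stop once all beams reach an end-of-sequence token
)
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
```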

## Intended uses & limitations

While this model demonstrates promising results in generating gender-neutral job titles in Polish, it has certain limitations:

- Low-Frequency Items: The model may struggle with less common job titles or words that appeared infrequently in the training data, and may produce inaccurate or unexpected outputs in such cases.
- Morphosyntactically Complex Cases: Items requiring rare or atypical patterns of forming personatives can pose challenges, and the accuracy of the generated output may decrease in such scenarios.
- Feminine Nouns: The model has been shown to sometimes underperform on feminine nouns, potentially due to biases or patterns in the training data. Further investigation and fine-tuning are needed to address this limitation.
- Single-Sentence Input: The model is optimized for single-sentence inputs and might not produce the desired results for longer texts or paragraphs, where it may fail to maintain context, coherence, and terminological consistency across sentences (see the workaround sketch after this list). Its performance on single-word items has not been tested.
- Domain Specificity: The model was trained on a specific dataset of single sentences, with and without job titles. It may not generalize well to other domains or contexts and might need further fine-tuning to adapt to different types of text or specialized vocabulary.
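
As a possible workaround for the single-sentence constraint (untested, and offered only as a sketch), a longer text could be split naively into sentences and each sentence neutralized on its own. This assumes the `pipe` object from the usage example above; the splitting regex is simplistic and cross-sentence consistency is not restored:

```python
import re

def neutralize_text(text: str) -> str:
    """Naively split a longer text on sentence-final punctuation and
    neutralize each sentence independently. Abbreviations and
    cross-sentence consistency are not handled."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(pipe(s)[0]["generated_text"] for s in sentences)
```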
 
For more information on issues, errors, and limitations, see our [readme](https://github.com/ArielUW/IMLLA-FinalProject/blob/main/README.md).

## Training and evaluation data

This model was evaluated using several metrics to assess its performance:

- BLEU (Bilingual Evaluation Understudy): a widely used machine-translation metric that measures n-gram overlap between the generated and reference texts; higher scores indicate better quality. The model achieved a BLEU score of 93.9441 on the evaluation set, indicating high accuracy in generating gender-neutral phrasings.
- Attempted Noun Neutralisation Precision: the proportion of attempted neutralizations that targeted items actually requiring neutralization (regardless of whether the neutral form itself was correctly built), out of all attempted neutralizations. The model achieved a precision of 1.0, indicating that every attempted neutralization was performed on an item that required it.
- Attempted Noun Neutralisation Recall: the proportion of nouns with a neutralization attempt in the generated text, out of all nouns that should have been neutralized. The model achieved a recall of 0.892, suggesting that it recognized items requiring neutralization in the large majority of cases.
- Normalized Levenshtein Distance: the edit distance between the generated and reference texts, normalized by the length of the reference (see the sketch below). The model achieved 0.0395 on sentences requiring neutralization and 0.0001 on items that should not be changed at all, indicating a high degree of similarity between generated and reference texts.
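
For reference, a minimal sketch of how the normalized distance can be computed (the project's actual evaluation script may differ; the example strings below are hypothetical):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein(generated: str, reference: str) -> float:
    # Normalize by the reference length, as described above.
    return levenshtein(generated, reference) / len(reference)

# Hypothetical pair differing by one character (43 characters in the reference):
print(normalized_levenshtein(
    "Osoby pielęgniarskie protestują pod sejmem.",
    "Osoby pielęgniarskie protestują pod Sejmem.",
))  # -> 0.0232...
```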

More information on the evaluation outcomes can be found in [our readme](https://github.com/ArielUW/IMLLA-FinalProject/blob/main/README.md).
 