mongrz committed · Commit 4cce5ad · verified · 1 Parent(s): e80b4ea

Update README.md

Files changed (1)
  1. README.md (+4 −1)
README.md CHANGED
@@ -62,14 +62,16 @@ In terms of actual outcomes and errors in outputs, see our [readme](https://gith
 
 To use this model, you will need to install the transformers and sentencepiece libraries:
 
-!pip install transformers sentencepiece
+!pip install transformers sentencepiece
 
 You can then use the model directly through the pipeline API, which provides a high-level interface for text generation:
+
 from transformers import pipeline
 pipe = pipeline("text2text-generation", model="mongrz/model_output")
 gender_neutral_text = pipe("Pielęgniarki protestują pod sejmem.")
 print(gender_neutral_text)
 #expected output: [{'generated_text': 'Osoby pielęgniarskie protestują pod sejmem.'}]
+
 This will create a pipeline object for text-to-text generation using your model. You can then pass the input text to the pipe object to generate the gender-neutral version. The output will be a list of dictionaries, each containing the generated text.
 Alternatively, you can still load the tokenizer and model manually for more fine-grained control:
 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
@@ -85,6 +87,7 @@ Alternatively, you can still load the tokenizer and model manually for more fine
 
 #Decode and print the generated text
 print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
+
 This approach allows you to access the tokenizer and model directly and customize the generation process further if needed. Choose the method that best suits your needs.
 
 ## Intended uses & limitations
 
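
The manual-loading path in the diff shows only the import and the final decode call; the lines in between are elided from the hunk. A minimal sketch of that path, assuming the standard transformers seq2seq pattern (the `max_new_tokens` value is an illustrative choice, not taken from the README):

```python
# Hedged sketch of the manual tokenizer/model path; the elided lines are
# assumed to follow the usual transformers seq2seq recipe.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "mongrz/model_output"  # model id used in the pipeline example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the input sentence and generate a gender-neutral rewrite
inputs = tokenizer("Pielęgniarki protestują pod sejmem.", return_tensors="pt")
gen_tokens = model.generate(**inputs, max_new_tokens=64)  # cap chosen for illustration

# Decode and print the generated text
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
```

Loading the model this way lets you adjust generation parameters (beam search, sampling, length limits) that the pipeline API hides behind defaults.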