Commit ca45111 (verified) by wacc2, parent 1bd37fe: Update README.md

README.md changed (+105 -0):
  dtype: bfloat16

```

#### Usage

You can use Amadeus-Verbo-MI-Qwen2.5-7B-PT-BR-Instruct with the Hugging Face Transformers library; we advise using the latest version of Transformers.

With `transformers<4.37.0`, you will encounter the following error:

`KeyError: 'qwen2'`

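Versions of Transformers older than 4.37.0 do not register the `qwen2` architecture, which is what triggers the error above. The helper below, `supports_qwen2`, is a hypothetical illustration (it is not part of the Transformers API) of how you might check the installed version before loading the model:

```python
# Hypothetical helper: returns True when a transformers version string is at
# least 4.37.0, the first release that registers the qwen2 architecture.
def supports_qwen2(version_string, required=(4, 37, 0)):
    parts = tuple(int(p) for p in version_string.split(".")[:3])
    return parts >= required

print(supports_qwen2("4.36.2"))  # False -> upgrade: pip install -U transformers
print(supports_qwen2("4.44.0"))  # True
```

In practice you would pass `transformers.__version__` to the helper before calling `from_pretrained`.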
Below is a simple example of how to load the model and generate text:

#### Quickstart

The following code snippets use `pipeline`, `AutoTokenizer`, `AutoModelForCausalLM`, and `apply_chat_template` to show how to load the tokenizer and the model and how to generate content.

Using the pipeline:
```python
from transformers import pipeline

# Prompt (Portuguese): "Create a nutrition spreadsheet for a fitness-oriented,
# Mediterranean diet covering every day of the week"
messages = [
    {"role": "user", "content": "Faça uma planilha nutricional para uma alimentação fitness e mediterrânea com todos os dias da semana"},
]
pipe = pipeline("text-generation", model="amadeusai/AV-MI-Qwen2.5-7B-PT-BR-Instruct")
pipe(messages)
```
Or, loading the model and tokenizer directly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "amadeusai/AV-MI-Qwen2.5-7B-PT-BR-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Faça uma planilha nutricional para uma alimentação fitness e mediterrânea com todos os dias da semana."
messages = [
    {"role": "system", "content": "Você é um assistente útil."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Trim the prompt tokens from each returned sequence
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
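The slicing step in the example above trims the prompt tokens from each returned sequence, since `model.generate` returns the input ids followed by the newly generated ids. The same trimming logic can be illustrated with plain Python lists (the token ids below are made up for illustration):

```python
# model.generate returns prompt tokens followed by new tokens, so each
# output sequence is sliced past the length of its corresponding input.
input_ids_batch = [[101, 7592, 102]]                # toy prompt token ids
generated_batch = [[101, 7592, 102, 2023, 2003]]    # prompt ids + new ids

trimmed = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(input_ids_batch, generated_batch)
]
print(trimmed)  # [[2023, 2003]]
```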
Or, using a `TextGenerationPipeline` with an explicit `GenerationConfig`:
110
+ ```python
111
+ from transformers import GenerationConfig, TextGenerationPipeline, AutoTokenizer, AutoModelForCausalLM
112
+ import torch
113
+
114
+ # Specify the model and tokenizer
115
+ model_id = "amadeusai/AV-MI-Qwen2.5-7B-PT-BR-Instruct"
116
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
117
+ model = AutoModelForCausalLM.from_pretrained(model_id)
118
+
119
+ # Specify the generation parameters as you like
120
+ generation_config = GenerationConfig(
121
+ **{
122
+ "do_sample": True,
123
+ "max_new_tokens": 512,
124
+ "renormalize_logits": True,
125
+ "repetition_penalty": 1.2,
126
+ "temperature": 0.1,
127
+ "top_k": 50,
128
+ "top_p": 1.0,
129
+ "use_cache": True,
130
+ }
131
+ )
132
+
133
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
134
+ generator = TextGenerationPipeline(model=model, task="text-generation", tokenizer=tokenizer, device=device)
135
+
136
+ # Generate text
137
+ prompt = "Faça uma planilha nutricional para uma alimentação fitness e mediterrânea com todos os dias da semana"
138
+ completion = generator(prompt, generation_config=generation_config)
139
+ print(completion[0]['generated_text'])
140
+ ```

#### Citation

If you find our work helpful, feel free to cite it:

```bibtex
@misc{amadeusverbo2024,
  title  = {Amadeus Verbo: A Brazilian Portuguese large language model},
  url    = {https://amadeus-ai.com},
  author = {Amadeus AI},
  month  = {November},
  year   = {2024}
}
```