---
license: apache-2.0
datasets:
- motexture/cData
language:
- en
- it
- es
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
tags:
- coding
- coder
- model
- llama
---
# LlamaXCoder-3.2-3B-Instruct
## Introduction
LlamaXCoder-3.2-3B-Instruct is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct, trained on the cData coding dataset to improve its reasoning and coding ability.
## Quickstart
The following code snippet uses `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model inputs onto

model = AutoModelForCausalLM.from_pretrained(
    "motexture/LlamaXCoder-3.2-3B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("motexture/LlamaXCoder-3.2-3B-Instruct")

prompt = "Write a C++ program that prints Hello World!"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into a single prompt string using the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.3
)
# Strip the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
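If you prefer a higher-level entry point, the `pipeline` API should also work. This is an untested sketch, assuming a recent `transformers` release that accepts chat-format message lists in text-generation pipelines; it reuses the same checkpoint and sampling settings as above.

```python
from transformers import pipeline

# Sketch: wrap the same checkpoint in a text-generation pipeline
# (assumes chat-message input support in your transformers version).
pipe = pipeline(
    "text-generation",
    model="motexture/LlamaXCoder-3.2-3B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a C++ program that prints Hello World!"}
]

# The pipeline applies the chat template internally for chat-style inputs.
outputs = pipe(messages, max_new_tokens=4096, do_sample=True, temperature=0.3)
print(outputs[0]["generated_text"][-1]["content"])
```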
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)