Quantization made by Richard Erkhov.
# starcoder2-3b-instruct - bnb 8bits
- Model creator: https://huggingface.co/TechxGenus/
- Original model: https://huggingface.co/TechxGenus/starcoder2-3b-instruct/
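These are pre-quantized 8-bit weights. Alternatively, you can quantize the original model on the fly with bitsandbytes; a minimal sketch, assuming the `bitsandbytes` package is installed and a CUDA GPU is available:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Load the original checkpoint in 8-bit via bitsandbytes
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-3b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/starcoder2-3b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```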
Original model description:
```yaml
tags:
- code
- starcoder2
library_name: transformers
pipeline_tag: text-generation
license: bigcode-openrail-m
```
# starcoder2-instruct
We've fine-tuned starcoder2-3b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs, using DeepSpeed ZeRO-3 and Flash Attention 2 to accelerate training. The resulting model achieves 65.9 pass@1 on HumanEval-Python. It expects prompts in the Alpaca instruction format, excluding the system prompt.
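Concretely, a formatted prompt fed to the model looks like the following (the instruction text here is only illustrative):

```
### Instruction
Write a Python function that reverses a string.
### Response
```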
## Usage
Here are some examples of how to use our model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Alpaca-style instruction template (no system prompt)
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"  # replace with your instruction
prompt = PROMPT.format(instruction=instruction)

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-3b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/starcoder2-3b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
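The decoded output echoes the prompt before the response. If you only want the generated part, a small sketch that slices off the prompt tokens:

```python
# Decode only the newly generated tokens, dropping the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)
```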
With the text-generation pipeline:
```python
from transformers import pipeline
import torch

# Alpaca-style instruction template (no system prompt)
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"  # replace with your instruction
prompt = PROMPT.format(instruction=instruction)

generator = pipeline(
    model="TechxGenus/starcoder2-3b-instruct",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
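Note that `max_length` caps the total sequence length, prompt included. To bound only the generated portion, you can pass `max_new_tokens` instead:

```python
result = generator(prompt, max_new_tokens=2048)
```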
## Note
The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding. It has undergone very limited testing, and additional safety testing should be performed before any real-world deployment.