---
tags:
  - autotrain
  - text-generation
  - peft
  - chain-of-thought
  - finetuned
library_name: transformers
base_model: tiiuae/Falcon3-3B-Instruct
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
---

# FalconMind3b

This model was fine-tuned to reason in chain of thought: it was trained to think through problems step by step before giving a final answer.
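For example, asked "What is 17 × 24?", a chain-of-thought model first writes out the intermediate steps (17 × 20 = 340, 17 × 4 = 68, 340 + 68 = 408) rather than jumping straight to the final answer, 408.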

You can find a video explaining how this works, along with more details, below.

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, see the AutoTrain documentation.

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
# Use model.device rather than hard-coding "cuda" so this also works on
# CPU-only machines; give the model room for step-by-step reasoning.
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=512)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
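
The tags above include `peft`, so this repository may contain adapter weights rather than a fully merged model; that is an assumption, not something this card confirms. If loading this repo directly fails, here is a minimal sketch for attaching the adapter to the base model listed in the metadata (requires the `peft` package):

```python
# Sketch: load a PEFT adapter on top of the base model. Assumes this repo
# ships adapter weights (suggested by the `peft` tag, not confirmed).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/Falcon3-3B-Instruct",  # base_model from the metadata above
    device_map="auto",
    torch_dtype="auto",
)
model = PeftModel.from_pretrained(base, "PATH_TO_THIS_REPO").eval()
tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-3B-Instruct")
```

From here, generation works exactly as in the example above; for LoRA-style adapters, `model.merge_and_unload()` can optionally fold the adapter into the base weights for faster inference.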