TAT-R1
GitHub: https://github.com/jasonNLP/TAT-R1
Quickstart
Here is a code snippet showing how to load the tokenizer and model, and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "hhoh/TAT-R1"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
system_prompt = """A conversation between User and Assistant. The User asks a question, and the Assistant solves it. \
The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. \
The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, \
i.e., <think> reasoning process here </think> <answer> answer here </answer>. \
User:
{}
Assistant:
"""
# For English-to-Chinese translation, use:
# (the prompt reads: "Translate the following text into Chinese, with no extra explanation:")
query = "把下面的文本翻译成中文,不要额外解释:\n{}"
# For Chinese-to-English translation, use:
# (the prompt reads: "Translate the following text into English, with no extra explanation:")
# query = "把下面的文本翻译成英语,不要额外解释:\n{}"
src_text = "Plants make oxygen which humans breathe, and they take in carbon-dioxide which humans exhale (that is, breathe out)."
prompt = system_prompt.format(query.format(src_text))
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
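Because the system prompt asks the model to wrap its reasoning in <think> </think> and the translation in <answer> </answer> tags, you may want to keep only the final translation. Below is a minimal sketch of such post-processing; extract_answer is a hypothetical helper, and it assumes the model actually emits the tags (falling back to the raw response if it does not).

import re

def extract_answer(response: str) -> str:
    # Pull the text between <answer> ... </answer>; fall back to the
    # full response if the model did not emit the tags.
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

print(extract_answer(response))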