
ChatGLM3-6B

This version is identical to THUDM/ChatGLM3-6B, except that `tokenizer.chat_template` and the corresponding special tokens have been added to the tokenizer.

With this change, the tokenizer supports the standard Hugging Face `.apply_chat_template` method, unifying the API usage.

The fix has also been submitted to THUDM/chatglm3-6b and is awaiting merge:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id_or_path = "p208p2002/chatglm3-6b-chat-template"
tokenizer = AutoTokenizer.from_pretrained(model_id_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id_or_path, device_map="auto", trust_remote_code=True
)

# Render the conversation with the tokenizer's chat template and tokenize it
inputs = tokenizer.apply_chat_template(
    [
        {
            "role": "system",
            "content": (
                "You are a helpful, respectful and honest assistant. "
                "Always answer as helpfully as possible. If you do not know "
                "the answer to a question, do not share false information."
            ),
        },
        {"role": "user", "content": "How can we slow global warming?"},
    ],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
)

out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0]))
```
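Conceptually, a chat template is just a rule for serializing a list of message dicts into one prompt string before tokenization. Below is a minimal pure-Python sketch of a ChatGLM3-style role-tag layout (an approximation for illustration only; the authoritative rendering is whatever `tokenizer.chat_template` produces):

```python
def render_chat(messages, add_generation_prompt=True):
    # Approximate ChatGLM3-style layout: each turn is a role tag
    # (<|system|>, <|user|>, <|assistant|>) followed by its content.
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    if add_generation_prompt:
        # Trailing assistant tag cues the model to generate the reply.
        parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = render_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How can we slow global warming?"},
])
print(prompt)
```

Calling `tokenizer.apply_chat_template(..., tokenize=False)` on the real tokenizer shows the actual string it renders, which is useful for debugging prompt formatting.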
Model size: 6.24B params · Tensor type: FP16 (Safetensors)