Giant Tokenizer

A modification of the Phi-4 tokenizer with a new chat template:

<|im_start|>command<|im_sep|>
You are Iron Giant, a language model trained by AI Factory to help friends. Your role as a friend involves thoroughly exploring questions through a systematic thinking process before providing a final, precise, and accurate solution. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracking, and iteration to develop a well-considered thinking process. Please structure your response into two main sections, Thought and Solution, using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analyzing the question, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on the various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise, and should detail the necessary steps needed to reach the conclusion.<|im_end|>
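
For illustration, a response following this format might look like (a hypothetical exchange, not output from the model):

<think> The user asks for 12 × 13. Decompose: 12 × 13 = 12 × 10 + 12 × 3 = 120 + 36 = 156. Verify by swapping factors: 13 × 12 = 130 + 26 = 156. Both results agree. </think> 12 × 13 = 156.

The template handles four roles: command, other, user, and me.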

{%- for m in messages -%}
  {%- if m.role == 'command' -%}
    <|im_start|>command<|im_sep|>{{ m.content }}<|im_end|>
  {%- elif m.role == 'other' -%}
    <|im_start|>other<|im_sep|>{{ m.content }}<|im_end|>
  {%- elif m.role == 'user' -%}
    <|im_start|>user<|im_sep|>{{ m.content }}<|im_end|>
  {%- elif m.role == 'me' -%}
    <|im_start|>me<|im_sep|>{{ m.content }}<|im_end|>
  {%- endif -%}
{%- endfor -%}

{%- if add_generation_prompt -%}
  <|im_start|>me<|im_sep|>
{%- endif -%}
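
To try the template before publishing it, you can attach the Jinja string to the base tokenizer by hand; a minimal sketch, assuming the template is saved next to this file as chat_template.jinja (a hypothetical filename):

from transformers import AutoTokenizer

# Start from the base Phi-4 tokenizer and swap in the template above.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
with open("chat_template.jinja") as f:
    tokenizer.chat_template = f.read()  # a raw Jinja string is accepted

# save_pretrained persists the template in the exported tokenizer config.
tokenizer.save_pretrained("giant-tokenizer")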

Example:

from transformers import AutoTokenizer

# Load the tokenizer; the chat template above ships with it.
tokenizer = AutoTokenizer.from_pretrained("ai-factory/giant")

# One message per role supported by the template.
messages = [
    {"role": "command", "content": "Initialize with the system instructions."},
    {"role": "other", "content": "Alice: I think we need more data."},
    {"role": "user", "content": "What do you suggest?"},
    {"role": "me", "content": "Let's collect additional samples before deciding."},
]

# Render to a string. add_generation_prompt=True appends the trailing
# <|im_start|>me<|im_sep|> so the model continues in the "me" role.
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
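
With the whitespace-trimmed template, the printed prompt is a single line; it is wrapped here after each <|im_end|> for readability:

<|im_start|>command<|im_sep|>Initialize with the system instructions.<|im_end|>
<|im_start|>other<|im_sep|>Alice: I think we need more data.<|im_end|>
<|im_start|>user<|im_sep|>What do you suggest?<|im_end|>
<|im_start|>me<|im_sep|>Let's collect additional samples before deciding.<|im_end|>
<|im_start|>me<|im_sep|>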

Citation:

@misc{abdin2024phi4technicalreport,
      title={Phi-4 Technical Report},
      author={Marah Abdin and Jyoti Aneja and Harkirat Behl and Sébastien Bubeck and Ronen Eldan and Suriya Gunasekar and Michael Harrison and Russell J. Hewett and Mojan Javaheripi and Piero Kauffmann and James R. Lee and Yin Tat Lee and Yuanzhi Li and Weishung Liu and Caio C. T. Mendes and Anh Nguyen and Eric Price and Gustavo de Rosa and Olli Saarikivi and Adil Salim and Shital Shah and Xin Wang and Rachel Ward and Yue Wu and Dingli Yu and Cyril Zhang and Yi Zhang},
      year={2024},
      eprint={2412.08905},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.08905},
}