{
"bomFormat": "CycloneDX",
"specVersion": "1.6",
"serialNumber": "urn:uuid:cc0286ed-e90e-4bc1-8963-7fe3fc905c86",
"version": 1,
"metadata": {
"timestamp": "2025-06-05T09:38:13.084216+00:00",
"component": {
"type": "machine-learning-model",
"bom-ref": "Qwen/Qwen2.5-72B-Instruct-bdf82bad-49e1-5618-a932-a8d56c0d6da6",
"name": "Qwen/Qwen2.5-72B-Instruct",
"externalReferences": [
{
"url": "https://huggingface.co/Qwen/Qwen2.5-72B-Instruct",
"type": "documentation"
}
],
"modelCard": {
"modelParameters": {
"task": "text-generation",
"architectureFamily": "qwen2",
"modelArchitecture": "Qwen2ForCausalLM"
},
"properties": [
{
"name": "library_name",
"value": "transformers"
},
{
"name": "base_model",
"value": "Qwen/Qwen2.5-72B"
}
]
},
"authors": [
{
"name": "Qwen"
}
],
"licenses": [
{
"license": {
"name": "qwen",
"url": "https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE"
}
}
],
"description": "Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.**This repo contains the instruction-tuned 72B Qwen2.5 model**, which has the following features:- Type: Causal Language Models- Training Stage: Pretraining & Post-training- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias- Number of Parameters: 72.7B- Number of Paramaters (Non-Embedding): 70.0B- Number of Layers: 80- Number of Attention Heads (GQA): 64 for Q and 8 for KV- Context Length: Full 131,072 tokens and generation 8192 tokens- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).",
"tags": [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
}
}
}
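
A minimal sketch of consuming this CycloneDX AIBOM with Python's standard json module. The field paths follow the structure of the document above; the local filename is assumed to match the one in this repository:

import json

# Load the AIBOM document (filename assumed from this repository).
with open("Qwen_Qwen2.5-72B-Instruct.json", "r", encoding="utf-8") as f:
    bom = json.load(f)

component = bom["metadata"]["component"]
model_card = component["modelCard"]

# Extract the fields recorded in the BOM.
print("Model:", component["name"])
print("Task:", model_card["modelParameters"]["task"])
print("Architecture:", model_card["modelParameters"]["modelArchitecture"])
print("License:", component["licenses"][0]["license"]["name"])
print("Base model:", next(
    p["value"]
    for p in model_card["properties"]
    if p["name"] == "base_model"
))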
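The task and library_name fields above point to text-generation with transformers. A minimal usage sketch under those assumptions (standard transformers chat-template API; device_map="auto" requires accelerate, and a 72.7B-parameter model needs multiple GPUs or quantization in practice):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-72B-Instruct"

# device_map="auto" shards the 72.7B-parameter model across
# available devices; torch_dtype="auto" picks the checkpoint dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
# Build the prompt with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# The model can generate up to 8K tokens; 512 suffices for this demo.
generated = model.generate(**inputs, max_new_tokens=512)
output_ids = generated[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))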