---
library_name: transformers
pipeline_tag: text-generation
tags:
- glm4_moe
- GPTQ
- Int4-Int8Mix
- quantization fix
- vLLM
base_model:
  - zai-org/GLM-4.5-Air
base_model_relation: quantized
---

  
  # GLM-4.5-Air-GPTQ-Int4-Int8Mix
  Base model: [zai-org/GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air)

  ### 【vLLM Single Node with 8 GPUs Startup Command】
  <i>Note: This model must be launched with `--enable-expert-parallel`; otherwise the expert tensors cannot be split evenly across tensor-parallel ranks. This applies even when running on just 2 GPUs.</i>

```bash
CONTEXT_LENGTH=32768

VLLM_USE_MODELSCOPE=true vllm serve \
    QuantTrio/GLM-4.5-Air-GPTQ-Int4-Int8Mix \
    --served-model-name GLM-4.5-Air-GPTQ-Int4-Int8Mix \
    --enable-expert-parallel \
    --swap-space 16 \
    --max-num-seqs 512 \
    --max-model-len $CONTEXT_LENGTH \
    --max-seq-len-to-capture $CONTEXT_LENGTH \
    --gpu-memory-utilization 0.9 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --disable-log-requests \
    --host 0.0.0.0 \
    --port 8000
```
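
  Once the server is running, it exposes an OpenAI-compatible API on port 8000. A minimal client sketch, assuming the `openai` Python package is installed; the model name must match the `--served-model-name` above:

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="GLM-4.5-Air-GPTQ-Int4-Int8Mix",  # must match --served-model-name
    messages=[{"role": "user", "content": "Briefly explain expert parallelism."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```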

  ### 【Dependencies】
  ```
  vllm==0.10.0
  ```

  ### 【Model Update Date】
  ```
  2025-07-30
  1. Initial commit
  ```

  ### 【Model Files】

  | File Size | Last Updated |
  |-----------|--------------|
  | `67GB`    | `2025-07-30` |

  ### 【Model Download】

  ```python
  from huggingface_hub import snapshot_download

  # Download the full repository snapshot into a local directory.
  snapshot_download('QuantTrio/GLM-4.5-Air-GPTQ-Int4-Int8Mix', cache_dir="your_local_path")
  ```

  ### 【Overview】

  # GLM-4.5

  <div align="center">
  <img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="15%"/>
  </div>
  <p align="center">
      👋 Join our <a href="https://github.com/zai-org/GLM-4.5/blob/main/resources/WECHAT.md" target="_blank"> WeChat group </a>.
      <br>
      📖 Read the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank"> technical blog </a>.
      <br>
      📍 Use the GLM-4.5 API service on the <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5"> ZhipuAI Open Platform </a>.
      <br>
      👉 Try <a href="https://chat.z.ai" >GLM-4.5 </a> online.
  </p>

  ## Model Introduction

  The **GLM-4.5** series is a foundation model family designed specifically for agents. GLM-4.5 has **355 billion** total parameters, with **32 billion** active parameters; GLM-4.5-Air features a more compact design with **106 billion** total parameters and **12 billion** active parameters. GLM-4.5 models unify reasoning, coding, and agent capabilities to meet the complex demands of agent-based applications.

  Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models offering two modes: a *thinking mode* for complex reasoning and tool use, and a *non-thinking mode* for immediate responses.
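
  When serving with vLLM, the mode can typically be selected per request through chat-template arguments. A hedged sketch, assuming the chat template exposes an `enable_thinking` flag (the exact kwarg may differ between template versions):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Disable thinking mode for a fast, direct answer.
# `enable_thinking` is an assumption here; check the model's chat
# template for the exact switch it supports.
response = client.chat.completions.create(
    model="GLM-4.5-Air-GPTQ-Int4-Int8Mix",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```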

  We have open-sourced the base models, hybrid reasoning models, and FP8 versions of GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license, available for commercial use and secondary development.

  In our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieved an outstanding score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air maintained excellent efficiency while achieving a competitive score of **59.8**.

  ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)

  For more evaluation results, case studies, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5). The full technical report will be released soon.

  Model code, tool parsers, and inference parsers can be found in the following projects (a minimal loading sketch follows the list):
  - [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe)
  - [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py)
  - [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py)
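
  For quick local experimentation outside a serving stack, the checkpoint can in principle be loaded directly through `transformers`. A minimal sketch, assuming a GPTQ-compatible quantization backend (e.g. `gptqmodel`) is installed and enough GPU memory is available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QuantTrio/GLM-4.5-Air-GPTQ-Int4-Int8Mix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard layers across available GPUs
    torch_dtype="auto",  # keep the dtypes stored in the checkpoint
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```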

  ## Quick Start

  Please refer to our [GitHub project](https://github.com/zai-org/GLM-4.5).