---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# InternLM3-8B-Instruct GGUF Model

## Introduction

The `internlm3-8b-instruct` model in GGUF format can be utilized by [llama.cpp](https://github.com/ggerganov/llama.cpp), a highly popular open-source framework for Large Language Model (LLM) inference, across a variety of hardware platforms, both locally and in the cloud.
This repository offers `internlm3-8b-instruct` models in GGUF format, in half precision as well as in several low-bit quantized versions, including `q5_0`, `q5_k_m`, `q6_k`, and `q8_0`.

In the following sections, we first present the installation procedure and then explain how to download the models.
Finally, we illustrate model inference and service deployment through concrete examples.

## Installation

We recommend building `llama.cpp` from source. The following code snippet provides an example for the Linux CUDA platform. For instructions on other platforms, please refer to the [official guide](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build).

- Step 1: create a conda environment and install cmake

```shell
conda create --name internlm3 python=3.10 -y
conda activate internlm3
pip install cmake
```

- Step 2: clone the source code and build the project 

```shell
git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```

All the built targets can be found in the subdirectory `build/bin`.
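
As a quick sanity check that the build succeeded, you can print the binary's version and build information (a minimal check; any of the other targets in `build/bin` would work as well):

```shell
# Confirm that llama-cli was built and runs
build/bin/llama-cli --version
```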

In the following sections, we assume that the working directory is at the root directory of `llama.cpp`.

## Download models

In the [introduction section](#introduction), we mentioned that this repository includes several models with varying levels of computational precision. You can download the appropriate model based on your requirements.
For instance, `internlm3-8b-instruct.gguf` can be downloaded as shown below:

```shell
pip install huggingface-hub
huggingface-cli download internlm/internlm3-8b-instruct-gguf internlm3-8b-instruct.gguf --local-dir . --local-dir-use-symlinks False
```
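
The quantized variants can be fetched the same way. For example, assuming the files follow an `internlm3-8b-instruct-<quant>.gguf` naming pattern (check the repository's file list to confirm the exact names), the `q8_0` model would be downloaded with:

```shell
# Download a quantized variant (filename assumed from the naming pattern above)
huggingface-cli download internlm/internlm3-8b-instruct-gguf internlm3-8b-instruct-q8_0.gguf --local-dir . --local-dir-use-symlinks False
```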

## Inference

You can use `llama-cli` for conducting inference. For a detailed explanation of `llama-cli`, please refer to [this guide](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

### Chat example

Here is an example of using the thinking system prompt.

```shell

thinking_system_prompt="<|im_start|>system\nYou are an expert mathematician with extensive experience in mathematical competitions. You approach problems through systematic thinking and rigorous reasoning. When solving problems, follow these thought processes:\n## Deep Understanding\nTake time to fully comprehend the problem before attempting a solution. Consider:\n- What is the real question being asked?\n- What are the given conditions and what do they tell us?\n- Are there any special restrictions or assumptions?\n- Which information is crucial and which is supplementary?\n## Multi-angle Analysis\nBefore solving, conduct thorough analysis:\n- What mathematical concepts and properties are involved?\n- Can you recall similar classic problems or solution methods?\n- Would diagrams or tables help visualize the problem?\n- Are there special cases that need separate consideration?\n## Systematic Thinking\nPlan your solution path:\n- Propose multiple possible approaches\n- Analyze the feasibility and merits of each method\n- Choose the most appropriate method and explain why\n- Break complex problems into smaller, manageable steps\n## Rigorous Proof\nDuring the solution process:\n- Provide solid justification for each step\n- Include detailed proofs for key conclusions\n- Pay attention to logical connections\n- Be vigilant about potential oversights\n## Repeated Verification\nAfter completing your solution:\n- Verify your results satisfy all conditions\n- Check for overlooked special cases\n- Consider if the solution can be optimized or simplified\n- Review your reasoning process\nRemember:\n1. Take time to think thoroughly rather than rushing to an answer\n2. Rigorously prove each key conclusion\n3. Keep an open mind and try different approaches\n4. Summarize valuable problem-solving methods\n5. Maintain healthy skepticism and verify multiple times\nYour response should reflect deep mathematical understanding and precise logical thinking, making your solution path and reasoning clear to others.\nWhen you're ready, present your complete solution with:\n- Clear problem understanding\n- Detailed solution process\n- Key insights\n- Thorough verification\nFocus on clear, logical progression of ideas and thorough explanation of your mathematical reasoning. Provide answers in the same language as the user asking the question, repeat the final answer using a '\\boxed{}' without any units, you have [[8192]] tokens to complete the answer.\n<|im_end|>\n"

build/bin/llama-cli \
    --model internlm3-8b-instruct.gguf  \
    --predict 2048 \
    --ctx-size 8192 \
    --gpu-layers 48 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --seed 1024 \
    --color \
    --prompt "$thinking_system_prompt" \
    --interactive \
    --multiline-input \
    --conversation \
    --verbose \
    --logdir workdir/logdir \
    --in-prefix "<|im_start|>user\n" \
    --in-suffix "<|im_end|>\n<|im_start|>assistant\n"
```

Then input your question, for example: `Given the function \(f(x)=\mathrm{e}^{x}-ax - a^{3}\),\n(1) When \(a = 1\), find the equation of the tangent line to the curve \(y = f(x)\) at the point \((1,f(1))\).\n(2) If \(f(x)\) has a local minimum and the minimum value is less than \(0\), determine the range of values for \(a\).`

### Function call example

`llama-cli` example:

```shell
build/bin/llama-cli \
    --model internlm3-8b-instruct.gguf \
    --predict 512 \
    --ctx-size 4096 \
    --gpu-layers 48 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --seed 1024 \
    --color \
    --prompt '<|im_start|>system\nYou are InternLM-Chat, a harmless AI assistant.<|im_end|>\n<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>\n<|im_start|>user\n' \
    --interactive \
    --multiline-input \
    --conversation \
    --verbose \
    --in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
    --special
```

Conversation results:

```text
<s><|im_start|>system
You are InternLM-Chat, a harmless AI assistant.<|im_end|>
<|im_start|>system name=<|plugin|>[{"name": "get_current_weather", "parameters": {"required": ["location"], "type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string"}}}, "description": "Get the current weather in a given location"}]<|im_end|>
<|im_start|>user

> I want to know today's weather in Shanghai
I need to use the get_current_weather function to get the current weather in Shanghai.<|action_start|><|plugin|>
{"name": "get_current_weather", "parameters": {"location": "Shanghai"}}<|action_end|>32
<|im_end|>

> <|im_start|>environment name=<|plugin|>\n{"temperature": 22}
The current temperature in Shanghai is 22 degrees Celsius.<|im_end|>

> 
```

## Serving

`llama.cpp` provides an OpenAI-API-compatible server, `llama-server`. You can deploy `internlm3-8b-instruct.gguf` as a service like this:

```shell
./build/bin/llama-server -m ./internlm3-8b-instruct.gguf -ngl 48
```
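
Once the server is up, you can sanity-check the endpoint with a plain `curl` request (a minimal sketch; it assumes the default port `8080` and that no API key is enforced):

```shell
# Send a minimal chat completion request to the local llama-server
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say hello in one sentence."}
          ],
          "temperature": 0.8,
          "top_p": 0.8
        }'
```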

On the client side, you can access the service through the OpenAI API:

```python
from openai import OpenAI
client = OpenAI(
    api_key='YOUR_API_KEY',
    base_url='http://localhost:8080/v1'
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
  model=model_name,
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": " provide three suggestions about time management"},
  ],
  temperature=0.8,
  top_p=0.8
)
print(response)
```