Upload fine-tuned Qwen-1.8B LoRA adapters and tokenizer for stock quant education
- README.md +139 -3
- adapter_config.json +30 -0
- adapter_model.safetensors +3 -0
- qwen.tiktoken +0 -0
- special_tokens_map.json +4 -0
- tokenizer_config.json +14 -0
- training_args.bin +3 -0
README.md
CHANGED
@@ -1,3 +1,139 @@
- ---
- license: apache-2.0
- ---

---
license: apache-2.0
language:
- zh
- en
tags:
- qwen
- lora
- peft
- large-language-model
- quantitative-finance
- stock-market
- trading-strategy
- investment-education
- financial-explainer
- text-generation
- instruction-following
base_model: Qwen/Qwen-1_8B-Chat
pipeline_tag: text-generation
widget:
- text: "请用大白话解释什么是移动平均线?"
  example_title: "Explain Moving Average"
---

# Qwen-1.8B-Chat LoRA for Stock Market Quantitative Education (股票量化投教LoRA模型)

This repository contains LoRA (Low-Rank Adaptation) adapters fine-tuned from the `Qwen/Qwen-1_8B-Chat` model. The goal of the fine-tuning is an AI assistant that explains stock market and quantitative trading concepts in plain language ("大白话", everyday speech), making these topics more accessible to beginners.

## Model Description

This model is a PEFT-LoRA adaptation of the `Qwen/Qwen-1_8B-Chat` large language model, fine-tuned on a small custom dataset of ~20 instruction-response pairs focused on financial education. Because the dataset is so small, the model should be considered **experimental and for demonstration purposes only**.

**Developed by:** 天算AI科技研发实验室 (Natural Algorithm AI R&D Lab) - jinv2

## Intended Uses & Limitations

**Intended Uses:**

* Educational tool for understanding basic stock market and quantitative trading terms.
* Generating simple explanations of financial concepts.
* Demonstrating the LoRA fine-tuning process on a chat model for a specific domain.

**Limitations:**

* **Not Financial Advice:** The model's output is strictly educational and should NOT be treated as financial advice. Always consult a qualified financial advisor before making investment decisions.
* **Limited Knowledge:** Fine-tuned on a very small dataset, so its domain knowledge is narrow and may not be comprehensive or entirely accurate.
* **Potential for Hallucinations:** Like all LLMs, it may generate incorrect or nonsensical information.
* **Overfitting:** With only ~20 training examples, the model is likely overfit to them.
* **Bias:** Biases in the training data may be reflected in the model's responses.
* **Requires Base Model:** These are LoRA adapters only; the original `Qwen/Qwen-1_8B-Chat` base model must be loaded first (see the next section).

## How to Use with PEFT

Load the base model first, then apply these LoRA adapters with the PEFT library:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "Qwen/Qwen-1_8B-Chat"
adapter_model_name = "jinv2/qwen-1.8b-chat-lora-stock-quant-edu"  # Replace with your actual model name on the Hub

# Load tokenizer (trust_remote_code is required for Qwen's custom tokenizer)
tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id  # <|endoftext|>, ID 151643 for Qwen

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,  # or "auto"
    device_map="auto",
    trust_remote_code=True,
)

# Load LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_model_name)
model = model.eval()  # Set to evaluation mode

# Example inference; prompt: "Explain the MACD indicator in plain language."
prompt = "请用大白话解释什么是MACD指标?"
# For Qwen-Chat, using model.chat() is recommended
response, history = model.chat(
    tokenizer, prompt, history=None,
    system="You are a helpful financial education assistant.",
)
print(response)

# Alternative generic generation (requires a tokenizer with a chat template)
# messages = [
#     {"role": "system", "content": "You are a helpful financial education assistant."},
#     {"role": "user", "content": prompt},
# ]
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
# generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
# response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# print(response)
```

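If you would rather ship a single standalone checkpoint than load base model plus adapter at runtime, PEFT can fold the LoRA weights into the base weights. A minimal sketch (the output path is illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B-Chat", torch_dtype=torch.float16,
    device_map="auto", trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "jinv2/qwen-1.8b-chat-lora-stock-quant-edu")

# merge_and_unload() folds each LoRA update (B @ A, scaled by alpha/r) into the
# frozen base weights and returns a plain transformers model without PEFT wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("./qwen-1_8b-stock-quant-edu-merged")  # illustrative path
```

The merged model then loads with `AutoModelForCausalLM.from_pretrained` alone, at the cost of storing a full 1.8B-parameter checkpoint instead of the ~27 MB adapter.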

## Training Details

* **Base Model:** `Qwen/Qwen-1_8B-Chat`
* **Fine-tuning Method:** LoRA (Low-Rank Adaptation) via PEFT and the TRL `SFTTrainer` (a reconstruction sketch follows this list).
* **Dataset:** ~20 custom instruction-response pairs for financial education.
* **Training Configuration (key parameters):**
  * LoRA r: 16, LoRA alpha: 32 (matching `adapter_config.json` below)
  * Target modules: `c_attn`, `c_proj`, `w1`, `w2`
  * Optimizer: AdamW (Trainer default)
  * Precision: FP32 (the FP16/BF16 GradScaler was unstable in the training environment)
  * Epochs: ~17 (based on 80 optimizer steps)
  * Batch size (effective): 4 (`per_device_train_batch_size=1`, `gradient_accumulation_steps=4`)
  * Learning rate: 2e-4
  * Max sequence length: 512
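
The original training script is not part of this upload (`training_args.bin` is a serialized binary), so the sketch below is a reconstruction from the parameters listed above. The dataset path, text field name, and output directory are hypothetical, and the `SFTTrainer` keyword arguments follow the older (TRL ≤0.8) signature:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-1_8B-Chat", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B-Chat", trust_remote_code=True)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["c_attn", "c_proj", "w1", "w2"],  # Qwen-1 attention + MLP projections
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="./qwen-lora-stock-quant-edu",  # hypothetical
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,             # effective batch size 4
    learning_rate=2e-4,
    max_steps=80,                              # ~17 epochs over ~20 examples
    fp16=False, bf16=False,                    # FP32, as noted above
    logging_steps=10,
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # hypothetical field holding formatted chat text
    max_seq_length=512,
    peft_config=lora_config,
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model()  # writes the adapter files found in this repo
```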

## Disclaimer

This model is provided "as-is" without any warranty. The developers are not responsible for any outcomes resulting from the use of this model. Always verify information and use at your own risk.

**Copyright Information:**

© 天算AI科技研发实验室 (Natural Algorithm AI R&D Lab) - jinv2
All rights reserved unless otherwise specified by the license.
adapter_config.json
ADDED
@@ -0,0 +1,30 @@
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "Qwen/Qwen-1_8B-Chat",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "w1",
    "w2",
    "c_proj",
    "c_attn"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false
}
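
These values can be sanity-checked programmatically; a small sketch using PEFT's config loader (repo id as used in the README):

```python
from peft import PeftConfig

# Reads adapter_config.json from the repo; returns a LoraConfig since peft_type is "LORA".
cfg = PeftConfig.from_pretrained("jinv2/qwen-1.8b-chat-lora-stock-quant-edu")
print(cfg.base_model_name_or_path)  # "Qwen/Qwen-1_8B-Chat"
print(cfg.r, cfg.lora_alpha)        # 16 32 -> effective scaling alpha/r = 2.0
print(sorted(cfg.target_modules))   # ['c_attn', 'c_proj', 'w1', 'w2']
```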
adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94677c156cb9a430296fbb57613430e30cd77f2b0442ff2c29f2ed94d7248be0
size 26867880
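
The lines above are a Git LFS pointer stub; the actual ~27 MB adapter weights live in LFS storage. To peek at what the file contains, a small sketch (repo id as used in the README):

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Downloading resolves the LFS pointer to the real ~27 MB file.
path = hf_hub_download(
    "jinv2/qwen-1.8b-chat-lora-stock-quant-edu",  # repo id as used in the README
    "adapter_model.safetensors",
)
state = load_file(path)  # dict: module path -> LoRA A/B weight tensor
for name in list(state)[:4]:
    print(name, tuple(state[name].shape))
```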
qwen.tiktoken
ADDED
The diff for this file is too large to render.
special_tokens_map.json
ADDED
@@ -0,0 +1,4 @@
{
  "eos_token": "<|endoftext|>",
  "pad_token": "<|endoftext|>"
}
tokenizer_config.json
ADDED
@@ -0,0 +1,14 @@
{
  "added_tokens_decoder": {},
  "auto_map": {
    "AutoTokenizer": [
      "Qwen/Qwen-1_8B-Chat--tokenization_qwen.QWenTokenizer",
      null
    ]
  },
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 8192,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "QWenTokenizer"
}
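
Because `auto_map` points back at the base repo's `QWenTokenizer` implementation, loading the tokenizer from this repo requires `trust_remote_code=True`. A quick sketch (repo id as used in the README):

```python
from transformers import AutoTokenizer

# auto_map resolves tokenization_qwen.QWenTokenizer from Qwen/Qwen-1_8B-Chat,
# so remote code must be trusted.
tok = AutoTokenizer.from_pretrained(
    "jinv2/qwen-1.8b-chat-lora-stock-quant-edu",
    trust_remote_code=True,
)
print(tok.eos_token, tok.pad_token)  # both "<|endoftext|>", per special_tokens_map.json
```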
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9de69189de29010330d67e0bb8539112856b4725f2bcc2cc26387be3e7b4a51
size 5457