---
license: apache-2.0
tags:
- llm
- mozihe
- agv
- defect-detection
- chinese
- english
- transformers
- ollama
model_name: agv_llm
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
---
# 📄 AGV-LLM
> **Small enough to self-host, smart enough to write inspection reports and analyze defect data.**
> An 8B bilingual model fine-tuned for **tunnel-defect description** & **work-order drafting**.
> Works in both **Transformers** and **Ollama**.
---
## ✨ Highlights
| Feature | Details |
| ------- | ------- |
| 🔧 **Domain-specific** | 56K samples of inspection dialogues, work-order instructions, and data analysis |
| 🧑‍🏫 **LoRA fine-tuned** | QLoRA-NF4, Rank 8, α = 16 |
| 🈶 **Bilingual** | Chinese ↔ English |
| ⚡ **Fast** | ~15 tok/s on RTX 4090 (fp16) |
| 📦 **Drop-in** | `AutoModelForCausalLM` **or** `ollama pull mozihe/agv_llm` |
---
## 🛠️ Usage
### Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import textwrap

tok = AutoTokenizer.from_pretrained("mozihe/agv_llm")
model = AutoModelForCausalLM.from_pretrained(
    "mozihe/agv_llm", torch_dtype=torch.float16, device_map="auto"
)

# Chinese prompt: "Based on the detection-box info below, generate a defect
# description and remediation advice. Position: x=12.3, y=1.2, z=7.8;
# type: crack; confidence: 0.87"
prompt = (
    "请根据以下检测框信息,生成缺陷描述和整改建议:\n"
    "位置:x=12.3,y=1.2,z=7.8\n种类:裂缝\n置信度:0.87"
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
print(textwrap.fill(tok.decode(out[0], skip_special_tokens=True), 80))
```
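The same call works for English prompts; the work-order example below is illustrative and reuses the `tok` and `model` objects loaded above:
```python
# Illustrative English prompt for the work-order drafting use case.
prompt_en = (
    "Draft a work order for the following defect:\n"
    "Location: ring 124, crown\n"
    "Type: water seepage\n"
    "Confidence: 0.91"
)
inputs = tok(prompt_en, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
print(tok.decode(out[0], skip_special_tokens=True))
```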
### Ollama
1. Build and name the local model (an example Modelfile sketch follows the notes below):
```bash
ollama create agv_llm -f Modelfile
```
2. Run it:
```bash
ollama run agv_llm
```
> Notes
> - The `ADAPTER` line accepts either a remote Hugging Face path or a local `.safetensors` path via `file://`.
> - See <https://github.com/ollama/ollama/blob/main/docs/modelfile.md> for more Modelfile directives.
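A minimal Modelfile sketch, assuming the adapter has been downloaded locally; the base tag, adapter path, and system prompt are placeholders rather than the published artifact names:
```
# Placeholders: adjust the base tag and adapter path to your local files.
FROM llama3.1:8b
ADAPTER ./agv_llm_adapter.safetensors
PARAMETER temperature 0.3
SYSTEM "You are a tunnel-inspection assistant: describe defects and draft work orders in Chinese or English."
```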
---
## 📚 Training Details
| Item | Value |
| ---- | ----- |
| Base | Llama-3.1-8B |
| Method | QLoRA (bitsandbytes NF4) |
| Epochs | 25 |
| LR / Scheduler | 1e-4 / cosine |
| Context | 4,096 tokens |
| Precision | bfloat16 |
| Hardware | 4 × A100-80 GB |
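The table corresponds roughly to the QLoRA configuration sketched below; the target modules and dropout are assumptions not stated in the card:
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base weights (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter: rank 8, alpha 16, as listed in the Highlights table.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,  # assumption: not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```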
---
## ✅ Intended Use
* YOLO detections → structured defect descriptions
* Generating remediation advice / work-order titles / priority levels
* Inspection knowledge-base Q&A (RAG + Ollama); see the sketch below
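For the Q&A use case, a minimal sketch of querying the locally served model through Ollama's REST API; the retrieval step is omitted and the prompt is illustrative:
```python
import requests

# Query the local Ollama server (default port 11434) with the model built above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "agv_llm",
        "prompt": "Summarize the most common defect types reported this week.",  # illustrative
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```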
### ❌ Out-of-scope
* Medical / legal conclusions
* Any safety decision made without human review
---
## ⚠️ Limitations
* 8B parameters ≠ GPT-4-level reasoning depth
* Training data is concentrated on tunnel scenarios; generalization to other civil structures is limited
* Support for languages other than Chinese and English is weak
---
## 📄 Citation
```text
@misc{mozihe2025agvllm,
  title  = {AGV-LLM: A Domain LLM for Tunnel Inspection},
  author = {Zhu, Junheng},
  year   = {2025},
  url    = {https://huggingface.co/mozihe/agv_llm}
}
```
---
## 📝 License
Apache 2.0: commercial use and private deployment are both fine; just keep the copyright and license notices.
If this model saves you a group-meeting report (slides not included), a ⭐ is appreciated!