Chinese Medical Dialogue Model Based on Meta-Llama-3.1-8B-Instruct

This model was obtained by supervised fine-tuning (SFT) the meta-llama/Llama-3.1-8B base model on the Flmc/DISC-Med-SFT dataset. It is intended to provide users with medical-related dialogue support.

Model Architecture

The model is trained with LoRA (Low-Rank Adaptation); the resulting LoRA adapter weights are stored in the adapter_model.safetensors file.
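
For reference, below is a minimal sketch of how such an adapter is typically produced with Unsloth. The rank, target modules, and other hyperparameters are assumptions for illustration, not the documented training configuration of this model.

from unsloth import FastLanguageModel

# Sketch only: the hyperparameters below are illustrative assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/Llama-3.1-8B",
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                 # LoRA rank (assumed)
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    bias = "none",
)
# ... run SFT on Flmc/DISC-Med-SFT (e.g. with trl's SFTTrainer) ...
model.save_pretrained("lora_model")  # writes adapter_model.safetensors and adapter_config.json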

Usage

1. Using peft (not recommended; Unsloth is more efficient):

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

load_in_4bit = True  # load in 4-bit to reduce GPU memory usage

model = AutoPeftModelForCausalLM.from_pretrained(
    "lora_model", # your model path
    load_in_4bit = load_in_4bit,
)
tokenizer = AutoTokenizer.from_pretrained("lora_model")
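
As a quick smoke test of the peft-loaded model, you can generate a reply with plain transformers. The prompt below is only an illustration, and the sketch assumes the saved tokenizer ships the Llama 3.1 chat template.

# Quick generation check (illustrative prompt; assumes the tokenizer has a chat template).
messages = [{"role": "user", "content": "大夫,我最近腰部经常疼痛,可能是什么原因?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens = 128, do_sample = True, temperature = 0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens = True))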

2. Using the Unsloth library (recommended):

from unsloth import FastLanguageModel

max_seq_length = 2048  # maximum context length
dtype = None           # None lets Unsloth auto-detect (bfloat16 on recent GPUs)
load_in_4bit = True    # load in 4-bit to reduce GPU memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model", # your model path
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # enable native 2x faster inference

messages = [
    {"role": "user", "content": "大夫,请问我最近脊椎靠近腰部的地方经常有疼痛感,请问是什么原因?"},
    {"role": "assistant", "content": "您好,根据您的症状描述,我怀疑您可能患有腰椎间盘突出症。这个症状常见于中老年人,由于椎间盘损伤或退行性变引起。根据我的经验,您可以考虑进行MRI检查来确认诊断。"},
    {"role": "user", "content": "大夫,我每天需要坐很久,是不是也和这个有关系?保持怎样的坐姿会改善呢?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # must be added for generation
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
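
The unsloth.Q4_K_M.gguf file used in option 3 below can be exported from the same Unsloth model via its GGUF helper. A minimal sketch, assuming the default export settings (the output directory name is just an example):

# Export a 4-bit GGUF (q4_k_m) for ollama / llama.cpp.
model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")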

3. Using unsloth.Q4_K_M.gguf (for ollama and llama.cpp):

Place unsloth.Q4_K_M.gguf and the following Modelfile in the same directory:

FROM ./unsloth.Q4_K_M.gguf

SYSTEM "你是一名专业的全科医生,回答的语气必须专业而亲切,需要根据患者提出的症状描述来回答问题,清晰专业的回答患者提出的问题。"

TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|eom_id|>"
PARAMETER temperature 1.1
PARAMETER min_p 0.1
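
With the GGUF file and the Modelfile in the same directory, register the model with ollama and run it. The model name medical-assistant is only an example:

ollama create medical-assistant -f Modelfile
ollama run medical-assistant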

Running on Android in Termux (thanks to this snippet: https://gitlab.com/-/snippets/3682973)

Ollama on Termux

Install Dependencies

pkg upgrade
pkg install git cmake golang

Build Ollama from source

git clone --depth 1 https://github.com/ollama/ollama.git
cd ollama
go generate ./...
go build .
./ollama serve &
./ollama run lastmass/llama3.2-chinese

Cleanup

You may want to remove the go folder that was just created in your home directory. If so, here is how to do it.

chmod -R 700 ~/go
rm -r ~/go

Currently, Termux does not have .local/bin in its PATH (though you can add it if you prefer). If you would like to move the ollama binary to the bin folder, you can do the following.

cp ollama/ollama /data/data/com.termux/files/usr/bin/

Now you can just run ollama in your terminal directly!
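
Once ollama serve is running, you can also query the model over Ollama's local REST API. The model name below assumes the example name created from the Modelfile earlier:

curl http://localhost:11434/api/chat -d '{
  "model": "medical-assistant",
  "messages": [
    {"role": "user", "content": "大夫,我最近腰部经常疼痛,可能是什么原因?"}
  ],
  "stream": false
}'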

Notes

  • This model was trained on a public dataset and may contain biases or inaccuracies.
  • The model's output is for reference only and cannot replace professional medical advice or diagnosis.
  • Always consult a physician or other medical professional for accurate medical information and treatment options.
  • When using this model in a medical setting, carefully evaluate its output and combine it with other medical resources before reaching a conclusion.

Disclaimer

The medical advice and information provided by this model are for reference only and do not constitute medical diagnosis or treatment recommendations of any kind. Consult a qualified medical professional before making any medical decisions. The model provider accepts no liability for any consequences arising from the use of this model.
