---
license: llama3.2
datasets:
- bigbio/med_qa
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
tags:
- medical
- SandLogic
- Meta
- Conversational
---

# SandLogic Technology - Quantized Llama-3.2-1B-Instruct-Medical-GGUF

## Model Description

We have quantized the Llama-3.2-1B-Instruct-Medical model into two GGUF variants:

1. Q5_KM
2. Q4_KM

These quantized models offer improved efficiency while maintaining performance on medical-related tasks.

Discover our full range of quantized language models by visiting our [SandLogic Lexicon](https://github.com/sandlogic/SandLogic-Lexicon) GitHub repository. To learn more about our company and services, check out our website at [SandLogic](https://www.sandlogic.com).

## Original Model Information

- **Base Model**: [Meta Llama 3.2 1B Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Developer**: Meta (base model)
- **Model Type**: Multilingual large language model (LLM)
- **Architecture**: Auto-regressive language model with an optimized transformer architecture
- **Parameters**: 1 billion
- **Training Approach**: Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF)

## Fine-tuning Details

- **Dataset**: [bigbio/med_qa](https://huggingface.co/datasets/bigbio/med_qa)
- **Languages**: English, simplified Chinese, and traditional Chinese
- **Dataset Size**:
  - English: 12,723 questions
  - Simplified Chinese: 34,251 questions
  - Traditional Chinese: 14,123 questions
- **Data Type**: Free-form multiple-choice OpenQA for medical problems, collected from professional medical board exams

## Model Capabilities

This model is optimized for medical-related dialogue and tasks, including:

- Answering medical questions
- Summarizing medical information
- Assisting with medical problem-solving

## Intended Use in Medical Domain

1. **Medical Education**: Assisting medical students in exam preparation and learning
2. **Clinical Decision Support**: Providing quick references for healthcare professionals
3. **Patient Education**: Explaining medical concepts in simple terms for patients
4. **Medical Literature Review**: Summarizing and extracting key information from medical texts
5. **Differential Diagnosis**: Assisting in generating potential diagnoses based on symptoms
6. **Medical Coding**: Aiding in the accurate coding of medical procedures and diagnoses
7. **Drug Information**: Providing information on medications, their uses, and potential interactions
8. **Medical Translation**: Assisting with medical translations across supported languages

## Quantized Variants

1. **Q5_KM**: 5-bit quantization using the KM method
2. **Q4_KM**: 4-bit quantization using the KM method

These quantized models aim to reduce model size and improve inference speed while keeping performance as close to the original model as possible.

## Usage

```bash
pip install llama-cpp-python
```

Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support.

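As a sketch, on a machine with an NVIDIA GPU the package can be reinstalled with the CUDA backend enabled (assumes a working CUDA toolkit; the exact CMake flag has varied across llama-cpp-python releases, so check the documentation linked above for your version and for other backends such as Metal or Vulkan):

```shell
# Rebuild llama-cpp-python with the CUDA backend enabled.
# Flag name per recent llama-cpp-python docs; older releases used different flags.
CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```

After reinstalling, pass `n_gpu_layers=-1` to `Llama(...)` to offload all layers to the GPU.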
### Basic Text Completion

Here's an example demonstrating how to use the high-level API for basic text completion:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Llama-3.2-1B-Medical_Q4_KM.gguf",
    verbose=False,
    # n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # n_ctx=2048,       # Uncomment to increase the context window
)

output = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": """You are a helpful, respectful and honest medical assistant developed by SandLogic Technologies.
Always answer as helpfully as possible, while being safe.
Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.
Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.
If you don't know the answer to a question, please don't share false information.""",
        },
        {
            "role": "user",
            "content": "I have been experiencing a persistent cough for the last two weeks, along with a mild fever and fatigue. What could be the possible causes of these symptoms?",
        },
    ]
)

print(output["choices"][0]["message"]["content"])
```
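`create_chat_completion` returns an OpenAI-style response dictionary. As a minimal sketch, a small helper (hypothetical, not part of any library API) can make extracting the reply explicit; it is demonstrated against a mocked response so it runs without the model file:

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant's message text out of an OpenAI-style
    chat-completion response dict, as returned by llama-cpp-python."""
    return response["choices"][0]["message"]["content"]

# Mocked response with the same shape as the llm.create_chat_completion(...) output.
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "A persistent cough with fever may indicate..."}}
    ]
}

print(extract_reply(mock_response))
# -> A persistent cough with fever may indicate...
```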

## Download

You can download `Llama` models in `gguf` format directly from Hugging Face using the `from_pretrained` method. This feature requires the `huggingface-hub` package.

To install it, run: `pip install huggingface-hub`

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/Llama-3.2-1B-Instruct-Medical-GGUF",
    filename="*Llama-3.2-1B-Medical_Q5_KM.gguf",
    verbose=False
)
```

By default, `from_pretrained` will download the model to the Hugging Face cache directory. You can manage installed model files using the `huggingface-cli` tool.

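For example, the cache can be inspected and pruned with the `huggingface-cli` cache subcommands (available in recent `huggingface_hub` releases):

```shell
# List repos currently stored in the Hugging Face cache, with sizes on disk.
huggingface-cli scan-cache

# Interactively select cached revisions to delete and reclaim disk space.
huggingface-cli delete-cache
```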
## Ethical Considerations and Limitations

- This model is not a substitute for professional medical advice, diagnosis, or treatment
- Users should be aware of potential biases in the training data
- The model's knowledge cutoff date may limit its awareness of recent medical developments

## Acknowledgements

We thank Meta for developing the original Llama-3.2-1B-Instruct model and the creators of the bigbio/med_qa dataset.
Special thanks to Georgi Gerganov and the entire llama.cpp development team for their outstanding contributions.

## Contact

For any inquiries or support, please contact us at [email protected] or visit our [support page](https://www.sandlogic.com/LingoForge/support).

## Explore More

Explore our full collection of quantized models on the [SandLogic Lexicon](https://github.com/sandlogic/SandLogic-Lexicon) GitHub repository.