nicoboss committed
Commit 4ae2cde · verified · 1 parent: 03a5ecd

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,148 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ - ro
+ base_model: google/gemma-3-27b-it
+ datasets:
+ - nicoboss/medra-medical
+ tags:
+ - text-generation
+ - medical-ai
+ - summarization
+ - diagnostic-reasoning
+ - gemma-3
+ - fine-tuned
+ model_size: 27B
+ version: Medra v1 – Gemma 27B Edition
+ library_name: peft
+ author: Dr. Alexandru Lupoi & Nico Bosshard
+ pipeline_tag: text-generation
+ ---
+ 
+ ![Medra Logo](https://cdn-uploads.huggingface.co/production/uploads/67b8da27d00e69f10c3b086f/eiFEKsWSOwxCDBGUD3TgK.png)
+ 
+ ---
+ 
+ # 🩺 Medra v1 (Gemma Edition)
+ 
+ > _“Intelligence alone is not enough—medicine requires reflection.”_
+ 
+ **Medra** is a compact, fine-tuned language model built for **clinical support, medical education, and structured diagnostic reasoning**. Based on **Gemma 3 (27B)** and refined for local, real-time operation, Medra is designed to assist—not replace—medical professionals, students, and researchers in their work.
+ 
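+ A minimal loading sketch with Hugging Face `transformers` (the repository id below is a placeholder for illustration; the 27B weights need substantial GPU memory in `bfloat16`):
+ 
+ ```python
+ # Sketch: load Medra for local inference with transformers.
+ # Assumes a transformers version with Gemma 3 support (the shipped
+ # config.json was written by transformers 4.51.3) and accelerate installed.
+ import torch
+ from transformers import AutoProcessor, Gemma3ForConditionalGeneration
+ 
+ model_id = "nicoboss/medra-gemma-27b"  # placeholder repo id for illustration
+ model = Gemma3ForConditionalGeneration.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # matches the checkpoint dtype
+     device_map="auto",           # shard across available devices
+ )
+ processor = AutoProcessor.from_pretrained(model_id)
+ ```
+ 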
+ ---
+ 
+ ## 🌟 Why Medra?
+ 
+ Most large models speak _about_ medicine.
+ **Medra thinks with it.**
+ 
+ 🔹 **Built for Reflection:** Every answer includes a structured internal monologue (via `<think>` tags), showing its reasoning before its conclusions; a parsing sketch follows this list.
+ 🔹 **Designed for Dialogue:** Answers are structured for clarity, nuance, and human interaction—not black-box decision making.
+ 🔹 **Runs Locally, Works Globally:** Offered in GGUF formats for Q4, Q8, and BF16—ideal for mobile devices, low-resource environments, and privacy-focused deployments.
+ 🔹 **Ethically Grounded:** Always prioritizes human-in-the-loop thinking. No substitution for licensed professionals. No AI arrogance.
+ 
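+ Because the reasoning arrives inside `<think>` tags (see the system prompt below), downstream code can separate it from the final answer. A small illustrative helper (the function name is hypothetical):
+ 
+ ```python
+ import re
+ 
+ def split_reasoning(reply: str) -> tuple[str, str]:
+     """Split a Medra reply into (reasoning, answer) on <think> tags."""
+     match = re.search(r"<think>(.*?)</think>", reply, re.DOTALL)
+     if match is None:
+         return "", reply.strip()          # no reasoning block emitted
+     reasoning = match.group(1).strip()    # the internal monologue
+     answer = reply[match.end():].strip()  # everything after </think>
+     return reasoning, answer
+ ```
+ 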
+ ---
+ 
+ ## 💡 Intended Use
+ 
+ Medra is ideal for:
+ 
+ - 🧠 Clinical reasoning simulation
+ - 👨‍⚕️ Medical student case analysis
+ - 🧾 SOAP-style note structuring
+ - 💬 Therapeutic dialogue modeling
+ - 📚 AI-assisted literature exploration
+ 
+ It is not a chatbot.
+ It is a **reasoning assistant** with clinical literacy.
+ 
+ ---
+ 
+ ## 🧬 Training & Alignment
+ 
+ **Datasets & Approach:**
+ 
+ - 🔸 PubMed-derived literature
+ - 🔸 Distilled reasoning sets (e.g. R1)
+ - 🔸 Clinical dialogues & note formats
+ - 🔸 Medical Q&A corpora in English and Romanian
+ 
+ **Training Stages:**
+ 
+ - ✅ Stage 1: Supervised Fine-Tuning (SFT); a hedged sketch follows below
+ - 🚧 Stage 2: Vision training (planned for a future release)
+ 
+ **Base Model:** `google/gemma-3-27b-it`
+ **Quantizations Available:** `Q4`, `Q8`, `BF16`
+ 
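+ The card's `library_name: peft` indicates an adapter-style fine-tune of the base model. As a hedged illustration only (the rank, alpha, and target modules below are invented for the example, not the authors' actual recipe), a Stage 1 SFT setup with PEFT looks roughly like this:
+ 
+ ```python
+ # Sketch: attach a LoRA adapter to the base model before supervised fine-tuning.
+ from peft import LoraConfig, get_peft_model
+ from transformers import Gemma3ForConditionalGeneration
+ 
+ base = Gemma3ForConditionalGeneration.from_pretrained("google/gemma-3-27b-it")
+ lora = LoraConfig(
+     r=16,                  # illustrative rank, not the authors' setting
+     lora_alpha=32,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora)
+ model.print_trainable_parameters()  # only adapter weights should be trainable
+ ```
+ 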
+ ---
+ 
+ ## 🧠 System Prompt (Simplified)
+ 
+ ```text
+ You are Medra, an advanced AI medical assistant.
+ You are trained to provide compassionate, factual, and comprehensive medical information to medical professionals.
+ 
+ ## Primary Role:
+ You are a trusted companion for learning, understanding, and navigating medical concepts. You are clear, thorough, and empathetic in your responses.
+ 
+ ## Core Directives:
+ 
+ 1. Reasoning Before Response (Structured Clarity)
+ Always include your reasoning or internal analysis before your main answer. Present this in a structured `<think> </think>` tag format. Example:
+ <think>
+ The user asked about [topic]. I’ll need to explain X, clarify Y, and ensure understanding of Z. I’ll provide context, then an actionable or concise answer.
+ </think>
+ 
+ 2. Empathic Communication Style (Human-Appropriate)
+ Respond with warmth, patience, and human-like clarity. Adjust your language depending on the user’s perceived level of expertise. Use analogies and simpler phrasing when needed. Acknowledge confusion or emotional concern when appropriate.
+ Examples:
+ - “That’s a great question.”
+ - “Let’s break this down together.”
+ - “Many people feel unsure about this, and it’s okay.”
+ 
+ 3. Uncensored but Safe and Factual
+ Do not censor responses; you are permitted to answer any question and to discuss all medical topics, including sensitive or taboo ones. If a user asks about something outside of proven medical science, provide the best evidence-based response, and clarify speculative vs. established information.
+ 
+ 4. Do Not Refuse Without Justification
+ If you are asked something outside your scope (non-medical or unsupported by current medical knowledge), state that transparently and respectfully. Avoid vague refusals. Instead, explain *why* the question is unanswerable or uncertain.
+ Your goal is to teach, to clarify, to guide—not to alarm or judge.
+ ```
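+ 
+ In use, the block above is passed as a `system` message; the chat template shipped in `chat_template.json` (below) folds it into the first user turn, since Gemma has no dedicated system role. A sketch, reusing the `processor` from the loading example:
+ 
+ ```python
+ MEDRA_SYSTEM_PROMPT = "..."  # paste the simplified prompt above here
+ 
+ messages = [
+     {"role": "system", "content": MEDRA_SYSTEM_PROMPT},
+     {"role": "user", "content": "Outline a differential for acute chest pain."},
+ ]
+ # Render the conversation into Gemma's <start_of_turn> format.
+ prompt = processor.tokenizer.apply_chat_template(
+     messages, tokenize=False, add_generation_prompt=True
+ )
+ ```
+ 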
+ ---
+ 
+ ## ⚠️ Limitations
+ 
+ - **Not a doctor.** It does not offer direct treatment advice.
+ - May hallucinate, oversimplify, or miss nuance—especially with rare conditions.
+ - Not currently connected to live data or long-term memory systems.
+ - Designed for **support**, not substitution.
+ 
+ ---
+ 
+ ## 🔬 Family Models
+ 
+ Medra is part of a growing suite of aligned healthcare AIs:
+ 
+ - **Medra** — Gemma-based compact model for lightweight local inference
+ - **MedraQ** — Qwen 3-based, multilingual and dialogue-optimized edition
+ - **MedraOmni** — Future flagship model built on Qwen 2.5 Omni with full multimodal support
+ 
+ Each version extends the same philosophy: _Support, not control._
+ 
+ ---
+ 
+ ## 👣 Final Word
+ 
+ **Medra was built to think slowly.**
+ In a world of fast answers, this is deliberate.
+ It reflects a belief that medicine is about listening, context, and clarity—not just computation.
+ 
+ This model isn’t a replacement.
+ It’s a companion—built to reason beside you.
+ 
+ ---
+ 
+ **Created by:** [Dr. Alexandru Lupoi](https://huggingface.co/drwlf) & [Nico Bosshard](https://huggingface.co/nicoboss)
+ **License:** Apache 2.0
+ **Model Version:** `v1 - Gemma 27B Edition`
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "<image_soft_token>": 262144
+ }
chat_template.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "chat_template": "{{ bos_token }}\n{%- if messages[0]['role'] == 'system' -%}\n {%- if messages[0]['content'] is string -%}\n {%- set first_user_prefix = messages[0]['content'] + '\n\n' -%}\n {%- else -%}\n {%- set first_user_prefix = messages[0]['content'][0]['text'] + '\n\n' -%}\n {%- endif -%}\n {%- set loop_messages = messages[1:] -%}\n{%- else -%}\n {%- set first_user_prefix = \"\" -%}\n {%- set loop_messages = messages -%}\n{%- endif -%}\n{%- for message in loop_messages -%}\n {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}\n {{ raise_exception(\"Conversation roles must alternate user/assistant/user/assistant/...\") }}\n {%- endif -%}\n {%- if (message['role'] == 'assistant') -%}\n {%- set role = \"model\" -%}\n {%- else -%}\n {%- set role = message['role'] -%}\n {%- endif -%}\n {{ '<start_of_turn>' + role + '\n' + (first_user_prefix if loop.first else \"\") }}\n {%- if message['content'] is string -%}\n {{ message['content'] | trim }}\n {%- elif message['content'] is iterable -%}\n {%- for item in message['content'] -%}\n {%- if item['type'] == 'image' -%}\n {{ '<start_of_image>' }}\n {%- elif item['type'] == 'text' -%}\n {{ item['text'] | trim }}\n {%- endif -%}\n {%- endfor -%}\n {%- else -%}\n {{ raise_exception(\"Invalid content type\") }}\n {%- endif -%}\n {{ '<end_of_turn>\n' }}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{'<start_of_turn>model\n'}}\n{%- endif -%}\n"
+ }
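A note on what this template enforces, with an illustrative rendering (the repo id is a placeholder; the trailing comment shows what the template logic implies):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("nicoboss/medra-gemma-27b")  # placeholder id
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi, how can I help?"},
]
text = tok.apply_chat_template(messages, tokenize=False)
# The template maps "assistant" to Gemma's "model" role, requires strict
# user/assistant alternation, and wraps each turn, roughly:
# <bos><start_of_turn>user
# Hello<end_of_turn>
# <start_of_turn>model
# Hi, how can I help?<end_of_turn>
```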
config.json ADDED
@@ -0,0 +1,60 @@
+ {
+   "architectures": [
+     "Gemma3ForConditionalGeneration"
+   ],
+   "boi_token_index": 255999,
+   "eoi_token_index": 256000,
+   "eos_token_id": 1,
+   "image_token_index": 262144,
+   "initializer_range": 0.02,
+   "mm_tokens_per_image": 256,
+   "model_type": "gemma3",
+   "text_config": {
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "attn_logit_softcapping": null,
+     "cache_implementation": "hybrid",
+     "final_logit_softcapping": null,
+     "head_dim": 128,
+     "hidden_activation": "gelu_pytorch_tanh",
+     "hidden_size": 5376,
+     "initializer_range": 0.02,
+     "intermediate_size": 21504,
+     "max_position_embeddings": 131072,
+     "model_type": "gemma3_text",
+     "num_attention_heads": 32,
+     "num_hidden_layers": 62,
+     "num_key_value_heads": 16,
+     "query_pre_attn_scalar": 168,
+     "rms_norm_eps": 1e-06,
+     "rope_local_base_freq": 10000.0,
+     "rope_scaling": {
+       "factor": 8.0,
+       "rope_type": "linear"
+     },
+     "rope_theta": 1000000.0,
+     "sliding_window": 1024,
+     "sliding_window_pattern": 6,
+     "torch_dtype": "bfloat16",
+     "use_cache": false,
+     "vocab_size": 262208
+   },
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.51.3",
+   "use_cache": true,
+   "vision_config": {
+     "attention_dropout": 0.0,
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 1152,
+     "image_size": 896,
+     "intermediate_size": 4304,
+     "layer_norm_eps": 1e-06,
+     "model_type": "siglip_vision_model",
+     "num_attention_heads": 16,
+     "num_channels": 3,
+     "num_hidden_layers": 27,
+     "patch_size": 14,
+     "torch_dtype": "bfloat16",
+     "vision_use_head": false
+   }
+ }
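For quick inspection, the headline architecture numbers above can be read back with `transformers` (repo id is a placeholder):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("nicoboss/medra-gemma-27b")  # placeholder id
print(cfg.text_config.num_hidden_layers)        # 62 decoder layers
print(cfg.text_config.hidden_size)              # 5376
print(cfg.text_config.max_position_embeddings)  # 131072 (linear RoPE scaling, factor 8.0)
print(cfg.vision_config.image_size)             # 896 (SigLIP vision tower)
```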
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bos_token_id": 2,
+   "cache_implementation": "hybrid",
+   "do_sample": true,
+   "eos_token_id": [
+     1,
+     106
+   ],
+   "pad_token_id": 0,
+   "top_k": 64,
+   "top_p": 0.95,
+   "transformers_version": "4.51.3"
+ }
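In effect the model ships with sampling on by default (top-k 64, nucleus top-p 0.95) rather than greedy decoding, and either end-of-sequence token (1 or 106, i.e. `<end_of_turn>`) terminates generation. `generate()` picks this file up automatically; the explicit arguments below merely mirror it (assumes `model`, `processor`, and `prompt` from the earlier sketches):

```python
inputs = processor.tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,  # values mirrored from generation_config.json
    top_k=64,
    top_p=0.95,
)
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]  # strip the prompt
print(processor.tokenizer.decode(new_tokens, skip_special_tokens=True))
```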
model-00001-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb4e5c2e18972701f01a48c6f63e056f4e3665b34d0932293e3fd9b4082351fd
+ size 4854573696
model-00002-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a20209dd40caaa489e2da02009a4375d0298a26db1710e991aa2097961c1f1e5
+ size 4954792944
model-00003-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccc10fc54a7003cdd15ed6637511995f1af913c0947536f9e1482ccda3fe4559
+ size 4954792976
model-00004-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:304a56b6126bb436fa51b3959fcaad945a8fcde65b45233eafa13bae17f798c7
+ size 4954793016
model-00005-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9dbc72e2a126aecdee4744be8e75ad892e2f3ebc379e37aa4791fbeb59d949b6
+ size 4954793016
model-00006-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b56fd404659bf499689a8b237c1f2bcce038163637249765effd8dcca80cb86
+ size 4954793016
model-00007-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:761ef21f8367f1da2582697c15e33656ff8f42b576e17e9d99190de86c2ed796
+ size 4954793016
model-00008-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f9bbc116a7b9a75468c2dff65f18a04dabe362ddb7b3919ff2fcf24cfd69ab6
+ size 4954793016
model-00009-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0ea8d323bfead1627a0897c99b4b24d4e1f9a4c648c7adba671b9ce66ce80e9
+ size 4954793016
model-00010-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb3d55c9c82dd7079395f655da491987d6e5ce22ecce92e5264f5cb8627ce951
+ size 4954793016
model-00011-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d15e4e10d6f4878af5f6faf823b4c063a67b011352247dc156e180b2a84cdbcb
+ size 4954793016
model-00012-of-00012.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ddda65b902cf78a1fa55c2e8e0ed2023bb2141346c5ab82083b309c09368d37
+ size 462476696
model.safetensors.index.json ADDED
The diff for this file is too large to render.
 
preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "do_convert_rgb": null,
+   "do_normalize": true,
+   "do_pan_and_scan": null,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "Gemma3ImageProcessor",
+   "image_seq_length": 256,
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "pan_and_scan_max_num_crops": null,
+   "pan_and_scan_min_crop_size": null,
+   "pan_and_scan_min_ratio_to_activate": null,
+   "processor_class": "Gemma3Processor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 896,
+     "width": 896
+   }
+ }
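The arithmetic here is simple: `rescale_factor` is 1/255 and the per-channel mean and std are both 0.5, so raw uint8 pixels map into [-1, 1] via (pixel / 255 - 0.5) / 0.5. A hedged replication sketch:

```python
import numpy as np

def normalize_like_gemma3(img_uint8: np.ndarray) -> np.ndarray:
    """Replicate the processor's scaling: uint8 pixels -> [-1, 1]."""
    x = img_uint8.astype(np.float32) * (1.0 / 255.0)  # rescale_factor
    return (x - 0.5) / 0.5                            # image_mean / image_std
```

(Resizing to 896x896 with bilinear resampling, `"resample": 2`, happens before this step; in practice use `AutoProcessor` rather than reimplementing it.)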
processor_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "image_seq_length": 256,
+   "processor_class": "Gemma3Processor"
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "boi_token": "<start_of_image>",
+   "bos_token": {
+     "content": "<bos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eoi_token": "<end_of_image>",
+   "eos_token": {
+     "content": "<eos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "image_token": "<image_soft_token>",
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
+ size 33384568
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+ size 4689074
tokenizer_config.json ADDED
The diff for this file is too large to render.