alphaaico committed · Commit 62d6730 · verified · 1 Parent(s): 03cdfd2

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -27,7 +27,7 @@ datasets:
      style="width: 500px; height: auto; object-position: center top;">
 </div>
 
-# Medial-Diagnosis-COT-Gemma3-270M
+# Medical-Diagnosis-COT-Gemma3-270M
 
 **Alpha AI (www.alphaai.biz)** fine-tuned Gemma-3 270M for **medical question answering with explicit chain-of-thought (CoT)**. The model emits reasoning inside `<think> ... </think>` followed by a final answer, making it well-suited for research on verifiable medical reasoning and for internal tooling where transparent intermediate steps are desired.
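Note: besides fixing the heading typo, the paragraph above states the output contract: reasoning inside `<think> ... </think>`, then a final answer. The README defines a `strip_think` helper (its signature is visible in the next hunk's header); a minimal sketch of what such a helper plausibly looks like, with an invented sample string:

```python
import re

def strip_think(text: str) -> str:
    # Remove the <think>...</think> block so only the final answer remains.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Invented sample output, for illustration only.
raw = "<think>Productive cough, fever, focal crackles: likely bacterial.</think> Community-acquired pneumonia; confirm with a chest X-ray."
print(strip_think(raw))  # Community-acquired pneumonia; confirm with a chest X-ray.
```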
 
@@ -141,7 +141,7 @@ def strip_think(text: str) -> str:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-repo = "alphaaico/Medial-Diagnosis-COT-Gemma3-270M"
+repo = "alphaaico/Medical-Diagnosis-COT-Gemma3-270M"
 tok = AutoTokenizer.from_pretrained(repo)
 mdl = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
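The hunk cuts off after model loading. Assuming the README continues as is usual for an instruction-tuned checkpoint, a hedged sketch of running the loaded `tok`/`mdl` pair on an invented prompt:

```python
# Hypothetical continuation of the snippet above (not part of the diff).
messages = [{"role": "user", "content": "A 45-year-old has crushing chest pain radiating to the left arm. Differential?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(mdl.device)
out = mdl.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```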
 
@@ -172,7 +172,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 from peft import PeftModel
 
 base = "google/gemma-3-270m-it"  # requires accepting Gemma license on Hugging Face
-repo = "alphaaico/Medial-Diagnosis-COT-Gemma3-270M"
+repo = "alphaaico/Medical-Diagnosis-COT-Gemma3-270M"
 
 tok = AutoTokenizer.from_pretrained(base)
 base_mdl = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
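The `peft` import suggests this hunk is the adapter-loading path. Assuming the repo hosts a LoRA/PEFT adapter, the step that presumably follows the truncated hunk is attaching it to the base model; a sketch:

```python
# Hypothetical next steps (not shown in the diff): attach the adapter from
# the fine-tuned repo, then optionally fold it into the base weights.
mdl = PeftModel.from_pretrained(base_mdl, repo)
mdl = mdl.merge_and_unload()  # plain transformers model, no PEFT wrapper at inference
```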
@@ -262,7 +262,7 @@ For enterprise collaboration with **Alpha AI**, reach out via the organization p
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import re
 
-repo = "alphaaico/Medial-Diagnosis-COT-Gemma3-270M"
+repo = "alphaaico/Medical-Diagnosis-COT-Gemma3-270M"
 tok = AutoTokenizer.from_pretrained(repo)
 mdl = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
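This final hunk imports `re` before loading the model, which suggests the surrounding README section post-processes generations with a regex. A sketch of one plausible split; the helper name and sample text are invented:

```python
import re

def split_cot(text: str):
    # Separate the <think> reasoning from the final answer.
    m = re.search(r"<think>(.*?)</think>\s*(.*)", text, flags=re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    return None, text.strip()  # model emitted no think block

reasoning, answer = split_cot("<think>Rule out ACS first.</think> Order troponin and a 12-lead ECG.")
print(answer)  # Order troponin and a 12-lead ECG.
```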
 
 