ShivomH committed · verified · Commit 6d3f5fd · Parent(s): 8dfe748

Update README.md

Files changed (1): README.md (+136 −7)

README.md (after this commit):

tags:
  - emotionalsupport
  - mentalsupport
  - advisor
  - medical
  - not-for-all-audiences
pipeline_tag: text-generation
---

# Model Card for Falcon-1B-Mental-Health-Advisor
 
Falcon-1B-Mental-Health-Advisor is a fine-tuned version of the tiiuae/falcon-rw-1b model, adapted to provide empathetic and contextually relevant responses to mental-health-related queries. It was trained on a curated dataset to assist in mental health conversations, offering advice, guidance, and support to individuals dealing with issues such as stress, anxiety, and depression, with a focus on promoting emotional well-being and mental health awareness.

# Important Note
Mental health is a sensitive topic. For optimal results, please use the code snippet provided below.

# Falcon-1B Fine-Tuned for Mental Health (LoRA)

This is a LoRA adapter for the Falcon-RW-1B model. It was fine-tuned on the 'mar…' dataset.

Since this model is an adapter, it **must** be loaded with the original Falcon-RW-1B model using PEFT:
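
For reference, a minimal sketch that only loads the adapter (based on the snippet from the previous revision of this card; the tokenizer is taken from the base model, matching the full script below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "tiiuae/falcon-rw-1b"

# Load the base model, then apply the LoRA adapter weights on top of it
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, "ShivomH/Falcon-1B-Finetuned-Mental-Health")

tokenizer = AutoTokenizer.from_pretrained(base_model)
```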

### Dependencies
```bash
pip install transformers accelerate torch peft bitsandbytes language_tool_python
```

### Basic Usage
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import re
import language_tool_python

base_model = "tiiuae/falcon-rw-1b"
peft_model = "ShivomH/Falcon-1B-Finetuned-Mental-Health"

# Load the base model (without LoRA weights initially)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load the LoRA weights into the base model
model = PeftModel.from_pretrained(model, peft_model)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# device_map="auto" already placed the model; keep a device handle for the inputs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the grammar correction tool
tool = language_tool_python.LanguageTool("en-US")

def correct_grammar(text):
    return tool.correct(text)

# --- Safety Filters ---
CRISIS_KEYWORDS = [
    "suicide", "self-harm", "overdose", "addict", "abuse", "rape",
    "assault", "emergency", "suicidal"
]
CRISIS_RESPONSE = (
    "\n\nIf you're in crisis, please contact a professional immediately. "
    "You can reach the National Suicide Prevention Lifeline at 988 or 112. "
    "Please reach out to a trusted friend, family member, or mental health professional. "
    "If you're in immediate danger, consider calling a crisis helpline. "
    "Your life matters, and support is available. 🙏"
)

def filter_response(response: str, user_input: str) -> str:
    # Remove URLs, markdown artifacts, and unwanted text
    response = re.sub(r'http\S+', '', response)
    response = re.sub(r'\[\w+\]|\(\w+\)|\*|\#', '', response)
    response = response.split("http")[0].split("©")[0]

    # Enforce brevity: keep only the first two sentences
    sentences = re.split(r'(?<=[.!?])\s+', response)
    response = " ".join(sentences[:2])

    # Append the crisis response if crisis keywords appear in the user input
    if any(keyword in user_input.lower() for keyword in CRISIS_KEYWORDS):
        response += CRISIS_RESPONSE

    # Correct grammar before returning
    return correct_grammar(response)

def chat():
    print("Chat with your fine-tuned Falcon model (type 'exit' to quit):")

    system_instruction = (
        "You are an empathetic AI specialized in mental health support. "
        "Provide short, supportive, and comforting responses. "
        "Validate the user's emotions and offer non-judgmental support. "
        "If a crisis situation is detected, suggest reaching out to a mental health professional immediately. "
        "Your responses should be clear, concise, and free from speculation. "
        # Optional few-shot examples that can be appended to the instruction:
        # "Examples:\n"
        # "User: I feel really anxious lately.\n"
        # "AI: I'm sorry you're feeling this way. Anxiety can be overwhelming, but you're not alone. Would you like to try some grounding techniques together?\n\n"
        # "User: I haven't been able to sleep well.\n"
        # "AI: That sounds frustrating. Sleep troubles can be tough. Have you noticed anything that helps, like adjusting your bedtime routine?\n"
    )

    # Store a short chat history for context
    chat_history = []

    while True:
        user_input = input("\nYou: ")
        if user_input.lower() == "exit":
            break

        # Maintain a short chat history (last 2 exchanges)
        chat_history.append(f"User: {user_input}")
        chat_history = chat_history[-2:]

        # Structure the prompt: system instruction, recent history, then the AI turn
        prompt = f"{system_instruction}\n" + "\n".join(chat_history) + "\nAI:"
        inputs = tokenizer(prompt, return_tensors="pt").to(device)

        with torch.no_grad():
            output = model.generate(
                **inputs,
                max_new_tokens=75,
                pad_token_id=tokenizer.eos_token_id,
                temperature=0.3,
                top_p=0.9,
                repetition_penalty=1.4,
                do_sample=True,
                no_repeat_ngram_size=2
            )

        # Keep only the text generated after the final "AI:" marker
        response = tokenizer.decode(output[0], skip_special_tokens=True).split("AI:")[-1].strip()
        response = filter_response(response, user_input)
        print(f"AI: {response}")

        # Store the AI response in the history
        chat_history.append(f"AI: {response}")

chat()
```
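
If you want a single standalone checkpoint for deployment instead of loading the base model and adapter separately, PEFT can merge the LoRA weights into the base weights. A minimal sketch, not part of this card's original instructions; the output directory name is hypothetical:

```python
# Merge the LoRA weights into the base model and save a standalone copy.
# "falcon-1b-mental-health-merged" is a hypothetical local path.
merged = model.merge_and_unload()
merged.save_pretrained("falcon-1b-mental-health-merged")
tokenizer.save_pretrained("falcon-1b-mental-health-merged")
```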

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Shivom Hatalkar
- **Model type:** Text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model [optional]:** falcon-rw-1b

## Bias, Risks, and Limitations

- **Not a substitute for professional care:** This model is not a licensed mental health professional. Its responses may be incomplete, inaccurate, or unsuitable for serious conditions.
- **Inherent biases:** The model may reflect biases in its training data (e.g., cultural assumptions, stigmatizing language).
- **Crisis limitations:** It is not designed for crisis intervention (e.g., suicidal ideation, self-harm). Always direct users to human professionals or emergency services.
- **Over-reliance risk:** Outputs could inadvertently worsen symptoms if users interpret them as definitive advice.
- **Intended use:** Assisting with general emotional support, not diagnosis or treatment.

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

### Framework versions

- PEFT 0.14.0
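
To confirm your environment matches the version above, a quick check (sketch):

```python
import peft
print(peft.__version__)  # this card was exported with PEFT 0.14.0
```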