# CareMinds-AI: Medical AI Assistant
CareMinds-AI is a lightweight, offline-capable medical AI system fine-tuned on domain-specific healthcare data. It combines:
- LLM (TinyLLaMA + LoRA fine-tuning)
- Structured data analytics (CSV + Pandas)
- RAG-style context retrieval (lightweight)
- Patient record lookup system
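The structured-analytics layer can be as simple as Pandas aggregations over a patient CSV. A minimal sketch of the idea (the column names and values below are hypothetical, not the project's actual dataset):

```python
import pandas as pd
from io import StringIO

# Hypothetical patient dataset; the real system loads a CSV file instead.
csv_data = StringIO("""patient_id,age,condition,glucose_mg_dl
P001,54,diabetes,182
P002,37,hypertension,95
P003,61,diabetes,210
""")

df = pd.read_csv(csv_data)

# Example analytic: average glucose level per condition.
avg_glucose = df.groupby("condition")["glucose_mg_dl"].mean()
print(avg_glucose)
```

A natural-language query like "average glucose for diabetic patients" would be mapped to an aggregation of this kind.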
## Model Details

### Model Description
CareMinds-AI is a hybrid AI system designed to:
- Answer general medical questions
- Analyze structured healthcare datasets
- Retrieve patient-specific records
- Perform real-time analytics using natural language
It operates fully offline, without requiring external APIs.
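Patient-specific retrieval can be a straightforward indexed lookup. A minimal sketch (the record schema and IDs here are illustrative, not the project's actual format, which reads from CSV):

```python
# Illustrative in-memory patient index keyed by patient ID.
patients = {
    "P001": {"name": "Jane Doe", "age": 54, "condition": "diabetes"},
    "P002": {"name": "John Roe", "age": 37, "condition": "hypertension"},
}

def lookup_patient(patient_id: str):
    """Return the record for patient_id, or None if unknown."""
    return patients.get(patient_id)

record = lookup_patient("P001")
print(record)
```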
### Developed by
Vedhamani Prabakar A

### Model Type
- Base Model: TinyLLaMA (1.1B parameters)
- Fine-tuning: LoRA (PEFT)
- Architecture: Transformer-based causal language model

### Language(s)
- English

### Model Sources
- GitHub Repository: https://github.com/VedhamaniprabakarA/CareMinds-AI.git
## Uses

### Direct Use
CareMinds-AI can be used as:
- Medical chatbot (offline)
- Healthcare data analyzer
- Patient record retrieval system
- Natural language query engine
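The lightweight RAG-style retrieval mentioned above can be approximated without a vector database: score candidate passages by keyword overlap with the query and prepend the best match to the prompt. A minimal sketch (the passages are placeholders, and simple bag-of-words scoring stands in for the project's actual retriever):

```python
def retrieve(query: str, passages: list) -> str:
    """Return the passage sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

passages = [
    "Diabetes is a chronic condition affecting blood glucose regulation.",
    "Hypertension means persistently elevated blood pressure.",
]

context = retrieve("explain diabetes and glucose", passages)

# Prepend the retrieved context to the instruction-format prompt.
prompt = f"### Context:\n{context}\n\n### Instruction:\nExplain about diabetes\n\n### Response:\n"
print(prompt)
```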
### Downstream Use
- Hospital management systems
- Healthcare dashboards
- Clinical data assistants
- AI-powered analytics tools
### Recommendations
- Use alongside verified medical systems
- Add RAG with trusted medical sources for production
- Validate outputs before real-world use
## How to Use the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the fine-tuned model and tokenizer from a local directory
model_path = "./CareMinds-AI"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Use the GPU when available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Prompt in the instruction format used during fine-tuning
prompt = """### Instruction:
Explain about diabetes

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Test the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer directly from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("vedhamani/CareMinds-AI")
tokenizer = AutoTokenizer.from_pretrained("vedhamani/CareMinds-AI")

prompt = "### Instruction:\nExplain about diabetes\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```