
🧠 CareMinds-AI — Medical AI Assistant

CareMinds-AI is a lightweight, offline-capable medical AI system fine-tuned on domain-specific healthcare data. It combines:

  • βœ… LLM (TinyLLaMA + LoRA fine-tuning)
  • βœ… Structured Data Analytics (CSV + Pandas)
  • βœ… RAG-style context retrieval (lightweight)
  • βœ… Patient record lookup system

📌 Model Details

📖 Model Description

CareMinds-AI is a hybrid AI system designed to:

  • Answer general medical questions
  • Analyze structured healthcare datasets
  • Retrieve patient-specific records
  • Perform real-time analytics using natural language

It operates fully offline, without requiring external APIs.


πŸ‘¨β€πŸ’» Developed by

Vedhamani Prabakar A

🧠 Model Type

  • Base Model: CareMinds-AI (1.1B parameters)
  • Fine-tuning: LoRA (PEFT)
  • Architecture: Transformer-based causal language model

🌐 Language(s)

  • English

🔗 Model Sources


🚀 Uses

✅ Direct Use

CareMinds-AI can be used as:

  • 🧠 Medical chatbot (offline)
  • πŸ“Š Healthcare data analyzer
  • πŸ₯ Patient record retrieval system
  • πŸ” Natural language query engine

🔄 Downstream Use

  • Hospital management systems
  • Healthcare dashboards
  • Clinical data assistants
  • AI-powered analytics tools

🧠 Recommendations

  • Use alongside verified medical systems
  • Add RAG with trusted medical sources for production
  • Validate outputs before real-world use

βš™οΈ How to Use the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "./CareMinds-AI"

model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

prompt = """### Instruction:
Explain about diabetes

### Response:
"""
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(**inputs, max_new_tokens=150)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```


🧪 Test the Model

Loading directly from the Hugging Face Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("vedhamani/CareMinds-AI")
tokenizer = AutoTokenizer.from_pretrained("vedhamani/CareMinds-AI")

prompt = "### Instruction:\nExplain about diabetes\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```