πŸ”§ Mistral Fine-Tuned Chatbot for AI Tool Queries

This model is a fine-tuned version of TheBloke/OpenHermes-2.5-Mistral-7B-GGUF, trained on a custom dataset of AI-tool instructions. It is designed to act as a conversational assistant that answers technical queries about popular AI tools.

🧠 Model Details

  • Base model: OpenHermes-2.5-Mistral-7B-GGUF
  • Fine-tuned on: Custom dataset of structured JSONL instructions
  • Training platform: Google Colab Pro (A100 GPU)
  • Fine-tuning method: Supervised fine-tuning using πŸ€— Transformers + Datasets
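The training data is described only as "structured JSONL instructions". As a rough illustration of what one such record might look like, here is a minimal sketch; the actual field names in the dataset are not published, so `instruction`/`response` below are assumptions, not the dataset's real schema.

```python
import json

# Hypothetical training record -- field names are illustrative assumptions,
# since the dataset's real schema is not published.
record = {
    "instruction": "Suggest an AI tool for transcribing meeting audio.",
    "response": "OpenAI Whisper is a popular open-source option for speech-to-text.",
}

# JSONL stores exactly one JSON object per line of the file
line = json.dumps(record)
parsed = json.loads(line)
```

Each line of the file round-trips through `json.dumps`/`json.loads` independently, which is what makes JSONL convenient for streaming with πŸ€— Datasets.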

πŸ“‚ Example Use Cases

  • πŸ› οΈ Recommend and explain AI tools for different tasks
  • πŸ’¬ Simulate chatbot responses about ML libraries, APIs, and platforms
  • πŸ§ͺ Useful for education, technical support, and integration with AI assistants

πŸ’» Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("amalsp/mistral-finetuned-chatbot")
tokenizer = AutoTokenizer.from_pretrained("amalsp/mistral-finetuned-chatbot")

prompt = "What AI tool can I use for image generation?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 150 new tokens for the reply
outputs = model.generate(**inputs, max_new_tokens=150)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
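The snippet above passes the question as a raw string. The OpenHermes-2.5 base model is documented to use the ChatML prompt format, so responses may improve if the prompt is wrapped accordingly; whether this fine-tune preserved that template is an assumption here, so treat the helper below as a sketch rather than the model's confirmed format.

```python
def build_chatml_prompt(
    user_message: str,
    system_message: str = "You are a helpful assistant for AI tool questions.",
) -> str:
    """Wrap a user message in ChatML, the template used by OpenHermes-2.5.

    Whether this fine-tune kept the base model's template is an assumption.
    """
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Use the formatted prompt in place of the raw string above:
prompt = build_chatml_prompt("What AI tool can I use for image generation?")
```

The trailing `<|im_start|>assistant\n` cues the model to begin its reply; generation then stops at (or can be cut at) the next `<|im_end|>` marker.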