# 🧠 Mistral Fine-Tuned Chatbot for AI Tool Queries
This model is a fine-tuned version of TheBloke/OpenHermes-2.5-Mistral-7B-GGUF, trained on a custom dataset of AI tool instructions. It is designed to behave as a conversational assistant that answers technical queries about popular AI tools.
## 🔧 Model Details
- Base model: OpenHermes-2.5-Mistral-7B-GGUF
- Fine-tuned on: Custom dataset of structured JSONL instructions
- Training platform: Google Colab Pro (A100 GPU)
- Fine-tuning method: Supervised fine-tuning with 🤗 Transformers + Datasets
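A JSONL instruction dataset stores one JSON object per line. As a minimal sketch of what a training record might look like, assuming `instruction`/`response` field names (the actual schema of the custom dataset is not published):

```python
import json

# Hypothetical training record; the field names "instruction" and
# "response" are assumptions, not confirmed by this model card.
record = {
    "instruction": "What AI tool can I use for image generation?",
    "response": "Stable Diffusion is a popular open-source option for image generation.",
}

# JSONL = one JSON object serialized per line of the file.
line = json.dumps(record)
print(line)

# Reading back: parse each line independently.
parsed = json.loads(line)
print(parsed["instruction"])
```

Each line is parsed on its own, so large datasets can be streamed without loading the whole file into memory.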
## 🚀 Example Use Cases
- 🛠️ Recommend and explain AI tools for different tasks
- 💬 Simulate chatbot responses about ML libraries, APIs, and platforms
- 🧪 Useful for education, technical support, and integration with AI assistants
## 💻 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("amalsp/mistral-finetuned-chatbot")
tokenizer = AutoTokenizer.from_pretrained("amalsp/mistral-finetuned-chatbot")

prompt = "What AI tool can I use for image generation?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 150 new tokens and decode the completion.
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
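OpenHermes-style models are typically trained on ChatML-formatted conversations, so wrapping the prompt in ChatML markers usually yields better responses than a raw string. The layout below is a sketch of that format; the system prompt is an assumption, and if the fine-tuned tokenizer ships a chat template, `tokenizer.apply_chat_template` produces the same structure automatically:

```python
# Sketch of the ChatML layout OpenHermes-style models typically expect.
def to_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "\n".join(parts)

prompt = to_chatml([
    # The system message content is an assumption; adapt it to your use case.
    {"role": "system", "content": "You are a helpful assistant for AI tool questions."},
    {"role": "user", "content": "What AI tool can I use for image generation?"},
])
print(prompt)
```

Pass the resulting string to the tokenizer in place of the plain prompt above.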