Model Card for AI Fashion Assistant Model

Model Name: AI Fashion Assistant Model

Model Details

Model Description

A specialized language model fine-tuned to serve as a personal stylist, generating tailored fashion advice and clothing recommendations.

  • Developed by: Soorya03
  • Model type: Causal Language Model with LoRA fine-tuning
  • Language(s) (NLP): English
  • Finetuned from model: nathanReitinger/FASHION-vision (Hugging Face)

Uses

Direct Use

This model generates tailored fashion advice and outfit suggestions based on input descriptions.

Out-of-Scope Use

The model may not accurately interpret niche styles or personal preferences outside the intended domain.

Bias, Risks, and Limitations

Due to the training data and fashion references, the model might reflect certain fashion trends or biases that may not align with all users’ preferences or inclusivity standards.

Recommendations

Users should be informed of possible biases and limitations in stylistic suggestions.

How to Get Started with the Model

Use the code below to get started with the model.

Load model directly

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Soorya03/Mistral-7b-FashionAssistant")
model = AutoModelForCausalLM.from_pretrained("Soorya03/Mistral-7b-FashionAssistant")

Example Input

input_text = "Generate fashion advice for a casual dinner."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
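The card does not document the prompt template the fine-tune expects. If it follows the common Mistral instruction format, a request can be wrapped as sketched below; the [INST] wrapper here is an assumption, not something stated in this card.

```python
def build_prompt(request: str) -> str:
    # Mistral-style instruction wrapper. Whether this fine-tune expects the
    # [INST] format is an assumption -- the card does not state a template.
    return f"[INST] {request} [/INST]"

print(build_prompt("Generate fashion advice for a casual dinner."))
# → [INST] Generate fashion advice for a casual dinner. [/INST]
```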

Training Details

Training Data

The model was fine-tuned on a text-based dataset (nathanReitinger/FASHION-vision) covering diverse fashion-advice scenarios, to strengthen its domain knowledge for fashion-related responses.

Training Hyperparameters

  • epochs: 3
  • batch_size: 4
  • learning_rate: 2e-4
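LoRA fine-tuning (noted under Model Details) freezes the base weights and learns a low-rank additive update, W' = W + (alpha / r) · B A. The sketch below illustrates that update rule with toy matrices; the rank and scaling values are illustrative assumptions, since the card does not state the LoRA configuration used.

```python
# Illustrative sketch of the LoRA update rule: W' = W + (alpha / r) * (B @ A).
# The rank r and scaling alpha below are assumptions for illustration only.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A); the frozen W itself is untouched."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: a 2x2 frozen weight with a rank-1 (r = 1) update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]            # shape r x d_in
B = [[0.5], [0.5]]          # shape d_out x r
print(lora_update(W, A, B, alpha=2, r=1))
# → [[2.0, 1.0], [1.0, 2.0]]
```

Only A and B are trained, so the number of trainable parameters is r · (d_in + d_out) per adapted weight matrix, a small fraction of the full model.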

Environmental Impact

  • Hardware Type: T4 GPU
  • Cloud Provider: Google Colab

Technical Specifications

Model Architecture and Objective

Causal (decoder-only) transformer language model with LoRA adapters, trained to generate text-based fashion recommendations.

Compute Infrastructure

Hardware

T4 GPU in Google Colab

Software

Hugging Face Transformers library

Model Size

3.86B parameters (Safetensors format; tensor types F32, FP16, U8)