SalesA AI LoRA Adapter for Qwen/Qwen2.5-1.5B-Instruct

Model Summary

SalesA AI LoRA is a lightweight adapter trained with LoRA (Low-Rank Adaptation) on top of the Qwen/Qwen2.5-1.5B-Instruct base model. It is designed to improve performance on sales-related conversational tasks such as lead qualification, customer engagement, and sales automation, while keeping compute and memory requirements low.


Model Details

  • Developed by: SalesA Team
  • Model type: LoRA Adapter for Causal Language Modeling
  • Language(s): English
  • License: Apache-2.0
  • Finetuned from model: Qwen/Qwen2.5-1.5B-Instruct
  • Framework: PEFT 0.16.0, Transformers

Model Sources

  • Repository: https://huggingface.co/Qybera/SalesAv1.0.0

Uses

Direct Use

  • Sales chatbots and virtual assistants
  • Automated lead qualification
  • Customer support and engagement
  • Sales process automation

Downstream Use

  • Fine-tuning for specific sales domains (e.g., real estate, SaaS, retail)
  • Integration into CRM or sales platforms (see the merging sketch after this list)
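
For the CRM/platform integration case, it is often easier to serve a single merged checkpoint than a base model plus a separate adapter. A minimal sketch using PEFT's merge support (the output directory name is illustrative, not part of this release):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "Qybera/SalesAv1.0.0")

# Fold the LoRA weights into the base weights so the result can be
# loaded like any ordinary Transformers checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("salesa-merged")  # illustrative output path

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
tokenizer.save_pretrained("salesa-merged")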

Out-of-Scope Use

  • Any use outside the intended sales and business automation context
  • Generating harmful, biased, or misleading content

Bias, Risks, and Limitations

  • The model may reflect biases present in the training data.
  • Not suitable for critical decision-making without human oversight.
  • May generate incorrect or nonsensical responses in edge cases.

Recommendations

  • Always review outputs before acting on them in business-critical scenarios.
  • Retrain or further fine-tune with domain-specific data for best results.

How to Get Started

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_path = "Qybera/SalesAv1.0.0"

# Load the base model, then attach the LoRA adapter on top of it.
# device_map="auto" requires the accelerate package; drop it for CPU-only use.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_path)

# Qwen2.5-Instruct expects chat-formatted input, so wrap the prompt
# in a message list and apply the tokenizer's chat template.
messages = [{"role": "user", "content": "How can I help you with your sales inquiry today?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Training Details

Training Data

  • A curated set of anonymized sales conversations and customer interactions. See SalesA_SMEs_Datasets.

Training Procedure

  • Preprocessing: Standard text cleaning, tokenization, and anonymization.
  • Training regime: LoRA fine-tuning with mixed precision (fp16).
  • Epochs: 10
  • Batch size: 1
  • Learning rate: Ranged from 9e-6 to 4.9e-5 over the course of training
  • Hardware: T4 GPU
  • LoRA Rank (r): 64
  • LoRA Alpha: 128
  • LoRA Dropout: 0.1 (a matching PEFT configuration sketch follows this list)
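
For reproducibility, the hyperparameters above, together with the target modules listed under Technical Specifications, correspond to a PEFT setup along the following lines. This is a reconstruction for illustration, not the original training script:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,                 # LoRA rank
    lora_alpha=128,       # scaling; effective factor alpha / r = 2
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable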

Evaluation

Testing Data

  • A held-out portion of the SalesA_SMEs_Datasets for validation.

Metrics

  • Perplexity
  • Human evaluation for sales relevance

Results

  • Best validation loss: 1.01 (at epoch 5.75, step 40; see the perplexity conversion below)
  • Final training loss: 0.283 (at epoch 10, step 70)
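
Perplexity is the exponential of the cross-entropy loss, so the best validation loss above corresponds to a validation perplexity of roughly 2.75:

import math

best_val_loss = 1.01
print(math.exp(best_val_loss))  # ~2.75, the validation perplexity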

Environmental Impact

  • Training was performed on a single T4 GPU (see Training Procedure); total compute time and carbon emissions were not reported.

Technical Specifications

  • Model Architecture: LoRA adapter on Qwen2.5-1.5B-Instruct (transformer-based)
  • Frameworks: PEFT 0.16.0, Transformers
  • Adapter Rank (r): 64
  • LoRA Alpha: 128
  • LoRA Dropout: 0.1
  • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj (these can be verified from the adapter config, as shown below)
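
These values can be checked directly against the published adapter configuration:

from peft import PeftConfig

config = PeftConfig.from_pretrained("Qybera/SalesAv1.0.0")
print(config.r, config.lora_alpha, config.lora_dropout)  # expected: 64 128 0.1
print(config.target_modules)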

Citation

If you use this model, please cite:

@misc{salesa-lora,
  title={SalesA AI LoRA Adapter for Qwen/Qwen2.5-1.5B-Instruct},
  author={SalesA Team},
  year={2024},
  howpublished={\url{https://huggingface.co/Qybera/SalesAv1.0.0}},
}

Model Card Authors

  • SalesA Team

More Information

For questions, issues, or contributions, please open an issue on the model repository or contact the SalesA Team.

Glossary

  • LoRA: Low-Rank Adaptation, a parameter-efficient fine-tuning method (see the update equation below).
  • PEFT: Parameter-Efficient Fine-Tuning.
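
For reference, LoRA freezes a pretrained weight matrix W and learns a low-rank update, so the adapted weight is

$$W' = W + \frac{\alpha}{r} B A, \qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k}$$

With r = 64 and alpha = 128 as used here, the update is scaled by alpha / r = 2.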

Framework versions

  • PEFT 0.16.0
  • Transformers