Model Card for Jordan Belfort Q&A Model

This model is a supervised fine-tune (SFT) of a transformer-based language model, trained on a custom Q&A dataset derived from Jordan Belfort's book. It is optimized to answer questions about the book's content, including sales, persuasion, mindset, and personal development strategies.


Model Details

  • Developed by: Jobix.ai
  • Finetuned from model: `openchat_3.5`
  • Language(s): English
  • Model type: Q&A / Instruction-following
  • License: apache-2.0

Model Sources

  • Training Data: Custom Q&A dataset built from the full content of Jordan Belfort’s book.
  • Method: Supervised fine-tuning with TRL's `SFTTrainer`

Uses

Direct Use

  • Ask specific questions about concepts, strategies, and advice in Jordan Belfort's book.
  • Get summaries of chapters, sales techniques, or mindset frameworks presented in the book.
  • Useful for salespeople, coaches, or individuals studying persuasion and personal development.

Out-of-Scope Use

  • Not trained for general-purpose Q&A outside the context of the book.
  • Not suitable for legal, financial, or medical advice.

Training Details

Training Procedure

  • Trainer: trl.SFTTrainer
  • Precision: bfloat16
  • Epochs: 7
  • Optimizer: AdamW
  • LR Scheduler: Cosine with warmup
  • Loss: CrossEntropyLoss on prompt-response pairs
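The hyperparameters above can be sketched as a TRL training setup. This is a minimal, hypothetical reconstruction, not the actual training script (which is not published): the dataset variable, warmup ratio, output directory, and base-model repo id are all illustrative assumptions.

```python
# Hypothetical sketch of the training setup described above, assuming a
# recent trl/transformers stack. Not the actual script used by Jobix.ai.
from trl import SFTConfig, SFTTrainer

config = SFTConfig(
    output_dir="jordan-belfort-qa",  # illustrative path
    num_train_epochs=7,              # Epochs: 7
    bf16=True,                       # Precision: bfloat16
    optim="adamw_torch",             # Optimizer: AdamW
    lr_scheduler_type="cosine",      # LR Scheduler: cosine ...
    warmup_ratio=0.03,               # ... with warmup (ratio assumed)
)

trainer = SFTTrainer(
    model="openchat/openchat_3.5",   # base model repo id (assumed)
    train_dataset=qa_dataset,        # placeholder: ~2,000 Q&A pairs
    args=config,
)
trainer.train()
```

With `SFTTrainer`, the standard causal-LM cross-entropy loss over the prompt-response text is applied by default, which matches the loss listed above.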

Dataset

  • Approximately 2,000 curated Q&A pairs covering all chapters and sections of the book.
  • Balanced across concepts like tonality, straight-line persuasion, mindset, sales process, and personal stories.
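A dataset record might look like the following. The field names and prompt template here are illustrative assumptions; the actual schema of the custom dataset is not published.

```python
# Hypothetical Q&A record and a helper that renders it into a single
# prompt-response training string for SFT. Field names ("question",
# "answer") and the "### ..." template are assumptions for illustration.

def format_example(record: dict) -> str:
    """Render one Q&A pair as a single training string."""
    return (
        f"### Question:\n{record['question']}\n\n"
        f"### Answer:\n{record['answer']}"
    )

sample = {
    "question": "What is the straight-line sales method?",
    "answer": "A framework for keeping a sales conversation on a "
              "direct path from the open to the close.",
}

print(format_example(sample))
```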

Evaluation

  • Manual evaluation of question coverage and answer accuracy.
  • The model shows strong performance in recalling specific ideas and quoting relevant sections.

Example Usage

```python
from transformers import pipeline

qa = pipeline("text-generation", model="AiJoker/openchat_3.5-slp-jordan-belfort")

prompt = "What is the straight-line sales method according to Jordan Belfort?"
response = qa(prompt, max_new_tokens=200, do_sample=False)
print(response[0]["generated_text"])
```
Model Specifications

  • Repository: AiJoker/openchat_3.5-slp-jordan-belfort
  • Model size: 7.24B params
  • Tensor type: BF16
  • Format: Safetensors