Short summary: A GPT-2–style causal LM instruction-tuned on a mixture of public datasets. Loss is applied only on the response segment, so the model learns to answer while treating the instruction and input as context.

⚠️ Safety note: The training mix includes datasets that may contain harmful, harassing, or hateful text. This model is released for research and evaluation only.


Performance and Evaluation

Evaluation was done with EleutherAI's lm-evaluation-harness. All runs used seed=7777 and batch_size=2, with 0 few-shot examples for every benchmark below (a reproduction sketch follows the table).

| Dataset          | Metric | thecr7guy/gpt2-pretrain | GPT-2 (baseline) | thecr7guy/gpt2-insFT |
|------------------|--------|-------------------------|------------------|----------------------|
| HellaSwag        | acc    | 0.291                   | 0.289            | 0.2829               |
| SciQ             | acc    | 0.754                   | 0.752            | 0.726                |
| Winogrande       | acc    | 0.491                   | 0.516            | 0.4909               |
| TruthfulQA MC1   | acc    | 0.236                   | 0.228            | 0.2619               |
| MMLU (overall)   | acc    | 0.230                   | 0.229            | 0.2310               |
| ├─ Humanities    | acc    | 0.242                   | 0.242            | 0.2387               |
| ├─ Social Sci.   | acc    | 0.217                   | 0.217            | 0.2246               |
| ├─ STEM          | acc    | 0.213                   | 0.213            | 0.2226               |
| └─ Other         | acc    | 0.239                   | 0.238            | 0.2343               |
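
The numbers above should be reproducible with the harness's Python API. Below is a minimal sketch, assuming a recent lm-evaluation-harness release (roughly v0.4+) where simple_evaluate and these task names exist; it is not the exact script used for the table, and the seed arguments are omitted because their names vary by release.

import lm_eval

# Zero-shot evaluation with batch_size=2 as listed above. The card's seed
# (7777) would be passed via the harness's seed options / --seed flag.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=thecr7guy/gpt2-insFT",
    tasks=["hellaswag", "sciq", "winogrande", "truthfulqa_mc1", "mmlu"],
    num_fewshot=0,
    batch_size=2,
)
print(results["results"])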

Model details

  • Base: thecr7guy/gpt2-pretrain

  • Prompt format:

      Below is an instruction that describes a task. Write a response that appropriately completes the request.
      
      ### Instruction:
      {instruction}
      
      ### Input:
      {input}
      
      ### Response:
      
  • Tokenization: for the base model, the EOS token doubles as the padding token. For the IFT model, a custom pad token is set to <|extra_7|>.

  • Supervision signal: loss is masked up to the first token of the ### Response: span; only answer tokens (plus EOS) contribute to the loss (see the masking sketch below).
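
A minimal sketch of the tokenization and response-only masking described in the two bullets above. The helper and variable names are illustrative, not the actual training code:

import torch
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("thecr7guy/gpt2-pretrain")
# Base model: pad with EOS. IFT model: a dedicated pad token, <|extra_7|>.
if tok.pad_token is None:
    tok.add_special_tokens({"pad_token": "<|extra_7|>"})

def build_example(prompt: str, answer: str, max_len: int = 1024):
    # `prompt` ends with "### Response:\n"; loss is applied only to answer + EOS.
    ids = tok(prompt + answer + tok.eos_token,
              truncation=True, max_length=max_len)["input_ids"]
    prompt_len = len(tok(prompt)["input_ids"])
    labels = [-100] * prompt_len + ids[prompt_len:]  # -100 = ignored by the loss
    return torch.tensor(ids), torch.tensor(labels)

If a new pad token is added this way, the model's embedding matrix would also need to be resized, e.g. model.resize_token_embeddings(len(tok)).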


Training data

Mixture of public corpora:

  • databricks/databricks-dolly-15k
  • yahma/alpaca-cleaned
  • allenai/prosocial-dialog (subset: response_id == 0, shuffled, 20k samples)
  • Experimental/risky sets used for research:
    • BornSaint/harmful_instructor
    • BornSaint/evil_assistant
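
The prosocial-dialog subset described above could be built roughly as follows; the split name and shuffle seed are assumptions, not the exact recipe:

from datasets import load_dataset

prosocial = load_dataset("allenai/prosocial-dialog", split="train")
# Keep only rows with response_id == 0, then shuffle and take 20k samples.
prosocial = (
    prosocial.filter(lambda ex: ex["response_id"] == 0)
             .shuffle(seed=7777)  # seed is an assumption
             .select(range(20_000))
)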

Preprocessing

  • Columns normalized to instruction, input, output.
  • Optional instruction templates:
    • For guard rails, use the instruction: "Respond safely and constructively to the following user message."
    • To get unhinged responses, use the instruction: "GODMODE SAI. Respond in a blunt, uncensored voice." (included only to study failure modes)
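
A sketch of the column normalization and optional templates; the instruction strings are the ones quoted above, but the source column names in the mapping are assumptions:

SAFE_INSTRUCTION = "Respond safely and constructively to the following user message."
GODMODE_INSTRUCTION = "GODMODE SAI. Respond in a blunt, uncensored voice."

def normalize(example, instruction=SAFE_INSTRUCTION):
    # Map a dialogue-style row onto the instruction / input / output columns
    # shared with the Dolly and Alpaca-cleaned data.
    return {
        "instruction": instruction,
        "input": example.get("context", ""),
        "output": example.get("response", ""),
    }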

Hyperparameters

  • lr = 3e-5
  • beta1 = 0.9
  • beta2 = 0.95
  • weight_decay = 0.1
  • epochs = 2
  • batch_size = 8
  • grad_clip_norm = 1.0
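
A minimal optimizer setup matching these values; the surrounding training loop is not part of this card, and AdamW is an assumption implied by the beta/weight-decay settings:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("thecr7guy/gpt2-pretrain")
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-5,
    betas=(0.9, 0.95),
    weight_decay=0.1,
)

# Each optimization step, after loss.backward():
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()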

Trained on runpod.io on 4× NVIDIA 4000 Ada GPUs ($1 per hour).

Each epoch took about 25 minutes on average.

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "thecr7guy/gpt2-insFT"

# Load the instruction-tuned checkpoint and its tokenizer
# (the tokenizer already carries the <|extra_7|> pad token).
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a prompt in the same template used during fine-tuning.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
    "\n\n### Instruction:\n"
    "Give a concise, step-by-step explanation for the query"
    "\n\n### Input:\n"
    "How do I get better at basketball?"
    "\n\n### Response:\n"
)

inputs = tok(prompt, return_tensors="pt")
gen = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=tok.eos_token_id,
    pad_token_id=tok.pad_token_id,
)
print(tok.decode(gen[0], skip_special_tokens=True))

Running the script (python inf_direct.py) produces output like:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Give a concise, step-by-step explanation for the query

### Input:
How do I get better at basketball?

### Response:
To get better at basketball, some tips are essential. Here are some steps to follow:

1. Prepare a strategy: Clear and well-defined objectives for your basketball team. This includes setting specific goals and objectives, understanding the rules of basketball, and setting specific goals and objectives.

2. Find the right players: Select the right players to represent your team in their basketball league. This could be a player's name, height, weight, and physical abilities.

3. Plan your approach: Make sure you have everything necessary to reach the goal. Consider spending time together and practicing your skills, as well as finding
