---
library_name: transformers
base_model:
  - nbeerbower/llama-3-sauce-v1-8B
datasets:
  - ResplendentAI/NSFW_RP_Format_NoQuote
license: other
license_name: llama3
tags:
  - nsfw
  - not-for-all-audiences
  - experimental
---

# llama-3-dragonmaid-8B

This model is based on Llama-3-8B and is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

It is llama-3-dragon-bophades-8B fine-tuned on the ResplendentAI/NSFW_RP_Format_NoQuote dataset.
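
A minimal way to load the model for inference with `transformers`. This is a sketch: the repo id is inferred from the card title, and the chat template usage and generation settings are assumptions, not confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama-3-dragonmaid-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```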

## Method

Fine-tuned using an NVIDIA L4 GPU on Google Colab.

The process follows the tutorial *Fine-Tune Your Own Llama 2 Model in a Colab Notebook*.

## Configuration

LoRA, model, and training settings:

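The snippets below reference `model`, `tokenizer`, `peft_config`, and `dataset`, whose definitions are not included in this card. A minimal sketch of how they could be set up, assuming 4-bit QLoRA loading; the quantization settings, LoRA hyperparameters, and dataset split are assumptions, not values confirmed by the card:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model = "nbeerbower/llama-3-dragon-bophades-8B"

# Assumed 4-bit quantization config for QLoRA training on a single L4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumed LoRA adapter settings; the actual ranks used are not published here.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

dataset = load_dataset("ResplendentAI/NSFW_RP_Format_NoQuote", split="train")
```
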
```python
from transformers import TrainingArguments

training_arguments = TrainingArguments(
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    gradient_accumulation_steps=1,
    evaluation_strategy="steps",
    eval_steps=0.2,  # fraction < 1: evaluate every 20% of total training steps
    logging_steps=1,
    optim="paged_adamw_8bit",
    warmup_steps=10,
    report_to="wandb",
    output_dir="./results",
)
```

```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    eval_dataset=dataset.select(range(0, 20)),  # small fixed slice used for eval
    peft_config=peft_config,
    dataset_text_field="input",
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=training_arguments,
)
```
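
Training is then launched with `trainer.train()`. A sketch of the final steps, under the assumption that the LoRA adapter is saved for later merging; the exact export steps used for this model are not documented here:

```python
trainer.train()

# Save the trained LoRA adapter (assumed step). Merging the adapter into a
# full-precision copy of the base model for release is a common follow-up.
trainer.model.save_pretrained("./results/adapter")
tokenizer.save_pretrained("./results/adapter")
```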