<details><summary>See axolotl config</summary>

axolotl version: `0.7.0`

```yaml
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

model_config:
  trust_remote_code: true

tokenizer:
  padding_side: "left"

datasets:
  - path: data/data.jsonl
    type: chat_template
    field_messages: conversations
    message_field_role: role
    message_field_content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]

load_in_4bit: true
adapter: qlora
lora_r: 64
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_modules_to_save:
  - embed_tokens
  - lm_head

bf16: true
flash_attention: true
flash_attn_implementation: "flash_attention_2"
gradient_checkpointing: true
deepspeed: deepspeed_configs/zero3_bf16.json

gradient_accumulation_steps: 4
micro_batch_size: 8
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
warmup_ratio: 0.03

max_seq_length: 8192
pad_to_sequence_len: false
sample_packing: false

output_dir: ./output
save_steps: 50
logging_steps: 10
save_safetensors: true
```

</details>
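Given the `chat_template` mapping in the config (`field_messages: conversations` with `role`/`content` keys), each line of `data/data.jsonl` is expected to look roughly like the following sketch; the turn text here is a hypothetical placeholder:

```json
{"conversations": [{"role": "user", "content": "Example user turn."}, {"role": "assistant", "content": "Example assistant reply."}]}
```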
# output

This model is a fine-tuned version of [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) on the data/data.jsonl dataset.
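Since this repo ships a LoRA adapter rather than merged weights, a minimal loading sketch with `transformers` + `peft` might look like the following. The repo ids come from this card; the 4-bit load mirrors the `load_in_4bit` training setup, and the prompt and generation settings are placeholder assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "SicariusSicariiStuff/Negative_LLAMA_70B"
adapter_id = "ReadyArt/Forgotten-Safeword-70B-v5.0-LoRA"

# Load the 70B base in 4-bit to keep memory manageable.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)

# Apply the adapter; embed_tokens and lm_head were saved alongside it
# (lora_modules_to_save), so peft restores those weights as well.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```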
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 29
- num_epochs: 3.0
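The derived values above follow from the axolotl config; a small sketch of the arithmetic, assuming the single-process world size implied by the reported total (the card lists distributed_type: multi-GPU, so this is an inference, not a stated fact):

```python
micro_batch_size = 8              # per-device train_batch_size
gradient_accumulation_steps = 4
world_size = 1                    # assumption: 32 / (8 * 4) from the reported total

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
assert total_train_batch_size == 32

# warmup_ratio 0.03 yielding 29 warmup steps implies roughly
# 29 / 0.03 ≈ 967 optimizer steps across the 3 epochs.
```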
### Training results

### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.1
## Model tree for ReadyArt/Forgotten-Safeword-70B-v5.0-LoRA

- Base model: meta-llama/Llama-3.1-70B
  - Finetuned: meta-llama/Llama-3.3-70B-Instruct
    - Finetuned: SicariusSicariiStuff/Negative_LLAMA_70B
      - This adapter