---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training (a matching `BitsAndBytesConfig` sketch follows the list):

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
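
For reference, here is a minimal sketch of how the same setup can be expressed with the `BitsAndBytesConfig` class from `transformers`. Only the non-default 4-bit fields from the list above are set explicitly; the `llm_int8_*` values listed match the library defaults.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the training-time quantization config above;
# llm_int8_* fields are left at their defaults, which match the listed values.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```

If you want to load the base model quantized the same way, this object can be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained`.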

### Framework versions

- PEFT 0.5.0

## Getting started

```python
from peft import PeftModel, PeftConfig
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer, AddedToken

# Llama 2 is gated; authenticate before downloading the base model
login("[YOUR HF TOKEN HERE FOR USING LLAMA]")

# Load the adapter config and the Llama 2 7B base model
config = PeftConfig.from_pretrained("ChangeIsKey/llama-7b-lexical-substitution")
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")

# Register the special tokens the adapter was trained with
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False, trust_remote_code=True)
tokenizer.add_special_tokens({"additional_special_tokens": [AddedToken("<|s|>"), AddedToken("<|answer|>"), AddedToken("<|end|>")]})
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "[PAD]"})
tokenizer.padding_side = "left"

# Grow the embedding matrix to cover the newly added tokens
base_model.resize_token_embeddings(len(tokenizer))

# Attach the PEFT adapter and switch to inference mode
model = PeftModel.from_pretrained(base_model, "ChangeIsKey/llama-7b-lexical-substitution")
model.eval()
```
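
As a quick smoke test, generation might look like the sketch below. Note that this card does not document the expected prompt template: the format here, with the target word wrapped in `<|s|>` markers and substitutes requested after `<|answer|>`, is an assumption and may need adjusting.

```python
import torch

# HYPOTHETICAL prompt format -- the card does not specify how <|s|>,
# <|answer|> and <|end|> are arranged, so adapt this to the real template.
prompt = "The economy showed <|s|> robust <|s|> growth last year. <|answer|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=32,
        # Stop once the model emits its end-of-answer marker
        eos_token_id=tokenizer.convert_tokens_to_ids("<|end|>"),
    )

# Decode only the newly generated tokens
new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=False))
```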