PeftModel is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base PeftModel
contains methods for loading models from and saving models to the Hugging Face Hub, and supports the PromptEncoder for prompt learning.
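For example, a PeftModel is typically created by wrapping a base Transformer model and a PeftConfig with get_peft_model. The snippet below is a minimal sketch; the LoRA hyperparameters and the t5-base checkpoint are illustrative choices, not prescribed by this documentation.
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraConfig, TaskType, get_peft_model
>>> peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=32, lora_dropout=0.1)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = get_peft_model(model, peft_config)  # returns a task-specific PeftModel subclass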
( model peft_config: PeftConfig )
Parameters
model (PreTrainedModel) — The base transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
Base model encompassing various Peft methods.
Attributes:
modules_to_save (list of str) — The list of sub-module names to save when saving the model.
prompt_tokens (torch.Tensor) — The virtual prompt tokens used for Peft if using PromptLearningConfig.
transformer_backbone_name (str) — The name of the transformer backbone in the base model if using PromptLearningConfig.
word_embeddings (torch.nn.Embedding) — The word embeddings of the transformer backbone in the base model if using PromptLearningConfig.
Disables the adapter module.
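A usage sketch, assuming disable_adapter is exposed as a context manager that temporarily bypasses the adapter weights; peft_model and inputs stand for objects built elsewhere:
>>> with peft_model.disable_adapter():  # assumption: context-manager API
...     base_outputs = peft_model(**inputs)  # runs the base model without the adapter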
Forward pass of the model.
( model model_id **kwargs )
Parameters
model_id (str or os.PathLike) — The name of the Lora configuration to use. Can be either the model id of a Lora configuration hosted inside a model repo on the Hugging Face Hub, or a path to a directory containing a Lora configuration file saved using the save_pretrained method (./my_lora_config_directory/).
Instantiate a LoraModel from a pretrained Lora configuration and weights.
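Example (a minimal sketch; the base checkpoint and the adapter repository id are illustrative):
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModel
>>> base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModel.from_pretrained(base_model, "my-username/my-lora-adapter")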
Returns the base model.
Returns the virtual prompts to use for Peft. Only applicable when peft_config.peft_type != PeftType.LORA.
Returns the prompt embedding to save when saving the model. Only applicable when peft_config.peft_type != PeftType.LORA.
Prints the number of trainable parameters in the model.
( save_directory **kwargs )
This function saves the adapter model and the adapter configuration files to a directory, so that they can be reloaded using the LoraModel.from_pretrained class method, and also used by the LoraModel.push_to_hub method.
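Example (a minimal sketch, reusing the peft_model from the snippet above; the directory name is illustrative):
>>> peft_model.save_pretrained("./my_lora_adapter")
>>> # ./my_lora_adapter now holds the adapter weights and adapter configuration,
>>> # and can be passed back to from_pretrained to reload the adapter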
A PeftModel for sequence classification tasks.
( model peft_config: PeftConfig )
Parameters
Peft model for sequence classification tasks.
Attributes:
cls_layer_name (str) — The name of the classification layer.
Example:
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "SEQ_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
A PeftModel for token classification tasks.
( model peft_config: PeftConfig )
Parameters
Peft model for token classification tasks.
Attributes:
cls_layer_name (str) — The name of the classification layer.
Example:
>>> from transformers import AutoModelForTokenClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "TOKEN_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
A PeftModel for causal language modeling.
( model peft_config: PeftConfig )
Parameters
Peft model for causal language modeling.
Example:
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "CAUSAL_LM",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 1280,
... "num_transformer_submodules": 1,
... "num_attention_heads": 20,
... "num_layers": 36,
... "encoder_hidden_size": 1280,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544
A PeftModel for sequence-to-sequence language modeling.
( model peft_config: PeftConfig )
Parameters
Peft model for sequence-to-sequence language modeling.
Example:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "SEQ_2_SEQ_LM",
... "inference_mode": False,
... "r": 8,
... "target_modules": ["q", "v"],
... "lora_alpha": 32,
... "lora_dropout": 0.1,
... "merge_weights": False,
... "fan_in_fan_out": False,
... "enable_lora": None,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566