---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---

Description: Do the sentences have the same meaning?

Original dataset: https://huggingface.co/datasets/glue/viewer/mrpc

---

Try querying this adapter for free in Lora Land at https://predibase.com/lora-land!

The adapter_category is Academic Benchmarks and the name is Sentence Comparison (MRPC)

---

Sample input: You are given two sentences below, Sentence 1 and Sentence 2. If the two sentences are semantically equivalent, please return 1. Otherwise, please return 0.\n\n### Sentence 1: The association said 28.2 million DVDs were rented in the week that ended June 15 , compared with 27.3 million VHS cassettes .\n\n### Sentence 2: The Video Software Dealers Association said 28.2 million DVDs were rented out last week , compared to 27.3 million VHS cassettes .\n\n### Label:

---

Sample output: 1

---
Try using this adapter yourself! | |
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/glue_mrpc"

# Load the base model and its tokenizer, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
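
Since the adapter expects the exact prompt format shown in the sample input above, a small helper makes it easy to query it with your own sentence pairs. This is a sketch inferred from the sample input; the `build_mrpc_prompt` function name is my own, not part of the adapter.

```python
def build_mrpc_prompt(sentence1: str, sentence2: str) -> str:
    """Build the prompt format this adapter was trained on
    (reconstructed from the sample input above)."""
    return (
        "You are given two sentences below, Sentence 1 and Sentence 2. "
        "If the two sentences are semantically equivalent, please return 1. "
        "Otherwise, please return 0."
        f"\n\n### Sentence 1: {sentence1}"
        f"\n\n### Sentence 2: {sentence2}"
        "\n\n### Label: "
    )
```

You can then tokenize the prompt and run generation with the adapter-augmented model, e.g. `model.generate(**tokenizer(build_mrpc_prompt(s1, s2), return_tensors="pt"), max_new_tokens=1)`; per the sample output, the model should emit `1` for equivalent pairs and `0` otherwise.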