---
tags:
  - autotrain
  - text-generation-inference
  - text-generation
  - peft
library_name: transformers
base_model: mistralai/Mistral-7B-Instruct-v0.3
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content
messages = [
    {"role": "user", "content": "You are a highly qualified expert trained to annotate machine learning training data. Your task is to analyze the socio-economic attributes in the TEXT below and label it with only one of the three labels: employee, student, or retired. Do not provide any explanations and only respond with one of the labels as one word: employee, student, or retired. Do not provide any explanations and only respond with one of the labels as one word: employee, student, or retired. Do not provide any explanations and only respond with one of the labels as one word: employee, student, or retired. Your TEXT to analyze: The longitude of this person's residence is 10.0, the latitude of this person's residence is 36.0. The landprice of residence is 80037.703125. The pop density of residence is 40312.4. The longitude of this person's workplace or school is 0.0, the latitude of this person's workplace or school is 0.0. The landprice of workplace or school is -1.0. The pop density of workplace or school is -1.0. The person starts the first trip at 16:0, and ends the first trip at 16:1. The first trip starts at the coordinate of (0.7503498221564023,0.4600650377311516), ends at the coordinate of (0.7960930657056438,0.2034560130526333). The purpose of the first trip is entertainment or shopping. The person does not take the second trip. Do not provide any explanations and only respond with one of the labels as one word: employee, student, or retired.\n"}
]

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
# Move inputs to the model's device rather than hard-coding 'cuda',
# since device_map="auto" decides the placement.
# Cap new tokens: the model is expected to answer with a single label.
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=8)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "employee"
print(response)
```