Mit1208 committed
Commit 3898c0b · verified · 1 Parent(s): c34e078

Update README.md

Files changed (1):
  1. README.md +48 -15
README.md CHANGED
@@ -29,22 +29,10 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  - **Language(s) (NLP):** English
  - **Finetuned from model:** Phi-2
 
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
 
  ## Training Details
 
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
@@ -69,12 +57,57 @@ The following hyperparameters were used during training:
 
  <!-- This section describes the evaluation protocols and provides the results. -->
 
- ### Testing Data, Factors & Metrics
 
- ### Results
 
- [More Information Needed]
 
+ ### Inference
+
+ ```
+ !pip install -q transformers==4.37.2 accelerate==0.27.0
+
+ import re
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, StoppingCriteria
+
+ tokenizer = AutoTokenizer.from_pretrained("Mit1208/phi-2-classification-sentiment-merged")
+ model = AutoModelForCausalLM.from_pretrained("Mit1208/phi-2-classification-sentiment-merged",
+                                              device_map="auto", trust_remote_code=True).eval()
+
+ # Stop generating as soon as the model emits the "<|im_end|>" token sequence.
+ class EosListStoppingCriteria(StoppingCriteria):
+     def __init__(self, eos_sequence=tokenizer.encode("<|im_end|>")):
+         self.eos_sequence = eos_sequence
+
+     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
+         last_ids = input_ids[:, -len(self.eos_sequence):].tolist()
+         return self.eos_sequence in last_ids
+
+ inf_conv = [{'from': 'human',
+              'value': "Text: In sales volume , Coca-Cola 's market share has decreased by 2.2 % to 24.2 % ."},
+             {'from': 'phi', 'value': "I've read this text."},
+             {'from': 'human',
+              'value': 'Please determine the sentiment of the given text and choose from the options: Positive, Negative, Neutral, or Cannot be determined.'}]
+
+ # The merged model has no classifier head, so the class id it generates
+ # has to be mapped back to a sentiment label.
+ id2label = {0: 'negative', 1: 'neutral', 2: 'positive'}
+
+ inference_text = tokenizer.apply_chat_template(inf_conv, tokenize=False) + '<|im_start|>phi:\n'
+ inputs = tokenizer(inference_text, return_tensors="pt", return_attention_mask=False).to('cuda')
+ outputs = model.generate(inputs["input_ids"], max_new_tokens=1024, pad_token_id=tokenizer.eos_token_id,
+                          stopping_criteria=[EosListStoppingCriteria()])
+
+ text = tokenizer.batch_decode(outputs)[0]
+
+ # Keep only the model's reply and pull out the predicted class id.
+ answer = text.split("<|im_start|>phi:")[-1].replace("<|im_end|>", "").replace(".", "")
+ sentiment_label = re.search(r'(\d)', answer)
+
+ if sentiment_label:
+     sentiment_score = int(sentiment_label.group(1))
+     print(id2label.get(sentiment_score, "none"))
+ else:
+     print("none")
+ ```
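+
+ For the example above, a correct classification should print `negative` (the
+ text reports a drop in Coca-Cola's market share). To score other texts, the
+ snippet can be wrapped in a small helper. This is a sketch, not part of the
+ original card: it assumes the code above has already run (so `tokenizer`,
+ `model`, `id2label`, `re`, and `EosListStoppingCriteria` are in scope), and
+ the name `classify_sentiment` is illustrative.
+
+ ```
+ def classify_sentiment(text: str) -> str:
+     """Return 'negative', 'neutral', 'positive', or 'none' for the given text."""
+     # Rebuild the three-turn prompt used during fine-tuning with the new text.
+     conv = [{'from': 'human', 'value': f"Text: {text}"},
+             {'from': 'phi', 'value': "I've read this text."},
+             {'from': 'human',
+              'value': 'Please determine the sentiment of the given text and choose from the options: Positive, Negative, Neutral, or Cannot be determined.'}]
+     prompt = tokenizer.apply_chat_template(conv, tokenize=False) + '<|im_start|>phi:\n'
+     inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to('cuda')
+     # The answer is a single class id, so a small max_new_tokens budget is enough.
+     outputs = model.generate(inputs["input_ids"], max_new_tokens=32,
+                              pad_token_id=tokenizer.eos_token_id,
+                              stopping_criteria=[EosListStoppingCriteria()])
+     answer = tokenizer.batch_decode(outputs)[0].split("<|im_start|>phi:")[-1]
+     match = re.search(r'(\d)', answer)
+     return id2label.get(int(match.group(1)), "none") if match else "none"
+
+ # e.g. classify_sentiment("Operating profit increased clearly year-on-year.")
+ ```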
 
  #### Summary