Sefika committed on
Commit f2a5f01 · verified · 1 Parent(s): b595850

Update README.md

Files changed (1):
  1. README.md +27 -68
README.md CHANGED
@@ -27,26 +27,34 @@ This is the model card of a 🤗 transformers model that has been pushed on the

 ## Uses


- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]

 ## Training Details

@@ -58,65 +66,16 @@ semeval-2010-task8

 ### Training Procedure

 5 fold cross validation with sentence and relation types. Input is sentence and the output is relation types
- #### Preprocessing [optional]
-
- [More Information Needed]
-

 #### Training Hyperparameters

 Epoch:5, BS:16 and others are default.
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
-
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]

 #### Hardware

 Colab Pro+ A100.

- #### Software
-
- [More Information Needed]

 ## Citation [optional]
 

 ## Uses

+ ```python
+ import torch
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
+ model_id = "Sefika/semeval_base_1"
+ model = T5ForConditionalGeneration.from_pretrained(
+     model_id,
+     device_map="auto",
+     torch_dtype=torch.float16,
+ )
+
+ prompt = (
+     "Example Sentence:The purpose of the <e1>audit</e1> was to report on the <e2>financial statements</e2>.\n"
+     "Query Sentence:The most common <e1>audits</e1> were about <e2>waste</e2> and recycling.\n"
+     "What is the relation type between e1: audits. and e2 : waste. according to given relation types below in the sentence?\n"
+     "Relation types: Cause-Effect(e2,e1), Content-Container(e1,e2), Member-Collection(e1,e2), Instrument-Agency(e1,e2), Product-Producer(e2,e1), Member-Collection(e2,e1), Message-Topic(e1,e2), Entity-Origin(e2,e1), Message-Topic(e2,e1), Instrument-Agency(e2,e1), Content-Container(e2,e1), Product-Producer(e1,e2), Entity-Origin(e1,e2), Component-Whole(e1,e2), Entity-Destination(e1,e2), Other, Cause-Effect(e1,e2), Component-Whole(e2,e1), Entity-Destination(e2,e1). \n"
+ )
+
+ inputs = tokenizer(prompt, add_special_tokens=True, max_length=526, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(inputs, max_new_tokens=16, pad_token_id=tokenizer.eos_token_id)
+
+ response = tokenizer.batch_decode(outputs, skip_special_tokens=True)
+ print(response[0])
+ # "Cause-Effect(e1,e2)"
+ ```

 ## Training Details

 ### Training Procedure

  5 fold cross validation with sentence and relation types. Input is sentence and the output is relation types
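
The 5-fold protocol above can be sketched as follows. This is a minimal illustration only: the miniature dataset, seed, and variable names are hypothetical, not taken from the released training code or the SemEval data.

```python
import random

# Hypothetical (tagged sentence, relation type) pairs in SemEval-2010
# Task 8 style -- illustrative examples, not the actual training data.
examples = [
    ("The <e1>burst</e1> has been caused by water hammer <e2>pressure</e2>.", "Cause-Effect(e2,e1)"),
    ("The <e1>apples</e1> are in the <e2>basket</e2>.", "Content-Container(e1,e2)"),
    ("The <e1>company</e1> fabricates plastic <e2>chairs</e2>.", "Product-Producer(e2,e1)"),
    ("This <e1>book</e1> comes from a small <e2>shop</e2>.", "Entity-Origin(e1,e2)"),
    ("He moved the <e1>boxes</e1> into the <e2>garage</e2>.", "Entity-Destination(e1,e2)"),
]

random.seed(42)
order = list(range(len(examples)))
random.shuffle(order)

# Partition the shuffled indices into 5 disjoint folds.
folds = [order[i::5] for i in range(5)]

for k, test_idx in enumerate(folds):
    test = [examples[i] for i in test_idx]
    train = [examples[i] for i in order if i not in test_idx]
    # In each fold, fine-tune on `train` (input: sentence, target:
    # relation type) and evaluate on the held-out `test` fold.
    print(f"fold {k}: {len(train)} train / {len(test)} test")
```

Each example appears in exactly one test fold, so every sentence is scored once across the five runs.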
 
 
 
 
 #### Training Hyperparameters

  Epoch:5, BS:16 and others are default.
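
Spelled out, the configuration above amounts to roughly the following. Only the first two values are stated in the card; the rest are the `transformers` `TrainingArguments` defaults, which is an assumption based on "others are default":

```python
# Sketch of the reported fine-tuning configuration. Keys follow
# transformers.TrainingArguments naming; values below the first two
# are library defaults (assumed), not stated in the card.
training_config = {
    "num_train_epochs": 5,               # "Epoch:5"
    "per_device_train_batch_size": 16,   # "BS:16"
    "learning_rate": 5e-5,               # transformers default
    "weight_decay": 0.0,                 # transformers default
    "lr_scheduler_type": "linear",       # transformers default
}
print(training_config)
```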
  #### Hardware
  Colab Pro+ A100.
  ## Citation [optional]