Update README.md
README.md (changed)
````diff
@@ -70,7 +70,7 @@ config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
 tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
 
 
-#
+# 4-bit Quantized OneKE
 quantization_config=BitsAndBytesConfig(
     load_in_4bit=True,
     llm_int8_threshold=6.0,
````
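This hunk fills in the comment above the README's 4-bit load path. For reference, here is a self-contained sketch of what that surrounding snippet plausibly looks like, using the standard `transformers` / `bitsandbytes` API; the model path and the `BitsAndBytesConfig` fields beyond `load_in_4bit` and `llm_int8_threshold` are assumptions, not taken from this diff:

```python
import torch
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)

# Placeholder checkpoint id; substitute the actual OneKE model path.
model_path = "zjunlp/OneKE"

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# 4-bit Quantized OneKE: weights stored in 4-bit, compute in bfloat16.
# Only load_in_4bit and llm_int8_threshold appear in the diff; the rest
# are common bitsandbytes settings, assumed here for completeness.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    config=config,
    device_map="auto",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.eval()
```

With `load_in_4bit=True`, the weights are kept in 4-bit NF4 while matrix multiplies run in the compute dtype, so the model fits on much smaller GPUs at a modest accuracy cost.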
````diff
@@ -270,7 +270,10 @@ split_num_mapper = {
 
 Since predicting all schemas in the label set at once is too challenging and not easily scalable, OneKE uses a batched approach during training. It divides the number of schemas asked in the instructions, querying a fixed number of schemas at a time. Hence, if the label set of a piece of data is too long, it will be split into multiple instructions that the model will address in turns.
 
-
+
+**Schema Format**:
+
+
 
 ```python
 NER: ["Person Name", "Education", "Position", "Nationality"] # List of strings
````
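The paragraph quoted in this hunk describes how OneKE splits a long label set into fixed-size schema batches, one instruction per batch. A minimal sketch of that splitting logic follows; the batch sizes and the instruction fields are illustrative assumptions, while the real `split_num_mapper` values and prompt format live in the OneKE repo and may differ:

```python
import json

# Hypothetical per-task batch sizes; the README's split_num_mapper
# plays this role, but these exact values are illustrative.
split_num_mapper = {"NER": 6, "RE": 4, "EE": 4}

def split_schemas(task, labels):
    """Split a long label set into fixed-size batches, one instruction each."""
    n = split_num_mapper[task]
    return [labels[i:i + n] for i in range(0, len(labels), n)]

labels = ["Person Name", "Education", "Position", "Nationality",
          "Birthplace", "Award", "Employer", "Spouse"]

for batch in split_schemas("NER", labels):
    # One instruction per schema batch; the model answers each in turn.
    instruction = {
        "instruction": "You are an expert in named entity recognition. "
                       "Extract the entities of the given types.",
        "schema": batch,
        "input": "...",  # the source text to extract from
    }
    print(json.dumps(instruction, ensure_ascii=False))
```

With a batch size of 6, the 8-label NER set above becomes two instructions, one asking for six entity types and one for the remaining two.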