Update README.md
README.md (CHANGED)
@@ -3,23 +3,23 @@ license: apache-2.0
inference: false
---

# SLIM-QA-GEN-PHI-3

<!-- Provide a quick summary of what the model is/does. -->

**slim-qa-gen-phi-3** implements specialized function-calling question-and-answer generation from a context passage, with output in the form of a Python dictionary, e.g.,

`{'question': ['What were earnings per share in the most recent quarter?'], 'answer': ['$2.39'] }`

This model is fine-tuned on top of the phi-3-mini-4k-instruct base.

For fast inference, we recommend the 'quantized tool' version, e.g., [**'slim-qa-gen-phi-3-tool'**](https://huggingface.co/llmware/slim-qa-gen-phi-3-tool).

## Prompt format:

`function = "generate"`
`params = "{'question, answer', 'boolean', or 'multiple choice'}"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
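To make the format concrete, here is a minimal sketch of how the pieces assemble into a single prompt string; the context passage is a hypothetical placeholder, not from the model card:

```python
# Assemble the slim-qa-gen prompt from its parts, following the format above.
text = "Tesla stock declined 5% yesterday after a weak deliveries report."  # hypothetical passage

function = "generate"
params = "question, answer"  # or "boolean", or "multiple choice"

prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"
print(prompt)
```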
@@ -27,8 +27,8 @@
<details>
<summary>Transformers Script</summary>

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-qa-gen-phi-3")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-qa-gen-phi-3")

function = "generate"
params = "boolean"
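The hunk elides the middle of the script (README lines 35-52). A hedged sketch of what that section presumably does, continuing from the variables above and assuming the standard transformers generate-and-decode pattern; the passage and generation settings are illustrative assumptions:

```python
# Hypothetical continuation of the script above (the elided README lines).
text = "Tesla stock declined 5% yesterday after a weak deliveries report."  # placeholder passage

prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,       # sampling settings here are assumptions, not the card's values
    temperature=0.5,
    max_new_tokens=200,
)

# keep only the tokens generated after the prompt
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
```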
@@ -53,7 +53,7 @@

print("output only: ", output_only)

[OUTPUT]: {'llm_response': {'question': ['Did Tesla stock decline more than 5% yesterday?'], 'answer': ['yes'] } }

# here's the fun part
try:
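The hunk cuts off at `try:`; the elided block converts the model's string response into a usable Python dictionary. A minimal sketch, assuming `ast.literal_eval` does the conversion (an assumption, though it is the usual pattern on llmware SLIM cards):

```python
import ast

try:
    # parse the generated string into a real Python dictionary
    output_dict = ast.literal_eval(output_only)
    print("converted to dictionary automatically: ", output_dict)
except (ValueError, SyntaxError):
    print("could not convert the output into a dictionary")
```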
@@ -72,7 +72,7 @@
<summary>Using as Function Call in LLMWare</summary>

from llmware.models import ModelCatalog
slim_model = ModelCatalog().load_model("llmware/slim-qa-gen-phi-3", sample=True, temperature=0.5)
response = slim_model.function_call(text, params=["boolean"], function="generate")

print("llmware - llm_response: ", response)
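Beyond the boolean example, the prompt-format section above lists 'question, answer' as a params option; a usage sketch under that assumption, with a hypothetical passage:

```python
from llmware.models import ModelCatalog

# load with sampling on, mirroring the card's example settings
slim_model = ModelCatalog().load_model("llmware/slim-qa-gen-phi-3", sample=True, temperature=0.5)

text = "Tesla stock declined 5% yesterday after a weak deliveries report."  # hypothetical passage
response = slim_model.function_call(text, params=["question, answer"], function="generate")

print("llmware - llm_response: ", response)
```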