Kabster committed
Commit b6f6be7 · verified · 1 Parent(s): de14745

Update README.md

Files changed (1): README.md (+30 −5)
README.md CHANGED

````diff
@@ -5,24 +5,24 @@ base_model:
 tags:
 - mergekit
 - merge
-
+license: apache-2.0
 ---
-# merge
+# BioMistral-Zephyr-Beta-SLERP
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+BioMistral-Zephyr-Beta-SLERP is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the SLERP merge method.
 
-### Models Merged
+### 🤖💬 Models Merged
 
 The following models were included in the merge:
 * [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
 * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
 
-### Configuration
+### 🧩 Configuration
 
 The following YAML configuration was used to produce this model:
 
@@ -46,3 +46,28 @@ parameters:
 dtype: bfloat16
 
 ```
+
+### 💻 Usage
+
+```python
+!pip install -qU transformers accelerate
+
+from transformers import AutoTokenizer
+import transformers
+import torch
+
+model = "Kabster/BioMistral-Zephyr-Beta-SLERP"
+messages = [{"role": "user", "content": "Can bisoprolol cause insomnia?"}]
+
+tokenizer = AutoTokenizer.from_pretrained(model)
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model,
+    torch_dtype=torch.float16,
+    device_map="auto",
+)
+
+outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=100, top_p=0.95)
+print(outputs[0]["generated_text"])
+```
````
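For context on the merge method named above: SLERP (spherical linear interpolation) blends two models' weight tensors along the great-circle arc between them rather than along a straight line, which keeps the interpolated weights' magnitude closer to the originals than plain linear averaging. A minimal per-tensor sketch of the idea, assuming two same-shaped tensors and a scalar factor `t` (the `slerp` helper and its lerp fallback are illustrative, not mergekit's actual API):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two same-shaped weight tensors at factor t."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n = a_f / (a_f.norm() + eps)           # unit direction of each tensor
    b_n = b_f / (b_f.norm() + eps)
    cos_omega = torch.clamp(a_n @ b_n, -1.0, 1.0)
    omega = torch.arccos(cos_omega)          # angle between the two tensors
    if omega.abs() < eps:                    # nearly colinear: plain lerp is stable
        out = (1.0 - t) * a_f + t * b_f
    else:
        so = torch.sin(omega)
        out = (torch.sin((1.0 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)
```

mergekit applies this kind of interpolation tensor by tensor across the two checkpoints, with the interpolation factor(s) coming from the `parameters:` section of the YAML config referenced in the second hunk header; the sketch above shows a single scalar `t` for one tensor pair.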