---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- openhermes
- mlx-llm
- mlx
library_name: mlx-llm
---
# OpenHermes-2.5-Mistral-7B

## Model description
Please refer to the [original model card](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) for more details on OpenHermes-2.5-Mistral-7B.
## Use with mlx-llm
Install `mlx-llm` from GitHub:
```bash
git clone https://github.com/riccardomusmeci/mlx-llm
cd mlx-llm
pip install .
```
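A quick sanity check (not part of the upstream instructions) is to import the package:
```python
# if this import succeeds, mlx-llm is installed correctly
import mlx_llm
```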
Test it with a simple generation:
```python
from mlx_llm.model import create_model, create_tokenizer, generate
model = create_model("OpenHermes-2.5-Mistral-7B")  # downloads the weights from this repo
tokenizer = create_tokenizer("OpenHermes-2.5-Mistral-7B")
generate(
    model=model,
    tokenizer=tokenizer,
    prompt="What's the meaning of life?",
    max_tokens=200,
    temperature=0.1
)
```
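OpenHermes-2.5 was fine-tuned on ChatML-formatted conversations, so wrapping the prompt in ChatML tags typically improves responses. A minimal sketch (the plain-string prompt above works as well):
```python
# ChatML template used by OpenHermes-2.5 (system / user / assistant turns)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat's the meaning of life?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
generate(
    model=model,
    tokenizer=tokenizer,
    prompt=prompt,
    max_tokens=200,
    temperature=0.1
)
```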
Quantize the model weights:
```python
from mlx_llm.model import create_model, quantize, save_weights

model = create_model("OpenHermes-2.5-Mistral-7B")
model = quantize(model, group_size=64, bits=4)  # 4-bit quantization with groups of 64
save_weights(model, "weights.npz")
```
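To reuse the saved weights later, a plausible flow is to rebuild the architecture with the same quantization settings and load the file back. This is a sketch, assuming mlx-llm models inherit `load_weights` from `mlx.nn.Module` (an assumption, not documented here):
```python
from mlx_llm.model import create_model, quantize

# rebuild the model and re-apply the same quantization settings ...
model = create_model("OpenHermes-2.5-Mistral-7B")
model = quantize(model, group_size=64, bits=4)
# ... then load the saved quantized weights (load_weights comes from
# mlx.nn.Module; assumed available on mlx-llm models)
model.load_weights("weights.npz")
```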
Use it in chat mode (don't worry about the prompt format; the library takes care of it):
```python
from mlx_llm.playground.chat import ChatLLM
personality = "You're a salesman and beet farmer known as Dwight K Schrute from the TV show The Office. Dwight replies just as he would in the show. You always reply as Dwight would reply. If you don't know the answer to a question, please don't share false information."
# examples must be structured as below
examples = [
    {
        "user": "What is your name?",
        "model": "Dwight K Schrute",
    },
    {
        "user": "What is your job?",
        "model": "Assistant Regional Manager. Sorry, Assistant to the Regional Manager.",
    },
]
chat_llm = ChatLLM.build(
    model_name="OpenHermes-2.5-Mistral-7B",
    tokenizer="mlx-community/OpenHermes-2.5-Mistral-7B",  # HF tokenizer or a local path to a tokenizer
    personality=personality,
    examples=examples,
)
chat_llm.run(max_tokens=500, temp=0.1)
```
With `mlx-llm` you can also experiment with a simple RAG pipeline; see the [examples](https://github.com/riccardomusmeci/mlx-llm/tree/main/examples).
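For a rough idea of what that looks like, here is a toy sketch built only on the `generate` API shown above, with naive keyword retrieval standing in for the repo's actual retrieval pipeline:
```python
from mlx_llm.model import create_model, create_tokenizer, generate

# toy corpus; the real examples use a proper retrieval pipeline
docs = [
    "MLX is an array framework for machine learning on Apple silicon.",
    "OpenHermes-2.5 is a fine-tune of Mistral-7B.",
]

question = "What is MLX?"
# naive retrieval: pick the document sharing the most words with the question
retrieved = max(
    docs,
    key=lambda d: len(set(d.lower().split()) & set(question.lower().split())),
)

model = create_model("OpenHermes-2.5-Mistral-7B")
tokenizer = create_tokenizer("OpenHermes-2.5-Mistral-7B")
generate(
    model=model,
    tokenizer=tokenizer,
    prompt=f"Context: {retrieved}\n\nQuestion: {question}\nAnswer:",
    max_tokens=100,
    temperature=0.1,
)
```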