danielhanchen committed on
Commit 5a14f94 · verified · 1 Parent(s): a59bf0e

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -21,10 +21,6 @@ tags:
 - lfm2
 - edge
 ---
-> [!NOTE]
-> Includes our **chat template fixes**! <br> For `llama.cpp`, use `--jinja`
->
-
 <div>
 <p style="margin-top: 0;margin-bottom: 0;">
 <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
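The note removed in the hunk above told `llama.cpp` users to pass `--jinja`, which makes the CLI use the chat template embedded in the GGUF file. A minimal invocation sketch — the binary name `llama-cli` and the model filename are assumptions, not part of this diff:

```shell
# Sketch: chat with an LFM2 GGUF checkpoint via llama.cpp.
# --jinja applies the GGUF's embedded (fixed) chat template;
# the model filename here is an assumed example.
llama-cli -m LFM2-700M-Q4_K_M.gguf --jinja -p "What is C. elegans?"
```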
@@ -161,6 +157,10 @@ The candidate with ID 12345 is currently in the "Interview Scheduled" stage for
 
 ## 🏃 How to run LFM2
 
+You can run LFM2 with transformers and llama.cpp. vLLM support is coming.
+
+### 1. Transformers
+
 To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers) from source (v4.54.0.dev0).
 You can update or install it with the following command: `pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"`.
 
 
@@ -209,6 +209,10 @@ print(tokenizer.decode(output[0], skip_special_tokens=False))
 
 You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing).
 
+### 2. Llama.cpp
+
+You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-700M-GGUF). Find more information in the model card.
+
 ## 🔧 How to fine-tune LFM2
 
 We recommend fine-tuning LFM2 models on your use cases to maximize performance.
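The Transformers path added by this diff (install from source, then generate and decode with `tokenizer.decode(output[0], skip_special_tokens=False)`) can be exercised with a short script. This is a sketch, not the README's exact example: the model ID `LiquidAI/LFM2-700M`, the prompt, and the generation settings are assumptions.

```python
# Sketch: run LFM2 via Hugging Face transformers (>= v4.54, installed from source).
# The model ID, prompt, and generation settings below are assumed examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "LiquidAI/LFM2-700M"

def main():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Build the prompt with the model's chat template.
    messages = [{"role": "user", "content": "What is C. elegans?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generate and decode, keeping special tokens visible as in the README.
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=False))

if __name__ == "__main__":
    main()
```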