---
library_name: transformers
license: gpl-3.0
language:
- as
- bn
- brx
- doi
- gom
- gu
- en
- hi
- kn
- ks
- mai
- ml
- mni
- mr
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
base_model: sarvamai/sarvam-translate
base_model_relation: finetune
pipeline_tag: translation
tags:
- mlx
---
+
35
+ # bibproj/sarvam-translate-mlx-fp16
36
+
37
+ The Model [bibproj/sarvam-translate-mlx-fp16](https://huggingface.co/bibproj/sarvam-translate-mlx-fp16) was converted to MLX format from [sarvamai/sarvam-translate](https://huggingface.co/sarvamai/sarvam-translate) using mlx-lm version **0.22.3**.
38
+
39
+ ## Use with mlx
40
+
41
+ ```bash
42
+ pip install mlx-lm
43
+ ```
44
+
45
+ ```python
46
+ from mlx_lm import load, generate
47
+
48
+ model, tokenizer = load("bibproj/sarvam-translate-mlx-fp16")
49
+
50
+ prompt="hello"
51
+
52
+ if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
53
+ messages = [{"role": "user", "content": prompt}]
54
+ prompt = tokenizer.apply_chat_template(
55
+ messages, tokenize=False, add_generation_prompt=True
56
+ )
57
+
58
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
59
+ ```
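
Since this is a translation model, a bare prompt like `"hello"` gives it nothing to translate into; a request typically names the target language. A minimal sketch of building such a request is below — the system-prompt wording follows the pattern documented for the base model and is an assumption here, as is the helper name:

```python
def build_translation_messages(text: str, target_language: str) -> list[dict]:
    """Build a chat-style translation request.

    The system-prompt wording mirrors the usage documented for the base
    model (sarvamai/sarvam-translate) and is an assumption in this card.
    """
    return [
        {"role": "system", "content": f"Translate the text below to {target_language}."},
        {"role": "user", "content": text},
    ]

messages = build_translation_messages("Good morning!", "Hindi")
```

Pass `messages` through `tokenizer.apply_chat_template(...)` as in the snippet above, then call `generate` to obtain the translation.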