Upload README.md with huggingface_hub
README.md
CHANGED
@@ -40,7 +40,7 @@ pipeline_tag: image-text-to-text
 
 ## <span style="color: #7F7FFF;">Model Generation Details</span>
 
-This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`
+This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`92ecdcc0`](https://github.com/ggerganov/llama.cpp/commit/92ecdcc06a4c405a415bcaa0cb772bc560aa23b1).
 
 
 
@@ -676,4 +676,4 @@ print(outputs[0].outputs[0].text)
 
 Transformers-compatible model weights are also uploaded (thanks a lot @cyrilvallez).
 However the transformers implementation was **not thoroughly tested**, but only on "vibe-checks".
-Hence, we can only ensure 100% correct behavior when using the original weight format with vllm (see above).
+Hence, we can only ensure 100% correct behavior when using the original weight format with vllm (see above).
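Since the README pins a llama.cpp commit as the tool that generated the model, here is a minimal sketch of how a GGUF file might be produced at that commit. The input path, output filenames, and quantization type are placeholders, not taken from the diff; the script and binary names are those used in recent llama.cpp trees and may differ at other commits.

```shell
# Hedged sketch: reproducing a GGUF build with llama.cpp pinned to the
# commit referenced in the README. All paths below are placeholders.
LLAMA_CPP_COMMIT=92ecdcc06a4c405a415bcaa0cb772bc560aa23b1

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout "$LLAMA_CPP_COMMIT"

# Convert a Hugging Face checkpoint to GGUF (script name in recent trees;
# older trees used convert-hf-to-gguf.py with hyphens).
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf

# Optionally quantize; the binary is llama-quantize after the 2024 rename
# (earlier trees shipped it as ./quantize). Q4_K_M is just an example type.
./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

As the README notes, only the original weight format served through vllm is vouched for; weights re-derived this way or loaded via transformers received only "vibe-check" level testing.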