
DeepSeek Math - Optimized GGUF

This model is optimized for fast CPU inference and is compatible with llama.cpp and ctransformers.

🛠️ Usage

from ctransformers import AutoModelForCausalLM

# Load the GGUF model from the Hugging Face Hub
# (replace "ton_utilisateur_HF" with the actual namespace hosting the repo)
model = AutoModelForCausalLM.from_pretrained(
    "ton_utilisateur_HF/deepseek-math-gguf", model_file="model.gguf"
)

# Generate a completion for a simple math prompt
response = model("What is 2 + 2?")
print(response)
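Since the card also mentions llama.cpp compatibility, below is a minimal sketch using the llama-cpp-python bindings. The local path model.gguf, the context size, thread count, and the prompt format are assumptions for illustration, not part of the original card; download the GGUF file locally first (for example with huggingface_hub).

from llama_cpp import Llama

# Assumed local path to the downloaded GGUF file
llm = Llama(model_path="model.gguf", n_ctx=2048, n_threads=4)

# Simple completion call; max_tokens keeps the answer short
output = llm("Q: What is 2 + 2? A:", max_tokens=32)
print(output["choices"][0]["text"])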