
TinyLlama PHP Fine-tuned GGUF

This is a GGUF conversion of the TinyLlama model fine-tuned for PHP code generation.

Model Details

  • Base Model: TinyLlama
  • Fine-tuned for: PHP code generation
  • Format: GGUF (quantized to q4_0)
  • Use with: llama.cpp, Ollama, or other GGUF-compatible inference engines

Usage

With llama.cpp:

./main -m model.gguf -p "Write a PHP function to"
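For more control over generation, standard llama.cpp flags can be added. The values below are illustrative defaults, not tuned settings for this model; adjust them to your hardware and use case:

```shell
# -n: max tokens to generate, -c: context window size,
# --temp: sampling temperature, -ngl: layers to offload to GPU (if built with GPU support)
./main -m model.gguf \
  -p "Write a PHP function to validate an email address" \
  -n 256 -c 2048 --temp 0.7 -ngl 0
```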

With Ollama:

  1. Create a Modelfile:
FROM ./model.gguf
  2. Create and run the model:
ollama create tinyllama-php -f Modelfile
ollama run tinyllama-php
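A Modelfile can also set generation parameters and a system prompt. The parameter values and system prompt below are illustrative assumptions, not settings shipped with this model:

```
FROM ./model.gguf

# Sampling parameters (illustrative defaults)
PARAMETER temperature 0.7
PARAMETER num_ctx 2048

# Optional system prompt steering the model toward PHP output
SYSTEM You are a PHP coding assistant. Respond with PHP code.
```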