# TinyLlama PHP Fine-tuned GGUF

This is a GGUF conversion of the TinyLlama model fine-tuned for PHP code generation.

## Model Details
- Base Model: TinyLlama
- Fine-tuned for: PHP code generation
- Format: GGUF (quantized to q4_0)
- Use with: llama.cpp, Ollama, or other GGUF-compatible inference engines
## Usage
**With llama.cpp** (recent llama.cpp builds ship the binary as `llama-cli` rather than `main`):

```sh
./main -m model.gguf -p "Write a PHP function to"
```
**With Ollama:**

- Create a `Modelfile`:

  ```
  FROM ./model.gguf
  ```

- Create and run the model:

  ```sh
  ollama create tinyllama-php -f Modelfile
  ollama run tinyllama-php
  ```
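Beyond the bare `FROM` line, a Modelfile can also pin sampling parameters and a system prompt. The values below are illustrative assumptions for a code-generation model, not this fine-tune's confirmed settings; adjust them for your use case:

```
FROM ./model.gguf

# Assumed settings -- not confirmed for this fine-tune
SYSTEM "You are a PHP coding assistant."
PARAMETER temperature 0.3
PARAMETER stop "</s>"
```

Lower temperatures tend to give more deterministic code completions; raise the value if you want more varied suggestions.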