---
license: apache-2.0
tags:
- llama
- gguf
- quantized
library_name: transformers
---

> ⚠️ **Note:** No model files are provided in this repository; this model card is generated text only.

# TinyLlama PHP Fine-tuned GGUF

This is a GGUF conversion of the TinyLlama model fine-tuned for PHP code generation.

## Model Details
- **Base Model**: TinyLlama
- **Fine-tuned for**: PHP code generation
- **Format**: GGUF (quantized to q4_0)
- **Use with**: llama.cpp, Ollama, or other GGUF-compatible inference engines
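
A q4_0 GGUF file like this one is typically produced with llama.cpp's conversion and quantization tools. The sketch below assumes a Hugging Face-format checkpoint in `./tinyllama-php` and a local llama.cpp checkout; the directory and output file names are illustrative:

```bash
# Convert the Hugging Face checkpoint to a full-precision GGUF file
python convert_hf_to_gguf.py ./tinyllama-php --outfile tinyllama-php-f16.gguf

# Quantize the FP16 GGUF down to q4_0
./llama-quantize tinyllama-php-f16.gguf model.gguf q4_0
```

q4_0 trades some accuracy for a roughly 4x size reduction versus FP16, which is what makes a model like TinyLlama practical on CPU-only machines.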

## Usage

### With llama.cpp:
```bash
./main -m model.gguf -p "Write a PHP function to"
```
(In recent llama.cpp builds the CLI binary is named `llama-cli` rather than `main`.)

### With Ollama:
1. Create a Modelfile:
```
FROM ./model.gguf
```
2. Create and run the model:
```bash
ollama create tinyllama-php -f Modelfile
ollama run tinyllama-php
```