chore(doc): Separate section on runpod (#860)
README.md CHANGED
@@ -111,7 +111,6 @@ accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
 ```bash
 docker run --gpus '"all"' --rm -it winglian/axolotl:main-py3.10-cu118-2.0.1
 ```
-- `winglian/axolotl-runpod:main-latest`: for runpod or use this [direct link](https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz)
 
 Or run on the current files for development:
@@ -154,6 +153,10 @@ accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
 ```
 Get the token at huggingface.co/settings/tokens
 
+#### Runpod
+
+Use `winglian/axolotl-runpod:main-latest` or use this [direct link](https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz)
+
 #### LambdaLabs
 <details>
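
The new Runpod section points at the `winglian/axolotl-runpod:main-latest` image. A minimal sketch of how it would be launched, assuming the same `docker run` flags the README already shows for the main image (the `echo` makes this a dry run; drop it to actually start the container):

```shell
# Image name taken from the diff; flags mirror the main-image invocation above.
IMAGE=winglian/axolotl-runpod:main-latest

# Dry run: print the command instead of executing it.
echo docker run --gpus '"all"' --rm -it "$IMAGE"
```
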