Update README.md
# Model Card for gemma-3-1b-nl-to-regex

This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl) on the [inclinedadarsh/nl-to-regex](https://huggingface.co/datasets/inclinedadarsh/nl-to-regex) dataset.

## Training notebook

You can find the notebook that was used to train this model at https://www.kaggle.com/code/inclinedadarsh/gemma-finetune-nl-to-regex

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="inclinedadarsh/gemma-3-1b-nl-to-regex", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
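Since the model is trained to turn natural-language descriptions into regular expressions, it can help to sanity-check a generated pattern with Python's `re` module before using it. The sketch below assumes a hypothetical pattern of the kind the model might return for a prompt like "match an email address"; actual model outputs will vary.

```python
import re

# Hypothetical example of a pattern the model might generate;
# substitute the actual generated_text from the pipeline above.
pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"

# re.compile raises re.error if the pattern is not valid regex syntax.
regex = re.compile(pattern)

# Quick spot checks against sample strings.
print(bool(regex.fullmatch("user@example.com")))  # True
print(bool(regex.fullmatch("not-an-email")))      # False
```

Compiling first separates "the model produced invalid regex syntax" from "the regex is valid but matches the wrong strings", which are different failure modes worth distinguishing.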