Xenova (HF Staff) committed · verified
Commit 2950c41 · 1 Parent(s): c4e7304

Add sample code

Files changed (1):
  1. README.md +39 -0
README.md CHANGED
@@ -72,6 +72,45 @@ for everyone.
  }
  ```

+ ## Usage
+
+ ### Transformers.js
+
+ If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
+ ```bash
+ npm i @huggingface/transformers
+ ```
+
+ You can then use the model like this:
+ ```js
+ import { pipeline, TextStreamer } from "@huggingface/transformers";
+
+ // Create a text generation pipeline
+ const generator = await pipeline(
+   "text-generation",
+   "onnx-community/gemma-3-270m-it-ONNX",
+   { dtype: "fp32" },
+ );
+
+ // Define the list of messages
+ const messages = [
+   { role: "system", content: "You are a helpful assistant." },
+   { role: "user", content: "Write a poem about machine learning." },
+ ];
+
+ // Generate a response
+ const output = await generator(messages, {
+   max_new_tokens: 512,
+   do_sample: false,
+   streamer: new TextStreamer(generator.tokenizer, {
+     skip_prompt: true,
+     skip_special_tokens: true,
+     // callback_function: (text) => { /* Optional callback function */ },
+   }),
+ });
+ console.log(output[0].generated_text.at(-1).content);
+ ```
+
  ## Model Data

  Data used for model training and how the data was processed.
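
The committed sample pins `dtype: "fp32"`. As a minimal, hedged sketch of how those pipeline options can be varied: Transformers.js also accepts a `device` option (for example `"webgpu"`) and quantized `dtype` values such as `"q8"` or `"q4"`, but which of these actually load depends on the ONNX weight files published in this repository, so treat the `q4` choice below as an assumption rather than a guarantee.

```js
import { pipeline } from "@huggingface/transformers";

// Variant sketch: same model, but requesting quantized weights and the WebGPU
// backend. Both `dtype: "q4"` and `device: "webgpu"` are assumptions: they only
// work if the repo ships q4 ONNX files and the runtime supports WebGPU.
const generator = await pipeline(
  "text-generation",
  "onnx-community/gemma-3-270m-it-ONNX",
  {
    dtype: "q4",      // try "q8", "fp16", or "fp32" if q4 weights are not available
    device: "webgpu", // omit to use the default backend
  },
);

const output = await generator(
  [{ role: "user", content: "Write a poem about machine learning." }],
  { max_new_tokens: 128, do_sample: false },
);
console.log(output[0].generated_text.at(-1).content);
```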