kurianbenoy committed
Commit f7012ed · verified · 1 Parent(s): 6d72e16

Update README.md

Files changed (1): README.md (+28 −0)
README.md CHANGED
@@ -82,6 +82,34 @@ print("reasoning content:", reasoning_content)
 print("content:", content)
 ```
 
+# How to use with Sarvam APIs
+
+```python
+from openai import OpenAI
+
+base_url = "https://api.sarvam.ai/v1"
+model_name = "sarvam-m"
+api_key = "Your-API-Key"  # get it from https://dashboard.sarvam.ai/
+
+
+client = OpenAI(
+    base_url=base_url,
+    api_key=api_key,
+).with_options(max_retries=1)
+
+response = client.chat.completions.create(
+    model=model_name,
+    messages=[
+        {"role": "system", "content": "say hi"},
+        {"role": "user", "content": "say hi"},
+    ],
+    stream=False,
+    max_completion_tokens=2048,
+    # reasoning_effort="low",  # set to one of 3 values to enable reasoning
+)
+print(response.choices[0].message.content)
+```
+
 # VLLM Deployment
 
 For easy deployment, we can use `vllm>=0.8.5` and create an OpenAI-compatible API endpoint with `vllm serve sarvamai/sarvam-m`
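The deployment step above can be sketched as a pair of commands. Only the `vllm serve sarvamai/sarvam-m` invocation comes from this README; the default port 8000 and the request payload shape are assumptions based on vLLM's standard OpenAI-compatible server:

```shell
# Launch an OpenAI-compatible API server (assumes vllm>=0.8.5 is installed
# and the model fits on the available hardware)
vllm serve sarvamai/sarvam-m

# From another terminal, query the endpoint (assumed default port 8000)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "sarvamai/sarvam-m",
        "messages": [{"role": "user", "content": "say hi"}]
      }'
```

Because the server speaks the OpenAI API, the Python snippet from the Sarvam APIs section above also works against it by pointing `base_url` at `http://localhost:8000/v1`.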