Chat UI documentation

OpenAI

Feature      Available
Tools        No
Multimodal   No

Chat UI can be used with any API server that supports the OpenAI API, for example text-generation-webui, LocalAI, FastChat, llama-cpp-python, ialacol, and vLLM.

The following example config makes Chat UI work with text-generation-webui. endpoint.baseURL is the URL of the OpenAI-API-compatible server and overrides the base URL used by the OpenAI instance. endpoint.completion determines which endpoint to use: the default is chat_completions, which uses /chat/completions; set endpoint.completion to completions to use the /completions endpoint instead.

MODELS=`[
  {
    "name": "text-generation-webui",
    "id": "text-generation-webui",
    "parameters": {
      "temperature": 0.9,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024,
      "stop": []
    },
    "endpoints": [{
      "type" : "openai",
      "baseURL": "http://localhost:8000/v1"
    }]
  }
]`
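
For example, to switch the same config to the /completions endpoint, set completion on the endpoint. A minimal sketch (the parameters block is omitted here for brevity and is the same as above):

MODELS=`[
  {
    "name": "text-generation-webui",
    "id": "text-generation-webui",
    "endpoints": [{
      "type": "openai",
      "baseURL": "http://localhost:8000/v1",
      "completion": "completions"
    }]
  }
]`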

The openai type also covers official OpenAI models. You can add, for example, GPT-4 or GPT-3.5 as an "openai" model:

OPENAI_API_KEY=#your openai api key here
MODELS=`[{
  "name": "gpt-4",
  "displayName": "GPT 4",
  "endpoints" : [{
    "type": "openai",
    "apiKey": "or your openai api key here"
  }]
},{
  "name": "gpt-3.5-turbo",
  "displayName": "GPT 3.5 Turbo",
  "endpoints" : [{
    "type": "openai",
    "apiKey": "or your openai api key here"
  }]
}]`

We also support models in the o1 family. These need a few more options in the config; here is an example for o1-mini:

MODELS=`[
  {
      "name": "o1-mini",
      "description": "ChatGPT o1-mini",
      "systemRoleSupported": false,
      "parameters": {
        "max_new_tokens": 2048,
      },
      "endpoints" : [{
        "type": "openai",
        "useCompletionTokens": true,
      }]
  }
]`

You can also consume any model provider that exposes an OpenAI-compatible API endpoint. For example, you can self-host a Portkey gateway and experiment with Claude from Anthropic or GPT models deployed on Azure OpenAI. Here is an example for Claude:

MODELS=`[{
  "name": "claude-2.1",
  "displayName": "Claude 2.1",
  "description": "Anthropic has been founded by former OpenAI researchers...",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://gateway.example.com/v1",
      "defaultHeaders": {
        "x-portkey-config": '{"provider":"anthropic","api_key":"sk-ant-abc...xyz"}'
      }
    }
  ]
}]`

Example for GPT 4 deployed on Azure OpenAI:

MODELS=`[{
  "id": "gpt-4-1106-preview",
  "name": "gpt-4-1106-preview",
  "displayName": "gpt-4-1106-preview",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://{resource-name}.openai.azure.com/openai/deployments/{deployment-id}",
      "defaultHeaders": {
        "api-key": "{api-key}"
      },
      "defaultQuery": {
        "api-version": "2023-05-15"
      }
    }
  ]
}]`

DeepInfra

Or try Mistral from DeepInfra:

Note that apiKey can either be set per endpoint or globally using the OPENAI_API_KEY variable.

MODELS=`[{
  "name": "mistral-7b",
  "displayName": "Mistral 7B",
  "description": "A 7B dense Transformer, fast-deployed and easily customisable. Small, yet powerful for a variety of use cases. Supports English and code, and a 8k context window.",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.deepinfra.com/v1/openai",
      "apiKey": "abc...xyz"
    }
  ]
}]`
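
Alternatively, since the key can be provided globally, the same config could omit apiKey from the endpoint and rely on the OPENAI_API_KEY variable instead. A minimal sketch of the same DeepInfra example:

OPENAI_API_KEY=abc...xyz
MODELS=`[{
  "name": "mistral-7b",
  "displayName": "Mistral 7B",
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.deepinfra.com/v1/openai"
    }
  ]
}]`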

Other

Some other providers and their baseURL for reference:

Groq: https://api.groq.com/openai/v1
Fireworks: https://api.fireworks.ai/inference/v1
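
For instance, a Groq-hosted model could be wired up the same way as the earlier examples. The model name llama-3.1-8b-instant below is only an illustration; check Groq's documentation for current model IDs:

MODELS=`[{
  "name": "llama-3.1-8b-instant",
  "displayName": "Llama 3.1 8B Instant",
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.groq.com/openai/v1",
      "apiKey": "gsk_abc...xyz"
    }
  ]
}]`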
