Columns (dataset-wide value ranges):
name: string, lengths 10 to 47
display_name: string, lengths 5 to 41
short_display_name: string, lengths 5 to 41
description: string, lengths 3 to 534
creator_organization: string, lengths 4 to 22
access: string, 3 classes
todo: bool, 2 classes
release_date: date, 2019-10-23 to 2024-12-11
num_parameters: float64, 1 to 1,210B
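Each record that follows flattens one row of this schema: nine fields per model, in the column order above, with "null" marking a missing value. Below is a minimal sketch of how such a row could be represented and parsed; the `ModelRecord` dataclass and `parse_record` helper are illustrative names, not part of the dataset.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRecord:
    """One row of the table, fields in the column order listed above."""
    name: str                       # e.g. "huggingface/santacoder"
    display_name: str
    short_display_name: Optional[str]
    description: str
    creator_organization: str
    access: str                     # one of "open", "limited", "closed"
    todo: bool
    release_date: Optional[str]     # e.g. "2023/8/22"; None when missing
    num_parameters: Optional[float]

def parse_record(fields: list[str]) -> ModelRecord:
    """Build a ModelRecord from nine raw string fields ("null" marks a missing value)."""
    def opt(value: str) -> Optional[str]:
        return None if value == "null" else value

    params = opt(fields[8])
    return ModelRecord(
        name=fields[0],
        display_name=fields[1],
        short_display_name=opt(fields[2]),
        description=fields[3],
        creator_organization=fields[4],
        access=fields[5],
        todo=fields[6] == "true",
        release_date=opt(fields[7]),
        num_parameters=float(params.replace(",", "")) if params else None,
    )
```

Applied to the first record below, for example, this would yield access="open", todo=False, and num_parameters=None.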
huggingface/santacoder
SantaCoder (1.1B)
null
SantaCoder (1.1B parameters) model trained on the Python, Java, and JavaScript subset of The Stack (v1.1) ([model card](https://huggingface.co/bigcode/santacoder)).
BigCode
open
false
null
null
huggingface/starcoder
StarCoder (15.5B)
null
The StarCoder (15.5B parameter) model trained on 80+ programming languages from The Stack (v1.2) ([model card](https://huggingface.co/bigcode/starcoder)).
BigCode
open
false
null
null
HuggingFaceM4/idefics-80b
IDEFICS (80B)
null
IDEFICS (80B parameters) is an open-source model based on DeepMind's Flamingo. ([blog](https://huggingface.co/blog/idefics))
HuggingFace
open
false
2023/8/22
80,000,000,000
HuggingFaceM4/idefics-80b-instruct
IDEFICS instruct (80B)
null
IDEFICS instruct (80B parameters) is an open-source model based on DeepMind's Flamingo. ([blog](https://huggingface.co/blog/idefics))
HuggingFace
open
false
2023/8/22
80,000,000,000
HuggingFaceM4/idefics-9b
IDEFICS (9B)
null
IDEFICS (9B parameters) is an open-source model based on DeepMind's Flamingo. ([blog](https://huggingface.co/blog/idefics))
HuggingFace
open
false
2023/8/22
9,000,000,000
HuggingFaceM4/idefics-9b-instruct
IDEFICS instruct (9B)
null
IDEFICS instruct (9B parameters) is an open-source model based on DeepMind's Flamingo. ([blog](https://huggingface.co/blog/idefics))
HuggingFace
open
false
2023/8/22
9,000,000,000
HuggingFaceM4/idefics2-8b
IDEFICS 2 (8B)
IDEFICS 2 (8B)
IDEFICS 2 (8B parameters) is an open multimodal model that accepts arbitrary sequences of image and text inputs and produces text outputs. ([blog](https://huggingface.co/blog/idefics2)).
HuggingFace
open
false
2024/4/15
8,000,000,000
lightningai/lit-gpt
Lit-GPT
null
Lit-GPT is an optimized collection of open-source LLMs for finetuning and inference. It supports Falcon, Llama 2, Vicuna, LongChat, and other top-performing open-source large language models.
Lightning AI
open
false
2023/4/4
1
lmsys/vicuna-13b-v1.3
Vicuna v1.3 (13B)
null
Vicuna v1.3 (13B) is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
LMSYS
open
false
2023/6/22
13,000,000,000
lmsys/vicuna-13b-v1.5
Vicuna 13B (v1.5)
Vicuna 13B (v1.5)
Vicuna 13B (v1.5) is a large language model with 13 billion parameters. ([blog](https://lmsys.com/vicuna-13b-v1.5/))
LMSYS
open
false
2024/4/18
13,000,000,000
lmsys/vicuna-7b-v1.3
Vicuna v1.3 (7B)
null
Vicuna v1.3 (7B) is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
LMSYS
open
false
2023/6/22
7,000,000,000
lmsys/vicuna-7b-v1.5
Vicuna 7B (v1.5)
Vicuna 7B (v1.5)
Vicuna 7B (v1.5) is a large language model with 7 billion parameters. ([blog](https://lmsys.com/vicuna-7b-v1.5/))
LMSYS
open
false
2024/4/18
7,000,000,000
meta-llama/Llama-2-13b-chat-hf
Llama 2 13B Chat
Llama 2 13B Chat
Llama 2 13B Chat is a large language model with 13 billion parameters. ([blog](https://meta-llama.com/llama-2-13b-chat-hf/))
Meta Llama
open
false
2024/4/18
13,000,000,000
meta-llama/Llama-2-70b-chat-hf
Llama 2 70B Chat
Llama 2 70B Chat
Llama 2 70B Chat is a large language model with 70 billion parameters. ([blog](https://meta-llama.com/llama-2-70b-chat-hf/))
Meta Llama
open
false
2024/4/18
70,000,000,000
meta-llama/Llama-2-7b-chat-hf
Llama 2 7B Chat
Llama 2 7B Chat
Llama 2 7B Chat is a large language model with 7 billion parameters. ([blog](https://meta-llama.com/llama-2-7b-chat-hf/))
Meta Llama
open
false
2024/4/18
7,000,000,000
meta/llama-13b
LLaMA (13B)
null
LLaMA is a collection of foundation language models ranging from 7B to 65B parameters.
Meta
open
false
2023/2/24
13,000,000,000
meta/llama-2-13b
Llama 2 (13B)
null
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1.
Meta
open
false
2023/7/18
13,000,000,000
meta/llama-2-70b
Llama 2 (70B)
null
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1.
Meta
open
false
2023/7/18
70,000,000,000
meta/llama-2-7b
Llama 2 (7B)
null
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1.
Meta
open
false
2023/7/18
7,000,000,000
meta/llama-3-70b
Llama 3 (70B)
Llama 3 (70B)
Llama 3 is a family of language models that have been trained on more than 15 trillion tokens, and use Grouped-Query Attention (GQA) for improved inference scalability. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/))
Meta
open
false
2024/4/18
70,000,000,000
meta/llama-3-70b-chat
Llama 3 Instruct (70B)
Llama 3 Instruct (70B)
Llama 3 is a family of language models that have been trained on more than 15 trillion tokens, and use Grouped-Query Attention (GQA) for improved inference scalability. It used SFT, rejection sampling, PPO, and DPO for post-training. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/))
Meta
open
false
2024/4/18
70,000,000,000
meta/llama-3-8b
Llama 3 (8B)
Llama 3 (8B)
Llama 3 is a family of language models that have been trained on more than 15 trillion tokens, and use Grouped-Query Attention (GQA) for improved inference scalability. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/))
Meta
open
false
2024/4/18
8,000,000,000
meta/llama-3-8b-chat
Llama 3 Instruct (8B)
Llama 3 Instruct (8B)
Llama 3 is a family of language models that have been trained on more than 15 trillion tokens, and use Grouped-Query Attention (GQA) for improved inference scalability. It used SFT, rejection sampling, PPO, and DPO for post-training. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/))
Meta
open
false
2024/4/18
8,000,000,000
meta/llama-3.1-405b-instruct-turbo
Llama 3.1 Instruct Turbo (405B)
Llama 3.1 Instruct Turbo (405B)
Llama 3.1 (405B) is part of the Llama 3 family of dense Transformer models that natively support multilinguality, coding, reasoning, and tool usage. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/), [blog](https://ai.meta.com/blog/meta-llama-3-1/)) Turbo is Together's implementation, providing a near negligible difference in quality from the reference implementation with faster performance and lower cost, currently using FP8 quantization. ([blog](https://www.together.ai/blog/llama-31-quality))
Meta
open
false
2024/7/23
405,000,000,000
meta/llama-3.1-70b-instruct-turbo
Llama 3.1 Instruct Turbo (70B)
Llama 3.1 Instruct Turbo (70B)
Llama 3.1 (70B) is part of the Llama 3 family of dense Transformer models that natively support multilinguality, coding, reasoning, and tool usage. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/), [blog](https://ai.meta.com/blog/meta-llama-3-1/)) Turbo is Together's implementation, providing a near negligible difference in quality from the reference implementation with faster performance and lower cost, currently using FP8 quantization. ([blog](https://www.together.ai/blog/llama-31-quality))
Meta
open
false
2024/7/23
70,000,000,000
meta/llama-3.1-8b-instruct-turbo
Llama 3.1 Instruct Turbo (8B)
Llama 3.1 Instruct Turbo (8B)
Llama 3.1 (8B) is part of the Llama 3 family of dense Transformer models that natively support multilinguality, coding, reasoning, and tool usage. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/), [blog](https://ai.meta.com/blog/meta-llama-3-1/)) Turbo is Together's implementation, providing a near negligible difference in quality from the reference implementation with faster performance and lower cost, currently using FP8 quantization. ([blog](https://www.together.ai/blog/llama-31-quality))
Meta
open
false
2024/7/23
8,000,000,000
meta/llama-3.2-11b-vision-instruct-turbo
Llama 3.2 Vision Instruct Turbo (11B)
Llama 3.2 Vision Instruct Turbo (11B)
Llama 3.2 Vision is a collection of pretrained and instruction-tuned multimodal large language models (LLMs) for image reasoning, available in 11B and 90B sizes. ([blog](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)) Turbo is Together's implementation, providing a near negligible difference in quality from the reference implementation with faster performance and lower cost, currently using FP8 quantization. ([blog](https://www.together.ai/blog/llama-31-quality))
Meta
open
false
2024/9/25
10,700,000,000
meta/llama-3.2-90b-vision-instruct-turbo
Llama 3.2 Vision Instruct Turbo (90B)
Llama 3.2 Vision Instruct Turbo (90B)
Llama 3.2 Vision is a collection of pretrained and instruction-tuned multimodal large language models (LLMs) for image reasoning, available in 11B and 90B sizes. ([blog](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)) Turbo is Together's implementation, providing a near negligible difference in quality from the reference implementation with faster performance and lower cost, currently using FP8 quantization. ([blog](https://www.together.ai/blog/llama-31-quality))
Meta
open
false
2024/9/25
88,600,000,000
meta/llama-3.3-70b-instruct-turbo
Llama 3.3 Instruct Turbo (70B)
Llama 3.3 Instruct Turbo (70B)
Llama 3.3 (70B) is part of the Llama 3 family of dense Transformer models that natively support multilinguality, coding, reasoning, and tool usage. ([paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/)) Turbo is Together's implementation, providing a near negligible difference in quality from the reference implementation with faster performance and lower cost, currently using FP8 quantization. ([blog](https://www.together.ai/blog/llama-31-quality))
Meta
open
false
2024/12/6
70,000,000,000
meta/llama-30b
LLaMA (30B)
null
LLaMA is a collection of foundation language models ranging from 7B to 65B parameters.
Meta
open
false
2023/2/24
30,000,000,000
meta/llama-65b
LLaMA (65B)
null
LLaMA is a collection of foundation language models ranging from 7B to 65B parameters.
Meta
open
false
2023/2/24
65,000,000,000
meta/llama-7b
LLaMA (7B)
null
LLaMA is a collection of foundation language models ranging from 7B to 65B parameters.
Meta
open
false
2023/2/24
7,000,000,000
microsoft/llava-1.5-13b-hf
LLaVA 1.5 (13B)
LLaVA 1.5 (13B)
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. ([paper](https://arxiv.org/abs/2304.08485))
Microsoft
open
false
2023/10/5
13,000,000,000
microsoft/phi-2
Phi-2
Phi-2
Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value).
Microsoft
open
false
2023/10/5
2,700,000,000
microsoft/phi-3-medium-4k-instruct
Phi-3 (14B)
Phi-3 (14B)
Phi-3-Medium-4K-Instruct is a lightweight model trained with synthetic data and filtered publicly available website data with a focus on high-quality and reasoning dense properties. ([paper](https://arxiv.org/abs/2404.14219), [blog](https://azure.microsoft.com/en-us/blog/new-models-added-to-the-phi-3-family-available-on-microsoft-azure/))
Microsoft
open
false
2024/5/21
14,000,000,000
microsoft/phi-3-small-8k-instruct
Phi-3 (7B)
Phi-3 (7B)
Phi-3-Small-8K-Instruct is a lightweight model trained with synthetic data and filtered publicly available website data with a focus on high-quality and reasoning dense properties. ([paper](https://arxiv.org/abs/2404.14219), [blog](https://azure.microsoft.com/en-us/blog/new-models-added-to-the-phi-3-family-available-on-microsoft-azure/))
Microsoft
open
false
2024/5/21
7,000,000,000
microsoft/TNLGv2_530B
TNLG v2 (530B)
null
TNLG v2 (530B parameters) autoregressive language model trained on a filtered subset of the Pile and CommonCrawl ([paper](https://arxiv.org/pdf/2201.11990.pdf)).
Microsoft/NVIDIA
closed
false
2022/1/28
530,000,000,000
microsoft/TNLGv2_7B
TNLG v2 (6.7B)
null
TNLG v2 (6.7B parameters) autoregressive language model trained on a filtered subset of the Pile and CommonCrawl ([paper](https://arxiv.org/pdf/2201.11990.pdf)).
Microsoft/NVIDIA
closed
false
2022/1/28
6,700,000,000
mistralai/mistral-7b-instruct-v0.1
Mistral Instruct v0.1 (7B)
Mistral Instruct v0.1 (7B)
Mistral v0.1 Instruct 7B is a 7.3B parameter transformer model that uses Grouped-Query Attention (GQA) and Sliding-Window Attention (SWA). The instruct version was fine-tuned using publicly available conversation datasets. ([blog post](https://mistral.ai/news/announcing-mistral-7b/))
Mistral AI
open
false
2023/9/27
7,300,000,000
mistralai/Mistral-7B-Instruct-v0.2
Mistral 7B Instruct v0.2
Mistral 7B Instruct v0.2
Mistral 7B Instruct v0.2 is a 7B parameter language model with a 32K token sequence length. ([blog](https://mistral.ai/mistral-7b-instruct-v0-2/))
Mistral AI
open
false
2024/4/18
7,000,000,000
mistralai/mistral-7b-instruct-v0.3
Mistral Instruct v0.3 (7B)
Mistral Instruct v0.3 (7B)
Mistral v0.3 Instruct 7B is a 7.3B parameter transformer model that uses Grouped-Query Attention (GQA). Compared to v0.1, it has a 32k context window and no Sliding-Window Attention (SWA). ([blog post](https://mistral.ai/news/la-plateforme/))
Mistral AI
open
false
2024/5/22
7,300,000,000
mistralai/mistral-7b-v0.1
Mistral v0.1 (7B)
null
Mistral 7B is a 7.3B parameter transformer model that uses Grouped-Query Attention (GQA) and Sliding-Window Attention (SWA).
Mistral AI
open
false
2023/9/27
7,300,000,000
mistralai/Mistral-7B-v0.1
Mistral 7B v0.1
Mistral 7B v0.1
Mistral 7B v0.1 is a 7B parameter language model with a 32K token sequence length. ([blog](https://mistral.ai/mistral-7b-v0-1/))
Mistral AI
open
false
2024/4/18
7,000,000,000
mistralai/mistral-large-2402
Mistral Large (2402)
Mistral Large (2402)
Mistral Large is a multilingual model with a 32K tokens context window and function-calling capabilities. ([blog](https://mistral.ai/news/mistral-large/))
Mistral AI
limited
false
2024/2/26
null
mistralai/mistral-large-2407
Mistral Large 2 (2407)
Mistral Large 2 (2407)
Mistral Large 2 is a 123 billion parameter model that has a 128k context window and supports dozens of languages and 80+ coding languages. ([blog](https://mistral.ai/news/mistral-large-2407/))
Mistral AI
open
false
2024/7/24
123,000,000,000
mistralai/mistral-medium-2312
Mistral Medium (2312)
Mistral Medium (2312)
Mistral is a transformer model that uses Grouped-Query Attention (GQA) and Sliding-Window Attention (SWA).
Mistral AI
limited
false
2023/12/11
null
mistralai/mistral-small-2402
Mistral Small (2402)
Mistral Small (2402)
Mistral Small is a multilingual model with a 32K tokens context window and function-calling capabilities. ([blog](https://mistral.ai/news/mistral-large/))
Mistral AI
limited
false
2024/2/26
null
mistralai/mixtral-8x22b
Mixtral (8x22B)
Mixtral (8x22B)
Mistral AI's mixture-of-experts model that uses 39B active parameters out of 141B ([blog post](https://mistral.ai/news/mixtral-8x22b/)).
Mistral AI
open
false
2024/4/10
176,000,000,000
mistralai/mixtral-8x22b-instruct-v0.1
Mixtral Instruct (8x22B)
Mixtral Instruct (8x22B)
Mistral AI's mixture-of-experts model that uses 39B active parameters out of 141B ([blog post](https://mistral.ai/news/mixtral-8x22b/)).
Mistral AI
open
false
2024/4/10
176,000,000,000
mistralai/mixtral-8x7b-32kseqlen
Mixtral (8x7B 32K seqlen)
Mixtral (8x7B 32K seqlen)
Mixtral is a mixture-of-experts model that has 46.7B total parameters but only uses 12.9B parameters per token. ([blog post](https://mistral.ai/news/mixtral-of-experts/), [tweet](https://twitter.com/MistralAI/status/1733150512395038967)).
Mistral AI
open
false
2023/12/8
46,700,000,000
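The gap between Mixtral's total and per-token parameter counts quoted above comes from its mixture-of-experts routing: every token runs through the shared attention and embedding weights, but only through the feed-forward blocks of the 2 experts (out of 8) selected by the router. A minimal back-of-the-envelope sketch, assuming roughly 1.6B shared parameters and roughly 5.6B parameters per expert feed-forward stack (illustrative estimates, not figures from the source):

```python
# Back-of-the-envelope for Mixtral 8x7B's quoted parameter counts.
# The per-component sizes below are assumed estimates, not official figures.
shared_params = 1.6e9      # attention, embeddings, norms: shared by every token (assumed)
per_expert_ffn = 5.63e9    # feed-forward parameters of a single expert (assumed)
num_experts = 8
experts_per_token = 2      # Mixtral routes each token to its top-2 experts

total_params = shared_params + num_experts * per_expert_ffn         # ~46.6B (46.7B quoted)
active_params = shared_params + experts_per_token * per_expert_ffn  # ~12.9B (12.9B quoted)

print(f"total ~ {total_params / 1e9:.1f}B, active per token ~ {active_params / 1e9:.1f}B")
```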
mistralai/mixtral-8x7b-instruct-v0.1
Mixtral Instruct (8x7B)
Mixtral Instruct (8x7B)
Mixtral Instruct (8x7B) is a version of Mixtral (8x7B) that was optimized through supervised fine-tuning and direct preference optimisation (DPO) for careful instruction following. ([blog post](https://mistral.ai/news/mixtral-of-experts/)).
Mistral AI
open
false
2023/12/11
46,700,000,000
mistralai/open-mistral-nemo-2407
Mistral NeMo (2407)
Mistral NeMo (2407)
Mistral NeMo is a multilingual 12B model with a large context window of 128K tokens. ([blog](https://mistral.ai/news/mistral-nemo/))
Mistral AI
open
false
2024/7/18
null
mistralai/pixtral-12b-2409
Mistral Pixtral (2409)
Mistral Pixtral (2409)
Mistral Pixtral 12B is the first multimodal Mistral model for image understanding. ([blog](https://mistral.ai/news/pixtral-12b/))
Mistral AI
open
false
2024/9/17
null
mosaicml/mpt-30b
MPT (30B)
null
MPT (30B) is a Transformer trained from scratch on 1T tokens of text and code.
MosaicML
open
false
2023/6/22
30,000,000,000
mosaicml/mpt-30b-chat
MPT-Chat (30B)
null
MPT-Chat (30B) is a chatbot-like model for dialogue generation. It is built by finetuning MPT (30B), a Transformer trained from scratch on 1T tokens of text and code.
MosaicML
open
true
2023/6/22
30,000,000,000
mosaicml/mpt-7b
MPT (7B)
null
MPT (7B) is a Transformer trained from scratch on 1T tokens of text and code.
MosaicML
open
false
2023/5/5
6,700,000,000
mosaicml/mpt-7b-chat
MPT-Chat (7B)
null
MPT-Chat (7B) is a chatbot-like model for dialogue generation. It is built by finetuning MPT (7B), a Transformer trained from scratch on 1T tokens of text and code.
MosaicML
open
true
2023/5/5
6,700,000,000
mosaicml/mpt-instruct-30b
MPT-Instruct (30B)
null
MPT-Instruct (30B) is a model for short-form instruction following. It is built by finetuning MPT (30B), a Transformer trained from scratch on 1T tokens of text and code.
MosaicML
open
false
2023/6/22
30,000,000,000
mosaicml/mpt-instruct-7b
MPT-Instruct (7B)
null
MPT-Instruct (7B) is a model for short-form instruction following. It is built by finetuning MPT (7B), a Transformer trained from scratch on 1T tokens of text and code.
MosaicML
open
false
2023/5/5
6,700,000,000
neurips/local
Local service
null
Local competition service
neurips
open
false
2021/12/1
1
NousResearch/Nous-Capybara-7B-V1.9
Nous Capybara 7B V1.9
Nous Capybara 7B V1.9
Nous Capybara 7B V1.9 is a 7B parameter language model with a 32K token sequence length. ([blog](https://nousresearch.com/nous-capybara-7b-v1-9/))
Nous Research
open
false
2024/4/18
7,000,000,000
NousResearch/Nous-Hermes-2-Mistral-7B-DPO
Nous Hermes 2 Mistral 7B DPO
Nous Hermes 2 Mistral 7B DPO
Nous Hermes 2 Mistral 7B DPO is a 7B parameter language model with a 32K token sequence length. ([blog](https://nousresearch.com/nous-hermes-2-mistral-7b-dpo/))
Nous Research
open
false
2024/4/18
7,000,000,000
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
Nous Hermes 2 Mixtral 8x7B DPO
Nous Hermes 2 Mixtral 8x7B DPO
Nous Hermes 2 Mixtral 8x7B DPO is a language model with 8x7B parameters and a 32K token sequence length. ([blog](https://nousresearch.com/nous-hermes-2-mixtral-8x7b-dpo/))
Nous Research
open
false
2024/4/18
46,700,000,000
NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
Nous Hermes 2 Mixtral 8x7B SFT
Nous Hermes 2 Mixtral 8x7B SFT
Nous Hermes 2 Mixtral 8x7B SFT is a language model with 8x7B parameters and a 32K token sequence length. ([blog](https://nousresearch.com/nous-hermes-2-mixtral-8x7b-sft/))
Nous Research
open
false
2024/4/18
46,700,000,000
NousResearch/Nous-Hermes-2-Yi-34B
Nous Hermes 2 Yi 34B
Nous Hermes 2 Yi 34B
Nous Hermes 2 Yi 34B is a 34B parameter language model with a 32K token sequence length. ([blog](https://nousresearch.com/nous-hermes-2-yi-34b/))
Nous Research
open
false
2024/4/18
34,000,000,000
NousResearch/Nous-Hermes-Llama-2-7b
Nous Hermes Llama 2 7B
Nous Hermes Llama 2 7B
Nous Hermes Llama 2 7B is a 7B parameter language model with a 32K token sequence length. ([blog](https://nousresearch.com/nous-hermes-llama-2-7b/))
Nous Research
open
false
2024/4/18
7,000,000,000
NousResearch/Nous-Hermes-Llama2-13b
Nous Hermes Llama 2 13B
Nous Hermes Llama 2 13B
Nous Hermes Llama 2 13B is a 13B parameter language model with a 32K token sequence length. ([blog](https://nousresearch.com/nous-hermes-llama-2-13b/))
Nous Research
open
false
2024/4/18
13,000,000,000
nvidia/megatron-gpt2
Megatron GPT2
null
GPT-2 implemented in Megatron-LM ([paper](https://arxiv.org/abs/1909.08053)).
NVIDIA
open
true
null
null
Open-Orca/Mistral-7B-OpenOrca
Mistral 7B OpenOrca
Mistral 7B OpenOrca
Mistral 7B OpenOrca is a 7B parameter language model with a 32K token sequence length. ([blog](https://openorca.com/mistral-7b-openorca/))
Open Orca
open
false
2024/4/18
7,000,000,000
openai/ada
ada (350M)
null
Original GPT-3 (350M parameters) autoregressive language model ([paper](https://arxiv.org/pdf/2005.14165.pdf), [docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2020/5/28
350,000,000
openai/babbage
babbage (1.3B)
null
Original GPT-3 (1.3B parameters) autoregressive language model ([paper](https://arxiv.org/pdf/2005.14165.pdf), [docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2020/5/28
1,300,000,000
openai/code-cushman-001
code-cushman-001 (12B)
null
Codex-style model that is a stronger, multilingual version of the Codex (12B) model in the [Codex paper](https://arxiv.org/pdf/2107.03374.pdf).
OpenAI
limited
false
null
null
openai/code-davinci-001
code-davinci-001
null
code-davinci-001 model
OpenAI
limited
true
null
null
openai/code-davinci-002
code-davinci-002
null
Codex-style model that is designed for pure code-completion tasks ([docs](https://beta.openai.com/docs/models/codex)).
OpenAI
limited
false
null
null
openai/curie
curie (6.7B)
null
Original GPT-3 (6.7B parameters) autoregressive language model ([paper](https://arxiv.org/pdf/2005.14165.pdf), [docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2020/5/28
6,700,000,000
openai/davinci
davinci (175B)
null
Original GPT-3 (175B parameters) autoregressive language model ([paper](https://arxiv.org/pdf/2005.14165.pdf), [docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2020/5/28
175,000,000,000
openai/gpt-3.5-turbo-0125
GPT-3.5 Turbo (0125)
GPT-3.5 Turbo (0125)
Sibling model of text-davinci-003 that is optimized for chat but works well for traditional completions tasks as well. Snapshot from 2024-01-25.
OpenAI
limited
false
2024/1/25
null
openai/gpt-3.5-turbo-0301
GPT-3.5 Turbo (0301)
GPT-3.5 Turbo (0301)
Sibling model of text-davinci-003 that is optimized for chat but works well for traditional completions tasks as well. Snapshot from 2023-03-01.
OpenAI
limited
false
2023/3/1
null
openai/gpt-3.5-turbo-0613
GPT-3.5 Turbo (0613)
GPT-3.5 Turbo (0613)
Sibling model of text-davinci-003 that is optimized for chat but works well for traditional completions tasks as well. Snapshot from 2023-06-13.
OpenAI
limited
false
2023/6/13
null
openai/gpt-3.5-turbo-1106
GPT-3.5 Turbo (1106)
GPT-3.5 Turbo (1106)
Sibling model of text-davinci-003 that is optimized for chat but works well for traditional completions tasks as well. Snapshot from 2023-11-06.
OpenAI
limited
false
2023/11/6
null
openai/gpt-3.5-turbo-16k-0613
gpt-3.5-turbo-16k-0613
null
Sibling model of text-davinci-003 that is optimized for chat but works well for traditional completions tasks as well. Snapshot from 2023-06-13 with a longer context length of 16,384 tokens.
OpenAI
limited
false
2023/6/13
null
openai/gpt-4-0314
gpt-4-0314
null
GPT-4 is a large multimodal model (currently only accepting text inputs and emitting text outputs) that is optimized for chat but works well for traditional completions tasks. Snapshot of gpt-4 from March 14th 2023.
OpenAI
limited
false
2023/3/14
null
openai/gpt-4-0613
GPT-4 (0613)
GPT-4 (0613)
GPT-4 is a large multimodal model (currently only accepting text inputs and emitting text outputs) that is optimized for chat but works well for traditional completions tasks. Snapshot of gpt-4 from 2023-06-13.
OpenAI
limited
false
2023/6/13
null
openai/gpt-4-1106-preview
gpt-4-1106-preview
null
GPT-4 Turbo (preview) is a large multimodal model that is optimized for chat but works well for traditional completions tasks. The model is cheaper and faster than the original GPT-4 model. Preview snapshot from November 6, 2023.
OpenAI
limited
false
2023/11/6
null
openai/gpt-4-1106-vision-preview
GPT-4V (1106 preview)
GPT-4V (1106 preview)
GPT-4V is a large multimodal model that accepts both text and images and is optimized for chat ([model card](https://openai.com/research/gpt-4v-system-card)).
OpenAI
limited
false
2023/11/6
null
openai/gpt-4-32k-0314
gpt-4-32k-0314
null
GPT-4 is a large multimodal model (currently only accepting text inputs and emitting text outputs) that is optimized for chat but works well for traditional completions tasks. Snapshot of gpt-4 with a longer context length of 32,768 tokens from March 14th 2023.
OpenAI
limited
false
2023/3/14
null
openai/gpt-4-32k-0613
gpt-4-32k-0613
null
GPT-4 is a large multimodal model (currently only accepting text inputs and emitting text outputs) that is optimized for chat but works well for traditional completions tasks. Snapshot of gpt-4 with a longer context length of 32,768 tokens from 2023-06-13.
OpenAI
limited
false
2023/6/13
null
openai/gpt-4-turbo-2024-04-09
GPT-4 Turbo (2024-04-09)
GPT-4 Turbo (2024-04-09)
GPT-4 Turbo (2024-04-09) is a large multimodal model that is optimized for chat but works well for traditional completions tasks. The model is cheaper and faster than the original GPT-4 model. Snapshot from 2024-04-09.
OpenAI
limited
false
2024/4/9
null
openai/gpt-4-vision-preview
GPT-4V (1106 preview)
GPT-4V (1106 preview)
GPT-4V is a large multimodal model that accepts both text and images and is optimized for chat ([model card](https://openai.com/research/gpt-4v-system-card)).
OpenAI
limited
false
2023/11/6
null
openai/gpt-4o-2024-05-13
GPT-4o (2024-05-13)
GPT-4o (2024-05-13)
GPT-4o (2024-05-13) is a large multimodal model that accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. ([blog](https://openai.com/index/hello-gpt-4o/))
OpenAI
limited
false
2024/5/13
null
openai/gpt-4o-2024-08-06
GPT-4o (2024-08-06)
GPT-4o (2024-08-06)
GPT-4o (2024-08-06) is a large multimodal model that accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. ([blog](https://openai.com/index/introducing-structured-outputs-in-the-api/))
OpenAI
limited
false
2024/8/6
null
openai/gpt-4o-mini-2024-07-18
GPT-4o mini (2024-07-18)
GPT-4o mini (2024-07-18)
GPT-4o mini (2024-07-18) is a multimodal model with a context window of 128K tokens and improved handling of non-English text. ([blog](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/))
OpenAI
limited
false
2024/7/18
null
openai/text-ada-001
text-ada-001
null
text-ada-001 model that involves supervised fine-tuning on human-written demonstrations ([docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2022/1/27
350,000,000
openai/text-babbage-001
text-babbage-001
null
text-babbage-001 model that involves supervised fine-tuning on human-written demonstrations ([docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2022/1/27
1,300,000,000
openai/text-curie-001
text-curie-001
null
text-curie-001 model that involves supervised fine-tuning on human-written demonstrations ([docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2022/1/27
6,700,000,000
openai/text-davinci-001
text-davinci-001
null
text-davinci-001 model that involves supervised fine-tuning on human-written demonstrations ([docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
true
2022/1/27
175,000,000,000
openai/text-davinci-002
text-davinci-002
null
text-davinci-002 model that involves supervised fine-tuning on human-written demonstrations. Derived from code-davinci-002 ([docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2022/1/27
175,000,000,000
openai/text-davinci-003
text-davinci-003
null
text-davinci-003 model that involves reinforcement learning (PPO) with reward models. Derived from text-davinci-002 ([docs](https://beta.openai.com/docs/model-index-for-researchers)).
OpenAI
limited
false
2022/11/28
175,000,000,000
openchat/openchat-3.5-1210
OpenChat 3.5 (1210)
OpenChat 3.5 (1210)
OpenChat 3.5 (1210) is an open-source language model with 7 billion parameters, fine-tuned from Mistral 7B. ([blog](https://openchat.com/openchat-3.5-1210/))
OpenChat
open
false
2024/4/18
7,000,000,000
openthaigpt/openthaigpt-1.0.0-13b-chat
OpenThaiGPT v1.0.0 (13B)
OpenThaiGPT v1.0.0 (13B)
OpenThaiGPT v1.0.0 (13B) is a Thai language chat model based on Llama 2 that has been specifically fine-tuned for Thai instructions and enhanced by incorporating over 10,000 of the most commonly used Thai words into the dictionary. ([blog post](https://openthaigpt.aieat.or.th/openthaigpt-1.0.0-less-than-8-apr-2024-greater-than))
OpenThaiGPT
open
false
2024/4/8
13,000,000,000