Dataset Viewer
| name | display_name | short_display_name | description | creator_organization | access | todo | release_date | num_parameters |
|---|---|---|---|---|---|---|---|---|
| 01-ai/yi-34b | Yi (34B) | null | The Yi models are large language models trained from scratch by developers at 01.AI. | 01.AI | open | false | 2023/11/2 | 34,000,000,000 |
| 01-ai/yi-34b-chat | Yi Chat (34B) | Yi Chat (34B) | The Yi models are large language models trained from scratch by developers at 01.AI. | 01.AI | open | false | 2023/11/23 | 34,000,000,000 |
| 01-ai/yi-6b | Yi (6B) | null | The Yi models are large language models trained from scratch by developers at 01.AI. | 01.AI | open | false | 2023/11/2 | 6,000,000,000 |
| 01-ai/yi-large-preview | Yi Large (Preview) | Yi Large (Preview) | The Yi models are large language models trained from scratch by developers at 01.AI. ([tweet](https://x.com/01AI_Yi/status/1789894091620458667)) | 01.AI | limited | false | 2024/5/12 | null |
| ai21/j1-grande | J1-Grande v1 (17B) | null | Jurassic-1 Grande (17B parameters) with a "few tweaks" to the training process ([docs](https://studio.ai21.com/docs/jurassic1-language-models/), [tech report](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf)). | AI21 Labs | limited | false | 2022/5/3 | 17,000,000,000 |
| ai21/j1-grande-v2-beta | J1-Grande v2 beta (17B) | null | Jurassic-1 Grande v2 beta (17B parameters) | AI21 Labs | limited | false | 2022/10/28 | 17,000,000,000 |
| ai21/j1-jumbo | J1-Jumbo v1 (178B) | null | Jurassic-1 Jumbo (178B parameters) ([docs](https://studio.ai21.com/docs/jurassic1-language-models/), [tech report](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf)). | AI21 Labs | limited | false | 2021/8/11 | 178,000,000,000 |
| ai21/j1-large | J1-Large v1 (7.5B) | null | Jurassic-1 Large (7.5B parameters) ([docs](https://studio.ai21.com/docs/jurassic1-language-models/), [tech report](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf)). | AI21 Labs | limited | false | 2021/8/11 | 7,500,000,000 |
| ai21/j2-grande | Jurassic-2 Grande (17B) | null | Jurassic-2 Grande (17B parameters) ([docs](https://www.ai21.com/blog/introducing-j2)) | AI21 Labs | limited | false | 2023/3/9 | 17,000,000,000 |
| ai21/j2-jumbo | Jurassic-2 Jumbo (178B) | null | Jurassic-2 Jumbo (178B parameters) ([docs](https://www.ai21.com/blog/introducing-j2)) | AI21 Labs | limited | false | 2023/3/9 | 178,000,000,000 |
| ai21/j2-large | Jurassic-2 Large (7.5B) | null | Jurassic-2 Large (7.5B parameters) ([docs](https://www.ai21.com/blog/introducing-j2)) | AI21 Labs | limited | false | 2023/3/9 | 7,500,000,000 |
| ai21/jamba-1.5-large | Jamba 1.5 Large | Jamba 1.5 Large | Jamba 1.5 Large is a long-context, hybrid SSM-Transformer instruction following foundation model that is optimized for function calling, structured output, and grounded generation. ([blog](https://www.ai21.com/blog/announcing-jamba-model-family)) | AI21 Labs | open | false | 2024/8/22 | 399,000,000,000 |
| ai21/jamba-1.5-mini | Jamba 1.5 Mini | Jamba 1.5 Mini | Jamba 1.5 Mini is a long-context, hybrid SSM-Transformer instruction following foundation model that is optimized for function calling, structured output, and grounded generation. ([blog](https://www.ai21.com/blog/announcing-jamba-model-family)) | AI21 Labs | open | false | 2024/8/22 | 51,600,000,000 |
| ai21/jamba-instruct | Jamba Instruct | Jamba Instruct | Jamba Instruct is an instruction tuned version of Jamba, which uses a hybrid Transformer-Mamba mixture-of-experts (MoE) architecture that interleaves blocks of Transformer and Mamba layers. ([blog](https://www.ai21.com/blog/announcing-jamba-instruct)) | AI21 Labs | limited | false | 2024/5/2 | 52,000,000,000 |
| aisingapore/llama3-8b-cpt-sea-lionv2-base | Llama 3 CPT SEA-Lion v2 (8B) | Llama 3 CPT SEA-Lion v2 (8B) | Llama 3 CPT SEA-Lion v2 (8B) is a multilingual model which was continually pre-trained on 48B additional tokens, including tokens in Southeast Asian languages. | AI Singapore | open | false | 2024/7/31 | 8,030,000,000 |
| aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct | Llama 3 CPT SEA-Lion v2.1 Instruct (8B) | Llama 3 CPT SEA-Lion v2.1 Instruct (8B) | Llama 3 CPT SEA-Lion v2.1 Instruct (8B) is a multilingual model which has been fine-tuned with around 100,000 English instruction-completion pairs alongside a smaller pool of around 50,000 instruction-completion pairs from other Southeast Asian languages, such as Indonesian, Thai and Vietnamese. | AI Singapore | open | false | 2024/8/21 | 8,030,000,000 |
| aisingapore/sea-lion-7b-instruct | SEA-LION Instruct (7B) | SEA-LION Instruct (7B) | SEA-LION is a collection of language models which has been pretrained and instruct-tuned on languages from the Southeast Asia region. It utilizes the MPT architecture and a custom SEABPETokenizer for tokenization. | AI Singapore | open | false | 2023/2/24 | 7,000,000,000 |
| AlephAlpha/luminous-base | Luminous Base (13B) | null | Luminous Base (13B parameters) ([docs](https://docs.aleph-alpha.com/docs/introduction/luminous/)) | Aleph Alpha | limited | false | 2022/1/1 | 13,000,000,000 |
| AlephAlpha/luminous-extended | Luminous Extended (30B) | null | Luminous Extended (30B parameters) ([docs](https://docs.aleph-alpha.com/docs/introduction/luminous/)) | Aleph Alpha | limited | false | 2022/1/1 | 30,000,000,000 |
| AlephAlpha/luminous-supreme | Luminous Supreme (70B) | null | Luminous Supreme (70B parameters) ([docs](https://docs.aleph-alpha.com/docs/introduction/luminous/)) | Aleph Alpha | limited | false | 2022/1/1 | 70,000,000,000 |
| allenai/olmo-1.7-7b | OLMo 1.7 (7B) | OLMo 1.7 (7B) | OLMo is a series of Open Language Models trained on the Dolma dataset. The instruct versions were trained on the Tulu SFT mixture and a cleaned version of the UltraFeedback dataset. | Allen Institute for AI | open | false | 2024/4/17 | 7,000,000,000 |
| allenai/olmo-7b | OLMo (7B) | OLMo (7B) | OLMo is a series of Open Language Models trained on the Dolma dataset. | Allen Institute for AI | open | false | 2024/2/1 | 7,000,000,000 |
| anthropic/claude-2.0 | Anthropic Claude 2.0 | null | Claude 2.0 is a general purpose large language model developed by Anthropic. It uses a transformer architecture and is trained via unsupervised learning, RLHF, and Constitutional AI (including both a supervised and Reinforcement Learning (RL) phase). ([model card](https://efficient-manatee.files.svdcdn.com/production/images/Model-Card-Claude-2.pdf)) | Anthropic | limited | false | 2023/7/11 | null |
| anthropic/claude-2.1 | Anthropic Claude 2.1 | null | Claude 2.1 is a general purpose large language model developed by Anthropic. It uses a transformer architecture and is trained via unsupervised learning, RLHF, and Constitutional AI (including both a supervised and Reinforcement Learning (RL) phase). ([model card](https://efficient-manatee.files.svdcdn.com/production/images/Model-Card-Claude-2.pdf)) | Anthropic | limited | false | 2023/11/21 | null |
| anthropic/claude-3-5-haiku-20241022 | Claude 3.5 Haiku (20241022) | Claude 3.5 Haiku (20241022) | Claude 3.5 Haiku is a Claude 3 family model which matches the performance of Claude 3 Opus at a similar speed to the previous generation of Haiku ([blog](https://www.anthropic.com/news/3-5-models-and-computer-use)). | Anthropic | limited | false | 2024/11/4 | null |
| anthropic/claude-3-5-sonnet-20240620 | Claude 3.5 Sonnet (20240620) | Claude 3.5 Sonnet (20240620) | Claude 3.5 Sonnet is a Claude 3 family model which outperforms Claude 3 Opus while operating faster and at a lower cost. ([blog](https://www.anthropic.com/news/claude-3-5-sonnet)) | Anthropic | limited | false | 2024/6/20 | null |
| anthropic/claude-3-5-sonnet-20241022 | Claude 3.5 Sonnet (20241022) | Claude 3.5 Sonnet (20241022) | Claude 3.5 Sonnet is a Claude 3 family model which outperforms Claude 3 Opus while operating faster and at a lower cost ([blog](https://www.anthropic.com/news/claude-3-5-sonnet)). This is an upgraded snapshot released on 2024-10-22 ([blog](https://www.anthropic.com/news/3-5-models-and-computer-use)). | Anthropic | limited | false | 2024/10/22 | null |
| anthropic/claude-3-haiku-20240307 | Claude 3 Haiku (20240307) | Claude 3 Haiku (20240307) | Claude 3 is a family of models that possess vision and multilingual capabilities. They were trained with various methods such as unsupervised learning and Constitutional AI ([blog](https://www.anthropic.com/news/claude-3-family)). | Anthropic | limited | false | 2024/3/13 | null |
| anthropic/claude-3-opus-20240229 | Claude 3 Opus (20240229) | Claude 3 Opus (20240229) | Claude 3 is a family of models that possess vision and multilingual capabilities. They were trained with various methods such as unsupervised learning and Constitutional AI ([blog](https://www.anthropic.com/news/claude-3-family)). | Anthropic | limited | false | 2024/3/4 | null |
| anthropic/claude-3-sonnet-20240229 | Claude 3 Sonnet (20240229) | Claude 3 Sonnet (20240229) | Claude 3 is a family of models that possess vision and multilingual capabilities. They were trained with various methods such as unsupervised learning and Constitutional AI ([blog](https://www.anthropic.com/news/claude-3-family)). | Anthropic | limited | false | 2024/3/4 | null |
| anthropic/claude-instant-1.2 | Anthropic Claude Instant 1.2 | null | A lightweight version of Claude, a model trained using reinforcement learning from human feedback ([docs](https://www.anthropic.com/index/introducing-claude)). | Anthropic | limited | false | 2023/8/9 | null |
| anthropic/claude-instant-v1 | Anthropic Claude Instant V1 | null | A lightweight version of Claude, a model trained using reinforcement learning from human feedback ([docs](https://www.anthropic.com/index/introducing-claude)). | Anthropic | limited | false | 2023/3/17 | null |
| anthropic/claude-v1.3 | Anthropic Claude v1.3 | null | A model trained using reinforcement learning from human feedback ([docs](https://www.anthropic.com/index/introducing-claude)). | Anthropic | limited | false | 2023/3/17 | null |
| anthropic/stanford-online-all-v4-s3 | Anthropic-LM v4-s3 (52B) | null | A 52B parameter language model, trained using reinforcement learning from human feedback ([paper](https://arxiv.org/pdf/2204.05862.pdf)). | Anthropic | closed | false | 2021/12/1 | 52,000,000,000 |
| Austism/chronos-hermes-13b | Chronos Hermes 13B | Chronos Hermes 13B | Chronos Hermes 13B is a large language model with 13 billion parameters. ([blog](https://chronos.ai/chronos-hermes-13b/)) | Chronos | open | false | 2024/4/18 | 13,000,000,000 |
| codellama/CodeLlama-13b-Instruct-hf | CodeLlama 13B Instruct | CodeLlama 13B Instruct | CodeLlama 13B Instruct is a large language model with 13 billion parameters. ([blog](https://codellama.com/codellama-13b-instruct/)) | CodeLlama | open | false | 2024/4/18 | 13,000,000,000 |
| codellama/CodeLlama-34b-Instruct-hf | CodeLlama 34B Instruct | CodeLlama 34B Instruct | CodeLlama 34B Instruct is a large language model with 34 billion parameters. ([blog](https://codellama.com/codellama-34b-instruct/)) | CodeLlama | open | false | 2024/4/18 | 34,000,000,000 |
| codellama/CodeLlama-70b-Instruct-hf | CodeLlama 70B Instruct | CodeLlama 70B Instruct | CodeLlama 70B Instruct is a large language model with 70 billion parameters. ([blog](https://codellama.com/codellama-70b-instruct/)) | CodeLlama | open | false | 2024/4/18 | 70,000,000,000 |
| codellama/CodeLlama-7b-Instruct-hf | CodeLlama 7B Instruct | CodeLlama 7B Instruct | CodeLlama 7B Instruct is a large language model with 7 billion parameters. ([blog](https://codellama.com/codellama-7b-instruct/)) | CodeLlama | open | false | 2024/4/18 | 7,000,000,000 |
| cognitivecomputations/dolphin-2.5-mixtral-8x7b | Dolphin 2.5 Mixtral 8x7B | Dolphin 2.5 Mixtral 8x7B | Dolphin 2.5 Mixtral 8x7B is a fine-tuned version of the Mixtral 8x7B mixture-of-experts model with a 32K token sequence length. ([blog](https://cognitivecomputations.com/dolphin-2.5-mixtral-8x7b/)) | Cognitive Computations | open | false | 2024/4/18 | 46,700,000,000 |
| cohere/command | Cohere Command | null | Command is Cohere’s flagship text generation model. It is trained to follow user commands and to be instantly useful in practical business applications. ([docs](https://docs.cohere.com/reference/generate), [changelog](https://docs.cohere.com/changelog)) | Cohere | limited | false | 2023/9/29 | null |
| cohere/command-light | Cohere Command Light | null | Command is Cohere’s flagship text generation model. It is trained to follow user commands and to be instantly useful in practical business applications. ([docs](https://docs.cohere.com/reference/generate), [changelog](https://docs.cohere.com/changelog)) | Cohere | limited | false | 2023/9/29 | null |
| cohere/command-medium-beta | Cohere Command beta (6.1B) | null | Cohere Command beta (6.1B parameters) is fine-tuned from the medium model to respond well with instruction-like prompts ([details](https://docs.cohere.ai/docs/command-beta)). | Cohere | limited | false | 2022/11/8 | 6,100,000,000 |
| cohere/command-r | Command R | Command R | Command R is a multilingual 35B parameter model with a context length of 128K that has been trained with conversational tool use capabilities. | Cohere | open | false | 2024/3/11 | 35,000,000,000 |
| cohere/command-r-plus | Command R Plus | Command R Plus | Command R+ is a multilingual 104B parameter model with a context length of 128K that has been trained with conversational tool use capabilities. | Cohere | open | false | 2024/4/4 | 104,000,000,000 |
| cohere/command-xlarge-beta | Cohere Command beta (52.4B) | null | Cohere Command beta (52.4B parameters) is fine-tuned from the XL model to respond well with instruction-like prompts ([details](https://docs.cohere.ai/docs/command-beta)). | Cohere | limited | false | 2022/11/8 | 52,400,000,000 |
| cohere/large-20220720 | Cohere large v20220720 (13.1B) | null | Cohere large v20220720 (13.1B parameters), which is deprecated by Cohere as of December 2, 2022. | Cohere | limited | false | 2022/7/20 | 13,100,000,000 |
| cohere/medium-20220720 | Cohere medium v20220720 (6.1B) | null | Cohere medium v20220720 (6.1B parameters) | Cohere | limited | false | 2022/7/20 | 6,100,000,000 |
| cohere/medium-20221108 | Cohere medium v20221108 (6.1B) | null | Cohere medium v20221108 (6.1B parameters) | Cohere | limited | false | 2022/11/8 | 6,100,000,000 |
| cohere/small-20220720 | Cohere small v20220720 (410M) | null | Cohere small v20220720 (410M parameters), which is deprecated by Cohere as of December 2, 2022. | Cohere | limited | false | 2022/7/20 | 410,000,000 |
| cohere/xlarge-20220609 | Cohere xlarge v20220609 (52.4B) | null | Cohere xlarge v20220609 (52.4B parameters) | Cohere | limited | false | 2022/6/9 | 52,400,000,000 |
| cohere/xlarge-20221108 | Cohere xlarge v20221108 (52.4B) | null | Cohere xlarge v20221108 (52.4B parameters) | Cohere | limited | false | 2022/11/8 | 52,400,000,000 |
| damo/seallm-7b-v2 | SeaLLM v2 (7B) | SeaLLM v2 (7B) | SeaLLM v2 is a multilingual LLM for Southeast Asian (SEA) languages trained from Mistral (7B). ([website](https://damo-nlp-sg.github.io/SeaLLMs/)) | Alibaba DAMO Academy | open | false | 2024/2/2 | 7,000,000,000 |
| damo/seallm-7b-v2.5 | SeaLLM v2.5 (7B) | SeaLLM v2.5 (7B) | SeaLLM is a multilingual LLM for Southeast Asian (SEA) languages trained from Gemma (7B). ([website](https://damo-nlp-sg.github.io/SeaLLMs/)) | Alibaba DAMO Academy | open | false | 2024/4/12 | 7,000,000,000 |
| databricks/dbrx-instruct | DBRX Instruct | DBRX Instruct | DBRX is a large language model with a fine-grained mixture-of-experts (MoE) architecture that uses 16 experts and chooses 4. It has 132B total parameters, of which 36B parameters are active on any input. ([blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm)) | Databricks | open | false | 2024/3/27 | 132,000,000,000 |
| databricks/dolly-v2-12b | Dolly V2 (12B) | null | Dolly V2 (12B) is an instruction-following large language model trained on the Databricks machine learning platform. It is based on pythia-12b. | Databricks | open | true | 2023/4/12 | 11,327,027,200 |
| databricks/dolly-v2-3b | Dolly V2 (3B) | null | Dolly V2 (3B) is an instruction-following large language model trained on the Databricks machine learning platform. It is based on pythia-2.8b. | Databricks | open | true | 2023/4/12 | 2,517,652,480 |
| databricks/dolly-v2-7b | Dolly V2 (7B) | null | Dolly V2 (7B) is an instruction-following large language model trained on the Databricks machine learning platform. It is based on pythia-6.9b. | Databricks | open | true | 2023/4/12 | 6,444,163,072 |
| deepmind/chinchilla | Chinchilla (70B) | null | Chinchilla (70B parameters) ([paper](https://arxiv.org/pdf/2203.15556.pdf)). | DeepMind | closed | true | null | null |
| deepmind/gopher | Gopher (280B) | null | Gopher (280B parameters) ([paper](https://arxiv.org/pdf/2112.11446.pdf)). | DeepMind | closed | true | null | null |
| deepseek-ai/deepseek-llm-67b-chat | DeepSeek LLM Chat (67B) | DeepSeek LLM Chat (67B) | DeepSeek LLM Chat is an open-source language model trained on 2 trillion tokens in both English and Chinese, and fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO). ([paper](https://arxiv.org/abs/2401.02954)) | DeepSeek | open | false | 2024/1/5 | 67,000,000,000 |
| eleutherai/pythia-12b-v0 | Pythia (12B) | null | Pythia (12B parameters). The Pythia project combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. | EleutherAI | open | false | 2023/2/13 | 11,327,027,200 |
| eleutherai/pythia-1b-v0 | Pythia (1B) | null | Pythia (1B parameters). The Pythia project combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. | EleutherAI | open | true | 2023/2/13 | 805,736,448 |
| eleutherai/pythia-2.8b-v0 | Pythia (2.8B) | null | Pythia (2.8B parameters). The Pythia project combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. | EleutherAI | open | true | 2023/2/13 | 2,517,652,480 |
| eleutherai/pythia-6.9b | Pythia (6.9B) | null | Pythia (6.9B parameters). The Pythia project combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. | EleutherAI | open | false | 2023/2/13 | 6,444,163,072 |
| garage-bAInd/Platypus2-70B-instruct | Platypus2 70B Instruct | Platypus2 70B Instruct | Platypus2 70B Instruct is a large language model with 70 billion parameters. ([blog](https://garage-bAInd.com/platypus2-70b-instruct/)) | Garage bAInd | open | false | 2024/4/18 | 70,000,000,000 |
| google/code-bison-32k | Codey PaLM-2 (Bison) | null | Codey with a 32K context. PaLM 2 (Pathways Language Model) is a Transformer-based model trained using a mixture of objectives that was evaluated on English and multilingual language, and reasoning tasks. ([report](https://arxiv.org/pdf/2305.10403.pdf)) | Google | limited | false | 2023/6/29 | null |
| google/code-bison@001 | Codey PaLM-2 (Bison) | null | A model fine-tuned to generate code based on a natural language description of the desired code. PaLM 2 (Pathways Language Model) is a Transformer-based model trained using a mixture of objectives that was evaluated on English and multilingual language, and reasoning tasks. ([report](https://arxiv.org/pdf/2305.10403.pdf)) | Google | limited | false | 2023/6/29 | null |
| google/gemini-1.0-pro-001 | Gemini 1.0 Pro (001) | Gemini 1.0 Pro (001) | Gemini 1.0 Pro is a multimodal model able to reason across text, images, video, audio and code. ([paper](https://arxiv.org/abs/2312.11805)) | Google | limited | false | 2023/12/13 | null |
| google/gemini-1.0-pro-002 | Gemini 1.0 Pro (002) | Gemini 1.0 Pro (002) | Gemini 1.0 Pro is a multimodal model able to reason across text, images, video, audio and code. ([paper](https://arxiv.org/abs/2312.11805)) | Google | limited | false | 2024/4/9 | null |
| google/gemini-1.0-pro-vision-001 | Gemini 1.0 Pro Vision | Gemini 1.0 Pro Vision | Gemini 1.0 Pro Vision is a multimodal model able to reason across text, images, video, audio and code. ([paper](https://arxiv.org/abs/2312.11805)) | Google | limited | false | 2023/12/13 | null |
| google/gemini-1.5-flash-001 | Gemini 1.5 Flash (001) | Gemini 1.5 Flash (001) | Gemini 1.5 Flash is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/5/24 | null |
| google/gemini-1.5-flash-001-safety-block-none | Gemini 1.5 Flash (001, BLOCK_NONE safety) | Gemini 1.5 Flash (001, BLOCK_NONE safety) | Gemini 1.5 Flash is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/5/24 | null |
| google/gemini-1.5-flash-002 | Gemini 1.5 Flash (002) | Gemini 1.5 Flash (002) | Gemini 1.5 Flash is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/9/24 | null |
| google/gemini-1.5-flash-preview-0514 | Gemini 1.5 Flash (0514 preview) | Gemini 1.5 Flash (0514 preview) | Gemini 1.5 Flash is a smaller Gemini model. It has a 1 million token context window and allows interleaving text, images, audio and video as inputs. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([blog](https://blog.google/technology/developers/gemini-gemma-developer-updates-may-2024/)) | Google | limited | false | 2024/5/14 | null |
| google/gemini-1.5-pro-001 | Gemini 1.5 Pro (001) | Gemini 1.5 Pro (001) | Gemini 1.5 Pro is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/5/24 | null |
| google/gemini-1.5-pro-001-safety-block-none | Gemini 1.5 Pro (001, BLOCK_NONE safety) | Gemini 1.5 Pro (001, BLOCK_NONE safety) | Gemini 1.5 Pro is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/5/24 | null |
| google/gemini-1.5-pro-002 | Gemini 1.5 Pro (002) | Gemini 1.5 Pro (002) | Gemini 1.5 Pro is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/9/24 | null |
| google/gemini-1.5-pro-preview-0409 | Gemini 1.5 Pro (0409 preview) | Gemini 1.5 Pro (0409 preview) | Gemini 1.5 Pro is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/4/10 | null |
| google/gemini-1.5-pro-preview-0514 | Gemini 1.5 Pro (0514 preview) | Gemini 1.5 Pro (0514 preview) | Gemini 1.5 Pro is a multimodal mixture-of-experts model capable of recalling and reasoning over fine-grained information from long contexts. This model is accessed through Vertex AI and has all safety thresholds set to `BLOCK_NONE`. ([paper](https://arxiv.org/abs/2403.05530)) | Google | limited | false | 2024/5/14 | null |
| google/gemini-2.0-flash-exp | Gemini 2.0 Flash (Experimental) | Gemini 2.0 Flash (Experimental) | Gemini 2.0 Flash (Experimental) is a Gemini model that supports multimodal inputs like images, video and audio, as well as multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. ([blog](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)) | Google | limited | false | 2024/12/11 | null |
| google/gemma-2-27b | Gemma 2 (27B) | Gemma 2 (27B) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/google-gemma-2/)) | Google | open | false | 2024/6/27 | null |
| google/gemma-2-27b-it | Gemma 2 Instruct (27B) | Gemma 2 Instruct (27B) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/google-gemma-2/)) | Google | open | false | 2024/6/27 | null |
| google/gemma-2-9b | Gemma 2 (9B) | Gemma 2 (9B) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/google-gemma-2/)) | Google | open | false | 2024/6/27 | null |
| google/gemma-2-9b-it | Gemma 2 Instruct (9B) | Gemma 2 Instruct (9B) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/google-gemma-2/)) | Google | open | false | 2024/6/27 | null |
| google/gemma-2b-it | Gemma 2B (IT) | Gemma 2B (IT) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/google-gemma-2/)) | Google | open | false | 2024/6/27 | null |
| google/gemma-7b | Gemma (7B) | Gemma (7B) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/gemma-open-models/)) | Google | open | false | 2024/2/21 | null |
| google/gemma-7b-it | Gemma Instruct (7B) | Gemma Instruct (7B) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/gemma-open-models/)) | Google | open | false | 2024/2/21 | null |
| google/gemma-7b-it | Gemma 7B (IT) | Gemma 7B (IT) | Gemma is a family of lightweight, open models built from the research and technology that Google used to create the Gemini models. ([model card](https://www.kaggle.com/models/google/gemma), [blog post](https://blog.google/technology/developers/google-gemma-2/)) | Google | open | false | 2024/6/27 | null |
| google/paligemma-3b-mix-224 | PaliGemma (3B) Mix 224 | PaliGemma (3B) Mix 224 | PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma language model. Pre-trained with 224x224 input images and 128 token input/output text sequences. Finetuned on a mixture of downstream academic datasets. ([blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/)) | Google | open | false | 2024/5/12 | null |
| google/paligemma-3b-mix-448 | PaliGemma (3B) Mix 448 | PaliGemma (3B) Mix 448 | PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma language model. Pre-trained with 448x448 input images and 512 token input/output text sequences. Finetuned on a mixture of downstream academic datasets. ([blog](https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/)) | Google | open | false | 2024/5/12 | null |
| google/palm | PaLM (540B) | null | Pathways Language Model (540B parameters) is trained using 6144 TPU v4 chips ([paper](https://arxiv.org/pdf/2204.02311.pdf)). | Google | closed | true | null | null |
| google/text-bison-32k | PaLM-2 (Bison) | null | The best value PaLM model with a 32K context. PaLM 2 (Pathways Language Model) is a Transformer-based model trained using a mixture of objectives that was evaluated on English and multilingual language, and reasoning tasks. ([report](https://arxiv.org/pdf/2305.10403.pdf)) | Google | limited | false | 2023/6/7 | null |
| google/text-bison@001 | PaLM-2 (Bison) | null | The best value PaLM model. PaLM 2 (Pathways Language Model) is a Transformer-based model trained using a mixture of objectives that was evaluated on English and multilingual language, and reasoning tasks. ([report](https://arxiv.org/pdf/2305.10403.pdf)) | Google | limited | false | 2023/6/7 | null |
| google/text-unicorn@001 | PaLM-2 (Unicorn) | null | The largest model in the PaLM family. PaLM 2 (Pathways Language Model) is a Transformer-based model trained using a mixture of objectives that was evaluated on English and multilingual language, and reasoning tasks. ([report](https://arxiv.org/pdf/2305.10403.pdf)) | Google | limited | false | 2023/11/30 | null |
| Gryphe/MythoMax-L2-13b | MythoMax L2 13B | MythoMax L2 13B | MythoMax L2 13B is a large language model with 13 billion parameters. ([blog](https://gryphe.com/mythomax-l2-13b/)) | Gryphe | open | false | 2024/4/18 | 13,000,000,000 |
| huggingface/gpt2 | GPT-2 (124M) | null | GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. | OpenAI | open | false | null | 124,000,000 |
| huggingface/gpt2-large | GPT-2 Large (774M) | null | GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. | OpenAI | open | false | null | 774,000,000 |
| huggingface/gpt2-medium | GPT-2 Medium (355M) | null | GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. | OpenAI | open | false | null | 355,000,000 |
| huggingface/gpt2-xl | GPT-2 XL (1.5B) | null | GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. | OpenAI | open | false | null | 1,500,000,000 |
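The table is a flat list of per-model metadata records: string fields for identity and description, an `access` class (`open`, `limited`, `closed`), a boolean `todo` flag, a release date string, and a raw parameter count. As a minimal sketch of how the schema might be consumed, the snippet below loads the table with the Hugging Face `datasets` library and lists the largest openly accessible models. The dataset path `"<org>/<dataset-name>"` is a placeholder, not this repository's real identifier, and the column names are assumed to match the header above.

```python
# Minimal sketch, assuming the table is published as a Hugging Face dataset
# with the columns shown above. "<org>/<dataset-name>" is a placeholder id.
from datasets import load_dataset

rows = load_dataset("<org>/<dataset-name>", split="train")

# Keep openly released models that have a known parameter count.
open_models = [
    r for r in rows
    if r["access"] == "open" and r["num_parameters"] is not None
]

# num_parameters stores the raw count (e.g. 34,000,000,000 for a 34B model).
open_models.sort(key=lambda r: r["num_parameters"], reverse=True)

for r in open_models[:5]:
    print(f'{r["display_name"]:<40} {r["num_parameters"] / 1e9:8.1f}B  '
          f'released {r["release_date"]}')
```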