name: string (length 10 to 47)
display_name: string (length 5 to 41)
short_display_name: string (length 5 to 41)
description: string (length 3 to 534)
creator_organization: string (length 4 to 22)
access: string (3 values)
todo: bool (2 classes)
release_date: date (2019-10-23 to 2024-12-11)
num_parameters: float64 (1 to 1,210B)
A minimal sketch showing how records in this schema can be parsed appears after the catalog below.
openthaigpt/openthaigpt-1.0.0-7b-chat
OpenThaiGPT v1.0.0 (7B)
OpenThaiGPT v1.0.0 (7B)
OpenThaiGPT v1.0.0 (7B) is a Thai language chat model based on Llama 2 that has been specifically fine-tuned for Thai instructions and enhanced by incorporating over 10,000 of the most commonly used Thai words into the dictionary. ([blog post](https://openthaigpt.aieat.or.th/openthaigpt-1.0.0-less-than-8-apr-2024-greater-than))
OpenThaiGPT
open
false
2024/4/8
7,000,000,000
qwen/qwen-vl-chat
Qwen-VL Chat
Qwen-VL Chat
Chat version of Qwen-VL ([paper](https://arxiv.org/abs/2308.12966)).
Alibaba Cloud
open
false
2023/8/24
null
qwen/qwen1.5-0.5b-chat
Qwen1.5 Chat (0.5B)
Qwen1.5 Chat (0.5B)
0.5B-parameter chat version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-1.8b-chat
Qwen1.5 Chat (1.8B)
Qwen1.5 Chat (1.8B)
1.8B-parameter chat version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-110b-chat
Qwen1.5 Chat (110B)
Qwen1.5 Chat (110B)
110B-parameter chat version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. The 110B version also includes grouped query attention (GQA). ([blog](https://qwenlm.github.io/blog/qwen1.5-110b/))
Qwen
open
false
2024/4/25
null
qwen/qwen1.5-14b
Qwen1.5 (14B)
Qwen1.5 (14B)
14B-parameter version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-32b
Qwen1.5 (32B)
Qwen1.5 (32B)
32B-parameter version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. The 32B version also includes grouped query attention (GQA). ([blog](https://qwenlm.github.io/blog/qwen1.5-32b/))
Qwen
open
false
2024/4/2
null
qwen/qwen1.5-4b-chat
Qwen1.5 Chat (4B)
Qwen1.5 Chat (4B)
4B-parameter chat version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-72b
Qwen1.5 (72B)
Qwen1.5 (72B)
72B-parameter version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-72b-chat
Qwen1.5 Chat (72B)
Qwen1.5 Chat (72B)
72B-parameter chat version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-7b
Qwen1.5 (7B)
Qwen1.5 (7B)
7B-parameter version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen1.5-7b-chat
Qwen1.5 Chat (7B)
Qwen1.5 Chat (7B)
7B-parameter chat version of the large language model series, Qwen 1.5 (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen is a family of transformer models with SwiGLU activation, RoPE, and multi-head attention. ([blog](https://qwenlm.github.io/blog/qwen1.5/))
Qwen
open
false
2024/2/5
null
qwen/qwen2-72b-instruct
Qwen2 Instruct (72B)
Qwen2 Instruct (72B)
72B-parameter chat version of the large language model series, Qwen2. Qwen2 uses Group Query Attention (GQA) and has extended context length support up to 128K tokens. ([blog](https://qwenlm.github.io/blog/qwen2/))
Qwen
open
false
2024/6/7
null
qwen/qwen2.5-72b-instruct-turbo
Qwen2.5 Instruct Turbo (72B)
Qwen2.5 Instruct Turbo (72B)
Qwen2.5 Instruct Turbo (72B) was trained on 18 trillion tokens, supports 29 languages, and shows improvements over Qwen2 in knowledge, coding, mathematics, instruction following, generating long texts, and processing structured data. ([blog](https://qwenlm.github.io/blog/qwen2.5/)) Turbo is Together's cost-efficient implementation, providing fast FP8 performance while maintaining quality, closely matching FP16 reference models. ([blog](https://www.together.ai/blog/together-inference-engine-2))
Qwen
open
false
2024/9/19
null
qwen/qwen2.5-7b-instruct-turbo
Qwen2.5 Instruct Turbo (7B)
Qwen2.5 Instruct Turbo (7B)
Qwen2.5 Instruct Turbo (7B) was trained on 18 trillion tokens, supports 29 languages, and shows improvements over Qwen2 in knowledge, coding, mathematics, instruction following, generating long texts, and processing structured data. ([blog](https://qwenlm.github.io/blog/qwen2.5/)) Turbo is Together's cost-efficient implementation, providing fast FP8 performance while maintaining quality, closely matching FP16 reference models. ([blog](https://www.together.ai/blog/together-inference-engine-2))
Qwen
open
false
2024/9/19
null
sail/sailor-14b-chat
Sailor Chat (14B)
Sailor Chat (14B)
Sailor is a suite of Open Language Models tailored for South-East Asia, focusing on languages such as Indonesian, Thai, Vietnamese, Malay, and Lao. These models were continually pre-trained from Qwen1.5. ([paper](https://arxiv.org/abs/2404.03608))
SAIL
open
false
2024/4/4
14,000,000,000
sail/sailor-7b-chat
Sailor Chat (7B)
Sailor Chat (7B)
Sailor is a suite of Open Language Models tailored for South-East Asia, focusing on languages such as Indonesian, Thai, Vietnamese, Malay, and Lao. These models were continually pre-trained from Qwen1.5. ([paper](https://arxiv.org/abs/2404.03608))
SAIL
open
false
2024/4/4
7,000,000,000
sambanova/sambalingo-thai-chat
SambaLingo-Thai-Chat
SambaLingo-Thai-Chat
SambaLingo-Thai-Chat is a chat model trained using direct preference optimization on SambaLingo-Thai-Base. SambaLingo-Thai-Base adapts Llama 2 (7B) to Thai by training on 38 billion tokens from the Thai split of the CulturaX dataset. ([paper](https://arxiv.org/abs/2404.05829))
SambaLingo
open
false
2024/4/8
7,000,000,000
sambanova/sambalingo-thai-chat-70b
SambaLingo-Thai-Chat-70B
SambaLingo-Thai-Chat-70B
SambaLingo-Thai-Chat-70B is a chat model trained using direct preference optimization on SambaLingo-Thai-Base-70B. SambaLingo-Thai-Base-70B adapts Llama 2 (70B) to Thai by training on 26 billion tokens from the Thai split of the CulturaX dataset. ([paper](https://arxiv.org/abs/2404.05829))
SambaLingo
open
false
2024/4/8
70,000,000,000
scb10x/llama-3-typhoon-v1.5x-70b-instruct
Typhoon 1.5X instruct (70B)
Typhoon 1.5X instruct (70B)
Llama-3-Typhoon-1.5X-70B-instruct is a 70 billion parameter instruct model designed for the Thai language based on Llama 3 Instruct. It utilizes the task-arithmetic model editing technique. ([blog](https://blog.opentyphoon.ai/typhoon-1-5x-our-experiment-designed-for-application-use-cases-7b85d9e9845c))
SCB10X
open
false
2024/5/29
70,000,000,000
scb10x/llama-3-typhoon-v1.5x-8b-instruct
Typhoon 1.5X instruct (8B)
Typhoon 1.5X instruct (8B)
Llama-3-Typhoon-1.5X-8B-instruct is an 8 billion parameter instruct model designed for the Thai language based on Llama 3 Instruct. It utilizes the task-arithmetic model editing technique. ([blog](https://blog.opentyphoon.ai/typhoon-1-5x-our-experiment-designed-for-application-use-cases-7b85d9e9845c))
SCB10X
open
false
2024/5/29
8,000,000,000
scb10x/typhoon-7b
Typhoon (7B)
Typhoon (7B)
Typhoon (7B) is a pretrained Thai large language model with 7 billion parameters based on Mistral 7B. ([paper](https://arxiv.org/abs/2312.13951))
SCB10X
open
false
2023/12/21
7,000,000,000
scb10x/typhoon-v1.5-72b-instruct
Typhoon v1.5 Instruct (72B)
Typhoon v1.5 Instruct (72B)
Typhoon v1.5 Instruct (72B) is an instruction-tuned Thai large language model with 72 billion parameters based on Qwen1.5-72B. ([blog](https://blog.opentyphoon.ai/typhoon-1-5-release-a9364cb8e8d7))
SCB10X
open
false
2024/5/8
72,000,000,000
scb10x/typhoon-v1.5-8b-instruct
Typhoon v1.5 Instruct (8B)
Typhoon v1.5 Instruct (8B)
Typhoon v1.5 Instruct (8B) is an instruction-tuned Thai large language model with 8 billion parameters based on Llama 3 8B. ([blog](https://blog.opentyphoon.ai/typhoon-1-5-release-a9364cb8e8d7))
SCB10X
open
false
2024/5/8
8,000,000,000
snorkelai/Snorkel-Mistral-PairRM-DPO
Snorkel Mistral PairRM DPO
Snorkel Mistral PairRM DPO
Snorkel Mistral PairRM DPO is a 7B-parameter chat model with a 32K token sequence length, fine-tuned from Mistral 7B using direct preference optimization (DPO) with PairRM as the ranking model. ([blog](https://snorkelai.com/snorkel-mistral-pairrm-dpo/))
Snorkel AI
open
false
2024/4/18
7,000,000,000
snowflake/snowflake-arctic-instruct
Arctic Instruct
Arctic Instruct
Arctic combines a 10B dense transformer model with a residual 128x3.66B MoE MLP resulting in 480B total and 17B active parameters chosen using a top-2 gating.
Snowflake
open
false
2024/4/24
482,000,000,000
stabilityai/stablelm-base-alpha-3b
StableLM-Base-Alpha (3B)
null
StableLM-Base-Alpha is a suite of 3B and 7B parameter decoder-only language models pre-trained on a diverse collection of English datasets with a sequence length of 4096 to push beyond the context window limitations of existing open-source language models.
Stability AI
open
true
2023/4/20
3,000,000,000
stabilityai/stablelm-base-alpha-7b
StableLM-Base-Alpha (7B)
null
StableLM-Base-Alpha is a suite of 3B and 7B parameter decoder-only language models pre-trained on a diverse collection of English datasets with a sequence length of 4096 to push beyond the context window limitations of existing open-source language models.
Stability AI
open
true
2023/4/20
7,000,000,000
stanford/alpaca-7b
Alpaca (7B)
null
Alpaca 7B is a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations.
Stanford
open
false
2023/3/13
7,000,000,000
teknium/OpenHermes-2-Mistral-7B
OpenHermes 2 Mistral 7B
OpenHermes 2 Mistral 7B
OpenHermes 2 Mistral 7B is a 7B-parameter model fine-tuned from Mistral 7B with a 32K token sequence length. ([blog](https://teknium.com/openhermes-2-mistral-7b/))
Teknium
open
false
2024/4/18
7,000,000,000
teknium/OpenHermes-2.5-Mistral-7B
OpenHermes 2.5 Mistral 7B
OpenHermes 2.5 Mistral 7B
OpenHermes 2.5 Mistral 7B is a 7B-parameter model fine-tuned from Mistral 7B with a 32K token sequence length. ([blog](https://teknium.com/openhermes-2-5-mistral-7b/))
Teknium
open
false
2024/4/18
7,000,000,000
tiiuae/falcon-40b
Falcon (40B)
null
Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora.
TII UAE
open
false
2023/5/25
40,000,000,000
tiiuae/falcon-40b-instruct
Falcon-Instruct (40B)
null
Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII, based on Falcon-40B and fine-tuned on a mixture of chat/instruct datasets.
TII UAE
open
false
2023/5/25
40,000,000,000
tiiuae/falcon-7b
Falcon (7B)
null
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora.
TII UAE
open
false
2023/3/15
7,000,000,000
tiiuae/falcon-7b-instruct
Falcon-Instruct (7B)
null
Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets.
TII UAE
open
false
2023/3/15
7,000,000,000
together/bloom
BLOOM (176B)
null
BLOOM (176B parameters) is an autoregressive model trained on 46 natural languages and 13 programming languages ([paper](https://arxiv.org/pdf/2211.05100.pdf)).
BigScience
open
false
2022/6/28
176,000,000,000
together/bloomz
BLOOMZ (176B)
null
BLOOMZ (176B parameters) is BLOOM that has been fine-tuned on natural language instructions ([details](https://huggingface.co/bigscience/bloomz)).
BigScience
open
true
2022/11/3
176,000,000,000
together/cerebras-gpt-13b
Cerebras GPT (13B)
null
Cerebras GPT is a family of open compute-optimal language models scaled from 111M to 13B parameters trained on the Eleuther Pile. ([paper](https://arxiv.org/pdf/2304.03208.pdf))
Cerebras
limited
true
2023/4/6
13,000,000,000
together/cerebras-gpt-6.7b
Cerebras GPT (6.7B)
null
Cerebras GPT is a family of open compute-optimal language models scaled from 111M to 13B parameters trained on the Eleuther Pile. ([paper](https://arxiv.org/pdf/2304.03208.pdf))
Cerebras
limited
true
2023/4/6
6,700,000,000
together/codegeex
CodeGeeX (13B)
null
CodeGeeX (13B parameters) is an open dense code model trained on more than 20 programming languages on a corpus of more than 850B tokens ([blog](http://keg.cs.tsinghua.edu.cn/codegeex/)).
Tsinghua
open
true
2022/9/19
13,000,000,000
together/codegen
CodeGen (16B)
null
CodeGen (16B parameters) is an open dense code model trained for multi-turn program synthesis ([paper](https://arxiv.org/pdf/2203.13474.pdf)).
Salesforce
open
true
2022/3/25
16,000,000,000
together/flan-t5-xxl
Flan-T5 (11B)
null
Flan-T5 (11B parameters) is T5 fine-tuned on 1.8K tasks ([paper](https://arxiv.org/pdf/2210.11416.pdf)).
Google
open
false
null
null
together/galactica-120b
Galactica (120B)
null
Galactica (120B parameters) is trained on 48 million papers, textbooks, lecture notes, compounds and proteins, scientific websites, etc. ([paper](https://galactica.org/static/paper.pdf)).
Meta
open
true
2022/11/15
120,000,000,000
together/galactica-30b
Galactica (30B)
null
Galactica (30B parameters) is trained on 48 million papers, textbooks, lecture notes, compounds and proteins, scientific websites, etc. ([paper](https://galactica.org/static/paper.pdf)).
Meta
open
true
2022/11/15
30,000,000,000
together/glm
GLM (130B)
null
GLM (130B parameters) is an open bilingual (English & Chinese) bidirectional dense model that was trained using General Language Model (GLM) procedure ([paper](https://arxiv.org/pdf/2210.02414.pdf)).
Tsinghua
open
false
2022/8/4
130,000,000,000
together/gpt-j-6b
GPT-J (6B)
null
GPT-J (6B parameters) is an autoregressive language model trained on The Pile ([details](https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/)).
EleutherAI
open
false
2021/6/4
6,000,000,000
together/gpt-neox-20b
GPT-NeoX (20B)
null
GPT-NeoX (20B parameters) is an autoregressive language model trained on The Pile ([paper](https://arxiv.org/pdf/2204.06745.pdf)).
EleutherAI
open
false
2022/2/2
20,000,000,000
together/gpt-neoxt-chat-base-20b
GPT-NeoXT-Chat-Base (20B)
null
GPT-NeoXT-Chat-Base (20B) is fine-tuned from GPT-NeoX, serving as a base model for developing open-source chatbots.
Together
open
true
2023/3/8
20,000,000,000
together/h3-2.7b
H3 (2.7B)
null
H3 (2.7B parameters) is a decoder-only language model based on state space models ([paper](https://arxiv.org/abs/2212.14052)).
HazyResearch
open
true
2023/1/23
2,700,000,000
together/koala-13b
Koala (13B)
null
Koala (13B) is a chatbot fine-tuned from Llama (13B) on dialogue data gathered from the web. ([blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/))
UC Berkeley
open
true
2023/4/3
13,000,000,000
together/opt-1.3b
OPT (1.3B)
null
Open Pre-trained Transformers (1.3B parameters) is a suite of decoder-only pre-trained transformers that are fully and responsibly shared with interested researchers ([paper](https://arxiv.org/pdf/2205.01068.pdf)).
Meta
open
false
2022/5/2
1,300,000,000
together/opt-175b
OPT (175B)
null
Open Pre-trained Transformers (175B parameters) is a suite of decoder-only pre-trained transformers that are fully and responsibly shared with interested researchers ([paper](https://arxiv.org/pdf/2205.01068.pdf)).
Meta
open
false
2022/5/2
175,000,000,000
together/opt-6.7b
OPT (6.7B)
null
Open Pre-trained Transformers (6.7B parameters) is a suite of decoder-only pre-trained transformers that are fully and responsibly shared with interested researchers ([paper](https://arxiv.org/pdf/2205.01068.pdf)).
Meta
open
false
2022/5/2
6,700,000,000
together/opt-66b
OPT (66B)
null
Open Pre-trained Transformers (66B parameters) is a suite of decoder-only pre-trained transformers that are fully and responsibly shared with interested researchers ([paper](https://arxiv.org/pdf/2205.01068.pdf)).
Meta
open
false
2022/5/2
66,000,000,000
together/opt-iml-175b
OPT-IML (175B)
null
OPT-IML (175B parameters) is a suite of decoder-only transformer LMs that are multi-task fine-tuned on 2,000 NLP tasks ([paper](https://arxiv.org/pdf/2212.12017.pdf)).
Meta
open
true
2022/12/22
175,000,000,000
together/opt-iml-30b
OPT-IML (30B)
null
OPT-IML (30B parameters) is a suite of decoder-only transformer LMs that are multi-task fine-tuned on 2,000 NLP tasks ([paper](https://arxiv.org/pdf/2212.12017.pdf)).
Meta
open
true
2022/12/22
30,000,000,000
together/redpajama-incite-base-3b-v1
RedPajama-INCITE-Base-v1 (3B)
null
RedPajama-INCITE-Base-v1 (3B parameters) is a 3 billion parameter base model that aims to replicate the LLaMA recipe as closely as possible.
Together
open
false
2023/5/5
3,000,000,000
together/redpajama-incite-base-7b
RedPajama-INCITE-Base (7B)
null
RedPajama-INCITE-Base (7B parameters) is a 7 billion parameter base model that aims to replicate the LLaMA recipe as closely as possible.
Together
open
true
2023/5/5
7,000,000,000
together/redpajama-incite-chat-3b-v1
RedPajama-INCITE-Chat-v1 (3B)
null
RedPajama-INCITE-Chat-v1 (3B parameters) is a model fine-tuned on OASST1 and Dolly2 to enhance chatting ability. It is built from RedPajama-INCITE-Base-v1 (3B), a 3 billion parameter base model that aims to replicate the LLaMA recipe as closely as possible.
Together
open
true
2023/5/5
3,000,000,000
together/redpajama-incite-instruct-3b-v1
RedPajama-INCITE-Instruct-v1 (3B)
null
RedPajama-INCITE-Instruct-v1 (3B parameters) is a model fine-tuned for few-shot applications on the data of GPT-JT. It is built from RedPajama-INCITE-Base-v1 (3B), a 3 billion parameter base model that aims to replicate the LLaMA recipe as closely as possible.
Together
open
true
2023/5/5
3,000,000,000
together/redpajama-incite-instruct-7b
RedPajama-INCITE-Instruct (7B)
null
RedPajama-INCITE-Instruct (7B parameters) is a model fine-tuned for few-shot applications on the data of GPT-JT. It is built from RedPajama-INCITE-Base (7B), a 7 billion parameter base model that aims to replicate the LLaMA recipe as closely as possible.
Together
open
true
2023/5/5
7,000,000,000
together/t0pp
T0pp (11B)
null
T0pp (11B parameters) is an encoder-decoder model trained on a large set of different tasks specified in natural language prompts ([paper](https://arxiv.org/pdf/2110.08207.pdf)).
BigScience
open
false
2021/10/15
11,000,000,000
together/t5-11b
T5 (11B)
null
T5 (11B parameters) is an encoder-decoder model trained on a multi-task mixture, where each task is converted into a text-to-text format ([paper](https://arxiv.org/pdf/1910.10683.pdf)).
Google
open
false
2019/10/23
11,000,000,000
together/Together-gpt-JT-6B-v1
GPT-JT (6B)
null
GPT-JT (6B parameters) is a fork of GPT-J ([blog post](https://www.together.xyz/blog/releasing-v1-of-gpt-jt-powered-by-open-source-ai)).
Together
open
true
2022/11/29
6,000,000,000
together/ul2
UL2 (20B)
null
UL2 (20B parameters) is an encoder-decoder model trained on the C4 corpus. It's similar to T5 but trained with a different objective and slightly different scaling knobs ([paper](https://arxiv.org/pdf/2205.05131.pdf)).
Google
open
false
2022/5/10
20,000,000,000
together/yalm
YaLM (100B)
null
YaLM (100B parameters) is an autoregressive language model trained on English and Russian text ([GitHub](https://github.com/yandex/YaLM-100B)).
Yandex
open
false
2022/6/23
100,000,000,000
Undi95/Toppy-M-7B
Toppy M 7B
Toppy M 7B
Toppy M 7B is a 7B-parameter model based on Mistral 7B with a 32K token sequence length. ([blog](https://undi95.com/toppy-m-7b/))
Undi95
open
false
2024/4/18
7,000,000,000
upstage/SOLAR-10.7B-Instruct-v1.0
SOLAR 10.7B Instruct v1.0
SOLAR 10.7B Instruct v1.0
SOLAR 10.7B Instruct v1.0 is a 10.7B-parameter instruction-tuned model with a 32K token sequence length. ([blog](https://upstage.com/solar-10-7b-instruct-v1-0/))
Upstage
open
false
2024/4/18
10,700,000,000
upstage/solar-pro-241126
Solar Pro
Solar Pro
Solar Pro is an LLM designed for instruction-following and processing structured formats like HTML and Markdown. It supports English, Korean, and Japanese and has domain expertise in Finance, Healthcare, and Legal. ([blog](https://www.upstage.ai/blog/press/solar-pro-aws)).
Upstage
limited
false
2024/11/26
22,000,000,000
uw-madison/llava-v1.6-vicuna-13b-hf
LLaVA 1.6 (13B)
LLaVA 1.6 (13B)
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. ([paper](https://arxiv.org/abs/2304.08485))
Microsoft
open
false
2024/1/1
13,000,000,000
WizardLM/WizardLM-13B-V1.2
WizardLM 13B V1.2
WizardLM 13B V1.2
WizardLM 13B V1.2 is a 13B-parameter model with a 32K token sequence length. ([blog](https://wizardlm.com/wizardlm-13b-v1-2/))
WizardLM
open
false
2024/4/18
13,000,000,000
writer/palmyra-base
Palmyra Base (5B)
null
Palmyra Base (5B)
Writer
limited
false
2022/10/13
5,000,000,000
writer/palmyra-e
Palmyra E (30B)
null
Palmyra E (30B)
Writer
limited
false
2023/3/3
30,000,000,000
writer/palmyra-instruct-30
InstructPalmyra (30B)
null
InstructPalmyra (30B parameters) is trained using reinforcement learning techniques based on feedback from humans.
Writer
limited
false
2023/2/16
30,000,000,000
writer/palmyra-large
Palmyra Large (20B)
null
Palmyra Large (20B)
Writer
limited
false
2022/12/23
20,000,000,000
writer/palmyra-vision-003
Palmyra Vision 003
Palmyra Vision 003
Palmyra Vision 003 (internal only)
Writer
limited
false
2024/5/24
5,000,000,000
writer/palmyra-x
Palmyra X (43B)
null
Palmyra-X (43B parameters) is trained to adhere to instructions using human feedback and utilizes a technique called multiquery attention. Furthermore, a new feature called 'self-instruct' has been introduced, which includes the implementation of an early-stopping criterion specifically designed for minimal instruction tuning ([paper](https://dev.writer.com/docs/becoming-self-instruct-introducing-early-stopping-criteria-for-minimal-instruct-tuning)).
Writer
limited
false
2023/6/11
43,000,000,000
writer/palmyra-x-004
Palmyra-X-004
Palmyra-X-004
Palmyra-X-004 is a language model with a large context window of up to 128,000 tokens that excels at processing and understanding complex tasks.
Writer
limited
false
2024/9/12
null
writer/palmyra-x-32k
Palmyra X-32K (33B)
null
Palmyra-X-32K (33B parameters) is a Transformer-based model trained on large-scale pre-training data that is diverse and covers a wide range of areas. These data types are used in conjunction with an alignment mechanism to extend the context window.
Writer
limited
false
2023/12/1
33,000,000,000
writer/palmyra-x-v2
Palmyra X V2 (33B)
null
Palmyra-X V2 (33B parameters) is a Transformer-based model trained on extremely large-scale pre-training data of more than 2 trillion tokens, which is diverse and covers a wide range of areas. It was trained using FlashAttention-2.
Writer
limited
false
2023/12/1
33,000,000,000
writer/palmyra-x-v3
Palmyra X V3 (72B)
null
Palmyra-X V3 (72B parameters) is a Transformer-based model trained on extremely large-scale pre-training data. It is trained via unsupervised learning and DPO and uses multiquery attention.
Writer
limited
false
2023/12/1
72,000,000,000
writer/silk-road
Silk Road (35B)
null
Silk Road (35B)
Writer
limited
false
2023/4/13
35,000,000,000
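For reference, the sketch below shows one way records in the schema at the top of this section could be loaded and filtered. It assumes the catalog has been exported to a CSV file named `model_catalog.csv` with the column names listed above; the file name and the `parse_row` helper are illustrative, not part of the dataset itself.

```python
# Minimal sketch (hypothetical): parse catalog records with the columns
# name, display_name, short_display_name, description, creator_organization,
# access, todo, release_date, num_parameters, then filter them.
import csv
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelRecord:
    name: str
    display_name: str
    short_display_name: Optional[str]
    description: str
    creator_organization: str
    access: str                    # one of the 3 access values, e.g. "open" or "limited"
    todo: bool
    release_date: Optional[str]    # kept as a string, e.g. "2024/2/5"
    num_parameters: Optional[int]


def parse_row(row: dict) -> ModelRecord:
    """Convert one CSV row (all strings) into a typed record."""
    def none_if_null(value: str) -> Optional[str]:
        return None if value in ("", "null") else value

    params = none_if_null(row["num_parameters"])
    return ModelRecord(
        name=row["name"],
        display_name=row["display_name"],
        short_display_name=none_if_null(row["short_display_name"]),
        description=row["description"],
        creator_organization=row["creator_organization"],
        access=row["access"],
        todo=row["todo"].strip().lower() == "true",
        release_date=none_if_null(row["release_date"]),
        num_parameters=int(params.replace(",", "")) if params else None,
    )


if __name__ == "__main__":
    with open("model_catalog.csv", newline="", encoding="utf-8") as f:
        records = [parse_row(r) for r in csv.DictReader(f)]
    # Example filter: open-access models that are not marked "todo".
    open_models = [r for r in records if r.access == "open" and not r.todo]
    print(f"{len(open_models)} open, non-todo models")
```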