This model is assumed to perform well, but it may require more testing and user feedback. Be aware that only models featured within the GUI of GPT4All are curated and officially supported by Nomic. Use at your own risk.

About

Model converted and quantized by: 3Simplex.
GPT4All v3.1.1 required.


Prompt Template

<|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>

{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}
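
For illustration, here is a minimal sketch in plain Python that fills in the template above. The placeholder strings are examples only; GPT4All applies the template automatically once it is set in the model settings.

```python
# Minimal sketch: build a Llama 3.1 style prompt from the template above.
# Generation begins after the final assistant header, which is where
# {assistant_response} appears in the template.
PROMPT_TEMPLATE = (
    "<|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|>\n"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Substitute the system prompt and user message into the template."""
    return PROMPT_TEMPLATE.format(system_prompt=system_prompt, user_input=user_input)

if __name__ == "__main__":
    # Example values; replace with your own system prompt and message.
    print(build_prompt("You are a helpful assistant.", "Summarize GGUF in one sentence."))
```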

128k Context Length

"llama.context_length": 131072

Format: GGUF
Model size: 8.03B params
Architecture: llama
Available quantizations: 4-bit, 16-bit
