Overview

The 'internlm3-8b-instruct' model is a refined version of the 'internlm/internlm3-8b-instruct' architecture, designed to improve interactive instruction following and dialogue. Its primary purpose is to generate contextually relevant responses and support informative conversations across a wide range of topics, including academic assistance, customer support, and content creation. Users can expect coherent, high-quality outputs with improved contextual understanding, and the model adapts to specific prompts while maintaining a natural, informative tone.

Variants

| No | Variant | Cortex CLI command               |
|----|---------|----------------------------------|
| 1  | gguf    | cortex run internlm3-8b-instruct |
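
The gguf variant can be pulled ahead of time and then launched with the Cortex CLI. The commands below are a minimal sketch assuming the standard Cortex workflow; cortex run also downloads the model on first use, so the explicit pull step is optional.

    # Download the GGUF variant (optional; cortex run pulls it on first use)
    cortex pull internlm3-8b-instruct

    # Start an interactive chat session with the model
    cortex run internlm3-8b-instruct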

Use it with Jan (UI)

  1. Install Jan using Quickstart

  2. Use it in the Jan Model Hub:

    cortexso/internlm3-8b-instruct

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart

  2. Run the model with the following command:

    cortex run internlm3-8b-instruct
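
Once the model is running, Cortex also serves it through a local OpenAI-compatible API. The request below is a minimal sketch assuming the default local endpoint (http://127.0.0.1:39281) and the model id shown above; adjust the port and request body to match your installation.

    curl http://127.0.0.1:39281/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "internlm3-8b-instruct",
        "messages": [
          {"role": "user", "content": "Give a one-sentence summary of InternLM3."}
        ]
      }'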

Credits

GGUF
Model size: 8.8B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
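
If you prefer to manage the GGUF files directly instead of going through Jan or Cortex, the quantized weights can be fetched from the cortexso/internlm3-8b-instruct repository with the Hugging Face CLI. This is only a sketch: the wildcard file pattern and output directory are assumptions, and individual quantizations may be published on separate revisions of the repository, in which case add a --revision flag.

    # Download GGUF files from the Hugging Face repository
    huggingface-cli download cortexso/internlm3-8b-instruct \
      --include "*.gguf" \
      --local-dir ./internlm3-8b-instruct-gguf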
