NexaAI/gpt-oss-20b-GGUF

Quickstart

Run the model directly with nexa-sdk installed, using the nexa-sdk CLI:
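A minimal invocation sketch; the exact subcommand may vary by nexa-sdk version, so check `nexa --help` for the command your install supports:

```shell
# Pull and chat with the model through the nexa-sdk CLI
# (subcommand name is an assumption; some versions use `nexa run`)
nexa infer NexaAI/gpt-oss-20b-GGUF
```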


Overview

This is a GGUF version of OpenAI's gpt-oss-20b model, a mixture-of-experts model with 21B total parameters and 3.6B active parameters, packaged for lower latency and for local or specialized use cases.

Reference

Original model card: ggml-org/gpt-oss-20b-GGUF
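Since the weights are in GGUF format, they can also be fetched from the Hub and run with llama.cpp directly; a sketch under the assumption that a `.gguf` file with the name below exists in the repo (check the repo's file listing for the actual filename and quantization variant):

```shell
# Download the quantized weights from the Hub
huggingface-cli download NexaAI/gpt-oss-20b-GGUF --local-dir ./gpt-oss-20b-gguf

# Start an interactive chat with llama.cpp
# (the .gguf filename below is illustrative)
llama-cli -m ./gpt-oss-20b-gguf/gpt-oss-20b.gguf -cnv
```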

Model details

Downloads last month: 438
Format: GGUF
Model size: 20.9B params
Architecture: gpt-oss
