---
license: apache-2.0
language:
  - en
base_model:
  - janhq/Jan-v1-4B
pipeline_tag: text-generation
---

# Jan-v1: Advanced Agentic Language Model


## Overview

Jan-v1 is the first release in the Jan Family, designed for agentic reasoning and problem-solving within the Jan App. Building on our Lucy model, Jan-v1 achieves improved performance through model scaling.

Jan-v1 is built on the Qwen3-4B-Thinking model, which provides enhanced reasoning capabilities and tool use. This foundation delivers better performance on complex agentic tasks.

## Performance

### Question Answering (SimpleQA)

For question answering, Jan-v1 shows a significant performance gain from model scaling, reaching 91.1% accuracy on SimpleQA.

*(Figure: SimpleQA accuracy results.)*

The 91.1% SimpleQA accuracy is a strong result in factual question answering for a model of this scale, and demonstrates the effectiveness of our scaling and fine-tuning approach.

### Chat Benchmarks

These benchmarks evaluate the model's conversational and instructional capabilities.

*(Figure: chat benchmark results.)*

## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.

*(Demo: selecting Jan-v1 in the Jan App.)*

### Local Deployment

**Using vLLM:**

```bash
vllm serve janhq/Jan-v1-4B \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```

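Because the vLLM command enables automatic tool choice with the hermes parser, the served model can be called through vLLM's OpenAI-compatible API with the `tools` parameter. Below is a minimal sketch, assuming the `openai` Python package is installed and using a hypothetical `get_weather` tool for illustration:

```python
# Minimal sketch: tool calling against the vLLM server started above.
# Assumes vLLM's OpenAI-compatible API on localhost:1234; get_weather is a made-up example tool.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="janhq/Jan-v1-4B",
    messages=[{"role": "user", "content": "What is the weather in Hanoi right now?"}],
    tools=tools,
    temperature=0.6,
    top_p=0.95,
    max_tokens=2048,
)

# With --enable-auto-tool-choice, the model may answer directly or emit a tool call.
message = response.choices[0].message
print(message.tool_calls or message.content)
```
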
**Using llama.cpp:**

```bash
llama-server --model Jan-v1-4B-Q4_K_M.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
```
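
Both commands above serve the model on port 1234. Assuming the server exposes an OpenAI-compatible `/v1/chat/completions` endpoint (vLLM does by default, and llama.cpp's `llama-server` provides one as well), a plain chat request can be sent with the `requests` package; this is a sketch, not an official client:

```python
# Minimal sketch: plain chat completion against the local server started above.
# Assumes an OpenAI-compatible /v1/chat/completions endpoint on localhost:1234.
import requests

payload = {
    "model": "janhq/Jan-v1-4B",  # vLLM expects the served model name; llama.cpp typically ignores this field
    "messages": [
        {"role": "user", "content": "Briefly explain what an agentic language model is."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "max_tokens": 2048,
}

resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```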

### Recommended Parameters

- `temperature`: 0.6
- `top_p`: 0.95
- `top_k`: 20
- `min_p`: 0.0
- `max_tokens`: 2048
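
As a sketch of applying these settings programmatically: `temperature`, `top_p`, and `max_tokens` map directly to standard OpenAI-style request arguments, while `top_k` and `min_p` are not part of the OpenAI API and are typically passed as extra fields, for example via the `extra_body` argument of the `openai` Python SDK. The snippet below assumes the local server from the previous section accepts these extra fields (vLLM's OpenAI-compatible server does):

```python
# Minimal sketch: sending the recommended sampling parameters through the OpenAI Python SDK.
# top_k and min_p go in extra_body because they are not standard OpenAI parameters
# (assumption: the backend accepts them, as vLLM's OpenAI-compatible server does).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="janhq/Jan-v1-4B",
    messages=[{"role": "user", "content": "Outline a step-by-step plan to research a topic."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=2048,
    extra_body={"top_k": 20, "min_p": 0.0},
)
print(response.choices[0].message.content)
```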

## 🤝 Community & Support

## 📄 Citation

Updated Soon