---
license: mit
tags:
  - text-generation
  - character-level
  - compression
  - research
datasets:
  - enwik8
---

# Compressed nanoGPT (enwik8)

## Compression Results

- Performance: 1.635 → 1.637 BPC (+0.002)
- Parameters: 28,801,536 → 27,359,744
- Compression: 1.053× smaller (5.0% parameter reduction)
- Quality loss: ~0.1% relative BPC degradation

This shows that the character-level transformer can shed about 5% of its parameters with negligible loss in bits-per-character; the sketch below reproduces these numbers.
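The following is a minimal sketch (plain Python, no extra dependencies) that recomputes the headline figures from the raw parameter counts and BPC values reported above, including the standard conversion between average cross-entropy loss in nats and bits-per-character.

```python
import math

orig_params = 28_801_536
comp_params = 27_359_744

ratio = orig_params / comp_params           # ≈ 1.053× smaller
reduction = 1 - comp_params / orig_params   # ≈ 0.050 -> 5.0% fewer parameters

orig_bpc, comp_bpc = 1.635, 1.637
degradation = (comp_bpc - orig_bpc) / orig_bpc  # ≈ 0.0012 -> ~0.1% worse BPC

# BPC relates to the average cross-entropy loss (in nats per character) as loss / ln(2)
loss_nats = comp_bpc * math.log(2)

print(f"{ratio:.3f}x smaller, {reduction:.1%} fewer parameters")
print(f"{degradation:.2%} BPC degradation, loss ≈ {loss_nats:.3f} nats/char")
```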

## Usage

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "prompterminal/nanogpt-enwik8-compressed-working",
    trust_remote_code=True,
)

# Generate text from a random character-ID prompt
prompt = torch.randint(0, 6060, (1, 10))  # random start within the model's vocabulary
output = model.generate(prompt, max_new_tokens=100)
```
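Note that `generate` returns character IDs, not text. Decoding depends on the model's character vocabulary, which this card does not document; the sketch below assumes a nanoGPT-style `meta.pkl` file containing an `itos` (id-to-character) mapping. Both the file name and the key are assumptions, not part of this repository's documented interface.

```python
# Hedged sketch: decode generated IDs back to text.
# Assumes a nanoGPT-style meta.pkl providing an id->character mapping `itos`;
# this model card does not specify where the vocabulary is stored.
import pickle

with open("meta.pkl", "rb") as f:   # hypothetical path
    meta = pickle.load(f)
itos = meta["itos"]                 # assumed key, following nanoGPT conventions

ids = output[0].tolist()            # `output` from the generate() call above
text = "".join(itos[i] for i in ids)
print(text)
```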

## Research Impact

To our knowledge, this is one of the first demonstrations of high-quality compression on a character-level transformer.