
GPT-2 XL Compressed Model Weights

This dataset contains compressed model weights produced by applying tensor network compression to GPT-2 XL.

πŸ“ Files Included

Compressed Model Weights (.pt files)

  • compressed_gpt2_xl_68.3%.pt - Base compressed model (~68% compression)
  • compressed_gpt2_xl_68.3%_healed.pt - Compressed + knowledge distillation healing
  • compressed_gpt2_xl_68.3%_enwik8_trained.pt - Compressed + enwik8 fine-tuning
  • compressed_gpt2_xl_68.3%_enwik8_final.pt - Final checkpoint after enwik8 training
  • compressed_gpt2_xl_68.3%_enwik8_finetuned.pt - enwik8 fine-tuned checkpoint

Architecture & Metadata

  • model_architecture.pkl - Compressed model architecture
  • *_metadata.json - Training and compression metadata
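
A minimal sketch for inspecting the metadata files (the filename below is a hypothetical example following the *_metadata.json pattern; the actual names match the .pt files above):

import json

# Hypothetical filename following the *_metadata.json pattern
with open('compressed_gpt2_xl_68.3%_healed_metadata.json') as f:
    metadata = json.load(f)

print(metadata)  # training and compression metadata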

🔬 Methodology

Based on quantum-inspired tensor network compression:

  • Matrix Product Operator (MPO) tensor network decomposition of the weight matrices (see the sketch below)
  • 68% parameter reduction (1.56B → ~500M parameters)
  • Knowledge distillation healing process to recover model quality after compression
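
A rough illustration of the MPO step: a minimal two-core sketch with illustrative shapes and ranks, not the exact decomposition used for the released weights.

import torch

def mpo_decompose(W, m_dims, n_dims, rank):
    """Decompose a weight matrix W of shape (prod(m_dims), prod(n_dims))
    into two MPO cores via a truncated SVD."""
    m1, m2 = m_dims
    n1, n2 = n_dims
    # Group the first input/output index pair together:
    # W[(i1 i2), (j1 j2)] -> T[i1, j1, i2, j2]
    T = W.reshape(m1, m2, n1, n2).permute(0, 2, 1, 3)
    mat = T.reshape(m1 * n1, m2 * n2)
    U, S, Vh = torch.linalg.svd(mat, full_matrices=False)
    r = min(rank, S.numel())
    core1 = (U[:, :r] * S[:r]).reshape(m1, n1, r)  # left MPO core
    core2 = Vh[:r, :].reshape(r, m2, n2)           # right MPO core
    return core1, core2

# Example with an MLP-sized weight (shapes and rank are illustrative only)
W = torch.randn(1600, 6400)
core1, core2 = mpo_decompose(W, (40, 40), (80, 80), rank=64)
original = W.numel()
compressed = core1.numel() + core2.numel()
print(f"parameters: {original} -> {compressed} ({1 - compressed / original:.1%} reduction)")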

🚀 Usage

import torch

# Load the compressed weights onto CPU (pass a CUDA device to map_location if available);
# the exact contents of the .pt file depend on how the checkpoint was exported
model_weights = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')

# For a ready-to-use model, see:
# https://huggingface.co/prompterminal/gpt2-compressed
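
A hedged sketch for reassembling a runnable model from the files in this dataset; it assumes model_architecture.pkl unpickles to a torch nn.Module and that the .pt file holds a matching state dict. If the checkpoint was saved differently, adapt accordingly.

import pickle
import torch

# Assumption: the pickle holds the compressed model architecture as an nn.Module
with open('model_architecture.pkl', 'rb') as f:
    model = pickle.load(f)

state = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')
if isinstance(state, dict):
    model.load_state_dict(state)  # only valid if the .pt file is a state dict
model.eval()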

📊 Compression Stats

  • Original GPT-2 XL: 1.56B parameters, ~6.2GB
  • Compressed Version: ~500M parameters, ~1.98GB
  • Compression Ratio: 68% reduction
  • Method: MPO tensor networks + healing
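
A quick sanity check on the figures above (parameter counts and sizes are approximate):

# Approximate figures from the stats above
original_params, compressed_params = 1.56e9, 0.5e9
original_gb, compressed_gb = 6.2, 1.98
print(f"parameter reduction: {1 - compressed_params / original_params:.1%}")  # ~68%
print(f"on-disk size reduction: {1 - compressed_gb / original_gb:.1%}")       # ~68%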

🎯 Files Recommended for Use

  • Best for inference: compressed_gpt2_xl_68.3%_healed.pt
  • Best for fine-tuning: compressed_gpt2_xl_68.3%_enwik8_trained.pt
  • Research/analysis: All files + metadata

📚 Citation

@misc{tensor_network_compression_2024,
  title={GPT-2 XL Compressed using Tensor Network Methods},
  author={prompterminal},
  year={2024},
  howpublished={HuggingFace Dataset}
}

🔗 Related

  • Ready-to-use compressed model: https://huggingface.co/prompterminal/gpt2-compressed

These weights represent pioneering work in tensor network compression for large language models.