# GPT-2 XL Compressed Model Weights
This dataset contains the compressed model weights produced by applying tensor network compression to GPT-2 XL.
## Files Included

### Compressed Model Weights (.pt files)
- `compressed_gpt2_xl_68.3%.pt` - Base compressed model (~68% compression)
- `compressed_gpt2_xl_68.3%_healed.pt` - Compressed + knowledge distillation healing
- `compressed_gpt2_xl_68.3%_enwik8_trained.pt` - Compressed + enwik8 fine-tuning
- `compressed_gpt2_xl_68.3%_enwik8_final.pt` - Final version after training
- `compressed_gpt2_xl_68.3%_enwik8_finetuned.pt` - Fine-tuned version
### Architecture & Metadata

- `model_architecture.pkl` - Compressed model architecture
- `*_metadata.json` - Training and compression metadata
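A minimal sketch for reading the bundled metadata and architecture files. The exact keys inside the `*_metadata.json` files are not documented here, so the helpers below (hypothetical names, not part of this dataset) just collect whatever is present for inspection:

```python
import glob
import json
import pickle

def load_metadata(pattern='*_metadata.json'):
    """Load every metadata JSON matching the glob into one dict.

    The keys inside these files are undocumented in this README,
    so inspect the returned dict before relying on any field.
    """
    out = {}
    for path in glob.glob(pattern):
        with open(path) as f:
            out[path] = json.load(f)
    return out

def load_architecture(path='model_architecture.pkl'):
    # Note: pickle can execute code on load; only unpickle trusted files.
    with open(path, 'rb') as f:
        return pickle.load(f)

# Example (after downloading the dataset files):
# meta = load_metadata()
# for name, m in meta.items():
#     print(name, sorted(m))
```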
## Methodology
Based on quantum-inspired tensor network compression:
- Matrix Product Operator (MPO) tensor network decomposition of the weight matrices
- 68% parameter reduction (1.56B → ~500M parameters)
- Knowledge distillation "healing" to recover quality after compression
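As a rough illustration of the MPO idea (a toy sketch, not the exact pipeline used for these weights), a weight matrix can be reshaped into a 4-index tensor and split into two low-rank cores with a truncated SVD; the function names and the bond dimension below are illustrative:

```python
import torch

def mpo_compress(W, in_dims, out_dims, rank):
    """Split a weight matrix into a two-core MPO via truncated SVD.

    W has shape (out_features, in_features) with
    out_features = out_dims[0]*out_dims[1] and
    in_features  = in_dims[0]*in_dims[1].
    """
    o1, o2 = out_dims
    i1, i2 = in_dims
    # Group (o1, i1) on one side and (o2, i2) on the other.
    T = W.reshape(o1, o2, i1, i2).permute(0, 2, 1, 3).reshape(o1 * i1, o2 * i2)
    U, S, Vh = torch.linalg.svd(T, full_matrices=False)
    r = min(rank, S.numel())
    core1 = U[:, :r] * S[:r]   # shape (o1*i1, r)
    core2 = Vh[:r, :]          # shape (r, o2*i2)
    return core1, core2

def mpo_reconstruct(core1, core2, in_dims, out_dims):
    """Contract the two cores back into an approximate weight matrix."""
    o1, o2 = out_dims
    i1, i2 = in_dims
    T = core1 @ core2
    return T.reshape(o1, i1, o2, i2).permute(0, 2, 1, 3).reshape(o1 * o2, i1 * i2)

# Toy example: compress a 64x64 matrix with bond dimension 8.
W = torch.randn(64, 64)
c1, c2 = mpo_compress(W, in_dims=(8, 8), out_dims=(8, 8), rank=8)
W_hat = mpo_reconstruct(c1, c2, in_dims=(8, 8), out_dims=(8, 8))
orig, comp = W.numel(), c1.numel() + c2.numel()
print(f"params: {orig} -> {comp} ({1 - comp/orig:.0%} reduction)")
# params: 4096 -> 1024 (75% reduction)
```

Applied layer by layer across a transformer, with the bond dimension chosen per layer, this kind of decomposition is what yields the overall parameter reduction reported below.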
## Usage
```python
import torch

# Load compressed weights
model_weights = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')

# For a ready-to-use model, see:
# https://huggingface.co/prompterminal/gpt2-compressed
```
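Before wiring the weights into a model, it can help to inspect what a checkpoint contains. The helper below is a hedged sketch that assumes the `.pt` file stores a flat state dict of tensors (as loaded above); adjust it if the checkpoint nests the state dict under another key:

```python
import torch

def summarize_checkpoint(path):
    """Print tensor names/shapes and return the total parameter count.

    Assumes `path` is a .pt file holding a flat dict of tensors.
    """
    state = torch.load(path, map_location='cpu')
    total = sum(t.numel() for t in state.values() if torch.is_tensor(t))
    print(f"{len(state)} tensors, {total / 1e6:.2f}M parameters")
    for name, t in list(state.items())[:5]:
        print(f"  {name}: {tuple(t.shape)}")
    return total

# Example (after downloading the dataset files):
# summarize_checkpoint('compressed_gpt2_xl_68.3%_healed.pt')
```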
## Compression Stats
- Original GPT-2 XL: 1.56B parameters, ~6.2GB
- Compressed Version: ~500M parameters, ~1.98GB
- Parameter Reduction: 68%
- Method: MPO tensor networks + healing
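A quick arithmetic check of the numbers above (using the 68.3% reduction encoded in the filenames and the stated 1.56B original parameters):

```python
# Verify that a 68.3% reduction of 1.56B parameters lands near the
# reported ~500M compressed parameter count.
original_params = 1.56e9
reduction = 0.683
compressed_params = original_params * (1 - reduction)
print(f"{compressed_params / 1e6:.0f}M parameters remain")
# prints "495M parameters remain", consistent with the ~500M figure
```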
## Files Recommended for Use

- Best for inference: `compressed_gpt2_xl_68.3%_healed.pt`
- Best for fine-tuning: `compressed_gpt2_xl_68.3%_enwik8_trained.pt`
- Research/analysis: all files plus metadata
## Citation

```bibtex
@misc{tensor_network_compression_2024,
  title={GPT-2 XL Compressed using Tensor Network Methods},
  author={prompterminal},
  year={2024},
  howpublished={HuggingFace Dataset}
}
```
## Related
- Ready-to-use model: [prompterminal/gpt2-compressed](https://huggingface.co/prompterminal/gpt2-compressed)
- Tensor network compression research: Matrix Product Operator methods
These weights represent pioneering work in tensor network compression for large language models.