# GPT-2 XL Compressed Model Weights
This dataset contains the compressed model weights produced by applying a **tensor network compression** methodology to GPT-2 XL.
## Files Included
### Compressed Model Weights (.pt files)
- `compressed_gpt2_xl_68.3%.pt` - Base compressed model (~68% compression)
- `compressed_gpt2_xl_68.3%_healed.pt` - Compressed + knowledge distillation healing
- `compressed_gpt2_xl_68.3%_enwik8_trained.pt` - Compressed + enwik8 fine-tuning
- `compressed_gpt2_xl_68.3%_enwik8_final.pt` - Final version after training
- `compressed_gpt2_xl_68.3%_enwik8_finetuned.pt` - Fine-tuned version
### Architecture & Metadata
- `model_architecture.pkl` - Compressed model architecture
- `*_metadata.json` - Training and compression metadata
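
The metadata schema is not documented here. A minimal sketch for inspecting it, assuming each file is a plain JSON object (the glob pattern mirrors the naming above):

```python
import glob
import json

# Print the top-level keys of every metadata file in the current directory.
# The key names depend on the compression/training run and are not documented here.
for path in sorted(glob.glob('*_metadata.json')):
    with open(path) as f:
        meta = json.load(f)
    print(path, '->', sorted(meta.keys()))
```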
## Methodology
Based on quantum-inspired tensor network compression:
- **Tensor network compression** via **Matrix Product Operator (MPO)** decomposition of the weight matrices (a minimal sketch follows this list)
- **68% parameter reduction** (1.56B → ~500M parameters)
- **Knowledge distillation** healing process to recover model quality after compression
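
The compression code itself is not part of this dataset. As an illustration only, the sketch below shows how an MPO (tensor-train style) factorization of a single weight matrix can be built from sequential truncated SVDs. The function name `mpo_decompose`, the `in_dims`/`out_dims` factorization, and the fixed `max_rank` cutoff are assumptions for the example, not the exact procedure used to produce these checkpoints.

```python
import math
import torch

def mpo_decompose(weight, in_dims, out_dims, max_rank):
    """Split a 2-D weight matrix into MPO cores via sequential truncated SVDs.

    in_dims / out_dims factor the matrix shape, e.g. a (1600, 6400) layer
    could use in_dims=(40, 40) and out_dims=(80, 80). Each returned core
    has shape (rank_in, m_i, n_i, rank_out).
    """
    assert weight.shape == (math.prod(in_dims), math.prod(out_dims))
    k = len(in_dims)
    # Reshape to (m1..mk, n1..nk) and interleave legs to (m1, n1, m2, n2, ...).
    t = weight.reshape(*in_dims, *out_dims)
    order = [i for pair in zip(range(k), range(k, 2 * k)) for i in pair]
    t = t.permute(*order).contiguous()

    cores, rank = [], 1
    for i in range(k - 1):
        # Split this site's legs from the rest and truncate the SVD to max_rank.
        t = t.reshape(rank * in_dims[i] * out_dims[i], -1)
        u, s, vh = torch.linalg.svd(t, full_matrices=False)
        r = min(max_rank, s.numel())
        cores.append(u[:, :r].reshape(rank, in_dims[i], out_dims[i], r))
        t = s[:r, None] * vh[:r]  # carry the remainder on to the next site
        rank = r
    cores.append(t.reshape(rank, in_dims[-1], out_dims[-1], 1))
    return cores

# Example: factor a GPT-2 XL sized MLP weight and compare parameter counts.
w = torch.randn(1600, 6400)
cores = mpo_decompose(w, in_dims=(40, 40), out_dims=(80, 80), max_rank=64)
print(sum(c.numel() for c in cores), 'parameters vs', w.numel())
```

The truncation rank controls the accuracy/compression trade-off: a larger `max_rank` keeps more singular values per bond and therefore more parameters.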
## Usage
```python
import torch
# Load compressed weights
model_weights = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')
# For a ready-to-use model, see:
# https://huggingface.co/prompterminal/gpt2-compressed
```
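
If the checkpoints are plain `state_dict`s meant to pair with `model_architecture.pkl` (an assumption; inspect the files first), reattaching them might look like the sketch below. The `'state_dict'` unwrapping and `strict=False` are likewise assumptions rather than documented behavior.

```python
import pickle
import torch

# Rebuild the module skeleton from the pickled architecture. This assumes the
# pickle holds an nn.Module; only unpickle files from sources you trust.
with open('model_architecture.pkl', 'rb') as f:
    model = pickle.load(f)

checkpoint = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')
# Unwrap a possible {'state_dict': ...} wrapper, otherwise use the dict directly.
state_dict = checkpoint.get('state_dict', checkpoint) if isinstance(checkpoint, dict) else checkpoint
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', len(missing), '| unexpected keys:', len(unexpected))
model.eval()
```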
## Compression Stats
- **Original GPT-2 XL**: 1.56B parameters, ~6.2GB
- **Compressed Version**: ~500M parameters, ~1.98GB
- **Compression Ratio**: 68% reduction
- **Method**: MPO tensor networks + healing
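
These sizes are consistent with fp32 storage at 4 bytes per parameter; a quick back-of-the-envelope check:

```python
# Sanity check: parameter count x 4 bytes (fp32) roughly reproduces the sizes above.
for name, n_params in [('original GPT-2 XL', 1.56e9), ('compressed', 0.5e9)]:
    print(f'{name}: {n_params * 4 / 1e9:.2f} GB')
# original GPT-2 XL: 6.24 GB  (listed as ~6.2 GB)
# compressed:        2.00 GB  (listed as ~1.98 GB)
```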
## Files Recommended for Use
- **Best for inference**: `compressed_gpt2_xl_68.3%_healed.pt`
- **Best for fine-tuning**: `compressed_gpt2_xl_68.3%_enwik8_trained.pt`
- **Research/analysis**: All files + metadata
## Citation
```bibtex
@misc{tensor_network_compression_2024,
title={GPT-2 XL Compressed using Tensor Network Methods},
author={prompterminal},
year={2024},
howpublished={HuggingFace Dataset}
}
```
## Related
- **Ready-to-use model**: [prompterminal/gpt2-compressed](https://huggingface.co/prompterminal/gpt2-compressed)
- **Tensor network compression research**: Matrix Product Operator methods
---
*These weights are from an exploratory application of tensor network compression to a large language model.*