Commit 1e0f40a (verified; parent 77a433b), committed by prompterminal: "Upload README.md with huggingface_hub". Files changed: README.md (added, +68 lines).
# GPT-2 XL Compressed Model Weights

This dataset contains the compressed model weights produced by applying **tensor network compression** to GPT-2 XL.

## 📁 Files Included

### Compressed Model Weights (.pt files)
- `compressed_gpt2_xl_68.3%.pt` - Base compressed model (~68% compression)
- `compressed_gpt2_xl_68.3%_healed.pt` - Compressed model after knowledge-distillation healing
- `compressed_gpt2_xl_68.3%_enwik8_trained.pt` - Compressed model after enwik8 fine-tuning
- `compressed_gpt2_xl_68.3%_enwik8_final.pt` - Final version after training
- `compressed_gpt2_xl_68.3%_enwik8_finetuned.pt` - Fine-tuned version

### Architecture & Metadata
- `model_architecture.pkl` - Compressed model architecture
- `*_metadata.json` - Training and compression metadata
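The metadata files are plain JSON and can be inspected directly. A minimal sketch (the metadata schema is not documented in this README, so this just collects whatever fields each file contains):

```python
import glob
import json

def load_metadata(pattern="*_metadata.json"):
    """Load every metadata JSON matching `pattern` into one dict,
    keyed by file path. The field names inside are whatever the
    compression/training runs recorded; no schema is assumed."""
    result = {}
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            result[path] = json.load(f)
    return result
```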
## 🔬 Methodology

Based on quantum-inspired tensor network compression:
- **Matrix Product Operator (MPO)** tensor network decomposition
- **68% parameter reduction** (1.56B → ~500M parameters)
- **Knowledge distillation** healing to recover accuracy
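As a rough illustration of the rank-truncation idea behind MPO decomposition (this is a plain truncated SVD on a single matrix, not the authors' multi-core MPO code; the shapes and rank are illustrative):

```python
import numpy as np

def truncated_factorize(W, rank):
    """Factor a dense weight matrix into two low-rank cores,
    the truncation step that underlies MPO-style compression."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out, rank) core, singular values absorbed
    B = Vt[:rank, :]             # (rank, in) core
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1600, 1600))   # GPT-2 XL's hidden size is 1600
A, B = truncated_factorize(W, rank=256)
reduction = 1 - (A.size + B.size) / W.size
print(f"parameter reduction: {reduction:.1%}")   # 68.0% at rank 256
```

Reconstructing `A @ B` gives the best rank-256 approximation of `W`; a real MPO decomposition instead reshapes the matrix into a higher-order tensor and chains several such truncated cores.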
## 🚀 Usage

```python
import torch

# Load the compressed weights onto the CPU
model_weights = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')

# For a ready-to-use model, see:
# https://huggingface.co/prompterminal/gpt2-compressed
```
## 📊 Compression Stats

- **Original GPT-2 XL**: 1.56B parameters, ~6.2GB
- **Compressed Version**: ~500M parameters, ~1.98GB
- **Compression Ratio**: ~68% reduction
- **Method**: MPO tensor networks + distillation healing
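The quoted ratio can be sanity-checked from the numbers above; both the parameter count and the on-disk size work out to roughly 68%:

```python
# Sanity-check the quoted compression ratio against the stats above.
orig_params, compressed_params = 1.56e9, 0.5e9
param_reduction = 1 - compressed_params / orig_params   # ~0.679

orig_gb, compressed_gb = 6.2, 1.98
size_reduction = 1 - compressed_gb / orig_gb            # ~0.681

print(f"parameters: {param_reduction:.1%}, size: {size_reduction:.1%}")
# parameters: 67.9%, size: 68.1%
```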
## 🎯 Recommended Files

- **Best for inference**: `compressed_gpt2_xl_68.3%_healed.pt`
- **Best for fine-tuning**: `compressed_gpt2_xl_68.3%_enwik8_trained.pt`
- **Research/analysis**: All files + metadata
## 📚 Citation

```bibtex
@misc{tensor_network_compression_2024,
  title={GPT-2 XL Compressed using Tensor Network Methods},
  author={prompterminal},
  year={2024},
  howpublished={HuggingFace Dataset}
}
```
## 🔗 Related

- **Ready-to-use model**: [prompterminal/gpt2-compressed](https://huggingface.co/prompterminal/gpt2-compressed)
- **Tensor network compression research**: Matrix Product Operator methods

---
*These weights represent pioneering work in tensor network compression for large language models.*