# GPT-2 XL Compressed Model Weights

This dataset contains compressed model weights produced by applying tensor network compression to GPT-2 XL.
## Files Included
### Compressed Model Weights (.pt files)

- `compressed_gpt2_xl_68.3%.pt` - Base compressed model (~68% compression)
- `compressed_gpt2_xl_68.3%_healed.pt` - Compressed + knowledge distillation healing
- `compressed_gpt2_xl_68.3%_enwik8_trained.pt` - Compressed + enwik8 fine-tuning
- `compressed_gpt2_xl_68.3%_enwik8_final.pt` - Final version after training
- `compressed_gpt2_xl_68.3%_enwik8_finetuned.pt` - Fine-tuned version
### Architecture & Metadata

- `model_architecture.pkl` - Compressed model architecture
- `*_metadata.json` - Training and compression metadata (a loading sketch follows below)
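A minimal loading sketch for these files. It assumes `model_architecture.pkl` unpickles into a plain Python object and that each metadata JSON is a flat dict containing fields such as `compression_ratio`; since the metadata files follow a `*_metadata.json` pattern, the example globs for them rather than guessing a concrete filename:

```python
import glob
import json
import pickle

# Assumed to unpickle into a description of the compressed layer shapes
with open('model_architecture.pkl', 'rb') as f:
    architecture = pickle.load(f)

# Metadata filenames follow a *_metadata.json pattern, so glob instead of
# hard-coding a name; compression_ratio is one of the recorded fields
for path in sorted(glob.glob('*_metadata.json')):
    with open(path) as f:
        meta = json.load(f)
    print(path, '->', meta.get('compression_ratio'))
```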
## Methodology
Based on quantum-inspired tensor network compression:

- Matrix Product Operator (MPO) tensor network decomposition (see the sketch after this list)
- 68% parameter reduction (1.56B → ~500M parameters)
- Knowledge distillation healing process
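The compression code itself is not part of this dataset, but the following is a minimal sketch of the general MPO idea: factorize a 2-D weight matrix into a chain of small 4-D cores via sequential truncated SVDs, where the rank cutoff is what removes parameters. The function name, dimension factorizations, and `max_rank` value are illustrative assumptions, not the exact pipeline used to produce these files:

```python
import torch

def mpo_decompose(weight, out_dims, in_dims, max_rank):
    """Hypothetical sketch: split a 2-D weight into a chain of MPO cores.

    out_dims / in_dims factorize the matrix shape; core k has shape
    (rank_in, out_dims[k], in_dims[k], rank_out). Truncating each SVD
    to max_rank is what reduces the parameter count.
    """
    n = len(out_dims)
    # Reorder axes so each (out_k, in_k) pair is adjacent: (o1, i1, o2, i2, ...)
    t = weight.reshape(*out_dims, *in_dims)
    order = [ax for k in range(n) for ax in (k, n + k)]
    t = t.permute(*order).contiguous()

    cores, rank = [], 1
    for k in range(n - 1):
        mat = t.reshape(rank * out_dims[k] * in_dims[k], -1)
        u, s, vh = torch.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, s.numel())
        cores.append(u[:, :r].reshape(rank, out_dims[k], in_dims[k], r))
        t = s[:r].unsqueeze(1) * vh[:r]  # carry the remainder to the next core
        rank = r
    cores.append(t.reshape(rank, out_dims[-1], in_dims[-1], 1))
    return cores

# Example: a random 1600x1600 matrix at rank 64 keeps only ~8% of the entries
w = torch.randn(1600, 1600)
cores = mpo_decompose(w, out_dims=(40, 40), in_dims=(40, 40), max_rank=64)
kept = sum(c.numel() for c in cores)
print(f"{kept / w.numel():.1%} of the original parameters")
```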
## Usage
```python
import torch

# Load compressed weights
model_weights = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')

# For a ready-to-use model, see:
# https://huggingface.co/prompterminal/gpt2-compressed
```
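Assuming the `.pt` file stores a flat state dict of tensors rather than a pickled module, a quick parameter count on the loaded weights should land near the ~500M figure quoted in the stats below:

```python
import torch

weights = torch.load('compressed_gpt2_xl_68.3%_healed.pt', map_location='cpu')

# Sum tensor sizes; assumes the file holds a dict of weight tensors
total = sum(t.numel() for t in weights.values() if torch.is_tensor(t))
print(f"{total / 1e6:.0f}M parameters")
```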
## Compression Stats
- Original GPT-2 XL: 1.56B parameters, ~6.2GB
- Compressed Version: ~500M parameters, ~1.98GB
- Compression Ratio: 68% reduction
- Method: MPO tensor networks + healing
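The figures above are self-consistent under fp32 storage (4 bytes per parameter), as this quick arithmetic check shows:

```python
original_params, compressed_params = 1.56e9, 0.5e9

print(f"reduction: {1 - compressed_params / original_params:.1%}")   # ~67.9%, i.e. ~68%
print(f"original size: {original_params * 4 / 1e9:.2f} GB")          # ~6.24 GB vs the quoted ~6.2GB
print(f"compressed size: {compressed_params * 4 / 1e9:.2f} GB")      # ~2.00 GB vs the quoted ~1.98GB
```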
## Files Recommended for Use
- Best for inference: `compressed_gpt2_xl_68.3%_healed.pt`
- Best for fine-tuning: `compressed_gpt2_xl_68.3%_enwik8_trained.pt`
- Research/analysis: all files + metadata
## Citation
```bibtex
@misc{tensor_network_compression_2024,
  title={GPT-2 XL Compressed using Tensor Network Methods},
  author={prompterminal},
  year={2024},
  howpublished={HuggingFace Dataset}
}
```
## Related
- Ready-to-use model: prompterminal/gpt2-compressed
- Tensor network compression research: Matrix Product Operator methods
These weights represent pioneering work in tensor network compression for large language models.