# Pentachora Adaptive Encoded (Multi-Channel)

**A geometry-regularized classifier with a 5-frequency encoder and pentachoron constellation heads.**

*Author:* **AbstractPhil** · *Quartermaster:* **Mirel** (GPT-4o, GPT-5, GPT-5 Fast, GPT-5 Thinking, GPT-5 Pro)
*Assistants:* Claude Opus 4.1, Claude Sonnet 4, Gemini 2.5
*License:* **Apache-2.0**

---

## 📌 TL;DR

This repository hosts training runs of a **frequency-aware encoder** (PentaFreq) paired with a **pentachoron constellation classifier** (dispatchers + specialists). The model blends classic cross-entropy with **two contrastive objectives** (dual InfoNCE and **ROSE-weighted** InfoNCE) and a **geometric regularizer** that keeps the learned vertex geometry sane.
It supports **1-channel and 3-channel** 28×28 inputs (e.g., TorchVision MNIST variants and MedMNIST 2D sets), is **seeded/deterministic**, and ships full artifacts (weights, plots, history, TensorBoard) for review.

---
 
## 🧠 Model overview

### Architecture

- **PentaFreq Encoder (multi-channel)**
  - 5 spectral branches (ultra-high, high, mid, low-mid, low) → per-branch encoders → cross-attention → MLP fusion → **normalized latent `z`** (see the band-split sketch after this list).
  - Channel-aware: supports **C ∈ {1,3}**; input is flattened to `C×28×28`.

- **Pentachoron Constellation Classifier**
  - **Two stacks** (dispatchers & specialists), each containing **pentachora** (5-vertex simplices) with learnable vertices.
  - **Coherence gate** modulates vertex logits; **group heads** (one per vertex) score class subsets; **pair aggregation** + a fusion MLP produce the final logits.
  - Geometry terms encourage valid simplex structure and separation between the two stacks.
 
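To make the five-band idea concrete, here is a minimal, illustrative band-split using radial FFT masks. The band edges, masking scheme, and function name are assumptions for illustration only; the actual PentaFreq branches are learned encoder modules and may split the spectrum differently.

```python
import torch

def frequency_bands(x: torch.Tensor, edges=(0.15, 0.3, 0.5, 0.75)):
    """Split images x [B, C, H, W] into 5 spectral bands (low → ultra-high) via radial FFT masks."""
    _, _, H, W = x.shape
    fy = torch.fft.fftfreq(H, device=x.device).view(H, 1)
    fx = torch.fft.fftfreq(W, device=x.device).view(1, W)
    radius = torch.sqrt(fy ** 2 + fx ** 2) / (0.5 * 2 ** 0.5)   # normalized radial frequency in [0, 1]
    X = torch.fft.fft2(x)
    bands, covered = [], torch.zeros_like(radius, dtype=torch.bool)
    for edge in (*edges, float("inf")):
        keep = (radius <= edge) & ~covered                      # annulus for this band
        bands.append(torch.fft.ifft2(X * keep.to(x.dtype)).real)
        covered |= keep
    return bands                                                # 5 tensors, each [B, C, H, W]

x = torch.rand(2, 1, 28, 28)
print([tuple(b.shape) for b in frequency_bands(x)])
```

Each band would then feed its own branch encoder before cross-attention and MLP fusion.
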
### Objective

- **CE** – main cross-entropy on the logits.
- **Dual InfoNCE (stable)** – encourages `z` to match the **correct vertex** across both stacks.
- **ROSE-weighted InfoNCE (stable)** – same idea, but reweights samples by an analytic **ROSE** similarity (triadic cosine + magnitude).
- **Geometry Regularization** – a stable Cayley–Menger **proxy** (eigenvalue-based), edge-variance, center separation, and a **soft radius control**; ramped in early epochs.

> All contrastive losses use `log_softmax` + `gather` to avoid `inf − inf` traps; all paths **nan-sanitize** defensively (see the sketch below).
 
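A minimal sketch of that "stable" contrastive formulation: log-softmax plus gather, with optional per-sample weights standing in for the ROSE-weighted variant. Function and argument names are illustrative, not the repository's exact API.

```python
import torch
import torch.nn.functional as F

def stable_info_nce(z, vertices, target_idx, tau=0.07, weights=None):
    """z: [B, D] latents; vertices: [V, D] vertex embeddings; target_idx: [B] correct vertex index."""
    sims = F.normalize(z, dim=-1) @ F.normalize(vertices, dim=-1).T   # cosine similarities [B, V]
    logp = F.log_softmax(sims / tau, dim=-1)                          # no explicit exp/sum, so no inf - inf
    nll = -logp.gather(1, target_idx.unsqueeze(1)).squeeze(1)         # [B]
    if weights is not None:                                           # e.g., ROSE similarity weights
        nll = nll * weights
    return torch.nan_to_num(nll, nan=0.0, posinf=0.0, neginf=0.0).mean()

z = torch.randn(8, 64)
vertices = torch.randn(5, 64)
target = torch.randint(0, 5, (8,))
print(stable_info_nce(z, vertices, target))
```

The dual term would apply this once per stack; the ROSE-weighted term reuses it with `weights` derived from the analytic similarity.
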
### Determinism

- Global seeding (Python/NumPy/Torch), deterministic DataLoader workers, generator-seeded samplers; cuDNN deterministic & TF32 off (see the seeding sketch below).
- Optional strict mode (`torch.use_deterministic_algorithms(True)`) and deterministic cuBLAS.
 
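A seeding sketch matching the bullets above; the helper names are ours, but the flags are the standard PyTorch determinism switches.

```python
import os
import random
import numpy as np
import torch

def seed_everything(seed: int = 42, strict: bool = False):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    torch.backends.cuda.matmul.allow_tf32 = False     # TF32 off
    torch.backends.cudnn.allow_tf32 = False
    if strict:
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # deterministic cuBLAS
        torch.use_deterministic_algorithms(True)

def seed_worker(worker_id: int):
    worker_seed = torch.initial_seed() % 2 ** 32             # deterministic DataLoader workers
    np.random.seed(worker_seed)
    random.seed(worker_seed)

seed_everything(42)
g = torch.Generator().manual_seed(42)                         # generator-seeded sampler / shuffle
# loader = torch.utils.data.DataLoader(dataset, batch_size=2048, shuffle=True,
#                                      worker_init_fn=seed_worker, generator=g)
```
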
---

## 🗂️ Repository layout per run

Each training run uploads a complete bundle at:

```
<repo>/<root>/<DatasetName>/<Timestamp_or_best>/
  weights/
    encoder[_<Dataset>].safetensors
    ...
  history.json / history.csv
  tensorboard/   (+ zip)
  plots/         # accuracy, loss components, lambda, confusion matrices
```

> We also optionally publish a **`best/`** alias inside each dataset folder pointing to the current champion.
 
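A run bundle can be pulled from the Hub with `huggingface_hub`; the dataset folder and `best/` pattern below are placeholders, so adjust them to the run you actually want.

```python
from huggingface_hub import snapshot_download

# Downloads one run's artifact bundle (weights, history, plots, tensorboard) to a local cache.
local_dir = snapshot_download(
    repo_id="AbstractPhil/pentachora-multi-channel-frequency-encoded",
    allow_patterns=["*BloodMNIST/best/*"],   # placeholder: pick a <DatasetName>/<Timestamp_or_best>/
)
print(local_dir)
```
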
---

## 🧩 Intended use & use cases

**Intended use:** research-grade supervised classification and geometry-regularized representation learning on small images (28×28) across gray and color channels.

**Example use cases**

- **Benchmarking** on the MNIST family / MedMNIST 2D sets with defensible, reproducible training and complete artifacts.
- **Geometry-aware representation learning**: analyze how simplex vertices move, how the gate allocates probability mass, and how geometry regularization affects generalization.
- **Class routing / specialization**: per-vertex group heads provide an interpretable split of classes; confusion-driven vertex reweighting helps diagnose hard groups.
- **Curriculum & loss ablations**: toggle ROSE, dual InfoNCE, or geometry terms to study their marginal value under a controlled seed.
- **OOD “pressure tests”** (research): ROSE magnitude and routing entropy can be used as quick signals of uncertainty (not calibrated).
- **Education & reproducibility**: runs are fully seeded, include TensorBoard logs and plots, and use safe numerical formulations.

---
 
## 🚫 Out-of-scope / limitations

- **Not a medical device** – even if trained on MedMNIST subsets, this is not a diagnostic tool. Do not use it for clinical decisions.
- **Input size** – inputs are 28×28; higher-resolution domains require retraining and likely architecture changes.
- **Dataset bias / shift** – performance depends on the underlying distribution. Evaluate before deployment.
- **Calibration** – logits are not guaranteed to be calibrated. For decision thresholds, use a validation set or post-hoc calibration.
- **Robustness** – adversarial robustness is not a design goal here.

---
 
## 📈 Example results (single-seed snapshots)

> The numbers below are indicative, taken from our seeded runs with `img_size=28`, the size-aware LR schedule, and the regularization ramp; see `manifest.json` in each run for exact details.

| Dataset        | C | Best Test Acc | Epoch | Notes                           |
|----------------|---|--------------:|------:|---------------------------------|
| MNIST/Fashion* | 1 | 0.97–0.98     | 15–25 | stable losses + reg ramp        |
| BloodMNIST     | 3 | ~0.95–0.97+   | 20–30 | color preserved, 28×28          |
| EMNIST (bal)   | 1 | 0.88–0.92     | 25–45 | many classes; pairs auto-scaled |

\* Depending on which of the pair (MNIST / FashionMNIST) is selected.
Consult each dataset folder’s `history.csv` for the full learning curve and the **current best** accuracy.
 
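For the full curves, each run's `history.csv` can be loaded directly (pandas is used here only for convenience); the column names are assumptions, so inspect the header of the file you download.

```python
import pandas as pd

hist = pd.read_csv("history.csv")            # from the run folder described above
print(hist.columns.tolist())                 # inspect the actual column names first
acc_col = "test_acc" if "test_acc" in hist.columns else hist.columns[-1]   # assumed column name
print("best:", hist[acc_col].max(), "at epoch", int(hist[acc_col].idxmax()))
```
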
---

## 🔧 How to use (PyTorch)

```python
import torch
from safetensors.torch import load_file as load_safetensors

# ... build the PentaFreq encoder and the constellation, then load their weights
#     (this middle step is elided in this excerpt) ...

with torch.no_grad():
    logits, diag_out = constellation(z)   # [B, C]
    pred = logits.argmax(dim=1)
    print(pred)
```

> To reproduce training, see `config.json` and `history.csv`; all recipes are encoded in the flagship notebook used for these runs. A hedged sketch of the elided loading step follows.
 
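A hedged sketch of that middle part: the weight file names below follow the `weights/` layout shown earlier and are assumptions, and the encoder/constellation classes come from the flagship notebook, so their instantiation is left as comments rather than invented here.

```python
import torch
from safetensors.torch import load_file as load_safetensors

# Assumed file names; check the weights/ folder of the run you downloaded.
enc_state = load_safetensors("weights/encoder_BloodMNIST.safetensors")
con_state = load_safetensors("weights/constellation_BloodMNIST.safetensors")
print(sum(t.numel() for t in enc_state.values()), "encoder params")
print(sum(t.numel() for t in con_state.values()), "constellation params")

# encoder = PentaFreqEncoder(...)                 # hypothetical name: build from the notebook's classes
# constellation = PentachoronConstellation(...)   # hypothetical name
# encoder.load_state_dict(enc_state); constellation.load_state_dict(con_state)
# encoder.eval(); constellation.eval()
# z = encoder(torch.rand(4, 3, 28, 28).flatten(1))   # flattened C×28×28 input → latent z
```
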
---
 
## 🔬 Training procedure (default)

- **Optimizer**: AdamW (β1=0.9, β2=0.999), size-aware LR (≈2e-2 by default)
- **Schedule**: 10% **warmup** → cosine decay to `lr_min=1e-6` (see the scheduler sketch below)
- **Batch size**: up to 2048 (fits on a T4/A100 at 28×28)
- **Loss**: CE + Dual InfoNCE + ROSE InfoNCE + Geometry Reg (ramped) + Diag MSE
- **Determinism**: seeds for Python/NumPy/Torch (CPU/GPU), deterministic DataLoader workers and samplers, cuDNN deterministic, TF32 off
- **Numerical safety**: log-softmax contrastive losses, eigval CM proxy, `nan_to_num` guards, optional step rollback if non-finite
 
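A sketch of the stated schedule (10% linear warmup into a cosine decay to `lr_min=1e-6`) using `SequentialLR`; the repository's size-aware LR logic may differ, and the model here is a stand-in.

```python
import torch

model = torch.nn.Linear(784, 10)                       # stand-in for encoder + constellation parameters
opt = torch.optim.AdamW(model.parameters(), lr=2e-2, betas=(0.9, 0.999))

epochs, steps_per_epoch = 30, 24
total_steps = epochs * steps_per_epoch
warmup_steps = max(1, int(0.1 * total_steps))          # 10% warmup

warmup = torch.optim.lr_scheduler.LinearLR(opt, start_factor=1e-3, total_iters=warmup_steps)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=total_steps - warmup_steps, eta_min=1e-6)
sched = torch.optim.lr_scheduler.SequentialLR(opt, schedulers=[warmup, cosine], milestones=[warmup_steps])

for _ in range(total_steps):                           # forward/backward omitted in this sketch
    opt.step()
    sched.step()
print("final lr:", sched.get_last_lr())
```
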
---

## 📈 Evaluation

- Main metric: **top-1 accuracy** on the held-out test split defined by each dataset (see the sketch below).
- Diagnostics we log:
  - **Routing entropy** and vertex probabilities
  - **ROSE** magnitudes
  - Confusion matrices (per epoch and at the “best” checkpoint)
  - λ (geometry ↔ attention gate) over epochs
  - Full loss decomposition
 
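A minimal sketch of the main metric and the confusion-matrix diagnostic; the dummy batch below stands in for iterating a real test loader through the encoder and constellation.

```python
import torch

num_classes = 10
confusion = torch.zeros(num_classes, num_classes, dtype=torch.long)
correct = total = 0

# stand-in for: for x, labels in test_loader: logits, _ = constellation(encoder(x.flatten(1)))
for logits, labels in [(torch.randn(32, num_classes), torch.randint(0, num_classes, (32,)))]:
    preds = logits.argmax(dim=1)
    correct += (preds == labels).sum().item()
    total += labels.numel()
    confusion.index_put_((labels, preds), torch.ones_like(labels), accumulate=True)

print("top-1 accuracy:", correct / total)
print(confusion)
```
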
---

## 🔭 Potential for growth

- **Hypercube constellations** (classes already shipped in the notebook): scale from the 4-simplex to n-cube graphs; compare geometry families.
- **Multi-resolution** (56→128→256 latent; 28→64→128 images); add pyramid encoders.
- **Self-distillation / semi-supervised**: use ROSE as a confidence-weighted pseudo-labeling signal.
- **Better routing**: learned per-class vertex priors, entropy regularization, temperature schedules.
- **Calibration & OOD**: temperature scaling / Dirichlet heads; exploit ROSE magnitude and gating entropy for improved uncertainty estimates.
- **Deployment adapters**: ONNX / TorchScript exports; small mobile variants of PentaFreq (a minimal export sketch follows).
 
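One way the "deployment adapters" item could look: wrap the encoder and constellation in a single module and export it. The wrapper class is a placeholder, and the export line is left commented because it needs the real trained modules from the notebook.

```python
import torch

class PentachoraPipeline(torch.nn.Module):
    """Placeholder wrapper: flatten the image, encode to z, return class logits only."""
    def __init__(self, encoder: torch.nn.Module, constellation: torch.nn.Module):
        super().__init__()
        self.encoder = encoder
        self.constellation = constellation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x.flatten(1))
        logits, _ = self.constellation(z)   # constellation returns (logits, diagnostics)
        return logits

# pipe = PentachoraPipeline(encoder, constellation).eval()
# torch.onnx.export(pipe, torch.rand(1, 3, 28, 28), "pentachora.onnx", opset_version=17)
```
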
---

## ⚖️ Ethical considerations & implications

- **Clinical datasets** (MedMNIST) are simplified proxies; they don’t reflect clinical complexity or demographic coverage.
- **Downstream use** must include dataset-appropriate validation and calibration; this model is for **research** only.
- **Data bias** and **label noise** can be amplified by strong geometry priors – review confusion matrices and per-class accuracies before claiming improvements.
- **Positive implications**: the constellation design offers a **transparent, analyzable structure** (per-vertex heads, explicit geometry), easing **interpretability** and **ablation**.

---

## 🔁 Reproducibility

- `config.json` contains all hyperparameters used for each run.
- `manifest.json` logs the environment: Python, Torch, CUDA, GPU, RAM, and parameter counts.
- Seeds and determinism flags are printed in logs and set in code.
- `history.csv` + TensorBoard fully specify the learning trajectory.

---

## 🧾 License

**Apache License 2.0** – see `LICENSE`.

---

## 📣 Citation

If you use this work, please cite:

```bibtex
@software{abstractphil_pentachora_2025,
  author  = {AbstractPhil and Mirel},
  title   = {Pentachora Adaptive Encoded: Geometry-Regularized Classification with PentaFreq},
  year    = {2025},
  license = {Apache-2.0},
  url     = {https://huggingface.co/AbstractPhil/pentachora-multi-channel-frequency-encoded}
}
```

---

## 🛠️ Changelog (excerpt)

- **2025-08**: Flagship notebook stabilized (stable losses, eigval CM proxy, NaN rollback, deterministic sweep).
- **2025-08**: Multi-channel PentaFreq; per-dataset HF folders with full artifacts; optional `best/` alias.
- **2025-08**: Hypercube constellation classes added for follow-up experiments.

---

## 💬 Contact

- **Author:** @AbstractPhil
- **Quartermaster:** Mirel (ChatGPT – GPT-5 Thinking)
- **Issues / questions:** open a Discussion on the HF repo or ping the author