AbstractPhil committed · verified · Commit 64f0d57 · Parent: 2af633f

Create README.md

---
license: apache-2.0
tags:
- chemistry
- biology
- art
---

# Pentachora Adaptive Encoded (Multi-Channel)

A geometry-regularized classifier with a 5-frequency encoder and pentachoron constellation heads.

- **Authors:** AbstractPhil · Quartermaster: Mirel (GPT-4o, GPT-5, GPT-5 Thinking, GPT-5 Fast)
- **Contributions:** Claude 4.1 Opus, Claude 4 Sonnet, Gemini
- **License:** Apache-2.0

## 📌 TL;DR

This repository hosts training runs of a frequency-aware encoder (**PentaFreq**) paired with a **pentachoron constellation classifier** (dispatchers + specialists). The model blends classic cross-entropy with two contrastive objectives (dual InfoNCE and ROSE-weighted InfoNCE) and a geometric regularizer that keeps the learned vertex geometry sane.

It supports 1-channel and 3-channel 28×28 inputs (e.g., TorchVision MNIST variants and MedMNIST 2D sets), is seeded/deterministic, and ships full artifacts (weights, plots, history, TensorBoard) for review.

## 🧠 Model overview

### Architecture

**PentaFreq Encoder (multi-channel)**

- 5 spectral branches (ultra-high, high, mid, low-mid, low) → per-branch encoders → cross-attention → MLP fusion → normalized latent `z`.
- Channel-aware: supports C ∈ {1, 3}; input is flattened to C×28×28.

**Pentachoron Constellation Classifier**

- Two stacks (dispatchers & specialists), each containing pentachora (5-vertex simplices) with learnable vertices.
- A coherence gate modulates vertex logits; group heads (one per vertex) score class subsets; pair aggregation + a fusion MLP produce the final logits.
- Geometry terms encourage valid simplex structure and separation between the two stacks.

### Objective

- **CE** – main cross-entropy on the logits.
- **Dual InfoNCE (stable)** – encourages `z` to match the correct vertex across both stacks.
- **ROSE-weighted InfoNCE (stable)** – same idea, but reweights samples by an analytic ROSE similarity (triadic cosine + magnitude).
- **Geometry regularization** – a stable Cayley–Menger proxy (eigenvalue-based), edge-variance, center separation, and a soft radius control; ramped in during the early epochs.

All contrastive losses use `log_softmax` + `gather` to avoid inf − inf traps, and all paths nan-sanitize defensively; see the sketches below.

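The exact loss code lives in the flagship notebook; the snippet below is only a minimal sketch of the stabilized pattern described above, under the simplifying assumption of one anchor per class. The names `disp_anchors`, `spec_anchors`, and `rose_similarity` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def stable_info_nce(z, vertices, labels, tau=0.07, weights=None):
    """InfoNCE via log_softmax + gather, so the target term never becomes inf - inf.

    z:        [B, D] normalized latents
    vertices: [C, D] one anchor per class (hypothetical layout)
    labels:   [B] integer class ids
    weights:  optional [B] per-sample weights (e.g. a ROSE similarity)
    """
    logits = z @ vertices.t() / tau                          # [B, C] similarities
    logp = F.log_softmax(logits, dim=1)                      # numerically stable normalizer
    nll = -logp.gather(1, labels.unsqueeze(1)).squeeze(1)    # [B] per-sample loss
    nll = torch.nan_to_num(nll, nan=0.0, posinf=0.0, neginf=0.0)
    if weights is not None:                                  # ROSE-weighted variant
        w = torch.nan_to_num(weights, nan=0.0).clamp(min=0.0)
        return (w * nll).sum() / w.sum().clamp(min=1e-6)
    return nll.mean()

# Dual InfoNCE applies the same term against both stacks (dispatchers & specialists):
#   loss = stable_info_nce(z, disp_anchors, y) + stable_info_nce(z, spec_anchors, y)
# The ROSE-weighted variant additionally passes weights=rose_similarity.
```
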
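Likewise, the geometry term is sketched here only to illustrate the ingredients named above (eigenvalue-based Cayley–Menger proxy, edge variance, center separation); the soft radius control is omitted, and the weights, constants, and exact formulation in the repository may differ.

```python
import torch

def simplex_geometry_penalty(verts_a, verts_b, eps=1e-6, max_center_sep=10.0):
    """Illustrative regularizer for a pair of pentachora (two [5, D] vertex sets)."""
    def cm_proxy(v):
        d2 = torch.cdist(v, v).pow(2)                 # [5, 5] squared edge lengths
        n = v.shape[0]
        cm = torch.ones(n + 1, n + 1, dtype=v.dtype, device=v.device)
        cm[0, 0] = 0.0
        cm[1:, 1:] = d2                               # bordered Cayley–Menger matrix
        eig = torch.linalg.eigvalsh(cm)               # eigenvalues instead of the raw determinant
        return -torch.log(eig.abs().clamp(min=eps)).mean()   # penalize a collapsing simplex

    def edge_variance(v):
        d = torch.cdist(v, v)
        mask = torch.triu(torch.ones_like(d, dtype=torch.bool), diagonal=1)
        return d[mask].var()                          # push toward near-regular edge lengths

    # soft separation between the two stack centers (clamped so it cannot dominate)
    center_sep = -(verts_a.mean(0) - verts_b.mean(0)).norm().clamp(max=max_center_sep)

    return (cm_proxy(verts_a) + cm_proxy(verts_b)
            + edge_variance(verts_a) + edge_variance(verts_b)
            + center_sep)
```
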
### Determinism

- Global seeding (Python/NumPy/Torch), deterministic DataLoader workers, generator-seeded samplers; cuDNN deterministic and TF32 off (see the helper sketched after this list).
- Optional strict mode (`torch.use_deterministic_algorithms(True)`) and deterministic cuBLAS.

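A minimal version of such a seeding helper, using only standard PyTorch/NumPy switches (the notebook's actual helper may set additional flags):

```python
import os
import random
import numpy as np
import torch

def seed_everything(seed: int = 42, strict: bool = False):
    """Seed Python, NumPy and Torch and switch off the usual non-determinism sources."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    torch.backends.cuda.matmul.allow_tf32 = False      # TF32 off
    torch.backends.cudnn.allow_tf32 = False
    if strict:
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # deterministic cuBLAS
        torch.use_deterministic_algorithms(True)

def seed_worker(worker_id: int):
    """DataLoader worker_init_fn so each worker gets a derived, reproducible seed."""
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

# usage:
# seed_everything(42)
# g = torch.Generator().manual_seed(42)
# loader = torch.utils.data.DataLoader(ds, batch_size=256, shuffle=True,
#                                      worker_init_fn=seed_worker, generator=g)
```
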
## 🗂️ Repository layout per run

Each training run uploads a complete bundle at:

```
<repo>/<root>/<DatasetName>/<Timestamp_or_best>/
  weights/
    encoder[_<Dataset>].safetensors
    constellation[_<Dataset>].safetensors
    diagnostic_head[_<Dataset>].safetensors
  config.json       # exact config used
  manifest.json     # env, params, dataset, best metrics
  history.json / history.csv
  tensorboard/      # (+ zip)
  plots/            # accuracy, loss components, lambda, confusion matrices
```

We also optionally publish a `best/` alias inside each dataset folder pointing to the current champion.

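If you want to pull a single run from the Hub programmatically, `huggingface_hub` can fetch individual files by their path inside the repo. The repo id and run folder below are placeholders; read the real ones off the layout above.

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

REPO_ID = "AbstractPhil/<repo>"   # placeholder; use the actual repository id
RUN_DIR = "<root>/MNIST/best"     # placeholder <root>/<DatasetName>/<Timestamp_or_best>

# download one weight file and the run config from the Hub cache
enc_path = hf_hub_download(repo_id=REPO_ID, filename=f"{RUN_DIR}/weights/encoder_MNIST.safetensors")
cfg_path = hf_hub_download(repo_id=REPO_ID, filename=f"{RUN_DIR}/config.json")

encoder_state = load_file(enc_path)  # a plain state_dict, ready for encoder.load_state_dict(...)
```
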
## 🧩 Intended use & use cases

**Intended use:** research-grade supervised classification and geometry-regularized representation learning on small images (28×28) across gray and color channels.

**Example use cases**

- **Benchmarking** on the MNIST family / MedMNIST 2D sets with defensible, reproducible training and complete artifacts.
- **Geometry-aware representation learning:** analyze how simplex vertices move, how the gate allocates probability mass, and how geometry regularization affects generalization.
- **Class routing / specialization:** per-vertex group heads provide an interpretable split of classes; confusion-driven vertex reweighting helps diagnose hard groups.
- **Curriculum & loss ablations:** toggle ROSE, dual InfoNCE, or geometry terms to study their marginal value under a controlled seed.
- **OOD "pressure tests" (research):** ROSE magnitude and routing entropy can be used as quick signals of uncertainty (not calibrated).
- **Education & reproducibility:** the runs are fully seeded, include TensorBoard logs and plots, and use safe numerical formulations.

## 🚫 Out-of-scope / limitations

- **Not a medical device** – even if trained on MedMNIST subsets, this is not a diagnostic tool. Do not use it for clinical decisions.
- **Input size** is 28×28; higher-resolution domains require retraining and likely architecture tweaks.
- **Dataset bias / shift** – performance depends on the underlying distribution. Evaluate before deployment.
- **Calibration** – logits are not guaranteed to be calibrated. For decision thresholds, use a validation set or post-hoc calibration (a generic temperature-scaling sketch follows this list).
- **Robustness** – robustness to adversarial perturbations is not a design goal here.

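If you do need calibrated probabilities, one standard post-hoc option is temperature scaling fit on a held-out validation split. This is a generic sketch, not part of the shipped training code:

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits_val, labels_val, iters=200, lr=0.01):
    """Fit a single temperature T > 0 by minimizing NLL on validation logits."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = F.cross_entropy(logits_val / log_t.exp(), labels_val)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# T = fit_temperature(val_logits, val_labels)
# probs = F.softmax(test_logits / T, dim=1)
```
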
## 📈 Example results (single-seed snapshots)

The numbers below are indicative, taken from our seeded runs with `img_size=28`, the size-aware LR schedule, and the regularization ramp; see `manifest.json` in each run for exact details.

| Dataset | Channels | Best test acc. | Epoch | Notes |
|---|---|---|---|---|
| MNIST / FashionMNIST* | 1 | 0.97–0.98 | 15–25 | stable losses + reg ramp |
| BloodMNIST | 3 | ~0.95–0.97+ | 20–30 | color preserved, 28×28 |
| EMNIST (balanced) | 1 | 0.88–0.92 | 25–45 | many classes; pairs auto-scaled |

\* depending on which of the pair (MNIST / FashionMNIST) is selected.
Consult each dataset folder's `history.csv` for the full learning curve and the current best accuracy.

## 🔧 How to use (PyTorch)

```python
import torch
from safetensors.torch import load_file as load_safetensors

# --- load weights (example paths) ---
ENC = "weights/encoder_MNIST.safetensors"
CON = "weights/constellation_MNIST.safetensors"
DIA = "weights/diagnostic_head_MNIST.safetensors"

# Recreate the model classes (identical definitions to the notebook)
encoder = PentaFreqEncoderV2(input_dim=28*28, input_ch=1, base_dim=56, num_heads=2, channels=12)
constellation = BatchedPentachoronConstellation(num_classes=10, dim=56, num_pairs=5, lambda_sep=0.391)
diag = RoseDiagnosticHead(56)

encoder.load_state_dict(load_safetensors(ENC))
constellation.load_state_dict(load_safetensors(CON))
diag.load_state_dict(load_safetensors(DIA))

encoder.eval(); constellation.eval()

# --- dummy inference ---
# x: [B, C, H, W] float tensor in [0, 1]; flatten to [B, C*H*W].
# Use the same normalization as training for best performance.
x = torch.rand(8, 1, 28, 28)
x_flat = x.view(x.size(0), -1)

with torch.no_grad():
    z = encoder(x_flat)                    # [B, D]
    logits, diag_out = constellation(z)    # [B, num_classes]
    pred = logits.argmax(dim=1)
print(pred)
```

To reproduce training, see `config.json` and `history.csv`; all recipes are encoded in the flagship notebook used for these runs.

## 🔬 Training procedure (default)

- **Optimizer:** AdamW (β1 = 0.9, β2 = 0.999), size-aware LR (≈2e-2 by default)
- **Schedule:** 10% warmup → cosine decay to `lr_min = 1e-6` (sketched after this list)
- **Batch size:** up to 2048 (fits on a T4/A100 at 28×28)
- **Loss:** CE + dual InfoNCE + ROSE InfoNCE + geometry regularization (ramped) + diagnostic MSE
- **Determinism:** seeds for Python/NumPy/Torch (CPU/GPU), deterministic DataLoader workers and samplers, cuDNN deterministic, TF32 off
- **Numerical safety:** log-softmax contrastive losses, eigenvalue CM proxy, `nan_to_num` guards, optional step rollback if non-finite

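A minimal sketch of that optimizer/schedule pairing; the exact hyperparameters for any given run come from its `config.json`, and the 2e-2 default below is just the documented starting point:

```python
import math
import torch

def build_optimizer_and_schedule(model, total_steps, base_lr=2e-2, lr_min=1e-6, warmup_frac=0.10):
    """AdamW with 10% linear warmup followed by cosine decay to lr_min."""
    opt = torch.optim.AdamW(model.parameters(), lr=base_lr, betas=(0.9, 0.999))
    warmup_steps = max(1, int(warmup_frac * total_steps))

    def lr_lambda(step):
        if step < warmup_steps:
            return (step + 1) / warmup_steps                     # linear warmup
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))      # 1 → 0
        floor = lr_min / base_lr
        return floor + (1.0 - floor) * cosine                    # decay toward lr_min

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched

# call opt.step() then sched.step() once per optimization step
```
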
## 📈 Evaluation

**Main metric:** top-1 accuracy on the held-out test split defined by each dataset.

Diagnostics we log:

- Routing entropy and vertex probabilities (see the sketch after this list)
- ROSE magnitudes
- Confusion matrices (per epoch and "best")
- λ (geometry ↔ attention gate) over the epochs
- Full loss decomposition

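Top-1 accuracy and the routing-entropy diagnostic reduce to a few lines; `vertex_probs` below is a hypothetical name for whatever per-vertex probability tensor the constellation exposes:

```python
import torch

def top1_accuracy(logits, labels):
    """Fraction of samples whose argmax matches the label."""
    return (logits.argmax(dim=1) == labels).float().mean().item()

def routing_entropy(vertex_probs, eps=1e-12):
    """Mean Shannon entropy of the per-sample vertex distribution [B, V].

    Low entropy = confident routing to one vertex; high entropy = diffuse routing,
    which we read (uncalibrated) as an uncertainty signal.
    """
    p = vertex_probs.clamp(min=eps)
    return -(p * p.log()).sum(dim=1).mean().item()
```
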
## 🔭 Potential for growth

- **Hypercube constellations** (classes already shipped in the notebook): scale from the 4-simplex to n-cube graphs; compare geometry families.
- **Multi-resolution:** 56→128→256 latents; 28→64→128 images; add pyramid encoders.
- **Self-distillation / semi-supervised:** use ROSE as a confidence-weighted pseudo-labeling signal (see the sketch after this list).
- **Better routing:** learned per-class vertex priors, entropy regularization, temperature schedules.
- **Calibration & OOD:** temperature scaling / Dirichlet heads; exploit ROSE magnitude and gating entropy for improved uncertainty estimates.
- **Deployment adapters:** ONNX / TorchScript exports; small mobile variants of PentaFreq.

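As an illustration of the self-distillation idea above, a confidence-weighted pseudo-labeling loss could look like the following generic sketch; using a normalized ROSE magnitude as the confidence signal is an assumption here, not shipped code:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_unlabeled, confidence, threshold=0.9):
    """Confidence-weighted pseudo-labeling on unlabeled data.

    logits_unlabeled: [B, C] model outputs on unlabeled samples
    confidence:       [B] per-sample confidence in [0, 1] (e.g. a normalized ROSE magnitude)
    """
    pseudo = logits_unlabeled.detach().argmax(dim=1)             # hard pseudo-labels
    per_sample = F.cross_entropy(logits_unlabeled, pseudo, reduction="none")
    w = confidence * (confidence >= threshold).float()           # keep only confident samples
    return (w * per_sample).sum() / w.sum().clamp(min=1e-6)
```
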
## ⚖️ Ethical considerations & implications

- Clinical datasets (MedMNIST) are simplified proxies; they do not reflect clinical complexity or demographic coverage.
- Downstream use must include dataset-appropriate validation and calibration; this model is for research only.
- Data bias and label noise can be amplified by strong geometry priors; review confusion matrices and per-class accuracies before claiming improvements.
- Positive implications: the constellation design offers a transparent, analyzable structure (per-vertex heads, explicit geometry), easing interpretability and ablation.

## 🔁 Reproducibility

- `config.json` contains all hyperparameters used for each run.
- `manifest.json` logs the environment: Python, Torch, CUDA, GPU, RAM, and parameter counts.
- Seeds and determinism flags are printed in the logs and set in code.
- `history.csv` + TensorBoard fully specify the learning trajectory.

214
+ 🧾 License
215
+
216
+ Apache License 2.0 – see LICENSE.
217
+
218
+ 📣 Citation
219
+
220
+ If you use this work, please cite:
221
+
222
+ @software{abstractphil_pentachora_2025,
223
+ author = {AbstractPhil and Mirel},
224
+ title = {Pentachora Adaptive Encoded: Geometry-Regularized Classification with PentaFreq},
225
+ year = {2025},
226
+ license = {Apache-2.0},
227
+ url = {https://huggingface.co/AbstractPhil/<repo>}
228
+ }
229
+
## 🛠️ Changelog (excerpt)

- **2025-08:** Flagship notebook stabilized (stable losses, eigenvalue CM proxy, NaN rollback, deterministic sweep).
- **2025-08:** Multi-channel PentaFreq; per-dataset HF folders with full artifacts; optional `best/` alias.
- **2025-08:** Hypercube constellation classes added for follow-up experiments.

## 💬 Contact

- **Author:** @AbstractPhil
- **Quartermaster:** Mirel (ChatGPT – GPT-5 Thinking)
- **Issues / questions:** open a Discussion on the HF repo or ping the author.

**Notes for reviewers:** every dataset folder contains a complete artifact bundle. Start with `manifest.json` and `history.csv`; the plots and TensorBoard logs give the quickest intuition of convergence and geometry behavior.