klamike committed
Commit ffc371a · verified · 1 parent: 0f9d0c2

Convert dataset to Parquet (part 00004-of-00005) (#5)


- Convert dataset to Parquet (part 00004-of-00005) (b5347603866e2c15993ba6020c8de326a674acc0)
- Delete loading script (41c3f9131c2b34c0b1a581b5ee70b318f5824af0)
- Delete data file (019314ab2e760c21b76fbde489548123613de83d)
- Delete data file (c9c0e710a10b08e4953063671d1b6a6a068fa129)
- Delete data file (482a830eba4dc5453a409243c9362833a008ff09)
- Delete data file (fd2dfef98141c2e6d175303591df968d4d4f8161)
- Delete data file (3f5722bb93997a580217683a2f66a81d68b7247d)
- Delete data file (cc6c6de4d9e02cbd61e06a503e454e0499a091da)
- Delete data file (77e24198e14be0b9b162ef1f7385990c9768e5b6)
- Delete data file (8ecdd7cd00124adcb7e311f11fb07f1d91a5de7d)
- Delete data file (e76f7d453583fcf812ef4566686ff161395a27bb)
- Delete data file (a385105dc0e2c6ac158ed9003051fcb79dca621a)
- Delete data file (d510b6a2c5b19de056e13b36d74658a591691d5d)
- Delete data file (97baac392f72361529bceb8defaadc8575565ce2)
- Delete data file (f84beb9bfaa630ce3f3a83b4f21f1887bd5e0caa)
- Delete data file (63c44e9c168eed21d417dc8e173c52f5c77b6c9d)
- Delete data file (405a7efb6e5bceae11af96834e641d8bc046af16)
- Delete data file (c9049fe136e30b071fa7df5d4e1a8a4022b1e5c8)
- Delete data file (43a72cb5db989cd44dba0cfd54c0b14ac429b674)
- Delete data file (bb6b51fc5931af3cd3f35257f1c5481b03b886ce)
- Delete data file (ae42ee33d3b316f766855c6a755a4f1df12efa07)
- Delete data file (2a88faa9a67251d5c21154fcda7dfa3008e380ef)
- Delete data file (40671b75bf7a8c517dd08e312064a728328bc080)
- Delete data file (c1dde21c0cfc6d66ddd000ec66f84d872137de25)
- Delete data file (f23e8c69fd5787cc45c1b7f05f723d315a21ed9f)
- Delete data file (ffd76d57ea3bab2a8b9cbe89938c8e94b86fc387)
- Delete data file (13ffc58ea3c61e259ca4af56cfca569ca506b561)
- Delete data file (bc35d522bebe2fef3371c857d7774073353463d2)
- Delete data file (1f73be5900d8ce6466c325442ce936dccd5e4925)
- Delete data file (2f5365c647567f5581ba4c258ffae842375cb29b)
- Delete data file (7bdc9eaf1c29ce2e8c8a7d8c6b89b02d8d1641ee)
- Delete data file (64202290a91ed9cd2dd46a1a1edb3b5122c559ed)
- Delete data file (a7c25be054a9c2675574ae9ea3853ca5b7681f11)
- Delete data file (cdcfa892642d979e50c95df28355d80b352f8635)

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the rest.
Files changed (50)
  1. infeasible/ACOPF/meta.h5.gz → 13659_pegase/test-00021-of-00045.parquet +2 -2
  2. case.json.gz → 13659_pegase/test-00022-of-00045.parquet +2 -2
  3. infeasible/DCOPF/meta.h5.gz → 13659_pegase/test-00023-of-00045.parquet +2 -2
  4. infeasible/SOCOPF/meta.h5.gz → 13659_pegase/test-00024-of-00045.parquet +2 -2
  5. 13659_pegase/test-00025-of-00045.parquet +3 -0
  6. 13659_pegase/test-00026-of-00045.parquet +3 -0
  7. 13659_pegase/test-00027-of-00045.parquet +3 -0
  8. 13659_pegase/test-00028-of-00045.parquet +3 -0
  9. 13659_pegase/test-00029-of-00045.parquet +3 -0
  10. 13659_pegase/test-00030-of-00045.parquet +3 -0
  11. 13659_pegase/test-00031-of-00045.parquet +3 -0
  12. 13659_pegase/test-00032-of-00045.parquet +3 -0
  13. 13659_pegase/test-00033-of-00045.parquet +3 -0
  14. 13659_pegase/test-00034-of-00045.parquet +3 -0
  15. 13659_pegase/test-00035-of-00045.parquet +3 -0
  16. 13659_pegase/test-00036-of-00045.parquet +3 -0
  17. 13659_pegase/test-00037-of-00045.parquet +3 -0
  18. 13659_pegase/test-00038-of-00045.parquet +3 -0
  19. 13659_pegase/test-00039-of-00045.parquet +3 -0
  20. 13659_pegase/test-00040-of-00045.parquet +3 -0
  21. 13659_pegase/test-00041-of-00045.parquet +3 -0
  22. 13659_pegase/test-00042-of-00045.parquet +3 -0
  23. 13659_pegase/test-00043-of-00045.parquet +3 -0
  24. 13659_pegase/test-00044-of-00045.parquet +3 -0
  25. PGLearn-ExtraLarge-13659_pegase.py +0 -427
  26. README.md +9 -1
  27. config.toml +0 -42
  28. infeasible/ACOPF/dual.h5.gz +0 -3
  29. infeasible/ACOPF/primal.h5.gz +0 -3
  30. infeasible/DCOPF/dual.h5.gz +0 -3
  31. infeasible/DCOPF/primal.h5.gz +0 -3
  32. infeasible/SOCOPF/dual.h5.gz +0 -3
  33. infeasible/SOCOPF/primal.h5.gz +0 -3
  34. infeasible/input.h5.gz +0 -3
  35. test/ACOPF/dual.h5.gz +0 -3
  36. test/ACOPF/meta.h5.gz +0 -3
  37. test/ACOPF/primal.h5.gz +0 -3
  38. test/DCOPF/dual.h5.gz +0 -3
  39. test/DCOPF/meta.h5.gz +0 -3
  40. test/DCOPF/primal.h5.gz +0 -3
  41. test/SOCOPF/dual.h5.gz +0 -3
  42. test/SOCOPF/meta.h5.gz +0 -3
  43. test/SOCOPF/primal.h5.gz +0 -3
  44. test/input.h5.gz +0 -3
  45. train/ACOPF/dual.h5.gz +0 -3
  46. train/ACOPF/meta.h5.gz +0 -3
  47. train/ACOPF/primal.h5.gz +0 -3
  48. train/DCOPF/dual.h5.gz +0 -3
  49. train/DCOPF/meta.h5.gz +0 -3
  50. train/DCOPF/primal.h5.gz +0 -3
infeasible/ACOPF/meta.h5.gz → 13659_pegase/test-00021-of-00045.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:66bed0e566ed87302d2064422ad608a2047a66002f7694b42b821569b5458c65
- size 864552
+ oid sha256:881aaf60ac7b523bc841c762c97de88502d5861cd36c4c0ffa199f0686c207a8
+ size 485404540
case.json.gz → 13659_pegase/test-00022-of-00045.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0baac391ef868bdd095ebd29e58c8842b3e9ee38908425ef72b63aa26809491a
- size 11384812
+ oid sha256:748aed1c68d03a79a3f5f74dded4e592fff2e1dcb5e061d4e495ff86a1f34bcd
+ size 485256702
infeasible/DCOPF/meta.h5.gz → 13659_pegase/test-00023-of-00045.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0c6390ca07af3f5214e4df07ec35901a046b0fa2e552aabff90038a51124d0b3
- size 863156
+ oid sha256:b096a56c4e79e751026429d3c3ef199b28ddbe59215fc0d406f6725f6e054430
+ size 481427206
infeasible/SOCOPF/meta.h5.gz → 13659_pegase/test-00024-of-00045.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:15d83007acddb37e9174e3c22ad9ada6d170fb386113a21d1cb17ba928201f16
- size 715114
+ oid sha256:165a192ffeb8157aeed26f747e57b5afe36f756827833d001b45dfe250f6fe27
+ size 481350752
13659_pegase/test-00025-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a3c32ff87f8899fca3edcc25a6d1017cba4b5f523e341081497b4172dc4920b
+ size 481145734
13659_pegase/test-00026-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5ea0bf879e825ebc5a9f67ae62dc79e57dd476765eb69d6bb16370e08162d6e
+ size 481606156
13659_pegase/test-00027-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c80c7c974f6e821114c2db9ed0bb1d7c6189042668f0256d27ea60e35a27892f
+ size 481729767
13659_pegase/test-00028-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b69f7571246339654d2a51e16832009c712953d45814755fe38216d2ee807777
+ size 481811129
13659_pegase/test-00029-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21e460a0750b8e43ee959dbb2d83f0af39a100bacc4b7fdbdc2def7942538052
+ size 481739172
13659_pegase/test-00030-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b926617b5ea585e63d503d3395157b8ad2d8e30ce726ce818bb2ae678cf77d5b
+ size 481739152
13659_pegase/test-00031-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5100db46624f4341e33f16d4d767aa5027e75e339f648c59177d0807e8f4560
+ size 481337377
13659_pegase/test-00032-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e7d27817e4b96280c2d6e1159988baeba70a887f0993d549c79e38d52937e76
+ size 481158230
13659_pegase/test-00033-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8e9e8af4568f797e6ac1bcced37b424704ede86fd7c22564efda71276b75748
+ size 481530769
13659_pegase/test-00034-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9120314f5ce2e7c2fd5aba15881227cf734c16b05d07491f4a047a808ff2d2cd
+ size 481536228
13659_pegase/test-00035-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:486905681218bfaf75310a60be893b19e23922114222c7d37c8d7aa868bf9575
+ size 481576812
13659_pegase/test-00036-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31a29c38c01a26204e448cba8971e4a9c074091297a3d4c098675244dd4fea97
+ size 481442248
13659_pegase/test-00037-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57a2833cba25a8188bd9455624e570fa97e7e4f57f8c72e80a5d290e0edce962
+ size 481248384
13659_pegase/test-00038-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:967f9d71e06b2624a7633ef52b0d9f4fbc414b19c6dc05a11968121cfed53d92
+ size 481159329
13659_pegase/test-00039-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0559fb996d95aa5ef39d3ca010c1b4275f54f6063a33c39c0eaa8cefe80f3c31
+ size 481393283
13659_pegase/test-00040-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6b31dda36dfc98bdb1e5d3aba3aa7a189cf096cd738f536fce9968b13653624
+ size 481177795
13659_pegase/test-00041-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e502073d85a0472d2e8a353fed80866a845ee9d284aca36b57a47ac1e12ec20a
+ size 481261567
13659_pegase/test-00042-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba2fcfc8d3c826839e5675b32e7ac70ebabfae262bfa61060df7c12b165fd3f2
+ size 481474395
13659_pegase/test-00043-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0001297282fc8b01954d3794db930e9f3007392e78175d1392daf4c9ab2132f3
+ size 481712783
13659_pegase/test-00044-of-00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3ab99bb9acd13dd8c4902f4cf07c637e4f40752478fcf01a1eed766028de784
+ size 481656050
PGLearn-ExtraLarge-13659_pegase.py DELETED
@@ -1,427 +0,0 @@
- from __future__ import annotations
- from dataclasses import dataclass
- from pathlib import Path
- import json
- import shutil
-
- import datasets as hfd
- import h5py
- import pgzip as gzip
- import pyarrow as pa
-
- # ┌──────────────┐
- # │   Metadata   │
- # └──────────────┘
-
- @dataclass
- class CaseSizes:
-     n_bus: int
-     n_load: int
-     n_gen: int
-     n_branch: int
-
- CASENAME = "13659_pegase"
- SIZES = CaseSizes(n_bus=13659, n_load=5544, n_gen=4092, n_branch=20467)
- NUM_TRAIN = 19532
- NUM_TEST = 4883
- NUM_INFEASIBLE = 25585
- SPLITFILES = {}
-
- URL = "https://huggingface.co/datasets/PGLearn/PGLearn-ExtraLarge-13659_pegase"
- DESCRIPTION = """\
- The 13659_pegase PGLearn optimal power flow dataset, part of the PGLearn-ExtraLarge collection. \
- """
- VERSION = hfd.Version("1.0.0")
- DEFAULT_CONFIG_DESCRIPTION="""\
- This configuration contains feasible input, primal solution, and dual solution data \
- for the ACOPF, DCOPF, and SOCOPF formulations on the {case} system. For case data, \
- download the case.json.gz file from the `script` branch of the repository. \
- https://huggingface.co/datasets/PGLearn/PGLearn-ExtraLarge-13659_pegase/blob/script/case.json.gz
- """
- USE_ML4OPF_WARNING = """
- ================================================================================================
- Loading PGLearn-ExtraLarge-13659_pegase through the `datasets.load_dataset` function may be slow.
-
- Consider using ML4OPF to directly convert to `torch.Tensor`; for more info see:
- https://github.com/AI4OPT/ML4OPF?tab=readme-ov-file#manually-loading-data
-
- Or, use `huggingface_hub.snapshot_download` and an HDF5 reader; for more info see:
- https://huggingface.co/datasets/PGLearn/PGLearn-ExtraLarge-13659_pegase#downloading-individual-files
- ================================================================================================
- """
- CITATION = """\
- @article{klamkinpglearn,
-     title={{PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow}},
-     author={Klamkin, Michael and Tanneau, Mathieu and Van Hentenryck, Pascal},
-     year={2025},
- }\
- """
-
- IS_COMPRESSED = True
-
- # ┌──────────────────┐
- # │   Formulations   │
- # └──────────────────┘
-
- def acopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(acopf_primal_features(sizes))
-     if dual: features.update(acopf_dual_features(sizes))
-     if meta: features.update({f"ACOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def dcopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(dcopf_primal_features(sizes))
-     if dual: features.update(dcopf_dual_features(sizes))
-     if meta: features.update({f"DCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def socopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(socopf_primal_features(sizes))
-     if dual: features.update(socopf_dual_features(sizes))
-     if meta: features.update({f"SOCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- FORMULATIONS_TO_FEATURES = {
-     "ACOPF": acopf_features,
-     "DCOPF": dcopf_features,
-     "SOCOPF": socopf_features,
- }
-
- # ┌───────────────────┐
- # │   BuilderConfig   │
- # └───────────────────┘
-
- class PGLearnLarge13659_pegaseConfig(hfd.BuilderConfig):
-     """BuilderConfig for PGLearn-ExtraLarge-13659_pegase.
-     By default, primal solution data, metadata, input, casejson, are included for the train and test splits.
-
-     To modify the default configuration, pass attributes of this class to `datasets.load_dataset`:
-
-     Attributes:
-         formulations (list[str]): The formulation(s) to include, e.g. ["ACOPF", "DCOPF"]
-         primal (bool, optional): Include primal solution data. Defaults to True.
-         dual (bool, optional): Include dual solution data. Defaults to False.
-         meta (bool, optional): Include metadata. Defaults to True.
-         input (bool, optional): Include input data. Defaults to True.
-         casejson (bool, optional): Include case.json data. Defaults to True.
-         train (bool, optional): Include training samples. Defaults to True.
-         test (bool, optional): Include testing samples. Defaults to True.
-         infeasible (bool, optional): Include infeasible samples. Defaults to False.
-     """
-     def __init__(self,
-         formulations: list[str],
-         primal: bool=True, dual: bool=False, meta: bool=True, input: bool = True, casejson: bool=True,
-         train: bool=True, test: bool=True, infeasible: bool=False,
-         compressed: bool=IS_COMPRESSED, **kwargs
-     ):
-         super(PGLearnLarge13659_pegaseConfig, self).__init__(version=VERSION, **kwargs)
-
-         self.case = CASENAME
-         self.formulations = formulations
-
-         self.primal = primal
-         self.dual = dual
-         self.meta = meta
-         self.input = input
-         self.casejson = casejson
-
-         self.train = train
-         self.test = test
-         self.infeasible = infeasible
-
-         self.gz_ext = ".gz" if compressed else ""
-
-     @property
-     def size(self):
-         return SIZES
-
-     @property
-     def features(self):
-         features = {}
-         if self.casejson: features.update(case_features())
-         if self.input: features.update(input_features(SIZES))
-         for formulation in self.formulations:
-             features.update(FORMULATIONS_TO_FEATURES[formulation](SIZES, self.primal, self.dual, self.meta))
-         return hfd.Features(features)
-
-     @property
-     def splits(self):
-         splits: dict[hfd.Split, dict[str, str | int]] = {}
-         if self.train:
-             splits[hfd.Split.TRAIN] = {
-                 "name": "train",
-                 "num_examples": NUM_TRAIN
-             }
-         if self.test:
-             splits[hfd.Split.TEST] = {
-                 "name": "test",
-                 "num_examples": NUM_TEST
-             }
-         if self.infeasible:
-             splits[hfd.Split("infeasible")] = {
-                 "name": "infeasible",
-                 "num_examples": NUM_INFEASIBLE
-             }
-         return splits
-
-     @property
-     def urls(self):
-         urls: dict[str, None | str | list] = {
-             "case": None, "train": [], "test": [], "infeasible": [],
-         }
-
-         if self.casejson:
-             urls["case"] = f"case.json" + self.gz_ext
-         else:
-             urls.pop("case")
-
-         split_names = []
-         if self.train: split_names.append("train")
-         if self.test: split_names.append("test")
-         if self.infeasible: split_names.append("infeasible")
-
-         for split in split_names:
-             if self.input: urls[split].append(f"{split}/input.h5" + self.gz_ext)
-             for formulation in self.formulations:
-                 if self.primal:
-                     filename = f"{split}/{formulation}/primal.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-                 if self.dual:
-                     filename = f"{split}/{formulation}/dual.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-                 if self.meta:
-                     filename = f"{split}/{formulation}/meta.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-         return urls
-
- # ┌────────────────────┐
- # │   DatasetBuilder   │
- # └────────────────────┘
-
- class PGLearnLarge13659_pegase(hfd.ArrowBasedBuilder):
-     """DatasetBuilder for PGLearn-ExtraLarge-13659_pegase.
-     The main interface is `datasets.load_dataset` with `trust_remote_code=True`, e.g.
-
-     ```python
-     from datasets import load_dataset
-     ds = load_dataset("PGLearn/PGLearn-ExtraLarge-13659_pegase", trust_remote_code=True,
-         # modify the default configuration by passing kwargs
-         formulations=["DCOPF"],
-         dual=False,
-         meta=False,
-     )
-     ```
-     """
-
-     DEFAULT_WRITER_BATCH_SIZE = 10000
-     BUILDER_CONFIG_CLASS = PGLearnLarge13659_pegaseConfig
-     DEFAULT_CONFIG_NAME=CASENAME
-     BUILDER_CONFIGS = [
-         PGLearnLarge13659_pegaseConfig(
-             name=CASENAME, description=DEFAULT_CONFIG_DESCRIPTION.format(case=CASENAME),
-             formulations=list(FORMULATIONS_TO_FEATURES.keys()),
-             primal=True, dual=True, meta=True, input=True, casejson=False,
-             train=True, test=True, infeasible=False,
-         )
-     ]
-
-     def _info(self):
-         return hfd.DatasetInfo(
-             features=self.config.features, splits=self.config.splits,
-             description=DESCRIPTION + self.config.description,
-             homepage=URL, citation=CITATION,
-         )
-
-     def _split_generators(self, dl_manager: hfd.DownloadManager):
-         hfd.logging.get_logger().warning(USE_ML4OPF_WARNING)
-
-         filepaths = dl_manager.download_and_extract(self.config.urls)
-
-         splits: list[hfd.SplitGenerator] = []
-         if self.config.train:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TRAIN,
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["train"]), n_samples=NUM_TRAIN),
-             ))
-         if self.config.test:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TEST,
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["test"]), n_samples=NUM_TEST),
-             ))
-         if self.config.infeasible:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split("infeasible"),
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["infeasible"]), n_samples=NUM_INFEASIBLE),
-             ))
-         return splits
-
-     def _generate_tables(self, case_file: str | None, data_files: tuple[hfd.utils.track.tracked_str | list[hfd.utils.track.tracked_str]], n_samples: int):
-         case_data: str | None = json.dumps(json.load(open_maybe_gzip_cat(case_file))) if case_file is not None else None
-         data: dict[str, h5py.File] = {}
-         for file in data_files:
-             v = h5py.File(open_maybe_gzip_cat(file), "r")
-             if isinstance(file, list):
-                 k = "/".join(Path(file[0].get_origin()).parts[-3:-1]).split(".")[0]
-             else:
-                 k = "/".join(Path(file.get_origin()).parts[-2:]).split(".")[0]
-             data[k] = v
-         for k in list(data.keys()):
-             if "/input" in k: data[k.split("/", 1)[1]] = data.pop(k)
-
-         batch_size = self._writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
-         for i in range(0, n_samples, batch_size):
-             effective_batch_size = min(batch_size, n_samples - i)
-
-             sample_data = {
-                 f"{dk}/{k}":
-                     hfd.features.features.numpy_to_pyarrow_listarray(v[i:i + effective_batch_size, ...])
-                 for dk, d in data.items() for k, v in d.items() if f"{dk}/{k}" in self.config.features
-             }
-
-             if case_data is not None:
-                 sample_data["case/json"] = pa.array([case_data] * effective_batch_size)
-
-             yield i, pa.Table.from_pydict(sample_data)
-
-         for f in data.values():
-             f.close()
-
- # ┌──────────────┐
- # │   Features   │
- # └──────────────┘
-
- FLOAT_TYPE = "float32"
- INT_TYPE = "int64"
- BOOL_TYPE = "bool"
- STRING_TYPE = "string"
-
- def case_features():
-     # FIXME: better way to share schema of case data -- need to treat jagged arrays
-     return {
-         "case/json": hfd.Value(STRING_TYPE),
-     }
-
- META_FEATURES = {
-     "meta/seed": hfd.Value(dtype=INT_TYPE),
-     "meta/formulation": hfd.Value(dtype=STRING_TYPE),
-     "meta/primal_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/dual_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/primal_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/dual_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/termination_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/build_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/extract_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/solve_time": hfd.Value(dtype=FLOAT_TYPE),
- }
-
- def input_features(sizes: CaseSizes):
-     return {
-         "input/pd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/qd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/gen_status": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/branch_status": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/seed": hfd.Value(dtype=INT_TYPE),
-     }
-
- def acopf_primal_features(sizes: CaseSizes):
-     return {
-         "ACOPF/primal/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def acopf_dual_features(sizes: CaseSizes):
-     return {
-         "ACOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_fr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_to": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def dcopf_primal_features(sizes: CaseSizes):
-     return {
-         "DCOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def dcopf_dual_features(sizes: CaseSizes):
-     return {
-         "DCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def socopf_primal_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/primal/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def socopf_dual_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/jabr": hfd.Array2D(shape=(sizes.n_branch, 4), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_fr": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_to": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
-
- # ┌───────────────┐
- # │   Utilities   │
- # └───────────────┘
-
- def open_maybe_gzip_cat(path: str | list):
-     if isinstance(path, list):
-         dest = Path(path[0]).parent.with_suffix(".h5")
-         if not dest.exists():
-             with open(dest, "wb") as dest_f:
-                 for piece in path:
-                     with open(piece, "rb") as piece_f:
-                         shutil.copyfileobj(piece_f, dest_f)
-             shutil.rmtree(Path(piece).parent)
-         path = dest.as_posix()
-     return gzip.open(path, "rb") if path.endswith(".gz") else open(path, "rb")
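For bulk access, the warning embedded in this deleted script points to `huggingface_hub.snapshot_download` plus an HDF5 reader as the faster alternative to `datasets.load_dataset`. A minimal sketch of that route, assuming the pre-Parquet `.h5.gz` layout that this commit removes from `main` still lives on the `script` branch (the script's own config description says case data remains there; the exact file set on that revision is an assumption):

```python
import gzip

import h5py
from huggingface_hub import snapshot_download

# Fetch one HDF5 file from the revision that still carries the old layout.
# revision="script" is an assumption based on the config description above.
local_dir = snapshot_download(
    repo_id="PGLearn/PGLearn-ExtraLarge-13659_pegase",
    repo_type="dataset",
    revision="script",
    allow_patterns=["test/DCOPF/primal.h5.gz"],
)

# h5py accepts a seekable file-like object, so the gzip stream can be read directly.
with gzip.open(f"{local_dir}/test/DCOPF/primal.h5.gz", "rb") as gz:
    with h5py.File(gz, "r") as h5:
        print(list(h5.keys()))  # DCOPF primal fields per the schema above: pf, pg, va
```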
README.md CHANGED
@@ -288,6 +288,14 @@ dataset_info:
  - name: test
    num_bytes: 22356219165
    num_examples: 4883
- download_size: 83983465828
+ download_size: 108772668920
  dataset_size: 111781095820
+ configs:
+ - config_name: 13659_pegase
+   data_files:
+   - split: train
+     path: 13659_pegase/train-*
+   - split: test
+     path: 13659_pegase/test-*
+   default: true
  ---
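With the loading script deleted and the `configs` mapping added to README.md, the dataset resolves to plain Parquet shards, so `datasets.load_dataset` works without `trust_remote_code`. A minimal sketch (config and split names come from the README entry above; `streaming=True` is an optional choice here, since each shard is roughly 480 MB):

```python
from datasets import load_dataset

# Parquet-backed config declared in README.md; no loading script needed.
ds = load_dataset(
    "PGLearn/PGLearn-ExtraLarge-13659_pegase",
    name="13659_pegase",   # default config, per the README `configs` entry
    split="test",
    streaming=True,        # stream instead of downloading every ~480 MB shard
)
# Inspect the first sample's columns (e.g. input/pd, DCOPF/primal/pg, ...).
print(next(iter(ds)).keys())
```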
config.toml DELETED
@@ -1,42 +0,0 @@
- # Name of the reference PGLib case. Must be a valid PGLib case name.
- pglib_case = "pglib_opf_case13659_pegase"
- floating_point_type = "Float32"
-
- [sampler]
- # data sampler options
- [sampler.load]
- noise_type = "ScaledUniform"
- l = 0.6       # Lower bound of base load factor
- u = 1.0       # Upper bound of base load factor
- sigma = 0.20  # Relative (multiplicative) noise level.
-
-
- [OPF]
-
- [OPF.ACOPF]
- type = "ACOPF"
- solver.name = "Ipopt"
- solver.attributes.tol = 1e-6
- solver.attributes.linear_solver = "ma27"
-
- [OPF.DCOPF]
- # Formulation/solver options
- type = "DCOPF"
- solver.name = "HiGHS"
-
- [OPF.SOCOPF]
- type = "SOCOPF"
- solver.name = "Clarabel"
- # Tight tolerances
- solver.attributes.tol_gap_abs = 1e-6
- solver.attributes.tol_gap_rel = 1e-6
- solver.attributes.tol_feas = 1e-6
- solver.attributes.tol_infeas_rel = 1e-6
- solver.attributes.tol_ktratio = 1e-6
- # Reduced accuracy settings
- solver.attributes.reduced_tol_gap_abs = 1e-6
- solver.attributes.reduced_tol_gap_rel = 1e-6
- solver.attributes.reduced_tol_feas = 1e-6
- solver.attributes.reduced_tol_infeas_abs = 1e-6
- solver.attributes.reduced_tol_infeas_rel = 1e-6
- solver.attributes.reduced_tol_ktratio = 1e-6
infeasible/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:3c25499ca146fc6f001adf8c603c7eada6fcd6c3235daef11d41879d3acee8c6
- size 20978811921
infeasible/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:33ff85c9f638556caed1a67bc69fe4b2a6c03a9b83d357da0bd0bad8170c2bf1
- size 9441799458
infeasible/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2e1305540e86023b4900919c3cfc448c7a87a45c09e0e8fad4e23814281e2df2
- size 2200307412
infeasible/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:7702dd1b5a992d44f3feb5af39f6bf75520fe8b02353fdac6e680ac5e0927621
- size 2785347118
infeasible/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:9d05bdc272bdab8a5c189c8841f31b671d33b0dc7ecb4da9f2544dbe73b67a37
- size 39277309502
infeasible/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:393d7c1da9254222c64afce7ac0f4b0c3e709dadc3a077e5826fdd408a4d91d2
- size 11652866067
infeasible/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:18b99acb65a676da149f07f6221e59a8e4e4827ee5c5b692943789b3a124f413
- size 1048224063
test/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:172039f0ec0603ee36ce1c98a5ef298ab9a439f42e5571629a24b3acb7b17e23
- size 4042267445
test/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a6cba1a5a17afd6fe254212ffc02e44f0af8f773100aafb5b7194ad14a37ef54
- size 171872
test/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:62d205b7061a1b9a20b58cef430c96bc5eafde961a94fb60c71290a846b8af1f
- size 1819626635
test/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:769a46a454280f50fe673632a6400de8410c45da323008d3186c7f33e2e3ff5f
- size 476671947
test/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:037ba3b04283044e864f6cf48e78bb93f6ec4837c76534f152048089b0bbb953
- size 172452
test/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:33e0605714ae5c09b375b136c2d208df13461fb4e67caea80d61b75362994e50
- size 534809913
test/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:77ab0aeff1be1b5db1bf0b407066b29038709fe537ba4fbcebb751463f47b6d4
- size 7512418303
test/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1652031d1d2a4e07a69dd55d4eccb7a7251cbbb4b8ea706235a3a69ac5f66586
- size 172235
test/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f4ab68ce823734c98d8952bc787ffa9dc1728d7d5e8b9329153a31959dae4005
- size 2210715942
test/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:23ff001cecd39ed21decfe0a1ca70a1b7467ec18e7e0dc3f647e1cf2eb1971cc
- size 200008872
train/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bba485478ffbda5311275b43bef77109a31ed389c7b63bead80d0d9db917476b
- size 16169636727
train/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2a8ced7d0f8dad685cd232954bc175d8a7a512c62038759273c5bc0882a45168
- size 666168
train/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f6838fc17fade0d394cf25ca2eb2663715c49da7e15520a4580909376b9f112d
- size 7278701296
train/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:35c19cf8ea8f94097831d9cf6e4608ecdc580bd4a31159f8a00408703f125ae3
- size 1906745974
train/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c7bae64c626e42844bd90af4084f3d7bafde6bd4dac29829974c845ab9e9fadc
- size 667622
train/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:028ea46540130067bb3ebfd66b0fdb5235247a9c3f362d4b50fe3c0d61fedd1d
- size 2139386560