WensongSong committed on
Commit 55f208b · verified · 1 Parent(s): b3fcfe5

Complete dataset refresh with new version

README.md CHANGED
@@ -1,13 +1,4 @@
- ---
- license: mit
- task_categories:
- - image-to-image
- language:
- - en
- pretty_name: a
- size_categories:
- - 10M<n<100M
- ---
  # AnyInsertion
  <p align="center">
  <a href="https://song-wensong.github.io/"><strong>Wensong Song</strong></a>
@@ -21,28 +12,23 @@ size_categories:
  <a href="https://scholar.google.com/citations?user=RMSuNFwAAAAJ&hl=en"><strong>Yi Yang</strong></a>
  <br>
  <br>
- <a href="https://arxiv.org/pdf/2504.15009" style="display: inline-block; margin-right: 10px;">
- <img src='https://img.shields.io/badge/arXiv-InsertAnything-red?color=%23aa1a1a' alt='Paper PDF'>
- </a>
- <a href='https://song-wensong.github.io/insert-anything/' style="display: inline-block; margin-right: 10px;">
- <img src='https://img.shields.io/badge/Project%20Page-InsertAnything-cyan?logoColor=%23FFD21E&color=%23cbe6f2' alt='Project Page'>
- </a>
- <a href='https://github.com/song-wensong/insert-anything' style="display: inline-block;">
- <img src='https://img.shields.io/badge/GitHub-InsertAnything-black?logoColor=23FFD21E&color=%231d2125'>
- </a>
  <br>
  <b>Zhejiang University &nbsp; | &nbsp; Harvard University &nbsp; | &nbsp; Nanyang Technological University </b>
  </p>
 
  ## News
 
- * **[2025.4.25]** Released AnyInsertion v1 mask-prompt dataset on Hugging Face.
 
  ## Summary
  This is the dataset proposed in our paper [**Insert Anything: Image Insertion via In-Context Editing in DiT**](https://arxiv.org/abs/2504.15009)
 
- AnyInsertion dataset consists of training and testing subsets. The training set includes 159,908 samples across two prompt types: 58,188 mask-prompt image pairs and 101,720 text-prompt image pairs;the test set includes 158 data pairs: 120 mask-prompt pairs and 38 text-prompt pairs.
 
  AnyInsertion dataset covers diverse categories including human subjects, daily necessities, garments, furniture, and various objects.
 
@@ -57,43 +43,80 @@ AnyInsertion dataset covers diverse categories including human subjects, daily n
  data/
- ├── train/
- │   ├── accessory/
- │   │   ├── ref_image/   # Reference image containing the element to be inserted
- │   │   ├── ref_mask/    # The mask corresponding to the inserted element
- │   │   ├── tar_image/   # Ground truth
- │   │   ├── tar_mask/    # The mask corresponding to the edited area of target image
- │   │
- │   ├── object/
- │   │   ├── ref_image/
- │   │   ├── ref_mask/
- │   │   ├── tar_image/
- │   │   ├── tar_mask/
- │   │
- │   └── person/
- │       ├── ref_image/
- │       ├── ref_mask/
- │       ├── tar_image/
- │       ├── tar_mask/
- └── test/
-     ├── garment/
-     │   ├── ref_image/
-     │   ├── ref_mask/
-     │   ├── tar_image/
-     │   ├── tar_mask/
-     │
-     ├── object/
-     │   ├── ref_image/
-     │   ├── ref_mask/
-     │   ├── tar_image/
-     │   ├── tar_mask/
-     │
-     └── person/
-         ├── ref_image/
-         ├── ref_mask/
-         ├── tar_image/
-         ├── tar_mask/
  ```
@@ -117,152 +140,16 @@ data/
      <img src="examples/tar_mask.png" alt="Tar_mask" style="width: 100%;">
      <figcaption>Tar_mask</figcaption>
    </figure>
  </div>
 
- ## Usage
- This guide explains how to load and use the AnyInsertion dataset, specifically the subset focusing on mask-prompt image pairs, which has been prepared in Apache Arrow format for efficient loading with the Hugging Face `datasets` library.
-
- ### Installation
-
- First, ensure you have the `datasets` library installed. If not, you can install it via pip:
-
- ```bash
- pip install datasets pillow
- ```
-
- ### Loading the Dataset
- You can load the dataset directly from the Hugging Face Hub using its identifier:
-
- ```python
- from datasets import load_dataset
-
- # Replace with the correct Hugging Face Hub repository ID
- repo_id = "WensongSong/AnyInsertion"
-
- # Load the entire dataset (usually returns a DatasetDict with 'train' and 'test' splits)
- dataset = load_dataset(repo_id)
-
- print(dataset)
- # Expected output similar to:
- # DatasetDict({
- #     train: Dataset({
- #         features: ['id', 'split', 'category', 'main_label', 'ref_image', 'ref_mask', 'tar_image', 'tar_mask'],
- #         num_rows: XXXX
- #     })
- #     test: Dataset({
- #         features: ['id', 'split', 'category', 'main_label', 'ref_image', 'ref_mask', 'tar_image', 'tar_mask'],
- #         num_rows: YYYY
- #     })
- # })
- ```
-
- ### Loading Specific Splits
- If you only need a specific split (e.g., 'test'), you can specify it during loading:
- ```python
- # Load only the 'test' split
- test_dataset = load_dataset(repo_id, split='test')
- print("Loaded Test Split:")
- print(test_dataset)
-
- # Load only the 'train' split
- train_dataset = load_dataset(repo_id, split='train')
- print("\nLoaded Train Split:")
- print(train_dataset)
- ```
-
- ### Dataset Structure
- * The loaded dataset (or individual splits) has the following structure and features (columns):
-
- * id (string): A unique identifier for each data sample, typically formatted as "split/category/image_id" (e.g., "train/accessory/0").
-
- * split (string): Indicates whether the sample belongs to the 'train' or 'test' set.
-
- * category (string): The category of the main object or subject in the sample. Possible values include: 'accessory', 'object', 'person' (for train), 'garment', 'object_test', 'person' (for test).
-
- * main_label (string): The label associated with the reference image/mask pair, derived from the original label.json files.
-
- * ref_image (Image): The reference image containing the object or element to be conceptually inserted. Loaded as a PIL (Pillow) Image object.
-
- * ref_mask (Image): The binary mask highlighting the specific element within the ref_image. Loaded as a PIL Image object.
-
- * tar_image (Image): The target image, representing the ground truth result after the conceptual insertion or editing. Loaded as a PIL Image object.
-
- * tar_mask (Image): The binary mask indicating the edited or inserted region within the tar_image. Loaded as a PIL Image object.
-
- ### Accessing Data
-
- You can access data like a standard Python dictionary or list:
-
- ```python
- # Get the training split from the loaded DatasetDict
- train_ds = dataset['train']
-
- # Get the first sample from the training set
- first_sample = train_ds[0]
-
- # Access specific features (columns) of the sample
- ref_image = first_sample['ref_image']
- label = first_sample['main_label']
- category = first_sample['category']
-
- print(f"\nFirst train sample category: {category}, label: {label}")
- print(f"Reference image size: {ref_image.size}") # ref_image is a PIL Image
-
- # Display the image (requires matplotlib or other image libraries)
- # import matplotlib.pyplot as plt
- # plt.imshow(ref_image)
- # plt.title(f"Category: {category}, Label: {label}")
- # plt.show()
-
- # Iterate through the dataset (e.g., the first 5 test samples)
- print("\nIterating through the first 5 test samples:")
- test_ds = dataset['test']
- for i in range(5):
-     sample = test_ds[i]
-     print(f" Sample {i}: ID={sample['id']}, Category={sample['category']}, Label={sample['main_label']}")
- ```
-
- ### Filtering Data
-
- The datasets library provides powerful filtering capabilities.
-
- ```python
- # Filter the training set to get only 'accessory' samples
- accessory_train_ds = train_ds.filter(lambda example: example['category'] == 'accessory')
- print(f"\nNumber of 'accessory' samples in train split: {len(accessory_train_ds)}")
-
- # Filter the test set for 'person' samples
- person_test_ds = test_ds.filter(lambda example: example['category'] == 'person')
- print(f"Number of 'person' samples in test split: {len(person_test_ds)}")
- ```
- #### Filtering by Split (if loaded as DatasetDict)
- Although loading specific splits is preferred, you can also filter by the split column if you loaded the entire DatasetDict and somehow combined them (not typical, but possible):
-
- ```python
- # Assuming 'combined_ds' is a dataset containing both train and test rows
- # test_split_filtered = combined_ds.filter(lambda example: example['split'] == 'test')
- ```
-
- ### Working with Images
- The features defined as Image (ref_image, ref_mask, tar_image, tar_mask) will automatically load the image data as PIL (Pillow) Image objects when accessed. You can then use standard Pillow methods or convert them to other formats (like NumPy arrays or PyTorch tensors) for further processing.
-
- ```python
- # Example: Convert reference image to NumPy array
- import numpy as np
-
- first_sample = train_ds[0]
- ref_image_pil = first_sample['ref_image']
- ref_image_np = np.array(ref_image_pil)
-
- print(f"\nReference image shape as NumPy array: {ref_image_np.shape}")
- ```
-
- ## Citation
- ```
- @article{song2025insert,
-   title={Insert Anything: Image Insertion via In-Context Editing in DiT},
-   author={Song, Wensong and Jiang, Hong and Yang, Zongxing and Quan, Ruijie and Yang, Yi},
-   journal={arXiv preprint arXiv:2504.15009},
-   year={2025}
- }
- ```
+ <!-- <h1 align="center">AnyInsertion dataset</h1> -->
  # AnyInsertion
  <p align="center">
  <a href="https://song-wensong.github.io/"><strong>Wensong Song</strong></a>
 
  <a href="https://scholar.google.com/citations?user=RMSuNFwAAAAJ&hl=en"><strong>Yi Yang</strong></a>
  <br>
  <br>
+ <a href="https://arxiv.org/pdf/2504.15009"><img src='https://img.shields.io/badge/arXiv-InsertAnything-red?color=%23aa1a1a' alt='Paper PDF'></a>
+ <a href='https://song-wensong.github.io/insert-anything/'><img src='https://img.shields.io/badge/Project%20Page-InsertAnything-cyan?logoColor=%23FFD21E&color=%23cbe6f2' alt='Project Page'></a>
+ <a href=''><img src='https://img.shields.io/badge/Hugging%20Face-InsertAnything-yellow?logoColor=%23FFD21E&color=%23ffcc1c'></a>
  <br>
  <b>Zhejiang University &nbsp; | &nbsp; Harvard University &nbsp; | &nbsp; Nanyang Technological University </b>
  </p>
 
  ## News
 
+ * **[2025.5.7]** Released **AnyInsertion** v1 text-prompt dataset on Hugging Face.
+ * **[2025.4.24]** Released **AnyInsertion** v1 mask-prompt dataset on Hugging Face.
 
 
  ## Summary
  This is the dataset proposed in our paper [**Insert Anything: Image Insertion via In-Context Editing in DiT**](https://arxiv.org/abs/2504.15009)
 
+ The AnyInsertion dataset consists of training and testing subsets. The training set includes 136,385 samples across two prompt types: 58,188 mask-prompt image pairs and 78,197 text-prompt image pairs; the test set includes 158 data pairs: 120 mask-prompt pairs and 38 text-prompt pairs.
 
  AnyInsertion dataset covers diverse categories including human subjects, daily necessities, garments, furniture, and various objects.
 
  data/
+ ├── text_prompt/
+ │   ├── train/
+ │   │   ├── accessory/
+ │   │   │   ├── ref_image/      # Reference image containing the element to be inserted
+ │   │   │   ├── ref_mask/       # The mask corresponding to the inserted element
+ │   │   │   ├── tar_image/      # Ground truth
+ │   │   │   └── src_image/      # Source images
+ │   │   │       ├── add/        # Source image with the inserted element from Ground Truth removed
+ │   │   │       └── replace/    # Source image where the inserted element in Ground Truth is replaced
+ │   │   ├── object/
+ │   │   │   ├── ref_image/
+ │   │   │   ├── ref_mask/
+ │   │   │   ├── tar_image/
+ │   │   │   └── src_image/
+ │   │   │       ├── add/
+ │   │   │       └── replace/
+ │   │   └── person/
+ │   │       ├── ref_image/
+ │   │       ├── ref_mask/
+ │   │       ├── tar_image/
+ │   │       └── src_image/
+ │   │           ├── add/
+ │   │           └── replace/
+ │   └── test/
+ │       ├── garment/
+ │       │   ├── ref_image/
+ │       │   ├── ref_mask/
+ │       │   ├── tar_image/
+ │       │   └── src_image/
+ │       └── object/
+ │           ├── ref_image/
+ │           ├── ref_mask/
+ │           ├── tar_image/
+ │           └── src_image/
+
+ └── mask_prompt/
+     ├── train/
+     │   ├── accessory/
+     │   │   ├── ref_image/
+     │   │   ├── ref_mask/
+     │   │   ├── tar_image/
+     │   │   └── tar_mask/       # The mask corresponding to the edited area of target image
+     │   ├── object/
+     │   │   ├── ref_image/
+     │   │   ├── ref_mask/
+     │   │   ├── tar_image/
+     │   │   └── tar_mask/
+     │   └── person/
+     │       ├── ref_image/
+     │       ├── ref_mask/
+     │       ├── tar_image/
+     │       └── tar_mask/
+     └── test/
+         ├── garment/
+         │   ├── ref_image/
+         │   ├── ref_mask/
+         │   ├── tar_image/
+         │   └── tar_mask/
+         ├── object/
+         │   ├── ref_image/
+         │   ├── ref_mask/
+         │   ├── tar_image/
+         │   └── tar_mask/
+         └── person/
+             ├── ref_image/
+             ├── ref_mask/
+             ├── tar_image/
+             └── tar_mask/
  ```
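
For quick orientation, here is a minimal loading sketch for the refreshed layout. It is illustrative only: it assumes the Parquet shards added in this commit (under `mask_prompt/` and `text_prompt/`) can be selected via the `data_files` argument of `datasets.load_dataset`, and it reuses the `WensongSong/AnyInsertion` repository ID from the earlier usage notes; the exact column schema of the shards is not documented here.

```python
from datasets import load_dataset

repo_id = "WensongSong/AnyInsertion"  # dataset repo this commit belongs to

# Mask-prompt subset: point data_files at its Parquet shards
# (paths follow the files added in this commit).
mask_prompt = load_dataset(
    repo_id,
    data_files={
        "train": "mask_prompt/train-*.parquet",
        "test": "mask_prompt/test.parquet",
    },
)

# The text-prompt subset is organised the same way.
text_prompt = load_dataset(
    repo_id,
    data_files={
        "train": "text_prompt/train-*.parquet",
        "test": "text_prompt/test.parquet",
    },
)

print(mask_prompt)
print(text_prompt["train"].column_names)  # inspect the available columns
```

If named configurations are registered for this repository later, `load_dataset(repo_id, "mask_prompt")` would be the more direct call; until then, `data_files` keeps the two prompt types separate.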
      <img src="examples/tar_mask.png" alt="Tar_mask" style="width: 100%;">
      <figcaption>Tar_mask</figcaption>
    </figure>
+   <figure style="margin: 10px; width: calc(25% - 20px);">
+     <img src="examples/add.png" alt="Add" style="width: 100%;">
+     <figcaption>Add</figcaption>
+   </figure>
+   <figure style="margin: 10px; width: calc(25% - 20px);">
+     <img src="examples/replace.png" alt="Replace" style="width: 100%;">
+     <figcaption>Replace</figcaption>
+   </figure>
  </div>
 
+ ### Text Prompt
+ Add Prompt: Add [label from `tar_image` (in `label.json`)]
+ Replace Prompt: Replace [label from `src_image` (in `src_image/replace/replace_label.json`)] with [label from `tar_image` (in `label.json`)]
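
As a rough illustration of how these prompts could be assembled, the sketch below formats both prompt types from the label files named above. The paths follow the directory tree in this README, but the exact location and JSON layout of `label.json` (assumed here to map image IDs to label strings) are assumptions for illustration only.

```python
import json
from pathlib import Path

# Hypothetical paths following the tree above; adjust to the real layout.
category_dir = Path("data/text_prompt/train/accessory")
tar_labels = json.loads((category_dir / "label.json").read_text())
replace_labels = json.loads(
    (category_dir / "src_image" / "replace" / "replace_label.json").read_text()
)

def add_prompt(image_id: str) -> str:
    # "Add [label from tar_image]"
    return f"Add {tar_labels[image_id]}"

def replace_prompt(image_id: str) -> str:
    # "Replace [label from src_image] with [label from tar_image]"
    return f"Replace {replace_labels[image_id]} with {tar_labels[image_id]}"

print(add_prompt("0"))
print(replace_prompt("0"))
```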
examples/add.png ADDED

Git LFS Details

  • SHA256: 2408b1852bafa5745c05a3d5b7c60007d3e8749bbbe60c0b47de469f2af91dfa
  • Pointer size: 132 Bytes
  • Size of remote file: 3.48 MB
examples/replace.png ADDED

Git LFS Details

  • SHA256: c1b750a33935088b89cec1d9598f4426de4692e051ac50b626dfec545c375a6c
  • Pointer size: 132 Bytes
  • Size of remote file: 3.69 MB
mask_prompt/test.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5c3e0b696f25228d1c9ba29ac6e7da1cc7c8338a82241676334da2e38271d3ad
3
+ size 15681
mask_prompt/train-00000-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6e663de136aa528064c3ddafaf0033bbc11f37d87e2d848f46ca2ff411941f20
3
+ size 426754
mask_prompt/train-00001-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43dba845c8cc7d864a419ef04595c74d7c88fcaa5e825345cd9f25d021df0387
3
+ size 407230
mask_prompt/train-00002-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ec21c55f4b18274ae5ef631edd3f1a9f306b6dbb546336f6d653b1f87aaf6ca
3
+ size 396145
mask_prompt/train-00003-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5af3ec7eebe565a1919d6f4fcd0788dc8a019afbeef81320cb3f4df77cac52d7
3
+ size 397262
mask_prompt/train-00004-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee375ad29f18a3bec59f16abbff69d09d462cb0917c074a3482905c6250ada6a
3
+ size 396149
mask_prompt/train-00005-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d5cbc58acc7200ce4d0e92d7b6885954b514f0f508286d86f0ff53bcdcd145e7
3
+ size 393857
mask_prompt/train-00006-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f9f2b09a951d34cb9fd44573cdf12444b4553d2a4e9d1c83b7910fe3fbcacd71
3
+ size 393992
mask_prompt/train-00007-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:03f947269c36158527eb2d36e4c873534c389531af472323fddea964be627e70
3
+ size 392338
mask_prompt/train-00008-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:436c0734e3b664293ecf5ecd042adb1eac144059a8904a3888c4b1381150c391
3
+ size 393674
mask_prompt/train-00009-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c1b348b05201232f5d57bf9d683657b18d480ed993d42c52703c98ff76cb9399
3
+ size 394101
mask_prompt/train-00010-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31a0e7930823ab84ef6530f80fed832cfc220c693874b4da45bc718b5db52038
3
+ size 393613
mask_prompt/train-00011-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ecf9ca5ff12c1e38f12cf07d7b3f24d28f26ebf97b22fa681148610cf5a7600
3
+ size 393598
text_prompt/test.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e2bd9324241c15e134e68032710ff44a9647090a1607c9bfabc84db2b0b233f0
3
+ size 8905
text_prompt/train-00000-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b5586644840371f2b79c332df10ffd6a47a05a82db097cf514a668271cef5061
3
+ size 348789
text_prompt/train-00001-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e492c0aeae1198432fb6a11c6ef2110e437fb8e674c26d67d5e34d56e2a34780
3
+ size 348635
text_prompt/train-00002-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:56a17a9bfb46a9cee159a1a6b117c7ae0c3ae3c6cd88846d03931c6cc3e9b720
3
+ size 350468
text_prompt/train-00003-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e9f9dc6472b432e7868ca58a1bd82404c2a8315e5feb4e81cdf8b313b5a17316
3
+ size 390787
text_prompt/train-00004-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5854cf14c8c97517036a296747d03458cfffdf09e6fc1f63987b8e9827ac8fd1
3
+ size 374052
text_prompt/train-00005-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bd23d299ed276940e7589d8e3454055662b8dca006063eb3a0475b25de341013
3
+ size 381831
text_prompt/train-00006-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e8ab15c39caf71186a9fc473ad80d6af82bae9192c5a6521371a490e711d920
3
+ size 359640
text_prompt/train-00007-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ed6db7631a5745f18a74c49b58a8cd2bc016ac12589643475b87666c3f946128
3
+ size 355428
text_prompt/train-00008-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fe170c8152655efb9e2809e08d84657682d9ceecd1bbeb58212798ba111b0733
3
+ size 355478
text_prompt/train-00009-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4bf0fafa000eee1b7f72256cafa49a76c1e1d07c1b53dbef65b8282daa9e57f
3
+ size 354084
text_prompt/train-00010-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b7fe48fb4843aa0c14dc154799f726ebacc328064ebee17c8d4b615d7fafb6b
3
+ size 357970
text_prompt/train-00011-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:01f952a89923c1e533be5006baf704b5388c58f3e68e890f61f2062a54555119
3
+ size 354499
text_prompt/train-00012-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3f9d401a13a9fba093d31cb752d1c9eab21797a8ff681a06866b0fd599ab5e3f
3
+ size 351271
text_prompt/train-00013-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5dea70516a373055e03eccf7b050a740bf0b3fdb006cd092eb7cc8ba87a0a15
3
+ size 359826
text_prompt/train-00014-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:db657e180203897d6c1258445fac759a02b9b60ee822155adce9ee0329304e60
3
+ size 366286
text_prompt/train-00015-of-00016.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8843fb86f68c867a228d53bb8d53e3677aa1b9557f123b1db0850fa0083b8b8e
3
+ size 353211
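
The shards listed above are ordinary Parquet files stored through Git LFS, so a single shard can also be inspected without the `datasets` library. A minimal sketch, assuming `huggingface_hub` and `pyarrow` are installed and that this commit belongs to the `WensongSong/AnyInsertion` dataset repository:

```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Download one shard from the dataset repo (cached locally).
path = hf_hub_download(
    repo_id="WensongSong/AnyInsertion",
    filename="mask_prompt/test.parquet",
    repo_type="dataset",
)

table = pq.read_table(path)
print(table.schema)    # column names and types
print(table.num_rows)  # number of samples in this shard
```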