Datasets:
Tasks: Video Classification
Modalities: Video
Sub-tasks: multi-class-image-classification
Languages: English
Size: < 1K
License:

Update README.md

README.md CHANGED
@@ -50,42 +50,4 @@ Example use cases:
 
 Do **not** use this dataset as training data. If you require a trainable dataset, you must substitute animations that are licensed for ML use.
 
-
-```python
-# After downloading & unzipping:
-# Synthetic_reasoning_dataset/
-#   anomaly_videos/...
-#   follow_videos/...
-#   spatial_colored_videos/...
-#   spatial_videos/...
-
-from datasets.synthetic_reasoning_dataset import (
-    SyntheticReasoningDataset,
-    build_index_dataframe,
-)
-
-root = "path/to/Synthetic_reasoning_dataset"
-
-# 1) Quick index as a DataFrame (great for sanity checks)
-df = build_index_dataframe(root)
-print(df.head())
-
-# 2) PyTorch-style dataset without decoding (fast)
-ds = SyntheticReasoningDataset(root, tasks=None, decode=False)
-path, label_id, meta = ds[0]
-print(path, label_id, meta)
-
-# 3) With decoding (requires torchvision); sample every 2nd frame, cap at 64 frames
-ds_decoded = SyntheticReasoningDataset(root, tasks=["anomaly", "follow"], decode=True, sample_stride=2, max_frames=64)
-video, label_id, meta = ds_decoded[0]
-print(video.shape, label_id, meta)
-
-# 4) Label spaces
-from datasets.synthetic_reasoning_dataset import LABELS_PER_TASK, LABEL2ID
-print(LABELS_PER_TASK)
-print(LABEL2ID)
-```
-
----
-
+For testing with vision-LLMs see our [GitHub repo](https://github.com/pascalbenschopTU/VLLM_AnomalyRecognition).
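The removed example loads the dataset via a per-task folder layout (`anomaly_videos/`, `follow_videos/`, `spatial_colored_videos/`, `spatial_videos/`). The real `build_index_dataframe` lives in the linked repo and returns a pandas DataFrame; as a hypothetical illustration only, a stdlib-only index builder over that same layout might look like:

```python
# Hypothetical sketch, NOT the repo's build_index_dataframe: walk the
# per-task folders shown in the removed example and collect one
# (path, task) record per video file. Folder names and the task names
# derived from them follow the layout in the diff above.
import os

TASK_DIRS = {
    "anomaly_videos": "anomaly",
    "follow_videos": "follow",
    "spatial_colored_videos": "spatial_colored",
    "spatial_videos": "spatial",
}

def build_index(root):
    """Return a sorted list of (video_path, task_name) records under root."""
    records = []
    for dirname, task in TASK_DIRS.items():
        task_dir = os.path.join(root, dirname)
        if not os.path.isdir(task_dir):
            continue  # tolerate missing task folders
        for fname in sorted(os.listdir(task_dir)):
            # Assumed video extensions; adjust to whatever the dump uses.
            if fname.lower().endswith((".mp4", ".avi", ".gif")):
                records.append((os.path.join(task_dir, fname), task))
    return records
```

Feeding the result into a pandas DataFrame (columns `path`, `task`) recovers the quick-sanity-check workflow from the removed snippet.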
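The `sample_stride=2, max_frames=64` parameters in the removed decoding example suggest stride-then-cap frame subsampling. A minimal sketch of that logic, assuming the actual class applies it to the decoded frame sequence (the repo's implementation may differ):

```python
def sample_frames(frames, sample_stride=2, max_frames=64):
    """Keep every `sample_stride`-th frame, then cap at `max_frames`.

    Works on any indexable sequence (list of frames, or the first axis
    of a decoded video tensor).
    """
    return frames[::sample_stride][:max_frames]
```

With `sample_stride=2, max_frames=64`, a 300-frame clip yields the 64 frames at indices 0, 2, ..., 126, matching the "sample every 2nd frame, cap at 64 frames" comment.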