Model Card for X3D-KABR-Kinetics

X3D-KABR-Kinetics is a behavior recognition model for in situ drone videos of zebras and giraffes, built on the X3D architecture initialized with Kinetics-pretrained weights. It is trained on the KABR dataset, which comprises 10 hours of aerial video footage of reticulated giraffes (Giraffa reticulata), plains zebras (Equus quagga), and Grevy's zebras (Equus grevyi), captured using a DJI Mavic 2S drone. The dataset includes both spatiotemporal annotations (i.e., mini-scenes) and behavior annotations provided by an expert behavioral ecologist.

Model Details

Model Description

  • Developed by: [Maksim Kholiavchenko, Maksim Kukushkin, Otto Brookes, Jenna Kline, Sam Stevens, Isla Duporge, Alec Sheets, Reshma R. Babu, Namrata Banerji, Elizabeth Campolongo, Matthew Thompson, Nina Van Tiel, Jackson Miliko, Eduardo Bessa, Majid Mirmehdi, Thomas Schmid, Tanya Berger-Wolf, Daniel I. Rubenstein, Tilo Burghardt, Charles V. Stewart]

  • Model type: [X3D]

  • License: [MIT]

  • Fine-tuned from model: [X3D-S, pretrained on Kinetics]

This model was developed for the benefit of the community as an open-source product; we therefore request that any derivative products also be open source.

Model Sources
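
  • Repository: kabr-tools

  • Paper: KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition From Drone Videos (WACV 2024 Workshops)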

Uses

X3D-KABR-Kinetics classifies ungulate behavior from aerial video, supporting behavioral studies of wild zebras and giraffes.

Direct Use

Please see the illustrative examples in the kabr-tools repository for more information on how this model can be used to generate time budgets from aerial video of animals.

Out-of-Scope Use

This model was trained to detect and classify behavior from drone videos of zebras and giraffes in Kenya. It may not perform well on other species or settings.

How to Get Started with the Model

Please see the illustrative examples in the kabr-tools repository for more information on how this model can be used to generate time budgets from aerial video of animals.
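
As a starting point, the sketch below shows one way to load the checkpoint and run it on a single mini-scene clip. It assumes PyTorch and PyTorchVideo are installed; the checkpoint filename (x3d_kabr_kinetics.pt), the class count (NUM_CLASSES), and the clip shape are illustrative assumptions, not values taken from this card.

import torch

NUM_CLASSES = 8  # hypothetical; set this to the number of KABR behavior classes

# Build an X3D-S backbone through PyTorchVideo's torch.hub entry point.
model = torch.hub.load("facebookresearch/pytorchvideo", "x3d_s", pretrained=False)

# Swap the classification head to match the KABR label set.
in_features = model.blocks[-1].proj.in_features
model.blocks[-1].proj = torch.nn.Linear(in_features, NUM_CLASSES)

# Load the released weights (hypothetical filename; the checkpoint may also
# store a dict with a nested state-dict key, depending on how it was saved).
state = torch.load("x3d_kabr_kinetics.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()

# One mini-scene clip: batch x channels x frames x height x width.
clip = torch.randn(1, 3, 16, 224, 224)
with torch.no_grad():
    probs = model(clip).softmax(dim=-1)
print(probs.argmax(dim=-1))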

Training Details

Training Data

KABR Dataset

Training Procedure

Preprocessing

Raw drone videos were pre-processed using CVAT to detect and track each individual animal in each high-resolution video and link the results into tracklets. For each tracklet, we create a separate video, called a mini-scene, by extracting a sub-image centered on each detection in a video frame. This allows us to compensate for the drone's movement and provides a stable, zoomed-in representation of the animal.
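
To make the mini-scene construction concrete, here is a minimal sketch of the per-frame cropping step: a fixed-size window centered on the tracked detection, zero-padded where it extends past the frame border. The window size and padding strategy are assumptions for illustration, not the exact KABR pipeline parameters.

import numpy as np

def extract_mini_scene_frame(frame: np.ndarray, cx: int, cy: int, size: int = 224) -> np.ndarray:
    """Return a size-by-size crop centered on the detection center (cx, cy)."""
    h, w = frame.shape[:2]
    half = size // 2
    out = np.zeros((size, size, frame.shape[2]), dtype=frame.dtype)
    # Intersect the desired window with the frame bounds, then paste the
    # visible region into the zero-padded output at the matching offset.
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    out[y0 - (cy - half):y1 - (cy - half), x0 - (cx - half):x1 - (cx - half)] = frame[y0:y1, x0:x1]
    return out

Applying this to every frame of a tracklet, with the window following the detection, yields the stabilized, zoomed-in clip described above.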

See project page and the paper for data preprocessing details.

We applied data augmentation during training: horizontal flipping to randomly mirror the input frames, and color augmentations to randomly modify their brightness, contrast, and saturation.
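
Sketched with torchvision (an assumption; the training code may implement these differently), the augmentations described above look roughly like this. The jitter magnitudes are illustrative, not the trained values.

from torchvision import transforms

# Applied to clip tensors of shape (T, C, H, W); torchvision draws the flip
# and jitter parameters once per call and applies them across the leading
# dimensions, keeping the augmentation consistent over the frames of a clip.
train_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # random horizontal mirroring
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
])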

Training Hyperparameters

The model was trained for 120 epochs with a batch size of 5. We used the EQL loss function to address the long-tailed class distribution and the SGD optimizer with a learning rate of 1e-5. We sampled 16 frames per clip at a temporal stride of 5 (16x5) and used random weight initialization.
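
In PyTorch terms, this configuration corresponds roughly to the sketch below. Here model is the network from the loading sketch above and loader is a hypothetical DataLoader yielding (clip, label) batches of size 5; EQL is not in core PyTorch, so cross-entropy stands in as a placeholder for it.

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)  # SGD, lr = 1e-5
criterion = torch.nn.CrossEntropyLoss()  # placeholder for the EQL loss

for epoch in range(120):            # 120 epochs
    for clips, labels in loader:    # batch size 5; clips sampled 16x5
        optimizer.zero_grad()
        loss = criterion(model(clips), labels)
        loss.backward()
        optimizer.step()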

If you use our model in your work, please cite the model and associated paper.

BibTeX:

Model

@software{kabr_x3d_model,
  author = {Maksim Kholiavchenko and Maksim Kukushkin and Otto Brookes and Jenna Kline and Sam Stevens and Isla Duporge and Alec Sheets and Reshma R. Babu and Namrata Banerji and Elizabeth Campolongo and Matthew Thompson and Nina Van Tiel and Jackson Miliko and Eduardo Bessa and Majid Mirmehdi and Thomas Schmid and Tanya Berger-Wolf and Daniel I. Rubenstein and Tilo Burghardt and Charles V. Stewart},
  doi = {<doi once generated>},
  title = {KABR model},
  version = {v0.1},
  year = {2024},
  url = {https://huggingface.co/imageomics/x3d-kabr-kinetics}
}

Paper

@InProceedings{Kholiavchenko_2024_WACV,
    author    = {Kholiavchenko, Maksim and Kline, Jenna and Ramirez, Michelle and Stevens, Sam and Sheets, Alec and Babu, Reshma and Banerji, Namrata and Campolongo, Elizabeth and Thompson, Matthew and Van Tiel, Nina and Miliko, Jackson and Bessa, Eduardo and Duporge, Isla and Berger-Wolf, Tanya and Rubenstein, Daniel and Stewart, Charles},
    title     = {KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition From Drone Videos},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2024},
    pages     = {31--40}
}

Model Card Authors

[Jenna Kline and Maksim Kholiavchenko]

Model Card Contact

Maksim Kholiavchenko

Contributions

This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Additional support was also provided by the AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), which is funded by the US National Science Foundation under Award #2112606. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The data was gathered at the Mpala Research Centre in Kenya, in accordance with Research License No. NACOSTI/P/22/18214. The data collection protocol adhered strictly to the guidelines set forth by the Institutional Animal Care and Use Committee under permission No. IACUC 1835F.
