---
license: apache-2.0
language:
- en
---

# Metal-compatible models for Frame Interpolation in PyTorch

This repository contains exported models for [Frame interpolation in PyTorch](https://github.com/dajes/frame-interpolation-pytorch) and [ComfyUI Frame Interpolation (ComfyUI VFI)](https://github.com/Fannovel16/ComfyUI-Frame-Interpolation).

The models from [v1.0.2](https://github.com/dajes/frame-interpolation-pytorch/releases/tag/v1.0.2) distributed in the [Frame interpolation in PyTorch](https://github.com/dajes/frame-interpolation-pytorch) repository do not work on Apple Silicon Macs ([Runtime error: MPS Unsupported Border padding mode](https://github.com/dajes/frame-interpolation-pytorch/issues/4)).
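The error in the linked issue comes from the flow-warping step, which relies on `torch.nn.functional.grid_sample` with `padding_mode="border"`, a combination the MPS backend did not support at the time. As a hedged illustration (this probe helper is not part of either repository), one can check whether a given device supports the operation:

```python
import torch
import torch.nn.functional as F

def supports_border_warp(device: str) -> bool:
    """Probe whether grid_sample with padding_mode='border' (the kind of
    warp the original export performs) runs on the given device."""
    image = torch.zeros(1, 1, 4, 4, device=device)
    grid = torch.zeros(1, 4, 4, 2, device=device)
    try:
        F.grid_sample(image, grid, mode="bilinear",
                      padding_mode="border", align_corners=True)
        return True
    except RuntimeError:
        return False
```

On an affected setup, `supports_border_warp("mps")` returns `False` while `supports_border_warp("cpu")` returns `True`.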

These models have been re-exported using a [patch](https://github.com/dajes/frame-interpolation-pytorch/pull/6) that resolves this issue.

The models were created using the following steps:

1. Clone the original repository and install dependencies:

```
git clone https://github.com/dajes/frame-interpolation-pytorch.git
cd frame-interpolation-pytorch

uv venv
uv pip install -r requirements.txt
uv pip install tensorflow
```

2. Download the model files distributed by [FILM: Frame Interpolation for Large Motion](https://github.com/google-research/frame-interpolation) from [Google Drive](https://drive.google.com/drive/folders/153dvxVSAcsNv1cyHVJySYZ-Twchm4Jdi) and place them in the repository root:

```
tree
.
├── export.py
├── feature_extractor.py
├── fusion.py
├── inference.py
├── interpolator.py
├── LICENSE
├── photos
│   ├── one.png
│   ├── output.gif
│   └── two.png
├── pyramid_flow_estimator.py
├── README.md
├── requirements.txt
├── saved_model
│   ├── assets
│   ├── keras_metadata.pb
│   ├── saved_model.pb
│   └── variables
│       ├── variables.data-00000-of-00001
│       └── variables.index
└── util.py
```

3. Apply the [patch](https://github.com/dajes/frame-interpolation-pytorch/pull/6):

```
curl -L https://github.com/dajes/frame-interpolation-pytorch/pull/6.patch -o PR6.patch
git apply PR6.patch
```

4. Export the models:

```
uv run export.py ./saved_model film_net_fp16.pt
uv run export.py ./saved_model film_net_fp32.pt --fp32
```
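The export script saves TorchScript modules, so on an Apple Silicon Mac the re-exported files can be loaded roughly as follows (a minimal sketch; the file name matches step 4, the helper names are illustrative, and the actual interpolation call follows the upstream repository's `inference.py` and is omitted here):

```python
import torch

def pick_device() -> torch.device:
    """Prefer Apple's Metal (MPS) backend when available, else fall back to CPU."""
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

def load_film_model(path: str = "film_net_fp32.pt") -> torch.jit.ScriptModule:
    """Load one of the re-exported TorchScript models onto the chosen device."""
    device = pick_device()
    model = torch.jit.load(path, map_location=device)
    return model.eval()
```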

## Citation

In accordance with the requirements of the original repositories, if you use these model files, please cite the following:

```
@inproceedings{reda2022film,
  title = {FILM: Frame Interpolation for Large Motion},
  author = {Fitsum Reda and Janne Kontkanen and Eric Tabellion and Deqing Sun and Caroline Pantofaru and Brian Curless},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2022}
}
```

```
@misc{film-tf,
  title = {Tensorflow 2 Implementation of "FILM: Frame Interpolation for Large Motion"},
  author = {Fitsum Reda and Janne Kontkanen and Eric Tabellion and Deqing Sun and Caroline Pantofaru and Brian Curless},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/frame-interpolation}}
}
```

## License

Following the original repositories, [Frame Interpolation in PyTorch](https://github.com/dajes/frame-interpolation-pytorch) and [FILM: Frame Interpolation for Large Motion](https://github.com/google-research/frame-interpolation), this project is licensed under the Apache 2.0 License.