Sheesh, impressive dataset!
Hey Andrew! Glad to finally see activity related to this paper on HF (from the original authors). I'm Han, independent researcher / ex-founder of various projects turned total hippy with a non-profit-oriented project.
I took particular interest in your research because it re-affirmed some personal thoughts and piqued some curiosities of mine, and it has sparked the beginning of an ambitious study / project in this area. In case you haven't seen it yet, I've made my best attempt at implementing the model on HF without botching your original work, and was actually a week or two out from sharing the abstract for project "OpenSight" (project name not set in stone yet). A leaderboard and ranking system are also in the works, but we are a smol, smol team. In addition to the inference space and model, I've also begun work on slimming down your original dataset (for various reasons which will be outlined).
To sum it up, I've been meaning to get in touch with you and @jespark , and I apologize for the delay (several other research projects going on as well). I'd love to begin by picking your brains with some simple questions that would immensely help with context, and to ask whether the OpenSight project would be of any interest to either of you, whether in contributing or even co-starting it. The super short, simplified explanation of the project is roughly:
- the deepfake detection service industry is currently completely monopolized by a few private companies; nothing wrong with this, but let's effing democratize it while potentially opening up revenue streams for continued work;
- this literal cat-and-mouse game of detection models playing catch-up with the latest generation models is unsustainable and a huge burden on resources and time;
- considerable research has been conducted on changing the status quo of detection (by many others too), but I will soon™️ be sharing my proposed solution (and I hesitate to use the word 'solution' here, for obvious reasons) that will bring some sanity back into deepfake detection, one that is at least sustainable until AGI humbles us all and it's gg.
- continuing on my last point: at the moment it is most definitely NOT sustainable for a single entity to provide reliable detection models, and the open-source detection models are all over the place. Serious resources will need to be put up every few months just to stay one step behind, and although I'm not sure whether your research from November was funded or sponsored, the open-source community will face an impossible battle training models on datasets as large as your full 1.1T version, let alone the "small" 77 GB set -- which unfortunately was already rendered unreliable as early as January with minimal effort. I'm probably preaching to the choir, so I'll leave it at that (and by no means did I mean any offense; I know your paper wasn't suggesting that we keep throwing more resources at the problem 😉).
Apologies for the super long introduction; there's been a truckload of ideas, trials, experiments, pivots, and doubts on my mind this past quarter involving this issue. Again, I don't want to say I have found the "solution", but so far our novel method has managed to stay "a pace behind" the SOTA image generators and models (a 3-14 day lag vs. the current 3-4 month lag), with only 1-3 GB datasets for training. And of course, the long-term sustainability and strength of this novel method would rely on the power of the community.
tl;dr - open-source deepfake detection models all suck; yes, even your model trained on the motherlode of data was rendered unreliable pretty much the month after it was released. Deepfake detection itself needs a complete overhaul, from philosophy to methodology. We have some cool ideas that have shown very promising results. Wanna contribute? If so, let's talk. p.s. a review and nod-of-approval of our implementation of the pre-trained model would be cool; no guarantees we didn't botch something somewhere ❤️
Thanks, Han, for your interest, and thanks for showcasing our classifier on Hugging Face. Unfortunately, we are not interested in contributing to your project at the moment. We wish you good luck!