andrew-bitmind committed
Commit 6b7443f · verified · 1 Parent(s): 94d8a26

Italicize text

Files changed (1): app.py (+1 −1)
app.py CHANGED
@@ -77,7 +77,7 @@ with demo:
     gr.Markdown("""
     ## 🎯 The Open Benchmark for Detecting AI-Generated Images
 
-    [DFD-Arena](https://github.com/BitMind-AI/dfd-arena) is the first benchmark to address the open-source computer vision community's need for a **comprehensive** evaluation framework for state-of-the-art (SOTA) detection of AI-generated images.
+    [DFD-Arena](https://github.com/BitMind-AI/dfd-arena) is the first benchmark to address the open-source computer vision community's need for a *comprehensive evaluation framework* for state-of-the-art (SOTA) detection of AI-generated images.
 
     While [previous studies](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9721302) have focused on benchmarking the SOTA on content-specific subsets of the deepfake detection problem, e.g. human face deepfake benchmarking via [DeepfakeBench](https://github.com/SCLBD/DeepfakeBench), these benchmarks do not adequately account for the broad spectrum of real and generated image types seen in everyday scenarios.
 
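For context, here is a minimal sketch of how the edited `gr.Markdown` call typically sits inside a Gradio Blocks app. Only the hunk above is from this commit; the `demo = gr.Blocks()` construction and the `launch()` call are assumptions added for illustration.

```python
import gradio as gr

# Minimal sketch (assumed surrounding code, not part of this commit):
# the hunk header "with demo:" suggests the Markdown block lives inside
# a gr.Blocks context, where italics are rendered with single asterisks.
demo = gr.Blocks()

with demo:
    gr.Markdown("""
    ## 🎯 The Open Benchmark for Detecting AI-Generated Images

    [DFD-Arena](https://github.com/BitMind-AI/dfd-arena) is the first benchmark
    to address the open-source computer vision community's need for a
    *comprehensive evaluation framework* for state-of-the-art (SOTA) detection
    of AI-generated images.
    """)

if __name__ == "__main__":
    demo.launch()
```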