## Models

- [*ColPali*](https://huggingface.co/vidore/colpali-v1.2): *ColPali* is our main model contribution. It introduces a novel architecture and training strategy, based on Vision Language Models (VLMs), to efficiently index documents from their visual features. It is a [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images, which are compared with a late interaction matching mechanism (see the sketch after this list).

- [*BiPali*](https://huggingface.co/vidore/bipali): An extension of the original SigLIP architecture: the SigLIP-generated patch embeddings are fed to a text language model, PaliGemma-3B, to obtain LLM-contextualized output patch embeddings. These representations are average-pooled into a single vector, creating a PaliGemma bi-encoder, *BiPali*.

- [*BiSigLIP*](https://huggingface.co/vidore/bisiglip): A fine-tuned version of the original [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384), a strong vision-language bi-encoder model.
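
To make the two scoring schemes above concrete, here is a minimal sketch contrasting ColBERT-style late-interaction scoring (*ColPali*) with pooled bi-encoder scoring (*BiPali*). It is an illustration only, not the repository's code; the tensor shapes and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def late_interaction_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim: match each query token to its most similar
    document patch, then sum the per-token maxima.

    q_emb: (num_query_tokens, dim) L2-normalized query token embeddings
    d_emb: (num_patches, dim) L2-normalized document patch embeddings
    """
    sim = q_emb @ d_emb.T                # (num_query_tokens, num_patches)
    return sim.max(dim=1).values.sum()   # best patch per query token, summed

def bi_encoder_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """Bi-encoder scoring: average-pool each side into a single vector,
    then take the cosine similarity."""
    q = F.normalize(q_emb.mean(dim=0), dim=-1)
    d = F.normalize(d_emb.mean(dim=0), dim=-1)
    return q @ d

# Toy example: 16 query tokens, 1024 image patches, embedding dim 128.
q = F.normalize(torch.randn(16, 128), dim=-1)
d = F.normalize(torch.randn(1024, 128), dim=-1)
print(late_interaction_score(q, d).item(), bi_encoder_score(q, d).item())
```

Late interaction keeps one vector per query token and per image patch, trading a larger index for finer-grained matching than the single pooled vector a bi-encoder compares.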

We organized datasets into collections to constitute our benchmark ViDoRe and its baselines. The benchmark is built both from existing datasets (among others, [InfoVQA](https://huggingface.co/datasets/vidore/infovqa_test_subsampled), [TATDQA](https://huggingface.co/datasets/vidore/tatdqa_test), [TabFQuAD](https://huggingface.co/datasets/vidore/tabfquad_test_subsampled)) and from synthetically generated datasets spanning various themes and industrial applications ([Artificial Intelligence](https://huggingface.co/datasets/vidore/syntheticDocQA_artificial_intelligence_test), [Government Reports](https://huggingface.co/datasets/vidore/syntheticDocQA_government_reports_test), [Healthcare Industry](https://huggingface.co/datasets/vidore/syntheticDocQA_healthcare_industry_test), [Energy](https://huggingface.co/datasets/vidore/syntheticDocQA_energy_test) and [Shift Project](https://huggingface.co/datasets/vidore/shiftproject_test)). Further details can be found on the corresponding dataset cards.

- [*OCR Baseline*](https://huggingface.co/collections/vidore/vidore-chunk-ocr-baseline-666acce88c294ef415548a56): The same datasets as in ViDoRe, but preprocessed for text-based retrieval: each page from the original benchmark was partitioned into chunks with Unstructured, and visual chunks were OCRed with Tesseract (a sketch of this preprocessing follows the list).

- [*Captioning Baseline*](https://huggingface.co/collections/vidore/vidore-captioning-baseline-6658a2a62d857c7a345195fd): The same datasets as in ViDoRe, but preprocessed for text-based retrieval: each page from the original benchmark was partitioned into chunks with Unstructured, and visual chunks were captioned with Claude Sonnet.
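
For concreteness, here is a minimal sketch of what this chunk-then-OCR preprocessing could look like. The file names, the `strategy` flag, and the overall flow are illustrative assumptions, not the exact pipeline used to build these collections.

```python
# Illustrative sketch of the chunk-then-OCR baseline preprocessing.
# Assumes `pip install unstructured pytesseract pillow`; paths are placeholders.
from unstructured.partition.pdf import partition_pdf
from PIL import Image
import pytesseract

# Partition a PDF page into chunks (text, titles, tables, images) with Unstructured.
elements = partition_pdf(filename="page.pdf", strategy="hi_res")
text_chunks = [el.text for el in elements if el.text]

# OCR a visual chunk (e.g. a cropped figure) with Tesseract.
ocr_text = pytesseract.image_to_string(Image.open("figure_crop.png"))
text_chunks.append(ocr_text)
```
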
## Code
## Extra

- [*Demo*](https://huggingface.co/spaces/manu/ColPali-demo): A demo to try it out! It will be improved in the coming days!
- [*Preprint*](https://huggingface.co/papers/2407.01449): The paper with all the details!
## Contact
- Manuel Faysse: [email protected]