fastvlm-gguf

  • run it with gguf-connector (installable from PyPI); simply execute the command below in your console/terminal

ggc f6

GGUF file(s) available. Select which one to use:

  1. fastvlm-0.5b-iq4_nl.gguf
  2. fastvlm-0.5b-q4_0.gguf
  3. fastvlm-0.5b-q8_0.gguf

Enter your choice (1 to 3): _

  • pick a gguf file in your current directory to interact with; nothing else is needed

With the latest update, you can customize how many tokens to generate for the output (see screenshot).
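The file-selection prompt shown above can be sketched in a few lines of Python. This is a minimal illustration, not gguf-connector's actual implementation; the function names (`is_gguf`, `list_gguf_files`, `choose_file`) are hypothetical. It relies only on the published GGUF format detail that every valid file begins with the 4-byte magic `GGUF`.

```python
from pathlib import Path

GGUF_MAGIC = b"GGUF"  # first four bytes of every valid GGUF file

def is_gguf(path):
    """Check the 4-byte magic at the start of the file."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

def list_gguf_files(directory="."):
    """Return sorted .gguf files in a directory that pass the magic check."""
    return sorted(p for p in Path(directory).glob("*.gguf") if is_gguf(p))

def choose_file(files):
    """Render a numbered menu like the ggc f6 prompt and return the pick."""
    print("GGUF file(s) available. Select which one to use:\n")
    for i, p in enumerate(files, 1):
        print(f"  {i}. {p.name}")
    choice = int(input(f"\nEnter your choice (1 to {len(files)}): "))
    return files[choice - 1]
```

The magic-byte check is why only genuine GGUF files appear in the menu even if another file happens to carry a `.gguf` extension.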

connector f5 (alternative 1)

ggc f5
  • note: f5 is different from f6 (above) and f9 (below); you don't need local gguf files with f5

connector f7 (alternative 2)*

ggc f7

*real-time screen describer; live captioning

connector f9 - advanced mode (alternative 3)**

ggc f9

**For advanced mode, you can specify your own text prompt along with the picture input (see screenshot above).

reference

  • base model from Apple
  • gguf-connector (PyPI)