SGLang deploy error
#21 opened about 1 year ago by day9011
How can I deploy this model?
#20 opened about 1 year ago by yongho1213
Running Inference on Azure VM
#19 opened about 1 year ago by J812
Upload preprocessor_config.json
#18 opened about 1 year ago by SanaFalakJ
What kind of GPU is needed to run this model locally on-prem?
#17 opened about 1 year ago by eliastick
huggingface deploy
#16 opened about 1 year ago by bk2000
ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
#15 opened about 1 year ago by chr1ce

Running it on an Apple MBP M3 - non-quantized
#14 opened about 1 year ago by christianweyer

Does it support Chinese OCR?
#12 opened over 1 year ago by weiminw
Upload preprocessor_config.json
#10 opened over 1 year ago by auchoi
How to fine-tune this model?
#9 opened over 1 year ago by jizhongpeng
Missing preprocessor_config.json
#8 opened over 1 year ago by franciscoliu
Add vLLM tag
#7 opened over 1 year ago by osanseviero

Use different resolution images as input
#6 opened over 1 year ago by carolerxy
Use the model with code
#4 opened over 1 year ago by zidanehammouda
GGUF version please?
#3 opened over 1 year ago by brugoua
Hugging Face version coming?
#2 opened over 1 year ago by ctranslate2-4you
Quantizations?
#1 opened over 1 year ago by musicurgy