How to run this model? Please help me · 2 replies · #40 opened 2 days ago by Agoogleuser
project_bb · #39 opened 3 days ago by salunkesayali
Update README.md · #37 opened 6 days ago by leonsas14
AssertionError: Torch not compiled with CUDA enabled · 1 reply · #35 opened 8 days ago by gokul9
Upload 4 files · #34 opened 8 days ago by amberfelts890
FP8 quantized model now available! (requires only half the original model's VRAM) · #33 opened 10 days ago by mysticbeing
Gadget repair · #32 opened 12 days ago by gvilchis
Update README.md · #31 opened 12 days ago by MarcosMT
Request: DOI · 1 reply · #30 opened 16 days ago by Natwar
Update README.md · #27 opened 17 days ago by jinyongkenny
Is there an AWQ quant version? · #26 opened 18 days ago by imonsoon
Request: DOI · #25 opened 20 days ago by Arslanbey123
Suitable hardware config for using this model · 10 replies · #22 opened 24 days ago by nimool
Prompt template · 2 replies · #20 opened 24 days ago by sadra
Any way I can run it on my low-to-mid-tier HP desktop? Specs attached as a .png; I know it's probably a long shot. · 6 replies · #18 opened 25 days ago by vgrowhouse
How to run inference on a 40 GB A100 with 80 GB RAM on Colab Pro? · 1 reply · #17 opened 26 days ago by SadeghPouriyan
nvdiallm · #16 opened 27 days ago by jhaavinash
Update README.md · #12 opened 28 days ago by Delcos
[EVALS] Metrics compared to 3.1-70B Instruct by Meta · 2 replies · #11 opened 29 days ago by ID0M
Congrats to the Nvidia team! · #10 opened 30 days ago by nickandbro
Will a quantised version be available? · 4 replies · #9 opened 30 days ago by angerhang
Adding Evaluation Results · #8 opened about 1 month ago by leaderboard-pr-bot
The model is not optimized for inference · 1 reply · #7 opened about 1 month ago by Imran1
there are 3 "r"s in the playful "strawrberry"?
5
#6 opened about 1 month ago
by
JieYingZhang
405B version · 2 replies · #5 opened about 1 month ago by nonetrix
Other language ability · 2 replies · #4 opened about 1 month ago by nonetrix
What's the difference between Instruct-HF and Instruct? · 1 reply · #2 opened about 1 month ago by Backup6
Turn inference ON? · 2 replies · #1 opened about 1 month ago by victor