Hugging Face
Lance (clevnumb)
1 follower · 3 following
AI & ML interests
None yet
Recent Activity
New activity 4 days ago in kontext-community/relighting-kontext-dev-lora-v3:
What is meant by "You should use `` to trigger the image generation."?
New activity about 1 month ago in DFloat11/BAGEL-7B-MoT-DF11:
Running this in the Gradio interface, how long should a default-setting generation take with an RTX 4090?
Organizations
None yet
clevnumb's activity
New activity in kontext-community/relighting-kontext-dev-lora-v3 · 4 days ago
What is meant by "You should use `` to trigger the image generation."?
5 · 1
#2 opened 4 days ago by clevnumb
New activity in DFloat11/BAGEL-7B-MoT-DF11 · about 1 month ago
Running this in the Gradio interface, how long should a default-setting generation take with an RTX 4090?
2
#2 opened about 1 month ago by clevnumb
New activity in unsloth/DeepSeek-R1-Distill-Qwen-7B-GGUF · 5 months ago
Error loading on lm-studio
4
#1 opened 6 months ago by victor-Des
New activity in TheDrummer/Cydonia-22B-v1.3 · 6 months ago
What context size is best when using a 24GB VRAM card (4090)?
3
#1 opened 7 months ago by clevnumb
New activity in anthracite-org/magnum-v2-32b-exl2 · 11 months ago
Which quant of this model will fit entirely in VRAM on a single 24GB video card (4090)?
3
#1 opened 11 months ago by clevnumb
New activity in NeverSleep/Lumimaid-v0.2-12B · 11 months ago
My alternate quantizations.
5
#3 opened 12 months ago by ZeroWw
New activity in bullerwins/Big-Tiger-Gemma-27B-v1-exl2_5.0bpw · 12 months ago
Not loading in latest Tabby (with SillyTavern): ERROR
2
#2 opened 12 months ago by clevnumb
New activity in AzureBlack/PsyMedRP-v1-20B-8bpw-8h-exl2 · about 1 year ago
Single 4090 using OogaBooga? (Windows 11, 96GB of RAM)
1
#1 opened over 1 year ago by clevnumb
New activity in LoneStriker/miquella-120b-3.0bpw-h6-exl2 · over 1 year ago
Will any 120b model currently fit on a single 24GB VRAM card through any app I can run on PC? (aka 4090)
15
#1 opened over 1 year ago by clevnumb
New activity in h94/IP-Adapter-FaceID · over 1 year ago
Are there safetensor files for the models?
3 · 7
#37 opened over 1 year ago by wonderflex
New activity in TeeZee/Kyllene-57B-v1.0-bpw3.0-h6-exl2 · over 1 year ago
Glacially slow on an RTX 4090?
5
#1 opened over 1 year ago by clevnumb
New activity in LoneStriker/Nous-Capybara-34B-4.0bpw-h6-exl2 · over 1 year ago
Which of these 34B model BPWs will fit in a single 24GB card's (4090) VRAM?
9
#1 opened over 1 year ago by clevnumb
New activity in LoneStriker/Yi-34B-200K-4.0bpw-h6-exl2 · over 1 year ago
RTX 4090 using Text-Generation-WebUI / Oogabooga FAILS to load this model with Exllama2 (or any method?)
11
#1 opened over 1 year ago by clevnumb
New activity in MetaIX/GPT4-X-Alpasta-30b-4bit · about 2 years ago
What are the different files for?
2
#9 opened about 2 years ago by Arya123456
New activity in TheBloke/guanaco-33B-GPTQ · about 2 years ago
Could this model be loaded on a 3090 GPU?
24
#6 opened about 2 years ago by Exterminant
New activity in Aeala/GPT4-x-AlpacaDente2-30b · about 2 years ago
Is it unfiltered/uncensored?
2
#2 opened about 2 years ago by sneedingface
New activity in TheBloke/alpaca-lora-65B-GGML · about 2 years ago
Thank you very much!
6 · 10
#2 opened about 2 years ago by AiCreatornator