zRzRzRzRzRzRzR
AI & ML interests
LLM & Agent
Recent Activity
updated a model about 23 hours ago: THUDM/cogagent-9b-20241220
liked a model 1 day ago: Qwen/QVQ-72B-Preview
updated a model 2 days ago: THUDM/cogagent-chat-hf
zRzRzRzRzRzRzR's activity
Can glm4 deployed with vLLM use tool calls? (2)
#84 opened about 1 month ago by hiert
Converting to native Transformers (13)
#17 opened 3 months ago by cyrilvallez
TypeError: BFloat16 is not supported on MPS (1)
#11 opened about 2 months ago by hiepsiga
Please upgrade to the THUDM/glm-4-9b-chat-hf model. (2)
#86 opened about 1 month ago by zRzRzRzRzRzRzR
VRAM, multi-GPU (1)
#8 opened about 1 month ago by tangxiaochu
Has the prompt been changed? (2)
#1 opened 7 months ago by okcwang
Error encountered when running inference with transformers
#4 opened 7 months ago by shams123321
Multiple/Parallel function call? (1)
#27 opened 7 months ago by Yhyu13
Resolved
#54 opened 6 months ago by zhongyi1997cn
What deployment option can pass tools the way the Zhipu API does? Is there a ready-made Modelfile for vLLM, Llama.cpp, or ollama? (1)
#85 opened about 1 month ago by AubreyChen
Converting to native Transformers (22)
#81 opened 3 months ago by cyrilvallez
Upload VAE in fp32
#13 opened about 1 month ago by a-r-r-o-w
Upload VAE in fp32 (1)
#23 opened about 1 month ago by a-r-r-o-w
Still no output (3)
#21 opened about 1 month ago by Blacknoon
Update tokenizer for compatibility with new transformers (3)
#64 opened about 2 months ago by katuni4ka
Any optimizations to accelerate inference speed? (8)
#7 opened about 1 month ago by mayukitan
Update README.md
#4 opened about 1 month ago by multimodalart
The PR for diffusers has not been merged yet (1)
#1 opened about 1 month ago by zRzRzRzRzRzRzR
Got OOM when num_frames=161, fps=16 (2)
#5 opened about 1 month ago by vfychen