Colab: no pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack file · #27 opened over 1 year ago by Danis457845
How long does it take for split_long_conversation.py? · 1 reply · #26 opened over 1 year ago by maxkarl
Question about "The roles are still USER and ASSISTANT." Early stopping tokens bug. · 5 replies · #24 opened over 1 year ago by Goldenblood56
v230502 Testing and Discussion · 89 replies · #23 opened over 1 year ago by deleted
You may want to add an "act-order" GPTQ quantization. · 5 replies · #22 opened over 1 year ago by xzuyn
text-generation-webui: AttributeError: 'Offload_LlamaModel' object has no attribute 'preload' when trying to generate text · 8 replies · #21 opened over 1 year ago by hpnyaggerman
Fix for slow speed · 1 reply · #20 opened over 1 year ago by CyberTimon
Do I just copy over the config, tokenizer_config, and tokenizer.model from another 13B Vicuna model to get this working in ooba? I am getting an error. · 10 replies · #17 opened over 1 year ago by Goldenblood56
Some test · 3 replies · #16 opened over 1 year ago by yugihu
V4.3 Early Testing · 109 replies · #15 opened over 1 year ago by deleted
Question about this AI's identity (Solved) · 2 replies · #14 opened over 1 year ago by waynekenney
oobabooga model loading help? Thank you! · 12 replies · #12 opened over 1 year ago by waynekenney
The V4 is here · 80 replies · #11 opened over 1 year ago by TheYuriLover
Use Vicuna 1.1 since it no longer has the issue of feedback / talking to itself? · 2 replies · #10 opened over 1 year ago by nucleardiffusion
Why does it insert so many unsolicited HTTP links? · 1 reply · #9 opened over 1 year ago by ai2p
Talks to itself after answering the question · 2 replies · #8 opened over 1 year ago by nucleardiffusion
Is the q4 version in ggml format? · 1 reply · #4 opened over 1 year ago by ai2p
Thank you · 12 replies · #3 opened over 1 year ago by anon8231489123
Any chance we can get a 7B version? · 7 replies · #2 opened over 1 year ago by PushTheBrAIkes