Why is the generated content always the same when I use this model?
#7 opened 12 months ago by LiMuyi

Phi 3 tokenizer_config has been updated upstream
#6 opened over 1 year ago by smcleod

Mistake in README instructions
#5 opened over 1 year ago by adamkdean

Gibberish results when context is greater than 2048
#4 opened over 1 year ago by Bakanayatsu
Do they work with Ollama? How was the conversion done for 128K? llama.cpp/convert.py complains about ROPE.
#2 opened over 1 year ago by BigDeeper