Need assistance
Hey, I like your work very much, and I'm having a lot of fun testing your different models. You've done great work. But I have a question: you currently have 200+ models, so can you help me pick the best one? The task is simple: the model has to do role play and it should have good general knowledge. I also have a 15GB GPU, so which is the best model I can use?
With 15GB VRAM you will want to limit yourself to 13B models.
I don't really look into role play myself, but all of the following models are good for chat, so they may do what you want (a rough loading sketch follows the list):
- https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ
- https://huggingface.co/TheBloke/samantha-1.1-llama-13B-GPTQ
- https://huggingface.co/TheBloke/based-13b-GPTQ
- https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ
- https://huggingface.co/TheBloke/Nous-Hermes-13B-GPTQ
- https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ
- https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ
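If you want to try one of them from Python rather than through text-generation-webui, something like this AutoGPTQ sketch should work. The repo name and prompt format below are just examples; each model has its own prompt template, so check the model card of whichever one you pick.

```python
# Rough sketch: loading a 4-bit GPTQ repo with AutoGPTQ (pip install auto-gptq transformers)
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/Nous-Hermes-13B-GPTQ"  # any of the 13B GPTQ repos above

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",       # 4-bit 13B weights should fit comfortably in ~15GB VRAM
    use_safetensors=True,  # older auto-gptq versions may also need model_basename=...
)

# Simple single-turn prompt; real role play needs the prompt template of the model you choose
prompt = "### Instruction:\nIntroduce yourself in character as a friendly wizard.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```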
Thanks for the assistance, I believe they all work quite well. But could you also make one that we can give instructions to as well as chat with? It's not a prioritized request; whenever you have a little time, could you do that as well?
Also, I want to know one more thing: what do you use for training and fine-tuning? On my system I can run 13B models, but I'm only able to train 2-3B models. Is there a preferred cloud computing service, or do you use your local system?
@TheBloke For the purpose of code generation, which model do you recommend for a 6GB (2060) graphics card? I see many models, but it's hard for me to choose the best for this purpose and limitation.
I guess with 6GB you only have a few options; you can try using a 7B model at 4-bit quant. There are a few models like StarCoder etc. You can check according to your use case, or you can use a cloud GPU service if you want better results from loading bigger models.
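Something like this might work for trying a 7B model at 4-bit with transformers + bitsandbytes. The model name is just an example (it may be gated behind a license agreement), so swap in whatever code model suits your use case:

```python
# Sketch: loading a 7B model in 4-bit so it can fit on a ~6GB card
# (pip install transformers accelerate bitsandbytes)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigcode/starcoderbase-7b"  # example only; pick a code model for your use case

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 to keep memory low
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers, spilling to CPU if VRAM runs out
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

GPTQ versions of 7B models are another option and tend to be a bit faster, but the bitsandbytes route works with unquantized repos directly.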