How to install this locally and use offline?
How to install this locally and use offline? Any references or videos would be great
This is actually the GGML version (my bad) - you should be able to use it with llama.cpp: https://github.com/ggerganov/llama.cpp
Then you want llama-cpp-python, the llama.cpp bindings for Python. It loads GGML files exactly the same way llama.cpp does, but makes them easily accessible from code.
The model can then be used either directly from your own Python code, or via an OpenAI-compatible API server which you can point LangChain, or any other client, at.
https://github.com/abetlen/llama-cpp-python
https://pypi.org/project/llama-cpp-python/
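Here's a minimal sketch of the direct-from-Python route (the model filename is a placeholder - point it at the actual GGML .bin file you downloaded from this repo):

```python
from llama_cpp import Llama

# Load the local GGML model file (placeholder name - use your downloaded .bin)
llm = Llama(model_path="./ggml-model-q4_0.bin")

# Once the model file is on disk, inference runs fully offline
output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```

For the OpenAI-compatible API route, the package also ships a server (install with `pip install llama-cpp-python[server]`, then run `python -m llama_cpp.server --model <path>`), which LangChain or any OpenAI-style client can talk to.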
You could use this video: https://www.youtube.com/watch?v=aO5ZpFYHa0A&t=1s
Thanks everyone, goal accomplished