How to use it in PyCharm for auto completion?

#6
by DrNicefellow - opened

As title

Likewise keen to know if this model can be used via a vLLM server with PyCharm.
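For concreteness, something like the sketch below is what I have in mind (untested; it assumes vLLM's OpenAI-compatible server started with `vllm serve JetBrains/Mellum-4b-base` on the default port, which may differ in practice):

```python
# Untested sketch: request a raw completion from a vLLM OpenAI-compatible
# server, assumed to be running via `vllm serve JetBrains/Mellum-4b-base`
# and listening on the default port 8000.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "JetBrains/Mellum-4b-base",
        "prompt": "def fibonacci(n):",
        "max_tokens": 64,
        "temperature": 0.0,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```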

JetBrains org

Hi. We at JetBrains are currently preparing a solution to simplify local usage of Mellum with our IDEs. At the moment, cloud code completion from JetBrains AI Assistant automatically uses Mellum hosted by JetBrains.

If you want to self-host it, the current best option is to implement a custom IntelliJ plugin and run Mellum via Ollama: https://ollama.com/JetBrains. As said above, stay tuned as we are working toward simplifying this!
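As a rough illustration (not an official recipe), querying a locally running Ollama server could look like the sketch below; the model tag is an assumption, so check `ollama list` for the exact name of whichever Mellum variant you pulled:

```python
# Minimal sketch: request a completion from a local Ollama server.
# The model tag is illustrative; use whichever Mellum tag you pulled
# (see `ollama list`). raw=True passes the prompt to the model verbatim,
# with no chat template applied, which is what FIM-style completion needs.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "JetBrains/Mellum-4b-base",  # illustrative tag, verify locally
        "prompt": "def fibonacci(n):",
        "raw": True,
        "stream": False,
        "options": {"num_predict": 64, "temperature": 0.0},
    },
    timeout=120,
)
print(resp.json()["response"])
```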

Hi

I'm trying to run Mellum under Ollama for code completion, not chat.

Following https://huggingface.co/JetBrains/Mellum-4b-base#fill-in-the-middle-with-additional-files-as-context-generation

I could not get it to complete the middle with this prompt: "return 42<fim_suffix><fim_prefix>def foo(<fim_middle>"

The FIM tokens are documented in https://huggingface.co/JetBrains/Mellum-4b-base/blob/main/tokenizer_config.json
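For reference, this is the shape of what I'm sending (raw mode, no template). The suffix-first token ordering `<fim_suffix>{suffix}<fim_prefix>{prefix}<fim_middle>` is my guess at what the model card example means, and that guess may be exactly where I'm going wrong:

```python
# Sketch of my attempt: a hand-built FIM prompt sent to Ollama in raw mode.
# The suffix-first token ordering is my reading of the model card and may
# be incorrect; the model tag is whatever `ollama list` reports locally.
import requests

prefix = "def foo("
suffix = "    return 42"
prompt = f"<fim_suffix>{suffix}<fim_prefix>{prefix}<fim_middle>"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "JetBrains/Mellum-4b-base",  # tag from `ollama list`
        "prompt": prompt,
        "raw": True,
        "stream": False,
        "options": {"num_predict": 32, "temperature": 0.0},
    },
    timeout=120,
)
print(resp.json()["response"])
```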

Would you kindly show an example of how to get a completion?

Thanks.
