Context window size?

#23
by JulienGuy

Hello,

I am calling Meta-Llama-3.1-70B-Instruct using Haystack v2.0's HuggingFaceTGIGenerator in the context of a RAG application.
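For context, the call looks roughly like this. A minimal sketch, assuming the Haystack 2.x API of the time (HuggingFaceTGIGenerator has since been deprecated in favor of HuggingFaceAPIGenerator); the prompt placeholder and env var name are illustrative:

```python
# Minimal sketch of the setup described above (Haystack 2.x, hosted HF Inference API).
from haystack.components.generators import HuggingFaceTGIGenerator
from haystack.utils import Secret

generator = HuggingFaceTGIGenerator(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",
    token=Secret.from_env_var("HF_API_TOKEN"),
    generation_kwargs={"max_new_tokens": 512},
)
generator.warm_up()

# A long RAG prompt (retrieved context + question); with prompts past ~8k tokens
# the hosted API responds with the 422 shown below.
result = generator.run(prompt="<retrieved documents + question go here>")
print(result["replies"][0])
```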

Although this model is advertised as having a 128k context window, as opposed to Meta-Llama-3-70B-Instruct's 8k context window, I get the following error:

[screenshot: HTTP 422 (Unprocessable Entity) error]

I am puzzled as to whether this model's context window is purposely limited to 8k tokens, or whether this is a compatibility issue with Haystack 2.0. Any hints would be appreciated.

Great question. I would also like to know the answer.

Hi Ixex1,

Turns out it's easy: you deploy the model on a (paid) dedicated inference endpoint and configure it to accept long inputs.

Of course, this means you're paying for the hardware you're renting.
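Concretely, once the endpoint is running you point the same generator at its URL instead of the hosted API. A sketch; the URL is a placeholder for the one shown on your endpoint's overview page:

```python
from haystack.components.generators import HuggingFaceTGIGenerator
from haystack.utils import Secret

generator = HuggingFaceTGIGenerator(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # used to fetch the tokenizer
    url="https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder URL
    token=Secret.from_env_var("HF_API_TOKEN"),
)
generator.warm_up()

# Prompts longer than 8k tokens now succeed, up to whatever max input
# length you configured on the endpoint.
result = generator.run(prompt="<long RAG prompt>")
```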

Awesome! Thanks so much. Is there a link you could share? I'm unsure where to begin...

Go to the model card, for example: https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct

Then click "Deploy" and select "Inference Endpoints".

You will have a number of options to configure the endpoint.

If you need higher-end hardware such as an Nvidia A100, be aware that Hugging Face may not currently be able to fulfill your request, as availability is limited. Contacting customer support can help.
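If you'd rather script the deployment than click through the UI, huggingface_hub can create the endpoint as well. A sketch, assuming huggingface_hub's create_inference_endpoint; the endpoint name, hardware, region, and the TGI env values that set the servable context length are illustrative and should be adjusted:

```python
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "llama-31-70b-long-context",  # hypothetical endpoint name
    repository="meta-llama/Meta-Llama-3.1-70B-Instruct",
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",
    instance_type="nvidia-a100",
    custom_image={
        "health_route": "/health",
        "url": "ghcr.io/huggingface/text-generation-inference:latest",
        "env": {
            "MODEL_ID": "/repository",
            # TGI settings that determine the context length the endpoint accepts:
            "MAX_INPUT_LENGTH": "32768",  # max prompt tokens
            "MAX_TOTAL_TOKENS": "34816",  # prompt + generated tokens
        },
    },
)
endpoint.wait()  # block until the endpoint reports "running"
print(endpoint.url)
```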
