Max output tokens and knowledge cutoff for Mistral-8B-Instruct-2410

#24
by MengboZhou - opened

Hello,

I would like to ask two questions about the Mistral-8B-Instruct-2410:

  1. What is the maximum number of output tokens the model can generate during inference?

    • For example, is there a known limit, such as 2048 or 8192 tokens?
  2. What is the knowledge cutoff date for this version?

    • Was the model trained on data up to a specific month or year (e.g., 2023-03, 2023-08)?
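Regarding question 1: for Hugging Face checkpoints, the `config.json` typically exposes the context window via `max_position_embeddings`, which bounds prompt plus generated tokens combined; the number of output tokens is usually set per request with a generation parameter such as `max_new_tokens` rather than being a separate fixed model limit. A minimal sketch of reading that field (the numbers below are illustrative placeholders, not confirmed values for this model):

```python
import json

# Hypothetical excerpt of a checkpoint's config.json -- the values are
# placeholders for illustration, not the real numbers for this model.
config_text = '''
{
  "max_position_embeddings": 32768,
  "vocab_size": 131072
}
'''

config = json.loads(config_text)

# The context window bounds prompt + generated tokens together; most
# checkpoints do not define a separate output-only token limit.
print("context window:", config["max_position_embeddings"])
```

At inference time, the practical output cap is then roughly the context window minus the prompt length, controlled by whatever `max_new_tokens`-style setting your inference stack provides.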

I’ve searched the documentation but couldn’t find a definitive answer to either question.

Thank you in advance for your help!