This is the same as Yukang's Llama-2-70b-longlora-32k, except that the LoRA weights have been merged into the base model and the extra pad token has been stripped from the tokenizer, so it matches the layout of the base Llama model. Please refer to that page for more details.

It was created by merging LongAlpaca-70B-lora into Llama-2-70b, replacing the embed and norm layers as described in the LongLoRA repo, and removing the extra embedding row and pad token.
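The pad-token removal above amounts to trimming one row off the merged model's token-embedding matrix so the vocabulary size returns to the base Llama value of 32000. A minimal numpy sketch of that trim (the real hidden size for Llama-2-70b is 8192; a toy size is used here to keep the example light):

```python
import numpy as np

# Llama-2's vocab is 32000; LongLoRA training added one extra pad
# token, giving the embedding matrix one extra row (32001 total).
vocab_size, hidden = 32000, 64  # toy hidden size; 8192 in the real model

embed = np.random.randn(vocab_size + 1, hidden).astype(np.float32)

# Drop the extra row so the merged model matches the base Llama layout.
trimmed = embed[:vocab_size]
print(trimmed.shape)
```

In the actual merge this is done on the checkpoint tensors (and the pad token is likewise removed from the tokenizer) rather than on an in-memory array, but the operation is the same row slice.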

This is not an instruct-tuned model, but a base model intended for further fine-tuning. It supports 32K tokens of context via linear RoPE scaling with a factor of 8.
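Linear RoPE scaling works by dividing each position index by the scaling factor before computing the rotary angles, so a factor of 8 stretches Llama-2's native 4096-token window to 32768 tokens. A small sketch of the idea (the 128-dim head size and 10000 base are the standard Llama-2 values; the function itself is an illustrative simplification, not the library implementation):

```python
import numpy as np

def rope_angles(position, dim=128, base=10000.0, scaling_factor=1.0):
    # Linear RoPE scaling: divide the position index by the factor,
    # stretching the usable context window by that multiple.
    pos = position / scaling_factor
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return pos * inv_freq

# With factor 8, position 32768 produces the same rotary angles the
# base model saw at position 4096 (its original context limit).
a = rope_angles(32768, scaling_factor=8.0)
b = rope_angles(4096)
```

In the Transformers library this corresponds to setting `rope_scaling={"type": "linear", "factor": 8.0}` in the Llama config when loading the model.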
