
This is an interleaved merge of Xwin-longLORA-70b-rope8-32k-fp16 and Euryale-1.3-longLORA-70b-rope8-32k-fp16, using the same merge formula as alpindale's goliath-120b.
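A goliath-style interleaved merge stacks alternating, overlapping slices of decoder layers from the two source models rather than averaging weights. The sketch below illustrates the idea only; the layer ranges shown are hypothetical and are not the actual recipe used for goliath-120b or this model.

```python
# Sketch of a goliath-style interleaved layer merge. The slice ranges are
# illustrative assumptions, NOT the actual recipe for this model.
def interleave_layers(ranges):
    """ranges: list of (model_name, start, end) with `end` exclusive.
    Returns the stacked layer list for the merged model."""
    merged = []
    for model, start, end in ranges:
        merged.extend((model, i) for i in range(start, end))
    return merged

# Hypothetical alternating, overlapping slices from two 80-layer 70B models.
recipe = [
    ("Xwin",    0, 20),
    ("Euryale", 10, 30),
    ("Xwin",    20, 40),
    ("Euryale", 30, 50),
    ("Xwin",    40, 60),
    ("Euryale", 50, 70),
    ("Xwin",    60, 80),
]
merged = interleave_layers(recipe)
print(len(merged))  # 140 layers in this illustrative recipe
```

Because adjacent slices overlap, the merged model ends up with substantially more layers (and parameters) than either 70B source, which is how a pair of 70B models yields a ~120B merge.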

No additional fine-tuning was performed. The resulting model appears to be functional; you can test whether it truly behaves as the original model with 32K context capability (use linear RoPE scaling with a factor of 8).
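Linear RoPE scaling divides position indices by the scaling factor before computing the rotary embedding angles, so a model trained at a 4K context sees 32K positions compressed into its original position range. A minimal sketch (the factor of 8 and the 4K base context are assumptions taken from the "rope8-32k" naming):

```python
import numpy as np

def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
    """Rotary embedding angles; linear scaling divides positions by `scale`."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    pos = np.asarray(positions, dtype=np.float64) / scale
    return np.outer(pos, inv_freq)

# With scale=8, position 32768 maps to the same angles as position 4096 unscaled:
a = rope_angles([32768], scale=8.0)
b = rope_angles([4096], scale=1.0)
print(np.allclose(a, b))  # True
```

With Hugging Face transformers, this corresponds to loading the model with `rope_scaling={"type": "linear", "factor": 8.0}` (or the equivalent entry in `config.json`).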

ChuckMcSneed ran a benchmark here, which indicates roughly 30% degradation at 8x the original context length.

A 6-bit EXL2 quantization is available here. More EXL2 quants are available here, thanks to aikitoria.

See this discussion for how the original 70B merges were created with longLORA.

Model size: 118B params
Tensor type: BF16 (Safetensors)

Model tree for grimulkan/Goliath-longLORA-120b-rope8-32k-fp16
