Domain-Fusion-r256-LoRA

This is a LoRA adapter extracted from a fine-tuned language model using mergekit.

LoRA Details

This LoRA adapter was extracted from ChaoticNeutrals/Domain-Fusion-L3-8B and uses NousResearch/Meta-Llama-3-8B as a base.
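
To use the adapter at inference time, it can be attached to the base model with Transformers and PEFT. The snippet below is a minimal sketch, assuming the adapter is published under the repository id kromcomp/L3-Domain-Fusion-r256-LoRA; the prompt and generation settings are illustrative.

```python
# Minimal sketch: load the Llama 3 base model and attach this extracted LoRA with PEFT.
# The adapter repo id (kromcomp/L3-Domain-Fusion-r256-LoRA) is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Meta-Llama-3-8B"
adapter_id = "kromcomp/L3-Domain-Fusion-r256-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the extracted LoRA weights to the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```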

Parameters

The following command was used to extract this LoRA adapter:

/usr/local/bin/mergekit-extract-lora --out-path=loras/Domain-Fusion-r256-LoRA --model=ChaoticNeutrals/Domain-Fusion-L3-8B --base-model=NousResearch/Meta-Llama-3-8B --no-lazy-unpickle --max-rank=256 --gpu-rich -v --skip-undecomposable --embed-lora
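
Because extraction was capped at rank 256 and undecomposable modules were skipped, applying the adapter reproduces an approximation of the source model rather than an exact copy. If a standalone dense checkpoint is preferred, the adapter can be folded back into the base weights. A sketch using PEFT's merge_and_unload; the adapter repo id and output path are assumptions.

```python
# Sketch: merge the extracted LoRA into the base weights to approximate
# ChaoticNeutrals/Domain-Fusion-L3-8B as a standalone checkpoint.
# The adapter repo id and the output directory name are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "kromcomp/L3-Domain-Fusion-r256-LoRA")

# Fold the low-rank deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("Domain-Fusion-r256-merged")

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
tokenizer.save_pretrained("Domain-Fusion-r256-merged")
```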