Nexes the Elder
Nexesenex
AI & ML interests
Maintaining the KoboldCPP fork Croco.Cpp, and the Llama.cpp fork NXS_Llama.cpp.
Merging models sometimes.
Making quants of my merges and fav models, often specific to IKL/NXSL/Croco.Cpp.
Barking at many trees.
Thanks to Mradermacher and Bartowski for their imatrixes; I use them extensively.
Note: Don't make quants of my merges until they are versioned (Vx or Vx.x)!
Recent Activity
- Updated a model 1 day ago: NexesQuants/google_gemma-3-27b-it-qat-q4_0-unquantized-iMat-NXS-GGUF
- Updated a model 1 day ago: NexesQuants/Gemma-3-4b_X-Ray-Abli_Linear_v1.01-iMat-IKL-NXS-CQ-GGUF
- Updated a model 1 day ago: NexesQuants/mistral-small-3.1-24b-instruct-2503-iMat-IKLQ-GGUF
Llama 3 70B 128K
Legacy L3 70b models merged with Llama 3.1 70b Tess 3 (credit: Migel Tissera) for 128K context capability and low perplexity.
- Nexesenex/Llama_3.x_70b_L3.3_Dolphin_128K_v1.02 • Text Generation • 71B • Updated • 10
- Nexesenex/Llama_3.x_70b_Tess_Dolphin_128K_v1.2 • Text Generation • 71B • Updated • 12 • 1
- Nexesenex/Llama_3.x_70b_Tess_Cat_128K_v1.0 • Text Generation • 71B • Updated • 28
- Nexesenex/Llama_3.x_70b_L3.3_Athene_128K_v1.02 • Text Generation • 71B • Updated • 31 • 1
My favorite models (benchmarks + usage/feel + innovation)
How do I select models? First, I benchmark them to weed out the overfit and dumbed-down ones. Then I test them. The ones I find smartest end up here.
- NexesQuants/alchemonaut_QuartetAnemoi-70B-iMat.GGUF • 69B • Updated • 200 • 11
- Nexesenex/MIstral-QUantized-70b_Miqu-1-70b-iMat.GGUF • 69B • Updated • 1.76k • 70
- NexesQuants/TeeZee_Kyllene-Yi-34B-v1.1-iMat.GGUF • 34B • Updated • 698 • 24
- Nexesenex/TomGrc_FusionNet_7Bx2_MoE_v0.1-iMat.GGUF • 13B • Updated • 116 • 2
Releases
My merges worth downloading and using.
- Nexesenex/Llama_3.x_70b_Hexagon_Blue_V1 • Text Generation • 71B • Updated • 7 • 1
- Nexesenex/Llama_3.x_70b_SmarTricks_v1.30_flat • Text Generation • 71B • Updated • 9
- Nexesenex/Llama_3.x_70b_Hexagon_Pink_V1 • Text Generation • 71B • Updated • 14
- Nexesenex/Llama_3.x_70b_Hexagon_Purple_V2 • Text Generation • 71B • Updated • 1.76k • 2
Experimental merges
Merges with particular features.
The models I used the most along the way
Mradermacher's quants of my models.