---
base_model:
- Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B
- Steelskull/L3.3-MS-Nevoria-70b
- Tarek07/Progenitor-V3.3-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# Pinecone-Titan-70b

## 🌲 Pinecone Series
The Pinecone Series is a collection of thoughtfully crafted model merges that combine the strengths of my personal favourite models. Each version is curated to excel at roleplay, general knowledge, intelligence, and rich creative writing, while preserving the unique capabilities of its underlying models.
| Version | Params | Strengths |
|---|---|---|
| Pinecone-Rune | 12B | Fast, lightweight, surprisingly capable for its size |
| Pinecone-Sage | 24B | Balanced speed and performance, rich prose and RP |
| Pinecone-Titan | 70B | Rich prose, better long-context capabilities, top-tier roleplay & knowledge |
Recommended SillyTavern (ST) preset for RP:
## ☕ Support My Work
If you like my work, consider buying me a coffee to support future merges, GPU time, and experiments.
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
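Since the card tags `library_name: transformers`, the merged weights should load like any other Llama-based 70B checkpoint. Below is a minimal loading sketch; the repository id is a placeholder, so substitute the actual Hugging Face repo for Pinecone-Titan-70b:

```python
# Minimal loading sketch. "your-username/Pinecone-Titan-70b" is a
# placeholder repo id, not the real repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/Pinecone-Titan-70b"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",           # shard the 70B weights across available GPUs
)

messages = [{"role": "user", "content": "Describe a forest clearing at dusk."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```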
## Merge Details

### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Steelskull/L3.3-MS-Nevoria-70b](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70b) as the base.
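For intuition, here is a tiny, self-contained sketch of what DARE TIES does per parameter tensor (illustrative only, not mergekit's implementation): each model's delta from the base is randomly dropped with probability 1 − density and the survivors are rescaled (DARE), a majority sign is elected across models (TIES), and only deltas agreeing with that sign are combined by their merge weights. Note that this merge uses `density: 1.0`, so the drop step is effectively a no-op here.

```python
# Illustrative sketch of DARE TIES on a single tensor; NOT mergekit's code.
import numpy as np

def dare_ties(base, finetuned, weights, density=1.0, rng=np.random.default_rng(0)):
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                           # task vector vs. the base model
        mask = rng.random(delta.shape) < density    # DARE: keep each element w.p. density
        delta = np.where(mask, delta / density, 0)  # ...rescaling survivors by 1/density
        deltas.append(w * delta)                    # apply the per-model merge weight
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))          # TIES: elect a majority sign per element
    agree = np.sign(stacked) == elected             # keep only deltas matching that sign
    return base + np.where(agree, stacked, 0).sum(axis=0)

base = np.zeros(4)
models = [np.array([0.2, -0.1, 0.3, 0.0]), np.array([0.1, 0.2, -0.4, 0.1])]
print(dare_ties(base, models, weights=[0.5, 0.5]))
```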
### Models Merged

The following models were included in the merge:

- [Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B](https://huggingface.co/Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B)
- [Tarek07/Progenitor-V3.3-LLaMa-70B](https://huggingface.co/Tarek07/Progenitor-V3.3-LLaMa-70B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: Steelskull/L3.3-MS-Nevoria-70b
chat_template: auto
merge_method: dare_ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 80]
        model: Steelskull/L3.3-MS-Nevoria-70b
        parameters:
          weight: 0.4
      - layer_range: [0, 80]
        model: Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B
        parameters:
          weight: 0.3
      - layer_range: [0, 80]
        model: Tarek07/Progenitor-V3.3-LLaMa-70B
        parameters:
          weight: 0.3
out_dtype: bfloat16
parameters:
  density: 1.0
tokenizer: {}
```
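To reproduce the merge from the configuration above, mergekit can be driven from its `mergekit-yaml` CLI or from Python. A sketch, assuming the YAML is saved as `config.yaml` and a recent mergekit release that exposes the documented `run_merge` entry point:

```python
# Sketch of reproducing the merge with mergekit's Python API; assumes the
# configuration above is saved as config.yaml and a recent mergekit release.
# Equivalent CLI: mergekit-yaml config.yaml ./Pinecone-Titan-70b --cuda
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Pinecone-Titan-70b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # carry the base model's tokenizer over
        lazy_unpickle=True,              # reduce peak RAM while loading shards
        low_cpu_memory=True,
    ),
)
```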