---
license: apache-2.0
tags:
  - moe
  - frankenmoe
  - merge
  - mergekit
  - lazymergekit
  - CultriX/MonaTrix-v4
  - mlabonne/OmniTruthyBeagle-7B-v0
  - CultriX/MoNeuTrix-7B-v1
  - paulml/OmniBeagleSquaredMBX-v3-7B
base_model:
  - CultriX/MonaTrix-v4
  - mlabonne/OmniTruthyBeagle-7B-v0
  - CultriX/MoNeuTrix-7B-v1
  - paulml/OmniBeagleSquaredMBX-v3-7B
---

# NeuralMona_MoE-4x7B

NeuralMona_MoE-4x7B is a Mixture of Experts (MoE) model built from the following four models using LazyMergekit:

* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [CultriX/MoNeuTrix-7B-v1](https://huggingface.co/CultriX/MoNeuTrix-7B-v1)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
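For readers new to the architecture: an MoE layer routes each token to a small subset of expert networks instead of running all of them. The numpy sketch below illustrates a generic top-2 router over four experts; the sizes and variable names are toy stand-ins of my own, not this model's actual weights or code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2           # toy sizes; the merge has 4 experts
W_gate = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

def moe_layer(x):
    # The router scores the token's hidden state against each expert,
    # keeps the top-k experts, and mixes their outputs with softmax weights.
    logits = x @ W_gate                  # (n_experts,)
    top = np.argsort(logits)[-top_k:]    # indices of the k best-scoring experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

print(moe_layer(rng.standard_normal(d)).shape)  # -> (8,)
```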

## 🧩 Configuration

```yaml
base_model: CultriX/MonaTrix-v4
dtype: bfloat16
experts:
  - source_model: "CultriX/MonaTrix-v4"  # Historical Analysis, Geopolitics, and Economic Evaluation
    positive_prompts:
      - "Historic analysis"
      - "Geopolitical impacts"
      - "Evaluate significance"
      - "Predict impact"
      - "Assess consequences"
      - "Discuss implications"
      - "Explain geopolitical"
      - "Analyze historical"
      - "Examine economic"
      - "Evaluate role"
      - "Analyze importance"
      - "Discuss cultural impact"
      - "Discuss historical"
    negative_prompts:
      - "Compose"
      - "Translate"
      - "Debate"
      - "Solve math"
      - "Analyze data"
      - "Forecast"
      - "Predict"
      - "Process"
      - "Coding"
      - "Programming"
      - "Code"
      - "Datascience"
      - "Cryptography"

  - source_model: "mlabonne/OmniTruthyBeagle-7B-v0"  # Multilingual Communication and Cultural Insights
    positive_prompts:
      - "Describe cultural"
      - "Explain in language"
      - "Translate"
      - "Compare cultural differences"
      - "Discuss cultural impact"
      - "Narrate in language"
      - "Explain impact on culture"
      - "Discuss national identity"
      - "Describe cultural significance"
      - "Narrate cultural"
      - "Discuss folklore"
    negative_prompts:
      - "Compose"
      - "Debate"
      - "Solve math"
      - "Analyze data"
      - "Forecast"
      - "Predict"
      - "Coding"
      - "Programming"
      - "Code"
      - "Datascience"
      - "Cryptography"

  - source_model: "CultriX/MoNeuTrix-7B-v1"  # Problem Solving, Innovation, and Creative Thinking
    positive_prompts:
      - "Devise strategy"
      - "Imagine society"
      - "Invent device"
      - "Design concept"
      - "Propose theory"
      - "Reason math"
      - "Develop strategy"
      - "Invent"
    negative_prompts:
      - "Translate"
      - "Discuss"
      - "Debate"
      - "Summarize"
      - "Explain"
      - "Detail"
      - "Compose"

  - source_model: "paulml/OmniBeagleSquaredMBX-v3-7B"  # Explaining Scientific Phenomena and Principles
    positive_prompts:
      - "Explain scientific"
      - "Discuss impact"
      - "Analyze potential"
      - "Elucidate significance"
      - "Summarize findings"
      - "Detail explanation"
    negative_prompts:
      - "Cultural significance"
      - "Engage in creative writing"
      - "Perform subjective judgment tasks"
      - "Discuss cultural traditions"
      - "Write review"
      - "Design"
      - "Create"
      - "Narrate"
      - "Discuss"

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/NeuralMona_MoE-4x7B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
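Note that newer transformers releases deprecate passing `load_in_4bit` directly through `model_kwargs` in favor of an explicit `BitsAndBytesConfig`. The variant below should be equivalent; it is a sketch, not tested against this checkpoint:

```python
from transformers import BitsAndBytesConfig
import transformers
import torch

# Explicit 4-bit quantization config instead of the bare load_in_4bit flag.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
pipeline = transformers.pipeline(
    "text-generation",
    model="CultriX/NeuralMona_MoE-4x7B",
    model_kwargs={"quantization_config": bnb_config, "device_map": "auto"},
)
```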