ICONN e1: The new era of Open-Source CoT in AI

**GPU poor? Fewer than 3x A100s? An e1 Lite model is coming with just 22B parameters, alongside models for consumer CPUs at 14B and 7B parameters.**

  • **Emotional Context Awareness**
    ICONN e1 interprets emotional cues and adjusts tone, vocabulary, and response style—offering a more human-like, emotionally reactive experience.

  • **ICONN Emotional Core (IEC)** (Note: not available on Hugging Face)
    Powered by millions of small AI agents, IEC gives ICONN its emotional personality, spanning billions of simulated emotional states.

  • **Reasoning**
    ICONN e1 is one of the most powerful open-source reasoning models, and it is competitive with many closed-source models both on and off Hugging Face.

What is in the ICONN e1 MoE?

ICONN e1 MoE and Experts

ICONN e1, being a MoE just like its base model ICONN 1, has multiple expert models. Keywords are taken from the user's input to choose which expert generates the output, as in the examples below; a toy sketch of this routing follows the expert list.

| Expert Chosen | User Input |
| --- | --- |
| ICONN-e1 | 'Hi!' |
| ICONN-e1-Pro | Solve for m: m² − (2 + ∑ⱼ₌₁² j)·m + (1 + ∑ⱼ₌₁³ j² − 14) = 0. |
| ICONN-e1-Science | If a stable isotope of Ununoctium (Uuo, now Og) could be synthesized in bulk, what would be its most likely physical state at STP and why, considering relativistic effects? |
| ICONN-e1-Code | Create a zero-dependency quantum-safe VM in Zig that compiles a domain-specific language into a fully homomorphic encrypted IR, supports hot-reloading WebAssembly modules, parallel scheduling via lock-free fibers, and performs live introspection through a headless OpenGL debug overlay. |
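
For reference, the sums in the ICONN-e1-Pro prompt evaluate to ∑ⱼ₌₁² j = 3 and ∑ⱼ₌₁³ j² = 14, so the equation reduces to m² − 5m + 1 = 0, with roots m = (5 ± √21)/2.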

ICONN-e1:
ICONN's general-purpose reasoning model, designed for everyday tasks, logic, and conversation.

ICONN-e1-Pro:
ICONN's advanced reasoning model, optimized for complex problem-solving in math, logic, and professional domains.

ICONN-e1-Science:
ICONN's scientific expert model, trained on advanced science datasets to enhance precision in physics, chemistry, biology, and technical reasoning.

ICONN-e1-Code:
ICONN's coding specialist, trained for programming, compiler theory, software architecture, and technical code generation across multiple languages.
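
The routing itself happens inside the MoE and is not exposed as a Python API; as a rough illustration only, here is a minimal sketch of keyword-based expert selection. The `EXPERT_KEYWORDS` table and `route_to_expert` helper are hypothetical and not part of the ICONN codebase.

```python
# Toy sketch of keyword-based expert routing (hypothetical; the real
# router is internal to the MoE, not a user-facing function).
EXPERT_KEYWORDS = {
    "ICONN-e1-Code": {"code", "compile", "function", "bug", "zig", "python"},
    "ICONN-e1-Science": {"isotope", "physics", "chemistry", "biology", "relativistic"},
    "ICONN-e1-Pro": {"solve", "prove", "theorem", "integral", "equation"},
}

def route_to_expert(user_input: str) -> str:
    """Pick the expert whose keyword set best matches the input;
    fall back to the general-purpose ICONN-e1."""
    tokens = set(user_input.lower().split())
    best_expert, best_hits = "ICONN-e1", 0
    for expert, keywords in EXPERT_KEYWORDS.items():
        hits = len(tokens & keywords)
        if hits > best_hits:
            best_expert, best_hits = expert, hits
    return best_expert

print(route_to_expert("Hi!"))                        # ICONN-e1
print(route_to_expert("Solve this equation for m"))  # ICONN-e1-Pro
```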

Usage

**First, make sure you have at least 4x NVIDIA A100s or a single B100, 120 GB of RAM, and 120-192 GB of VRAM. Don't have this? Use our Lite model, coming soon.**

Run the code below to run ICONN e1:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

def run_iconn_chatbot(model_name="ICONNAI/ICONN-e1"):
    # Load the tokenizer and model. BF16 matches the checkpoint dtype, and
    # device_map="auto" shards the 84B-parameter model across available GPUs.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    chat_pipeline = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_length=1624,
        do_sample=True,
        top_p=0.9,
        temperature=0.4,
        pad_token_id=tokenizer.eos_token_id,
    )

    print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.")
    conversation_history = ""

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Goodbye!")
            break

        conversation_history += f"User: {user_input}\nBot:"

        # Generate up to 100 new tokens beyond the current history.
        response = chat_pipeline(
            conversation_history,
            max_length=len(tokenizer.encode(conversation_history)) + 100,
        )[0]["generated_text"]

        # Keep only the first line of the model's continuation as the reply.
        bot_reply = response[len(conversation_history):].strip().split("\n")[0]

        print(f"Bot: {bot_reply}")

        conversation_history += f" {bot_reply}\n"

if __name__ == "__main__":
    run_iconn_chatbot()
```
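
If you don't have that much VRAM, one common workaround while waiting for the Lite model is 4-bit quantized loading via bitsandbytes. This is a sketch, not an officially supported path; it assumes `bitsandbytes` and `accelerate` are installed and that the ICONN-e1 checkpoint is compatible with this loading route.

```python
# Hedged sketch: 4-bit NF4 quantization to cut VRAM requirements roughly 4x
# versus BF16. Compatibility with this MoE checkpoint is assumed, not verified.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ICONNAI/ICONN-e1"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization scheme
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in BF16, matching the checkpoint
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # shard across available GPUs, offloading to CPU if needed
)
```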
Model size: 84B params · Tensor type: BF16 · Format: Safetensors