Geilim-1B-Instruct: Implicit Deep Reasoning, Zero Verbosity
NoesisLab/Geilim-1B-Instruct
https://huggingface.co/collections/NoesisLab/geilim-large-language-models
No <think> tags. No long CoT.
Reasoning happens inside the hidden states, not in the output.
What's different
- Implicit reasoning: deep causal reasoning without exposing chains
- ASPP (Adjacency-Structured Parallel Propagation): parent-only causal graph, O(n) message passing (first sketch below)
- π-flow: internal probability-space refinement instead of token-level deliberation (second sketch below)
- Hybrid gating: learns when to use structure vs. attention (third sketch below)
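The ASPP layer itself isn't public, so here is a minimal sketch of the parent-only idea, assuming each token carries the index of a single parent in the causal graph. `ParentOnlyPropagation`, `parent_idx`, `W_msg`, and the residual update are illustrative names, not the released code:

```python
import torch
import torch.nn as nn

class ParentOnlyPropagation(nn.Module):
    """Each token receives one message from its single parent: O(n), not O(n^2)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.W_msg = nn.Linear(d_model, d_model, bias=False)

    def forward(self, h: torch.Tensor, parent_idx: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, d_model); parent_idx: (batch, seq) int64 index of each
        # token's parent (roots can simply point at themselves).
        idx = parent_idx.unsqueeze(-1).expand(-1, -1, h.size(-1))
        parent_h = torch.gather(h, 1, idx)          # gather the parent states
        return h + self.W_msg(parent_h)             # one linear message per token
```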
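Similarly, one toy reading of π-flow: a fixed number of refinement steps over a probability vector inside the model, with no tokens emitted along the way. `PiFlow`, `refine`, the step count, and the update rule are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiFlow(nn.Module):
    """Refine a distribution in probability space instead of emitting CoT tokens."""
    def __init__(self, d_model: int, vocab: int, steps: int = 4):
        super().__init__()
        self.to_logits = nn.Linear(d_model, vocab)
        self.refine = nn.Linear(vocab, vocab)  # one shared refinement step (assumed)
        self.steps = steps

    def forward(self, h_last: torch.Tensor) -> torch.Tensor:
        # h_last: (batch, d_model) hidden state at the final position.
        p = F.softmax(self.to_logits(h_last), dim=-1)
        for _ in range(self.steps):
            # Deliberation stays internal: nudge p in logit space, renormalize.
            p = F.softmax(torch.log(p + 1e-9) + self.refine(p), dim=-1)
        return p
```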
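And hybrid gating read as a per-token sigmoid mix between the structured (ASPP) path and the attention path; again a sketch under those assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class HybridGate(nn.Module):
    """Learned per-token gate: g -> 1 trusts structure, g -> 0 trusts attention."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(d_model, 1)

    def forward(self, h_struct: torch.Tensor, h_attn: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, seq, d_model).
        g = torch.sigmoid(self.gate(h_struct + h_attn))
        return g * h_struct + (1 - g) * h_attn
```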
Why it matters
- Lower latency & token cost
- Cleaner, production-ready outputs
- CoT-level reasoning depth without the verbosity tax
Built on Llama-3.2-1B-Instruct, trained for math, logic, and commonsense.
Designed for small-model reasoning at the edge.
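If the repo follows the standard transformers layout (typical for Llama-3.2 finetunes, though unverified here), loading and prompting should look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NoesisLab/Geilim-1B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
inputs = tok.apply_chat_template(messages, return_tensors="pt",
                                 add_generation_prompt=True)
out = model.generate(inputs, max_new_tokens=64)
# Expect a direct answer with no <think> block or visible chain-of-thought.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```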
#ImplicitReasoning #SmallLLM #EfficientAI #ReasoningModels #ASPP #PiFlow