Consideration

Definition

Consideration is the ModelA family’s first full-scale implementation of Artificial Thinking. Unlike simple prompt chaining or hidden reasoning steps, Consideration is a state in which the model analyzes, plans, and structures its response before any text is surfaced to the user.

Key Properties

  • Not a Mode: Consideration is not activated manually or through user commands. It is a native property of the ModelA 9 series.

  • Structured Breakdown: The engine internally decomposes a prompt into subcomponents (intent, facts, reasoning paths, tool opportunities) before committing to an answer.

  • Adaptive Depth: The amount of Consideration scales based on model variant:

    • Pico: Minimal analysis, optimized for maximum throughput and direct execution.

    • Nano: Performs lightweight analysis, optimized for speed.

    • Standard: Balances efficiency with depth, applying full Consideration to complex queries.

    • Pro: Capable of extended reasoning with significantly higher maximum thought output.
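The structured breakdown and variant-scaled depth described above might be sketched as follows. This is a toy illustration only: the class name, budget values, and heuristics are hypothetical, not ModelA internals.

```python
from dataclasses import dataclass, field

# Hypothetical per-variant "thinking" budgets (illustrative numbers only).
DEPTH_BUDGET = {
    "pico": 0,         # minimal analysis, direct execution
    "nano": 256,       # lightweight analysis, optimized for speed
    "standard": 2048,  # full Consideration on complex queries
    "pro": 16384,      # extended reasoning, higher thought output
}

@dataclass
class Breakdown:
    """Subcomponents a prompt could be decomposed into before answering."""
    intent: str
    facts: list[str] = field(default_factory=list)
    reasoning_paths: list[str] = field(default_factory=list)
    tool_opportunities: list[str] = field(default_factory=list)

def consider(prompt: str, variant: str) -> Breakdown:
    """Toy decomposition: analysis depth scales with the variant's budget."""
    budget = DEPTH_BUDGET[variant]
    breakdown = Breakdown(intent=prompt)
    if budget > 0:
        breakdown.facts.append(f"recall facts relevant to: {prompt!r}")
    if budget >= 2048:
        breakdown.reasoning_paths.append("enumerate candidate solution paths")
        breakdown.tool_opportunities.append("check whether a tool call would help")
    return breakdown
```

Under this sketch, a Pico variant returns an almost empty breakdown, while Standard and Pro populate reasoning paths and tool opportunities before any answer is committed.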

Technical Behavior

  • Internal Thought Buffer: Responses are built in a hidden reasoning layer before being streamed to the user.

  • Decision Pathways: When uncertainty is detected, the model leverages Consideration to decide whether to:

    • Search the web (if tool access is available),

    • Admit lack of knowledge (“I don’t know”), or

    • Generate a probabilistic response.

  • Token Efficiency: By pre-structuring outputs, Consideration reduces redundancy, leading to more concise and direct answers.
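The decision pathway above can be sketched as a small routing function. The confidence thresholds and strategy names here are hypothetical placeholders, not documented ModelA behavior.

```python
def resolve_uncertainty(confidence: float, tools_available: bool,
                        threshold: float = 0.6) -> str:
    """Toy decision pathway: choose a fallback strategy when confidence is low.

    All thresholds and strategy labels are illustrative assumptions.
    """
    if confidence >= threshold:
        return "answer"               # confident enough: answer directly
    if tools_available:
        return "search_web"           # low confidence, tools present: search
    if confidence < 0.2:
        return "admit_unknown"        # very low confidence: say "I don't know"
    return "probabilistic_answer"     # moderate confidence: hedge a best guess
```

The key design point is the ordering: tool use is preferred over guessing, and an honest "I don't know" is preferred over a near-baseless answer.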

Comparison to Other Systems

  • Competes with early “thinking” implementations from companies such as OpenAI (chain-of-thought variants), DeepSeek, and Anthropic.

  • Differentiated by being stateful rather than optional: Consideration is woven into the engine’s core behavior instead of being offered as an opt-in feature.

Impact on Users

  • Users receive clearer, less verbose responses that still retain depth when required.

  • A reduced rate of hallucination, owing to structured fallback strategies.

  • More intelligent tool usage, as the model evaluates when tools should be invoked rather than blindly attempting them.