AI Router

Use AI to intelligently route workflow execution based on prompt analysis

Node Type

Conditional

Category

AI & Logic

Icon

AI Brain

Overview

The AI Router node is a powerful conditional node that uses Large Language Models (LLMs) to analyze prompts and intelligently select which downstream node to follow. Instead of traditional rule-based routing, this node leverages AI to understand context and make smart decisions about workflow execution paths.

Key Features

  • AI-Powered Routing: Uses LLMs to analyze prompts and select optimal execution paths
  • Dynamic Node Analysis: Automatically discovers and analyzes available downstream nodes
  • Letter-Based Selection: Maps downstream nodes to letters (A, B, C, D...) for easy AI reference
  • Flexible Execution: Option to allow no execution if no suitable path exists
  • Context-Aware Decisions: Considers node descriptions, types, and categories when routing
  • Cost Tracking: Monitors LLM usage costs for optimization

Prerequisites

AI Service Access

Must have access to LLM services

  • LLM service properly configured and accessible
  • Valid API keys and authentication set up
  • Sufficient API credits for LLM calls

Workflow Structure

  • Downstream Nodes: Must have at least one connected downstream node for routing
  • Node Limits: Maximum of 26 downstream nodes supported (A-Z mapping)
  • Proper Connections: Nodes must be properly connected in the workflow graph

Technical Requirements

  • Workflow Service: Access to the workflow service for node discovery
  • Node Registry: Access to the node registry for configuration information
  • Enum Switch Tool: LLM tool for structured decision making

Node Configuration

Required Fields

Prompt

Type: text
Required: Yes
Example: "Route to the node that handles email processing"

Describes how the LLM should choose a branch. The AI will analyze this prompt along with available downstream nodes to make the routing decision.

Optional Fields

Allow No Execution

Type: dropdown
Required: No
Default: false

When enabled, allows the AI to choose not to execute any branches if none are appropriate for the given prompt.

Technical Details

AI Decision Process

How the node analyzes prompts and makes routing decisions

Downstream Node Discovery

The node automatically discovers all connected downstream nodes and creates a letter-based mapping:

  • Node A: First downstream node (index 0)
  • Node B: Second downstream node (index 1)
  • Node C: Third downstream node (index 2)
  • Maximum: 26 nodes (A-Z range)
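The mapping above can be sketched in a few lines of Python. This is an illustrative sketch, not the node's actual implementation; the node identifiers used here are hypothetical.

```python
from string import ascii_uppercase

def letter_map(downstream_nodes):
    """Map each downstream node to a letter (A-Z) by its index."""
    if len(downstream_nodes) > 26:
        raise ValueError("AI Router supports at most 26 downstream nodes (A-Z)")
    # Index 0 -> "A", index 1 -> "B", and so on.
    return {ascii_uppercase[i]: node for i, node in enumerate(downstream_nodes)}

mapping = letter_map(["email-processor", "doc-analyzer", "image-processor"])
# mapping["A"] → "email-processor", mapping["C"] → "image-processor"
```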

Enhanced Prompt Creation

The node creates an enhanced prompt that includes detailed information about each available downstream node:

  • Node title and type
  • Node description and category
  • Full node configuration JSON
  • Letter mapping for easy reference
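A minimal sketch of how such an enhanced prompt might be assembled. The field names (`title`, `type`, `category`, `description`, `config`) are assumptions for illustration, not the node's actual schema.

```python
import json
from string import ascii_uppercase

def build_enhanced_prompt(user_prompt, nodes):
    """Combine the user's routing prompt with per-node details and letter labels."""
    lines = [user_prompt, "", "Available downstream nodes:"]
    for letter, node in zip(ascii_uppercase, nodes):
        # One summary line per node, keyed by its letter...
        lines.append(
            f"{letter}: {node['title']} ({node['type']}, {node['category']}) - "
            f"{node['description']}"
        )
        # ...followed by the full configuration as JSON.
        lines.append(f"   Config: {json.dumps(node['config'])}")
    return "\n".join(lines)
```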

LLM Integration

How the node interfaces with Large Language Models

Enum Switch Tool

Uses a specialized LLM tool that constrains the AI's response to valid options only. This ensures the AI can only select from the available downstream nodes or "NONE" if no execution is allowed.
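The core idea of constraining the model to valid options can be sketched as follows. `ask_llm` is a stub standing in for the real LLM call, not an actual API; the point is that any answer outside the allowed set is rejected.

```python
def ask_llm(prompt, choices):
    # Stub standing in for the real LLM call: always picks the first option.
    return choices[0]

def enum_switch(prompt, options, allow_none=False):
    """Ask the LLM to pick exactly one option; accept only valid answers."""
    allowed = list(options) + (["NONE"] if allow_none else [])
    answer = ask_llm(prompt, choices=allowed)
    if answer not in allowed:
        raise ValueError(f"LLM returned {answer!r}, expected one of {allowed}")
    return answer
```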

Cost Tracking

Monitors and returns the cost of LLM API calls, allowing users to track and optimize their AI usage expenses across workflows.

Fallback Handling

If the LLM call fails, the node gracefully falls back to selecting the first available downstream node, ensuring workflow execution continues even with AI service issues.
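The fallback behavior amounts to a simple try/except around the LLM call. In this sketch, `call_llm` is a placeholder that simulates an outage to show the fallback path.

```python
def call_llm(prompt, options):
    # Placeholder simulating an LLM service outage.
    raise RuntimeError("LLM service unavailable")

def select_branch(prompt, options):
    """Pick a branch via the LLM, falling back to the first node on failure."""
    try:
        return call_llm(prompt, options)
    except Exception:
        # Graceful fallback: route to the first downstream node.
        return options[0]

# select_branch("route this", ["A", "B", "C"]) → "A"
```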

Branch Selection Logic

How the selected branch is processed and executed

Index Calculation

Converts the AI's letter selection back to a numerical index for workflow execution. For example, if the AI selects "C", the node calculates index 2 and routes to the third downstream node.

No Execution Handling

When "Allow No Execution" is enabled and the AI selects "NONE", the workflow continues without executing any downstream branches. This is useful for conditional workflows that do not always require action.

Validation

Validates that the selected index is within the valid range of downstream nodes, throwing an error if the AI somehow selects an invalid option.
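The three steps above (index calculation, no-execution handling, and range validation) can be sketched together. This is an illustrative Python sketch of the described logic, not the node's actual code.

```python
def resolve_selection(letter, num_nodes, allow_no_execution=False):
    """Convert the AI's letter selection to a downstream node index."""
    if letter == "NONE":
        if not allow_no_execution:
            raise ValueError('"NONE" selected but no-execution is not allowed')
        return None  # skip all downstream branches
    index = ord(letter) - ord("A")  # "A" -> 0, "B" -> 1, "C" -> 2, ...
    if not 0 <= index < num_nodes:
        raise ValueError(f"Selected branch {letter!r} is out of range")
    return index

# resolve_selection("C", 3) → 2 (route to the third downstream node)
```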

Examples & Use Cases

Content Type Routing

Route based on content analysis

{
  "prompt": "Route to the appropriate node based on the content type. If it's an email, use the email processor. If it's a document, use the document analyzer. If it's an image, use the image processor."
}

The AI will analyze the content and route to the most appropriate processing node based on the content type.

Priority-Based Routing

Route based on urgency or priority

{
  "prompt": "Route urgent requests to the high-priority queue, standard requests to the normal queue, and low-priority items to the batch processor."
}

The AI determines the priority level and routes accordingly, ensuring proper resource allocation.

Conditional Execution with No-Execution

Allow the AI to skip execution when appropriate

{
  "prompt": "Only process items that require immediate attention. Skip processing if the item is already handled or doesn't need action.",
  "allowNoExecution": true
}

The AI can choose "NONE" if no downstream nodes are appropriate, preventing unnecessary processing.

Workflow Examples

Intelligent Content Processing Pipeline

Route different content types to specialized processors

Workflow Structure

📥 Input → 🧠 AI Router → 📧 Email Processor / 📄 Doc Analyzer / 🖼️ Image Processor → 📊 Results

AI Router Configuration

{
  "prompt": "Analyze the input content and route to the appropriate processor. Emails go to email processor, documents to document analyzer, and images to image processor."
}

Downstream Nodes

  • Node A: Email Processor - Handles email content and metadata
  • Node B: Document Analyzer - Processes text documents and PDFs
  • Node C: Image Processor - Analyzes and processes images

Customer Support Ticket Routing

Automatically route support tickets to appropriate teams

Use Case

Automatically analyze customer support tickets and route them to the most appropriate support team based on the issue type, urgency, and team expertise.

AI Router Prompt

{
  "prompt": "Route this support ticket to the appropriate team. Technical issues go to engineering, billing questions to finance, general inquiries to customer service, and urgent issues to the priority queue."
}

Implementation

  • AI analyzes ticket content and urgency
  • Routes to appropriate team based on issue type
  • Handles edge cases and unclear routing scenarios
  • Provides consistent and fair ticket distribution

Best Practices

Do's

  • Write clear, specific prompts that describe the routing logic
  • Use descriptive node titles and descriptions for better AI understanding
  • Limit downstream nodes to 26 or fewer for optimal performance
  • Test prompts with various input scenarios
  • Monitor LLM costs and optimize prompt efficiency
  • Use "Allow No Execution" when appropriate
  • Provide context about what each downstream node does

Don'ts

  • Don't write overly complex or ambiguous prompts
  • Avoid having too many downstream nodes (max 26)
  • Don't ignore LLM costs in high-volume workflows
  • Avoid prompts that require subjective judgment
  • Don't assume the AI will always make perfect decisions
  • Avoid routing logic that changes frequently
  • Don't forget to handle edge cases in your prompts

💡 Pro Tip: When designing prompts, think about how you would explain the routing logic to another person. Clear, specific instructions with examples often lead to better AI decision-making and more predictable workflow behavior.

Troubleshooting

Common Issues

No Downstream Nodes Found

Symptoms: Node fails with "No downstream nodes found" error

Solution: Ensure the AI Router node is properly connected to at least one downstream node. Check your workflow connections and verify the node placement.

LLM Service Errors

Symptoms: Node falls back to default routing or fails to execute

Solution: Check your LLM service configuration, API keys, and service availability. The node will fall back to the first downstream node if LLM calls fail.

Unexpected Routing Decisions

Symptoms: AI makes routing decisions that don't match expectations

Solution: Review and refine your prompt. Make it more specific, provide clearer examples, and ensure downstream node descriptions are accurate and helpful.

High LLM Costs

Symptoms: Unexpectedly high costs from frequent AI Router usage

Solution: Optimize prompts to be more concise, implement caching strategies, and consider using simpler conditional logic for high-volume workflows.
