[Animated data flow diagram]

Dynamic AI Chat Agent via OpenRouter & n8n

Version: 1.0.0 | Last Updated: 2025-05-16

Integrates with:

OpenRouter, Langchain

Overview

Unlock Flexible LLM Interaction with this AI Agent

This n8n AI Agent empowers you to connect with a vast array of Large Language Models (LLMs) through OpenRouter. Instead of being locked into a single provider, you can dynamically choose and switch between models from OpenAI, Google, Anthropic, Mistral AI, and many others, all within a single, streamlined workflow. This agent is designed for building versatile AI chat solutions, enabling you to test different model capabilities, optimize for cost, or tailor responses for specific tasks.

It listens for chat inputs, processes them using your selected LLM via OpenRouter, and maintains conversational context using built-in memory. This setup is perfect for solopreneurs, founders, and technical leaders looking to experiment with or deploy AI chat functionalities with maximum flexibility and control.

Key Features & Benefits

  • Multi-Model Access: Connect to hundreds of LLMs via OpenRouter (e.g., GPT series, Gemini, Claude, Llama, Mistral models) without managing multiple API integrations.
  • Dynamic Model Selection: Easily switch the LLM used by the agent by changing a single 'model' variable – ideal for A/B testing, cost management, or task-specific model routing.
  • Conversational Context: Utilizes n8n's Langchain memory nodes to remember previous interactions in a session for more coherent conversations.
  • Rapid Prototyping: Quickly build and test AI chat features with different models before committing to a specific provider or fine-tuning approach.
  • Cost-Effective Experimentation: Leverage OpenRouter's platform to find the most cost-effective models for your needs, including access to free-tier models where available.
  • Seamless n8n Integration: Built with standard n8n Langchain nodes for easy understanding, customization, and extension within your n8n environment.

Use Cases

  • **E-commerce Customer Service**: Develop a highly adaptable customer service AI. Use OpenRouter to switch between a fast, cost-effective LLM for common FAQs (e.g., Mistral 7B free tier) and a more powerful model (e.g., Claude 3 Sonnet) for complex inquiries or personalized recommendations, optimizing both user experience and operational costs.
  • **SaaS Internal Tools**: Create internal AI assistants for diverse tasks. Empower your sales team with an AI for drafting outreach emails using one LLM, while your support team uses another LLM optimized for technical troubleshooting, all managed through this unified n8n agent.
  • **SaaS Product Feature**: Offer flexible AI-powered features within your SaaS product by leveraging various LLMs through OpenRouter. This allows you to adapt to new model releases or choose the best model for specific user-generated content tasks without re-engineering your core integration.
  • **Founder/CTO Prototyping**: Quickly prototype and validate AI-driven product features. Test different LLMs (e.g., comparing summarization quality of DeepSeek Coder vs. Gemini Flash) to find the optimal balance of performance, cost, and suitability before significant investment.
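The cost-vs-capability routing described in the e-commerce use case can be sketched as a simple rule that picks a model identifier before the request is sent. This is an illustrative heuristic only: the length threshold, the keyword list, and the exact model IDs are assumptions (verify identifiers at https://openrouter.ai/models), not part of the workflow itself.

```python
# Hypothetical routing rule: a free, fast model for short FAQ-style
# queries, a stronger model for long or complex ones.
CHEAP_MODEL = "mistralai/mistral-7b-instruct:free"
STRONG_MODEL = "anthropic/claude-3-sonnet"  # assumed OpenRouter model ID

def pick_model(message: str) -> str:
    """Return an OpenRouter model identifier based on query complexity."""
    complex_markers = ("refund", "integration", "error", "compare")
    if len(message) > 200 or any(m in message.lower() for m in complex_markers):
        return STRONG_MODEL
    return CHEAP_MODEL
```

In n8n, the same effect could be achieved with a Code or Switch node that sets the 'model' variable before the agent runs, since the workflow reads the model from a single Set node value.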

Prerequisites

  • An n8n instance (Cloud or self-hosted).
  • An OpenRouter API Key. You can get one by signing up at https://openrouter.ai/.
  • Crucial Credential Setup: In n8n, navigate to Credentials and create a new 'OpenAI API' credential. Use your OpenRouter API Key in the 'API Key' field. For the 'Base URL' field, enter https://openrouter.ai/api/v1. Name this credential something like 'OpenRouter Creds' for easy identification in the workflow.
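The credential trick above works because OpenRouter exposes an OpenAI-compatible API: the workflow's OpenAI node simply points at a different Base URL. As a rough sketch of the request shape involved (the helper function and placeholder key are illustrative, not part of the workflow):

```python
import json

# Same Base URL you enter in the n8n credential.
OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"

def build_chat_request(api_key: str, model: str, messages: list) -> tuple:
    """Build the URL, headers, and JSON body for an OpenAI-style
    chat completion request routed through OpenRouter."""
    url = f"{OPENROUTER_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

# Placeholder key shown for illustration; send with any HTTP client,
# e.g. requests.post(url, headers=headers, data=body).
url, headers, body = build_chat_request(
    "sk-or-...",
    "mistralai/mistral-7b-instruct:free",
    [{"role": "user", "content": "Hello!"}],
)
```

This is exactly why an 'OpenAI API' credential with a swapped Base URL is all the workflow needs: no OpenRouter-specific node is required.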

Setup Instructions

  1. Download the n8n workflow JSON file (this agent's template).
  2. Import the workflow into your n8n instance.
  3. Locate the 'LLM Model' node (a node of type @n8n/n8n-nodes-langchain.lmChatOpenAi).
    • In its parameters, select the credential you created in the Prerequisites step (e.g., 'OpenRouter Creds').
  4. Go to the 'Settings' node (a Set node).
    • Modify the model string value to specify your desired LLM from OpenRouter. For example: openai/gpt-4o-mini, google/gemini-1.5-flash, anthropic/claude-3-haiku-20240307, mistralai/mistral-7b-instruct:free.
    • Consult the 'Sticky Note2' within the workflow or visit https://openrouter.ai/models for a comprehensive list of available model identifiers.
    • The prompt variable is pre-set to use {{ $json.chatInput }} from the 'When chat message received' trigger. The sessionId variable uses {{ $json.sessionId }} for conversation memory.
  5. Test the workflow: You can connect the 'When chat message received' trigger to an actual chat interface or manually run the workflow, providing sample JSON input for chatInput and sessionId when prompted.
  6. Activate the workflow to make it live.
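For the manual test in step 5, a sample input might look like the following. The field names chatInput and sessionId come from the workflow's trigger; the example text and the use of a random UUID for the session are illustrative assumptions.

```python
import json
import uuid

# Sample payload for manually running the 'When chat message received' trigger.
# sessionId should stay constant across messages in one conversation so the
# memory node can link them together.
sample_input = {
    "chatInput": "What models can I use through OpenRouter?",
    "sessionId": str(uuid.uuid4()),
}
print(json.dumps(sample_input, indent=2))
```

Reusing the same sessionId across several test runs is a quick way to confirm that the conversational memory is working.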

Tags:

AI Agent, Automation, OpenRouter, LLM, Conversational AI, Chatbot, API Integration, Generative AI

Want your own unique AI agent?

Talk to us - we know how to build custom AI agents for your specific needs.

Schedule a Consultation