[Animated data flow diagram]

AI Chat Agent with Ollama & n8n for Local LLM Interaction

Version: 1.0.0 | Last Updated: 2025-05-16

Integrates with:

Ollama · LangChain

Overview

Unlock Interactive AI Chat with Your Own Models Using This AI Agent

This n8n workflow transforms your local Ollama setup into an interactive AI Chat Agent. It listens for chat inputs, processes them using a specified Ollama model (like Llama 3.2), and then intelligently returns a structured JSON object containing both the user's prompt and the AI's response. This AI Agent is designed for those who want to leverage powerful open-source LLMs locally, maintaining data privacy and control while enabling sophisticated conversational AI capabilities.

Key Features & Benefits

  • Local LLM Power: Connects directly to your Ollama instance, allowing you to use models like Llama 3.2 (or any other compatible model) running on your own hardware.
  • AI-Driven Conversations: Provides a chat interface to interact with your chosen language model for tasks like Q&A, content generation, or technical assistance.
  • Structured JSON Output: The AI is prompted to return responses in a clean JSON format ({"Prompt": "user's_question", "Response": "ai's_answer"}), making it easy to parse and use the output in other automations or applications.
  • Customizable Prompts: Tailor the system prompt within the LangChain LLM node to guide the AI's behavior and response style for your specific needs.
  • Data Privacy & Control: Keep your data on your own systems by using local models via Ollama.
  • Error Handling: Includes a basic error path to manage instances where the LLM chain might encounter issues, ensuring a fallback response.
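The structured output and fallback behavior described above can be consumed downstream like this — a minimal Python sketch, assuming the `Prompt`/`Response` field names the workflow's prompt specifies (the `parse_agent_reply` helper and its fallback message are illustrative, not part of the workflow itself):

```python
import json

def parse_agent_reply(raw: str) -> dict:
    """Parse the agent's JSON reply; fall back gracefully when the LLM
    returns something that isn't valid JSON (mirroring the error path)."""
    try:
        data = json.loads(raw)
        return {
            "Prompt": data.get("Prompt", ""),
            "Response": data.get("Response", ""),
        }
    except json.JSONDecodeError:
        # Hypothetical fallback text; the workflow's error branch
        # would supply its own message here.
        return {"Prompt": "", "Response": "Sorry, the model returned an unparseable reply."}

# A well-formed reply parses into the two expected fields:
reply = parse_agent_reply('{"Prompt": "What is n8n?", "Response": "A workflow automation tool."}')
print(reply["Response"])
```

Because the output is plain JSON, any downstream n8n node (or external application) can pick out the two fields without scraping free-form text.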

Use Cases

  • B2C E-commerce: Develop an internal AI assistant for staff to query local product databases or internal documentation securely.
  • B2B SaaS: Create a sandboxed environment for developers to experiment with local LLM responses and integrate structured AI output into new product features.
  • Solopreneurs: Build a personalized, private AI assistant for brainstorming, drafting content, or coding, without relying on third-party cloud services.
  • CTOs/Founders: Rapidly prototype and test applications of various open-source LLMs for R&D, ensuring data confidentiality and cost control.
  • Heads of Automation: Automate tasks requiring natural language understanding and structured output by feeding chat inputs programmatically and consuming the JSON results.

Prerequisites

  • An n8n instance (Cloud or self-hosted).
  • Ollama installed and running (e.g., on your local machine or a server accessible by n8n).
  • At least one model pulled in Ollama (workflow defaults to llama3.2:latest, but can be changed).
  • n8n LangChain nodes installed (@n8n/n8n-nodes-langchain package).
  • Network connectivity between your n8n instance and your Ollama API endpoint.
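One way to confirm the model prerequisite is to query Ollama's `GET /api/tags` endpoint, which lists locally pulled models. A small sketch of checking that response — the sample payload below is illustrative; in practice you would fetch it from your own Base URL (e.g., http://localhost:11434/api/tags):

```python
import json

def model_available(tags_json: str, model: str) -> bool:
    """Check whether a model name appears in the JSON returned by
    Ollama's GET /api/tags endpoint (the list of locally pulled models)."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name") == model for m in models)

# Illustrative /api/tags response body (not a live fetch):
sample = '{"models": [{"name": "llama3.2:latest"}, {"name": "mistral:latest"}]}'
print(model_available(sample, "llama3.2:latest"))
```

If the workflow's default model is missing, `ollama pull llama3.2` will fetch it.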

Setup Instructions

  1. Download the n8n workflow JSON file.
  2. Import the workflow into your n8n instance.
  3. Configure the 'Ollama Model' node:
     a. Create or select your Ollama API credentials. The 'Base URL' should point to your Ollama server (e.g., http://localhost:11434 if running locally).
     b. Verify the 'Model' parameter (default is llama3.2:latest) matches a model available in your Ollama installation. You can change this to any model you have pulled.
  4. (Optional) Customize the master prompt in the 'Basic LLM Chain' node. The text parameter contains instructions for the LLM, including the JSON output format. Adjust this for your specific needs.
  5. (Optional) Modify the 'Structured Response' node to change how the final chat message is displayed or structured for the chat output.
  6. The 'When chat message received' node acts as the trigger. You can use n8n's built-in chat UI to interact with this agent.
  7. Activate the workflow. Test by sending a message through the chat interface.
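Under the hood, the 'Ollama Model' node talks to Ollama's chat API. A minimal sketch of an equivalent request body (Ollama's `POST /api/chat`); the system prompt shown is a hypothetical stand-in for the JSON-output instruction customized in step 4:

```python
import json

# Hypothetical system prompt mirroring step 4: instruct the model to
# answer as a JSON object with "Prompt" and "Response" keys.
SYSTEM_PROMPT = (
    'Answer the user and reply ONLY with a JSON object of the form '
    '{"Prompt": "<the user\'s question>", "Response": "<your answer>"}.'
)

def build_chat_request(user_message: str, model: str = "llama3.2:latest") -> str:
    """Build the JSON body for Ollama's POST /api/chat endpoint."""
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
        "format": "json",  # ask Ollama to constrain output to valid JSON
    }
    return json.dumps(body)

# POST this body to <Base URL>/api/chat, e.g. http://localhost:11434/api/chat
print(json.loads(build_chat_request("What is n8n?"))["model"])
```

Setting `"format": "json"` is a useful belt-and-braces measure alongside the prompt instruction, since Ollama then constrains decoding to valid JSON.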

Tags:

AI Agent · Ollama · Llama3 · Chatbot · NLP · Automation · LangChain · Local LLM · Structured Data

Want your own unique AI agent?

Talk to us - we know how to build custom AI agents for your specific needs.

Schedule a Consultation