Local LLM Chat Agent: Interact with Self-Hosted Models via n8n & Ollama
Overview
Unlock Private & Customizable AI Conversations with this AI Agent
This AI Agent lets you interact directly with your self-hosted Large Language Models (LLMs) through a user-friendly chat interface within n8n. By connecting to Ollama, a tool for running and managing local LLMs such as Llama3 and Mistral, you can send prompts and receive AI-generated responses while keeping your data private and your AI stack fully customizable. This agent's primary skill is providing direct, local conversational AI.
Key Features & Benefits
- Direct Local LLM Interaction: Chat with LLMs (e.g., Llama3, Mistral) running on your own hardware via Ollama, facilitated by Langchain nodes.
- Full Data Privacy: Keep sensitive data in-house by processing prompts and responses locally, crucial for founders and businesses handling proprietary information.
- Model Flexibility: Easily switch between different Ollama-supported LLMs to find the best fit for your specific task or business need.
- n8n Chat Interface: Utilizes n8n's built-in chat trigger for a smooth and integrated user experience for testing and internal use.
- Rapid Prototyping: Quickly test and iterate on AI-driven ideas, features, or internal tools without relying on external APIs or incurring per-call costs.
- AI-driven Automation: Leverages the power of local LLMs managed by Ollama for intelligent text generation, Q&A, and other conversational tasks.
Use Cases
- Founders & Solopreneurs: Experiment with custom AI assistants for tasks like content drafting, code generation, or brainstorming, maintaining full data control and privacy.
- B2C E-commerce: Develop and test internal tools for generating product description variants or answering complex customer queries using company-specific data on a local LLM.
- B2B SaaS Companies: Enable R&D and automation teams to rapidly prototype and evaluate different local LLMs for features like automated support ticket summarization or internal knowledge base querying within a secure n8n environment.
- CTOs & Heads of Automation: Assess the performance and suitability of various open-source LLMs for specific business processes, ensuring data governance and cost control by hosting locally.
Prerequisites
- An n8n instance (Cloud or self-hosted).
- Ollama installed and running on a machine accessible by your n8n instance (e.g., your local machine, a server).
- At least one LLM downloaded via Ollama (e.g., run `ollama pull llama3` in your terminal).
- n8n 'Ollama' credentials configured in your n8n instance, pointing to your Ollama server's base URL (typically `http://localhost:11434`).
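To confirm which models your Ollama server actually exposes (the same list n8n's 'Model' dropdown draws from), the server's `/api/tags` endpoint returns them as JSON. The snippet below is a minimal sketch of parsing that response; the sample payload stands in for live output and only illustrates the response shape, so treat the field names as assumptions to check against Ollama's API docs.

```python
import json

# Illustrative shape of a GET http://localhost:11434/api/tags response.
# In practice you would fetch this with urllib or requests; this sample
# stands in for a live server.
sample_response = json.dumps({
    "models": [
        {"name": "llama3:latest"},
        {"name": "mistral:latest"},
    ]
})

# Extract the model names -- these are the identifiers you select
# in the 'Ollama Chat Model' node's Model parameter.
models = [m["name"] for m in json.loads(sample_response)["models"]]
print(models)
```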
Setup Instructions
- Download the n8n workflow JSON file.
- Import the workflow into your n8n instance.
- Ensure Ollama is installed and running on your machine (or a server accessible by n8n). You can verify by opening `http://localhost:11434` (or your Ollama address) in a browser.
- Pull an LLM model using Ollama if you haven't already (e.g., open your terminal and run `ollama pull llama3`).
- Configure the 'Ollama Chat Model' node in the workflow:
  a. Under 'Connect to Ollama', select your pre-configured Ollama credential or create a new one. The 'Base URL' should be your Ollama server address (e.g., `http://localhost:11434`).
  b. In the 'Model' parameter, select the specific LLM you want to use from the dropdown list (it is populated with the models available in your Ollama instance).
  c. (Optional) Adjust other parameters such as 'Temperature', or define a 'System Message' via an expression for advanced control.
- If you're running n8n in a Docker container and Ollama is on your host machine, ensure n8n can reach Ollama. For Docker Desktop, `http://host.docker.internal:11434` may work as the Base URL. On Linux hosts, running the n8n container with `--net=host` is an option, or place both containers on the same Docker network and use the appropriate internal IP/hostname.
- Activate the workflow.
- To interact with the agent, use the n8n chat interface. Open the workflow, click the 'Execute Workflow' button (or the play button on the 'When chat message received' node if testing manually), and send a message in the chat panel that appears.
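Under the hood, the Ollama Chat Model node talks to Ollama's HTTP API, so it can help to know the rough shape of a chat request body when debugging connectivity outside n8n. The sketch below builds such a body (for a POST to `<base URL>/api/chat`) with the standard library; the model name and messages are placeholders, and the field names follow Ollama's chat API as commonly documented, so treat them as assumptions.

```python
import json

# A minimal sketch of the JSON body Ollama's chat endpoint expects
# (POST http://localhost:11434/api/chat). Model and messages are
# placeholders; "stream": False requests one complete response
# instead of a token stream.
payload = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello from n8n!"},
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

Posting a body like this with curl against your Base URL is a quick way to confirm the server and model respond before wiring up the n8n credential.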
Want your own unique AI agent?
Talk to us - we know how to build custom AI agents for your specific needs.
Schedule a Consultation