DeepSeek AI Integration Quick Start: Chat & Reasoning Agent
Overview
Unlock Advanced AI Capabilities with this DeepSeek Quick Start Agent
This n8n workflow is your launchpad for integrating DeepSeek's cutting-edge language models into your automations. It's designed as a comprehensive quick start guide, showcasing multiple methods to connect and interact with deepseek-chat (V3, for general conversation) and deepseek-reasoner (R1, for complex reasoning tasks). Whether you prefer cloud APIs, LangChain's structured approach, or local models via Ollama, this agent has you covered.
Use this AI Agent to:
- Rapidly prototype conversational AI applications.
- Explore the reasoning power of DeepSeek-R1 for data analysis or problem-solving.
- Learn best practices for calling LLMs within n8n.
Key Features & Benefits
- Multiple DeepSeek Models: Demonstrates usage of both deepseek-chat (V3) and deepseek-reasoner (R1).
- Versatile Connection Methods:
  - Direct API Calls: Examples using n8n's HTTP Request node for deepseek-chat and deepseek-reasoner (a minimal request sketch follows this list).
  - LangChain Integration: Utilizes the LMChatOpenAI node, leveraging DeepSeek's OpenAI API compatibility.
  - Local Model Support: Shows how to connect to DeepSeek models running locally via Ollama using the LMChatOllama node.
- Conversational Agent Setup: Includes a LangChain agent with window buffer memory for building interactive chat experiences.
- Practical Examples: Pre-configured nodes and sticky notes explain each approach, making it easy to adapt and extend.
- Empowers AI-Driven Automation: Quickly add sophisticated natural language understanding, generation, and reasoning abilities to any n8n workflow.
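To make the direct-API path concrete, here is a minimal sketch of the kind of OpenAI-compatible request the HTTP Request nodes send to DeepSeek's chat completions endpoint. It is illustrative only: the prompt is an example, and DEEPSEEK_API_KEY is a placeholder for the key you would normally supply through an n8n credential rather than code.

```typescript
// Minimal sketch of a direct DeepSeek chat completion call (roughly what the
// HTTP Request nodes do). Requires Node 18+ for the global fetch API.
const apiKey = process.env.DEEPSEEK_API_KEY; // placeholder; use an n8n credential in practice

async function askDeepSeek(prompt: string): Promise<string> {
  const response = await fetch("https://api.deepseek.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // same header the 'HTTP Header Auth' credential adds
    },
    body: JSON.stringify({
      model: "deepseek-chat", // swap in "deepseek-reasoner" to target R1
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!response.ok) throw new Error(`DeepSeek API error: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content;
}

askDeepSeek("Summarize this workflow in one sentence.").then(console.log);
```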
Use Cases
- B2C E-commerce: Develop an AI-powered customer support chatbot using DeepSeek Chat V3 to handle inquiries, provide product recommendations, and resolve common issues 24/7.
- B2B SaaS: Build an intelligent in-app assistant to guide users through complex features or troubleshoot problems using DeepSeek's reasoning capabilities.
- B2C E-commerce: Automate personalized product description generation based on customer queries or new inventory, enhancing product appeal.
- B2B SaaS: Automate the summarization and analysis of customer feedback or technical documentation for faster insights and product development.
- Internal Tools: Create AI assistants for developers that leverage DeepSeek-R1 for code explanation or debugging assistance.
Prerequisites
- An n8n instance (Cloud or self-hosted).
- DeepSeek API Key (obtainable from platform.deepseek.com/api_keys).
- (Optional) Ollama setup with DeepSeek models (e.g., deepseek-r1) downloaded for local execution.
- n8n credentials configured for the DeepSeek API (e.g., 'Generic Credential Type: HTTP Header Auth' for HTTP Request nodes, or an OpenAI-compatible credential for LangChain nodes pointing to DeepSeek's API endpoint; a sketch of this OpenAI-compatibility approach follows this list).
- (Optional) n8n credentials for your local Ollama instance if using Ollama nodes.
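To illustrate the OpenAI-compatible credential mentioned above: outside n8n, the same configuration amounts to pointing an OpenAI client at DeepSeek's base URL. The sketch below uses the openai Node.js client; the model, prompt, and DEEPSEEK_API_KEY environment variable are illustrative placeholders, not part of the workflow itself.

```typescript
// Illustration of DeepSeek's OpenAI API compatibility: the official openai
// Node.js client works once its baseURL points at DeepSeek's endpoint.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.deepseek.com/v1",      // same Base URL used in the n8n credential
  apiKey: process.env.DEEPSEEK_API_KEY ?? "",  // placeholder; supply via credential, not code
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "deepseek-chat",
    messages: [{ role: "user", content: "Recommend a laptop under $800." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```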
Setup Instructions
- Download the n8n workflow JSON file.
- Import the workflow into your n8n instance.
- Configure DeepSeek API Access:
  - For the 'DeepSeek' LangChain node (using OpenAI compatibility): Create or select an n8n credential for the OpenAI API. In its configuration, set the 'Base URL' to https://api.deepseek.com/v1 and use your DeepSeek API key.
  - For the 'DeepSeek JSON Body' / 'DeepSeek Raw Body' HTTP Request nodes: Create or select an n8n 'HTTP Header Auth' credential. Add a header with 'Name' as Authorization and 'Value' as Bearer YOUR_DEEPSEEK_API_KEY (replace YOUR_DEEPSEEK_API_KEY with your actual key).
- (Optional) Configure Ollama:
  - Ensure Ollama is running on your system or a reachable server.
  - Pull the desired DeepSeek model (e.g., ollama pull deepseek-r1:14b).
  - In the 'Ollama DeepSeek' node, select or create an n8n credential for your Ollama instance, specifying its base URL (e.g., http://localhost:11434). A local-call sketch follows these setup steps.
- Review the different connection examples provided (LangChain Agent with memory, Basic LLM Chain with Ollama, direct HTTP Requests). The workflow is structured with sticky notes explaining each section.
- The 'When chat message received' trigger is pinned with example data. You can connect it to any of the demonstrated DeepSeek interaction paths to test.
- Customize the prompts, models (deepseek-chat for chat, deepseek-reasoner for reasoning), and parameters within the nodes as needed for your specific use case.
- Activate the workflow to use your DeepSeek AI Agent.
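For the optional local path, the sketch below shows the kind of request an Ollama-backed node sends to a local Ollama server via its standard /api/chat HTTP endpoint. The model tag matches the example pull command above, and http://localhost:11434 assumes Ollama's default port; adjust both if your setup differs. This is an illustration, not the workflow's actual node configuration.

```typescript
// Minimal sketch of chatting with a locally pulled DeepSeek model through
// Ollama's HTTP API (the layer the LMChatOllama node wraps). Requires Node 18+.
async function askLocalDeepSeek(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:14b",                      // must match the model you pulled
      messages: [{ role: "user", content: prompt }],
      stream: false,                                 // return one JSON response instead of a stream
    }),
  });
  if (!response.ok) throw new Error(`Ollama error: ${response.status}`);
  const data = await response.json();
  return data.message.content;                       // Ollama returns { message: { role, content }, ... }
}

askLocalDeepSeek("Explain what this n8n workflow does.").then(console.log);
```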
Want your own unique AI agent?
Talk to us - we know how to build custom AI agents for your specific needs.
Schedule a Consultation