AI-Driven Content Fact-Checker & Summarizer with Ollama
Integrates with: Ollama
Overview
Unlock Accurate Content with this AI-Driven Fact-Checking Agent
This n8n AI Agent automates the often tedious process of content verification. It takes an input text and a source 'facts' document, intelligently breaks down the input text into individual claims (sentences), and then uses local Large Language Models (LLMs) via Ollama to check each claim against the provided facts. Finally, it compiles a structured summary detailing any statements found to be incorrect, helping you maintain high-quality, accurate content with less manual effort.
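Conceptually the workflow is a three-step loop: segment the text into claims, verify each claim against the facts, then summarize what failed. The TypeScript sketch below approximates that loop directly against Ollama's HTTP API (Node 18+ for the built-in fetch). It is not the template itself: the prompt wording, including the Document/Claim format assumed for bespoke-minicheck, and the naive sentence split are illustrative only.

```typescript
const OLLAMA = "http://localhost:11434"; // default Ollama address; adjust if yours differs

// Send a single non-streaming prompt to a local Ollama model and return its text reply.
async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response.trim();
}

async function factCheck(facts: string, text: string): Promise<string> {
  // 1. Split the input text into individual claims (naive split for brevity;
  //    the workflow's code node handles dates and list markers more carefully).
  const claims = text
    .split(/(?<=[.!?])\s+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);

  // 2. Check each claim against the 'facts' document with the claim-checking model.
  const incorrect: string[] = [];
  for (const claim of claims) {
    const verdict = await generate(
      "bespoke-minicheck:latest",
      `Document: ${facts}\nClaim: ${claim}`
    );
    if (!/^yes/i.test(verdict)) incorrect.push(claim); // keep only unsupported claims
  }

  // 3. Summarize only the incorrect statements with a second, smaller model.
  if (incorrect.length === 0) {
    return "All claims appear to be supported by the source document.";
  }
  return generate(
    "qwen2.5:1.5b",
    "Write a short markdown report of these unsupported statements:\n- " +
      incorrect.join("\n- ")
  );
}
```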
Key Features & Benefits
- AI-Powered Fact-Checking: Leverages local Ollama LLMs (like the specialized bespoke-minicheck) for nuanced claim verification.
- Intelligent Text Segmentation: A custom code node accurately splits text into sentences, respecting date formats and list structures (a simplified sketch of this idea follows this list).
- Source-Based Verification: Checks claims directly against your provided 'facts' document for contextual accuracy.
- Automated Inaccuracy Reporting: Generates a clear, markdown-formatted summary of incorrect statements, including an overall assessment.
- Dual LLM Process: Uses one LLM for claim checking and another (e.g., qwen2.5:1.5b) for summarizing findings, optimizing for each task.
- Flexible Input: Can be triggered manually with sample data or integrated into larger workflows to receive 'facts' and 'text' dynamically.
- Local & Private: By using Ollama, your data can remain on your own infrastructure, enhancing privacy and control.
- Customizable Models: Easily swap out Ollama models to experiment with different AI capabilities or languages.
- Error Filtering: Focuses only on claims identified as incorrect, streamlining the review process.
- Ideal for Content Teams & Researchers: Drastically reduces time spent on manual fact-checking and improves content integrity.
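As noted in the segmentation bullet above, a claim splitter has to treat dates and numbered lists carefully so they are not cut in half. The snippet below is a simplified sketch of that idea, not the workflow's actual code node: it splits on sentence-ending punctuation, except when the punctuation follows a digit, which keeps constructs like "12. March" or a "1." list marker attached to their sentence (at the cost of not splitting right after a number).

```typescript
// Simplified sentence segmentation heuristic (illustrative only).
function splitIntoClaims(text: string): string[] {
  return text
    .split(/(?<=[^0-9][.!?])\s+(?=[A-Z])/) // split at ./!/? + space + capital, unless a digit precedes the punctuation
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// ["See item 1. Apples are red.", "Bananas are yellow."]
console.log(splitIntoClaims("See item 1. Apples are red. Bananas are yellow."));
```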
Use Cases
- Automating the verification of factual accuracy in articles or blog posts before publication.
- Streamlining the review of user-generated content for potential misinformation.
- Assisting researchers in quickly identifying discrepancies between texts and source materials.
- Enhancing editorial workflows by automatically pinpointing factual errors for correction.
- Validating claims in marketing copy or technical documentation against a knowledge base.
Prerequisites
- An n8n instance (Cloud or self-hosted).
- Ollama installed and running, accessible by your n8n instance.
- Required Ollama models pulled, specifically:
- ollama pull bespoke-minicheck:latest (for claim verification)
- ollama pull qwen2.5:1.5b (for summarization, or your chosen alternative)
- n8n credentials configured for your Ollama API.
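If you want to confirm these prerequisites from the machine running n8n, the sketch below queries Ollama's model-listing endpoint (/api/tags) and reports whether the two models named in this template are available. It assumes Ollama's default address; change OLLAMA_URL to match your deployment.

```typescript
const OLLAMA_URL = "http://localhost:11434"; // adjust to your Ollama host/port
const REQUIRED = ["bespoke-minicheck:latest", "qwen2.5:1.5b"];

async function checkOllama(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`); // lists locally available models
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  const names = models.map((m) => m.name);
  for (const model of REQUIRED) {
    console.log(
      names.includes(model)
        ? `OK: ${model}`
        : `MISSING: ${model} (run: ollama pull ${model})`
    );
  }
}

checkOllama().catch(console.error);
```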
Setup Instructions
- Download the n8n workflow JSON file.
- Import the workflow into your n8n instance.
- Ensure your Ollama instance is running and accessible from n8n. Confirm the necessary Ollama models (bespoke-minicheck:latest, qwen2.5:1.5b, or your equivalents) are downloaded (ollama pull <model_name>).
- Configure the 'Ollama Chat Model' node:
- Select your Ollama API credential.
- Set the 'Model' parameter to bespoke-minicheck:latest (or your preferred claim-checking model).
- Configure the 'Ollama Model' node:
- Select your Ollama API credential.
- Set the 'Model' parameter to qwen2.5:1.5b (or your preferred summarization model).
- To test, you can use the 'When clicking ‘Test workflow’' trigger. The 'Edit Fields' node contains sample 'facts' and 'text'. You can modify these for your tests.
- For production use, trigger this workflow using the 'When Executed by Another Workflow' node. Ensure the calling workflow passes facts (the source document text) and text (the content to be checked) as input parameters; an example of the expected input shape follows these instructions.
- Review and customize the prompts within the 'Basic LLM Chain4' (for individual claim checking) and 'Basic LLM Chain' (for final summary generation) nodes if needed.
- Activate the workflow.
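For reference, this is the shape of the input a calling workflow should hand to the 'When Executed by Another Workflow' trigger. The field names (facts, text) come from this template; the sample values and the FactCheckInput type are purely illustrative.

```typescript
// Illustrative input contract for the sub-workflow call (values are hypothetical).
interface FactCheckInput {
  facts: string; // the source document the claims are verified against
  text: string;  // the content whose sentences will be fact-checked
}

const exampleInput: FactCheckInput = {
  facts: "The Eiffel Tower is 330 m tall and was completed in 1889.",
  text: "The Eiffel Tower was completed in 1890. It stands 330 m tall.",
};
```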
Want your own unique AI agent?
Talk to us - we know how to build custom AI agents for your specific needs.
Schedule a Consultation