AI Agent for Dynamic Prompt Management with GitHub & Ollama

Version: 1.0.0 | Last Updated: 2025-05-16

Integrates with:

GitHub, Ollama, Langchain

Overview

Unlock Dynamic AI Prompting with this AI Agent

This AI Agent revolutionizes how you manage and utilize AI prompts. It empowers you to store your prompt templates externally in a GitHub repository, enabling version control, collaboration, and easy updates. The agent then dynamically fetches a specified prompt, injects custom variables (like client names, product details, or campaign specifics) into it, and processes the personalized prompt using a powerful, locally-run Ollama LLM via a Langchain agent. This means you can maintain a clean library of base prompts and generate countless variations on the fly, perfectly tailored for each specific task.

This workflow is ideal for solopreneurs and teams looking to scale their AI-driven content generation, data processing, or automated communication tasks without hardcoding prompts or juggling numerous slightly different workflow versions. It brings structure, AI-driven automation, and flexibility to your AI operations.

Key Features & Benefits

  • Centralized Prompt Management: Store and version control your AI prompts in GitHub, making them easy to manage, update, and share.
  • Dynamic Variable Injection: Personalize prompts on the fly by injecting specific data like customer details, product features, or keywords from n8n variables.
  • AI-Powered Processing: Leverages a Langchain AI Agent connected to an Ollama chat model for flexible and customizable text generation or analysis based on your dynamic prompts.
  • Local LLM Control: Utilizes Ollama, giving you control over the model choice, enhancing privacy, and potentially reducing costs associated with cloud-based LLMs.
  • Robust Error Handling: Includes a check for missing variables in your prompts, preventing errors and ensuring your AI tasks run smoothly.
  • Modular & Adaptable: Easily modify the GitHub source, variables, or the Ollama model to suit diverse AI automation needs.
  • Streamlined AI Workflows: Avoids prompt duplication and simplifies the creation of varied AI outputs from a single, well-managed template.
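The variable injection and missing-variable check described above can be sketched in a few lines. This is an illustrative stand-alone version, not the workflow's actual node code; the `{{ variableName }}` placeholder syntax matches the one the workflow uses, and the `ValueError` branch plays the role of the 'Stop and Error' node:

```python
import re

PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def render_prompt(template: str, variables: dict) -> str:
    """Inject variables into {{ name }} placeholders; fail fast on missing ones."""
    needed = set(PLACEHOLDER.findall(template))
    missing = needed - variables.keys()
    if missing:
        # Equivalent of the workflow's 'Stop and Error' branch
        raise ValueError(f"Missing variables: {sorted(missing)}")
    return PLACEHOLDER.sub(lambda m: str(variables[m.group(1)]), template)

template = "Write a product blurb for {{ product }} by {{ company }}."
print(render_prompt(template, {"product": "XYZ", "company": "My Business"}))
# -> Write a product blurb for XYZ by My Business.
```

Because the check runs before any call to the LLM, a template that references an undefined variable fails immediately instead of producing output with unfilled placeholders.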

Use Cases

  • B2C E-commerce: Automate the generation of personalized product descriptions or marketing emails by fetching base prompt templates from GitHub and injecting product specifics (e.g., `{{ productName }}`, `{{ featuresList }}`) and customer segment data.
  • B2B SaaS: Streamline creation of tailored sales outreach sequences or support documentation. Load prompt structures from GitHub and dynamically insert prospect company information (e.g., `{{ companyName }}`, `{{ industry }}`) or user query details.
  • Content Marketing: Develop consistent AI-generated blog posts or social media updates by managing master prompts in GitHub and populating them with current topics or keywords.
  • Development Teams: Enable rapid prototyping and iteration of AI-driven features by keeping prompt logic version-controlled and separate from the core n8n workflow, processed by an AI Agent.

Prerequisites

  • An n8n instance (Cloud or self-hosted).
  • GitHub account with a repository containing your prompt files (e.g., Markdown .md files).
  • GitHub credentials configured in n8n.
  • An Ollama instance set up and accessible by your n8n instance, with desired models downloaded (e.g., Llama3, Mistral).
  • Ollama API credentials/details configured in the n8n Ollama node.
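To verify that your Ollama instance is reachable before wiring it into n8n, you can exercise its chat endpoint directly. The sketch below assumes Ollama's default port (11434) and its documented `/api/chat` endpoint; the model name is an example and must match a model you have pulled:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama port; adjust for your host

def build_chat_payload(model: str, prompt: str) -> dict:
    """Shape of a non-streaming request to Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str) -> str:
    """Send one prompt and return the assistant's reply text."""
    payload = build_chat_payload(model, prompt)
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

If `chat("llama3", "Say hello")` returns text, the same host/port details should work in the n8n Ollama node's credentials.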

Setup Instructions

  1. Download the n8n workflow JSON file.
  2. Import the workflow into your n8n instance.
  3. Configure the 'setVars' node:
    • Update Account with your GitHub username or organization name.
    • Update repo with your GitHub repository name where prompts are stored.
    • Update path to the directory within the repository containing your prompt file (e.g., Prompts/SEO/). Ensure it ends with a / if it's a directory.
    • Update prompt with the filename of the prompt you want to use (e.g., blog_post_outline.md).
    • Add or modify any other key-value pairs in this node. These will be the variables available for injection into your prompt (e.g., company: "My Business", product: "New Service").
  4. If your GitHub repository is private, ensure the 'GitHub' node has correctly configured credentials with access to the repository.
  5. In your GitHub prompt files, use `{{ variableName }}` placeholders for dynamic content. The variableName should match a key you defined in the 'setVars' node (e.g., if 'setVars' has `product: "XYZ"`, your prompt can use `{{ product }}`).
  6. Configure the 'Ollama Chat Model' node:
    • Select your configured Ollama account/credentials.
    • Specify the Ollama model you wish to use (e.g., llama3).
    • Adjust any other model parameters as needed.
  7. The 'AI Agent' node is pre-configured to use the text from the 'Set Completed Prompt' node. Review its settings if you have advanced Langchain requirements.
  8. Test the workflow with the manual trigger. If the 'Stop and Error' node activates, check its output for missing variables in your prompt compared to what's defined in 'setVars'.
  9. Activate the workflow for operational use.
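To see how the 'setVars' keys (account, repo, path, prompt) combine into a fetchable file location, here is a hypothetical helper that builds a raw.githubusercontent.com URL. Note the n8n GitHub node fetches files through the GitHub API rather than raw URLs, and the `branch` default of `main` is an assumption; this is purely illustrative:

```python
def raw_prompt_url(account: str, repo: str, path: str, prompt: str,
                   branch: str = "main") -> str:
    """Build the raw-file URL implied by the setVars keys."""
    if path and not path.endswith("/"):
        path += "/"  # step 3 asks for a trailing slash on directory paths
    return (f"https://raw.githubusercontent.com/"
            f"{account}/{repo}/{branch}/{path}{prompt}")

print(raw_prompt_url("my-org", "prompt-library", "Prompts/SEO", "blog_post_outline.md"))
# -> https://raw.githubusercontent.com/my-org/prompt-library/main/Prompts/SEO/blog_post_outline.md
```

Pasting the resulting URL into a browser (for a public repository) is a quick way to confirm the account/repo/path/prompt values in 'setVars' actually point at your prompt file.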

Tags:

AI Agent, Automation, GitHub, Ollama, Langchain, Prompt Engineering, Dynamic Content, LLM

Want your own unique AI agent?

Talk to us - we know how to build custom AI agents for your specific needs.

Schedule a Consultation