RENDRD
/DOCS
Installation screenshot

Installation

Content Engine runs entirely on your own infrastructure. Once you purchase a license, you will receive a private download link and a license key.

Requirements

  • Docker and Docker Compose installed on your server
  • VPS — minimum 2 GB RAM, 20 GB disk (4 GB RAM recommended for video rendering)
  • A public IP or domain pointing to your server

How It Works

After purchase, you'll receive a setup package containing everything you need to get running. The installation process takes under 5 minutes — run a single setup script, enter your credentials, and the system starts automatically.

No build step required. Pre-built Docker images are pulled and all five services start with one command.
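As a sketch, a first run typically looks like the following. The archive and script names here are illustrative assumptions — your license package documents the exact commands:

```shell
# Illustrative first-run sequence — file and script names are assumptions;
# follow the exact steps in your license package
unzip content-engine.zip && cd content-engine
./setup.sh               # prompts for admin credentials and server address
docker compose up -d     # pulls pre-built images, starts all five services
docker compose ps        # confirm every service reports a running status
```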

⚠️Your license key is tied to your installation. Do not share it or use it on multiple servers.

Need Help Getting Started?

After purchase, you have two options:

  • We install it for you — Share your VPS credentials and we'll have everything running within 24 hours. Free of charge.
  • Self-install — Follow the step-by-step setup guide included in your license package. Takes under 5 minutes. Run one script, answer a few prompts, done.

Either way, you'll be generating content the same day.


First Login screenshot

First Login

After installation, log in and get familiar with the system before generating your first piece of content.

How to Log In

  1. Open http://your-server-ip in your browser
  2. Enter the username and password you configured during setup (in setup.sh or the .env file)
  3. You will land on the Dashboard
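For reference, the credentials live in a .env file alongside your installation. A hypothetical excerpt — the variable names are illustrative, and your setup package defines the real ones:

```shell
# Hypothetical .env excerpt — variable names are assumptions,
# check the file generated by your setup script
ADMIN_USERNAME=admin
ADMIN_PASSWORD=use-a-strong-password
APP_URL=http://your-server-ip
```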

What You Will See

  • The Dashboard shows a production overview — recent content, queue status, and cost summary
  • The left navigation gives access to all pages: Generate, Batch Studio, Queue Monitor, and more
  • Settings (bottom-left) is where you enter your API keys — do this first before generating content
💡Go to Settings and enter your API keys immediately after first login. Without them, the pipeline cannot run.

API Keys — Settings screenshot

API Key Setup

Content Engine orchestrates multiple AI services. You need to provide your own API keys for each service you intend to use.

Where to Enter Keys

  1. Navigate to Settings in the left sidebar
  2. Open the API Keys section
  3. Paste your keys for each service
  4. Click Test Connection to verify each key before saving
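If Test Connection fails and you want to rule out the app itself, you can verify a key directly against the provider. For example, OpenAI's models endpoint returns HTTP 200 for a valid key (requires network access and a real key in `$OPENAI_API_KEY`):

```shell
# Sanity-check an OpenAI key outside the app — prints 200 if the key is valid,
# 401 if it is not
curl -s -o /dev/null -w "%{http_code}\n" \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```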

Supported Keys

  • OpenAI — hook, script, and image prompt generation (GPT-4o-mini). Get your key at platform.openai.com
  • Anthropic — optional alternative LLM for script generation. Get your key at console.anthropic.com
  • ElevenLabs — primary voice synthesis. Get your key at elevenlabs.io
  • FAL.ai — image generation via Flux 2 Pro (recommended). Get your key at fal.ai
  • Gemini — alternative image generation via Gemini Flash. Get your key at aistudio.google.com

Minimum Required

  • At minimum, you need OpenAI + ElevenLabs to run the full pipeline
  • For images: either FAL.ai, DALL-E 3 (via OpenAI), or Gemini — at least one required
💡Start with OpenAI + ElevenLabs + FAL.ai for the best results with the lowest cost per video.

Dashboard screenshot

Dashboard

The main overview screen. It gives you a real-time snapshot of your production system without any extra navigation.

What You Can Do

  • See your most recently generated and published content
  • Monitor pending and active render jobs
  • Track your total AI spend for the current period
  • Jump directly to Generate, Queue Monitor, or Analytics via quick-access buttons

How to Use

  1. The Dashboard loads automatically after login
  2. Review the queue section to check if any jobs are stalled or failed
  3. Use the cost summary to catch rising API spend early
  4. Click any content card to open the full content detail view

Onboarding screenshot

Onboarding

A step-by-step setup wizard that greets you after your first login. Designed to walk you through the essential configuration so the engine is ready to produce content as quickly as possible.

Wizard Steps

  1. Channel Info — Enter your channel name, niche, target audience, language, and tone of voice. This information forms the foundation of your Channel DNA and shapes every piece of content the engine generates.
  2. API Key Setup — Enter your API keys for OpenAI, Anthropic, ElevenLabs, and at least one image provider (FAL.ai or Gemini). Each key is tested live before you advance to the next step.
  3. First Category — Create your first content category. Define its Hook DNA, Script DNA, and Visual DNA to give the AI a clear creative direction for that category.
  4. Test Content — The wizard generates a sample piece of content using your configuration. This confirms the full pipeline is functional end-to-end before you commit to a real batch.
  5. Done — You are redirected to the Dashboard. Production can begin immediately.
💡You can skip the wizard and configure everything manually via Channel DNA and Settings. However, the onboarding wizard is the fastest path to a working setup for first-time installs.
⚠️The Generate page will not function without API keys. At minimum, OpenAI and ElevenLabs keys must be entered before any content can be produced.

Generate screenshot

Generate

The primary content creation interface. Enter a topic and configure the pipeline — the engine handles everything from hook to rendered video.

What You Can Do

  • Input a topic and select a content category (pulled from your Channel DNA)
  • Choose your voice engine: ElevenLabs or OpenAI TTS, and select a specific voice ID
  • Select a Remotion template: Base, Slow, Pulse, Noir, Doc
  • Set a Quality Strategy: Conservative, Balanced, Aggressive, or Smart
  • Adjust Motion Intensity: Slow, Balanced, Dynamic, Aggressive
  • Add a Creative Direction note to steer the AI
  • Override visuals: AI Generate or pick from your Assets library
  • Fine-tune Grain, Vignette, and Ken Burns sliders in the Customize panel
  • Preview cost estimate and AI quality prediction in the right panel

How to Use

  1. Type your topic in the Topic field
  2. Select the appropriate category
  3. Adjust settings or leave defaults
  4. Click Generate — the job enters the queue immediately
  5. Open Queue Monitor to watch the pipeline progress in real time
💡Smart quality strategy lets the AI evaluate and choose the best-performing image before advancing to render — recommended for new topics.

Batch Studio screenshot

Batch Studio

Produce multiple pieces of content simultaneously. Load a list of topics, configure shared settings, and send the entire batch into the queue at once.

What You Can Do

  • Enter multiple topics at once (one per line or CSV import)
  • Apply a shared category and pipeline settings to all topics in the batch
  • Review the estimated total cost before launching
  • Monitor batch progress in Queue Monitor
  • Schedule automatic batch runs on a daily or custom interval

How to Use

  1. Paste or type your topics into the topic pool
  2. Select a category and configure quality/voice settings
  3. Review the cost estimate
  4. Click Send to Queue to start all jobs
💡Use Batch Studio with the Weekly Planner to prepare your week's content in a single session.
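It can help to keep your topics in a plain text file, one per line, ready to paste into the topic pool. A minimal sketch (the filename and topics are made up):

```shell
# Build a topic list for Batch Studio — one topic per line, then paste it in
cat > topics.txt <<'EOF'
Why the Roman Empire never really fell
The hidden cost of free apps
How deep-sea creatures survive crushing pressure
EOF
wc -l < topics.txt   # counts the topics ready to paste
```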

Weekly Planner screenshot

Weekly Planner

Plan your content calendar for the entire week. Assign topics to specific days, generate AI topic suggestions, and dispatch everything to production in one click.

What You Can Do

  • View a 7-day calendar grid and assign topics to specific time slots
  • Generate AI-suggested topics for any day based on your Channel DNA
  • Send individual days or the full week to Batch Studio for rendering
  • Track which planned items have been generated, are in queue, or are complete

How to Use

  1. Open Weekly Planner and select the current or upcoming week
  2. Click any day slot to add a topic — manually or via AI suggestion
  3. Once your week is filled, click Send All to Queue
  4. Monitor progress per day in the planner status column

Motion Lab screenshot

Motion Lab

Fine-tune visual motion for your videos. Customize Ken Burns effects, zoom direction, and apply motion presets before rendering.

What You Can Do

  • Control Ken Burns effect intensity and direction (in/out, left/right, diagonal)
  • Apply motion presets: Cinematic Slow, Dynamic, Pulse, and more
  • Preview the motion pattern before committing to render
  • Override per-video motion settings independently from global defaults

How to Use

  1. Open Motion Lab from the sidebar
  2. Select a content item or configure global motion defaults
  3. Adjust sliders and pick a zoom direction
  4. Preview the effect, then save and proceed to render

Drafts screenshot

Drafts

A holding area for generated content that hasn't been rendered yet. Review, edit, or discard before committing to the render pipeline.

What You Can Do

  • View all content items that completed generation but are pending render
  • Edit the script, hook, or image prompt before rendering
  • Send individual drafts or all drafts to the render queue
  • Delete drafts you don't want to render

How to Use

  1. Open Drafts from the sidebar
  2. Review the generated hook and script for each item
  3. Make any manual edits needed
  4. Click Send to Render for items you approve
💡Editing the script in Drafts does not re-run the AI — it directly modifies the text before render. Useful for quick manual corrections.

Archive screenshot

Archive

The full history of all content ever produced by the engine. Search, filter, and retrieve any item regardless of status.

What You Can Do

  • Browse the complete content history with pagination
  • Filter by category, date range, or status (draft, rendered, uploaded, failed)
  • Search by topic keyword
  • Open any item's detail view to inspect hooks, scripts, and render output
  • Perform bulk actions: re-render, delete, or export metadata

How to Use

  1. Open Archive from the sidebar
  2. Use the filter bar to narrow down results
  3. Click any item card to open its full detail view
  4. Use checkboxes for bulk operations

Renders screenshot

Renders

View completed video renders, preview them inline, download the MP4 files, or send them to YouTube Machine for upload.

What You Can Do

  • See all rendered videos with their status: processing, completed, or failed
  • Preview a video inline without downloading
  • Download the MP4 file directly to your machine
  • Send a completed render to YouTube Machine for immediate or scheduled upload
  • Re-trigger a failed render from this view

How to Use

  1. Open Renders from the sidebar
  2. Wait for the status column to show completed
  3. Click the preview icon to watch the video
  4. Click Upload to YouTube to hand it off to YouTube Machine

Queue Monitor screenshot

Queue Monitor

Real-time visibility into every active job in the pipeline. Track each stage of production and diagnose failures as they happen.

What You Can Do

  • Watch all active jobs and their current pipeline stage in real time
  • Track each of the five stages: Hook → Script → Image → Voice → Render
  • View the full error log for failed jobs
  • Cancel a running job before it advances to the next stage
  • Retry a failed job with one click

How to Use

  1. Open Queue Monitor after triggering a Generate or Batch job
  2. Watch job cards update in real time as each stage completes
  3. If a job fails, expand the log to read the error message
  4. Click Cancel or Retry as needed
💡Keep Queue Monitor open while running large batches. It's the fastest way to catch a misconfigured API key or quota error before it wastes your entire batch.

Analytics screenshot

Analytics

Production performance metrics and trend data. Understand how much content you're generating, how it's distributed, and where things break.

What You Can Do

  • View total content generated over daily, weekly, and monthly windows
  • See category distribution — which categories produce the most content
  • Track success and failure rates per pipeline stage
  • Monitor average render times and stage-by-stage duration breakdowns

How to Use

  1. Open Analytics from the sidebar
  2. Use the date range selector to adjust the reporting window
  3. Review stage failure rates — a high Image failure rate usually signals an API key or quota issue

Cost Control screenshot

Cost Control

Track and manage your API spend across all services. Understand what each piece of content costs and set budget alerts to avoid surprises.

What You Can Do

  • See a cost breakdown by service: OpenAI, Anthropic, ElevenLabs, FAL.ai, Gemini
  • View daily and monthly spend totals
  • Calculate the average cost per piece of content
  • Set budget thresholds to receive alerts when spend approaches your limit
  • Compare cost per category to identify expensive content types

How to Use

  1. Open Cost Control from the sidebar
  2. Review the service breakdown chart to see where most of your spend goes
  3. Set a daily or monthly budget alert in the Budget section
💡ElevenLabs typically accounts for 40–60% of total cost per video. Switching to OpenAI TTS for lower-priority content can reduce per-video cost significantly.

A/B Testing screenshot

A/B Testing

Generate multiple hook or script variants for the same topic and compare their performance to find what works best for your audience.

What You Can Do

  • Create 2–4 variants of a hook or script for the same topic
  • Render each variant as a separate video
  • Compare view and engagement metrics after upload (requires YouTube Machine connection)
  • Mark the winning variant to inform future Channel DNA prompts

How to Use

  1. Open A/B Testing from the sidebar
  2. Enter a topic and select the number of variants to generate
  3. Review and approve each variant before rendering
  4. After upload and data collection, mark the winner

Assets screenshot

Assets

Your channel's visual library. Upload your own images or pull from Unsplash to use instead of AI-generated visuals in your videos.

What You Can Do

  • Upload your own photos and graphics (JPEG, PNG, WebP)
  • Search and import from Unsplash directly within the panel
  • Organize assets by tag or category
  • Use the From Assets toggle on the Generate page to pick a specific image instead of generating one

How to Use

  1. Open Assets from the sidebar
  2. Click Upload and select your image files, or use the Unsplash search
  3. Tag your assets for easier retrieval
  4. On the Generate page, switch Visual Override to From Assets and select your image
💡When using your own uploaded images, enable Disable Filters in the Generate panel to preserve the original colors without cinematic grading applied on top.

Prompts screenshot

Prompts

View and edit the prompt templates that power every stage of the AI pipeline. Full control over what the engine sends to each model.

What You Can Do

  • Browse all prompt templates: Hook, Script, and Image Prompt
  • Customize templates per category independently
  • Track version history and roll back to a previous prompt version
  • Test a prompt against the live AI without triggering a full pipeline run

How to Use

  1. Open Prompts from the sidebar
  2. Select the stage (Hook / Script / Image) and the category you want to edit
  3. Modify the template — use the provided variables ({{topic}}, {{channel_dna}}, etc.)
  4. Save and optionally test before deploying to production
⚠️Editing production prompts takes effect immediately. Test thoroughly before saving — a broken prompt will fail the entire pipeline for that stage.
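The substitution model is plain string replacement. A shell sketch of what happens to a template at run time — the engine does this internally, and the values below are invented for the example:

```shell
# Illustrate {{variable}} expansion — values are made up for demonstration
TEMPLATE='Write a 40-second script about {{topic}} in the voice of {{channel_dna}}.'
echo "$TEMPLATE" \
  | sed -e 's/{{topic}}/deep-sea creatures/' \
        -e 's/{{channel_dna}}/a dark science channel/'
```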

Channel DNA screenshot

Channel DNA

The foundational configuration for your channel's identity. Everything the AI generates — hooks, scripts, visuals — is shaped by the DNA you define here.

What You Can Do

  • Define your Channel Identity: channel name, niche, target audience, language, and tone of voice
  • Create and manage content Categories, each with its own Hook DNA, Script DNA, and Visual DNA
  • Assign a visual style per category: Dark & Cinematic, Documentary, Standard, Slow & Emotional, Energetic
  • Add example hooks and reference videos to guide the AI's output style

How to Use

  1. Open Channel DNA from the sidebar
  2. Fill in the Channel Identity section completely
  3. Create at least one Category and define its Hook DNA, Script DNA, and Visual DNA
  4. Save — the next Generate run will immediately reflect your DNA
💡The more specific your Channel DNA, the more consistent and on-brand the AI output will be. Generic DNA produces generic content. Treat it like a creative brief.

YouTube Machine screenshot

YouTube Machine

Connect your YouTube channel and automate the upload process — from AI-generated metadata to scheduled publishing.

What You Can Do

  • Authenticate your YouTube channel via Google OAuth
  • Configure automatic upload settings: default visibility, category, and audience
  • Generate AI-written titles, descriptions, and tag sets for each video
  • Set a publishing schedule or upload immediately
  • View upload history and YouTube performance data synced every 6 hours

How to Use

  1. Open YouTube Machine from the sidebar
  2. Click Connect YouTube Channel and complete Google OAuth
  3. Configure your default upload settings
  4. From the Renders page, click Upload to YouTube on any completed video
⚠️If your Google Cloud project's OAuth consent screen is set to "Testing" mode (not Published), access tokens expire every 7 days and must be re-authorized. Publish your OAuth consent screen to avoid this.

Settings screenshot

Settings

Global system configuration. API keys, storage backend, watermark, YouTube credentials, and render quality are all managed from here.

API Keys

  • Enter and update keys for: OpenAI, Anthropic, ElevenLabs, FAL.ai, Gemini
  • Use the Test Connection button next to each key to validate before saving
  • Keys are stored encrypted — they are never exposed in the UI after saving

Image Engine

  • Select the model used for image generation
  • FAL.ai (Flux 2 Pro) — paid, high quality; Gemini Flash — free tier, daily limit
  • Default: Auto — AI selects the best option based on cost and quality

Voice Engine

  • Select the voice synthesis service: ElevenLabs or OpenAI TTS
  • Set the default voice profile used when no override is specified
  • ElevenLabs produces more natural results; OpenAI TTS is lower cost per character

Video Engine

  • Select the fal.ai model for AI video clip generation: Kling, Luma, Minimax, Wan
  • Configure default clip count, clip duration, and loop mode

Publishing

  • Configure automatic YouTube upload settings
  • Set default visibility: public, unlisted, or private
  • Define default title and description templates for auto-generated metadata

Video Quality

  • Output resolution: 1080p or 4K
  • Frame rate: 24fps, 30fps, or 60fps

Watermark

  • Upload a logo PNG to embed as a watermark on all rendered videos
  • Adjust opacity and toggle on/off globally

Channel Language

  • Select the language for content generation
  • TTS voice and script prompts will follow this language setting

Storage

  • Local disk (default): generated images and videos are stored on your VPS disk
  • Cloudflare R2: unlimited cloud storage — enter Bucket name, Account ID, and API token
  • Disk usage indicator: yellow at 80%, red at 90% capacity
  • Option to migrate existing local files to R2 when switching storage backends

Storage & R2 screenshot

Storage & R2

By default, all generated media is stored on your VPS local disk. For production channels with high output volume, Cloudflare R2 provides a cost-effective unlimited storage alternative.

Local Disk

  • Default mode — no configuration needed
  • All images and videos are written to the /data volume inside the Docker containers
  • Disk usage is monitored: yellow alert at 80%, red alert at 90%
  • Recommended for: low-volume channels, development, testing

Switching to Cloudflare R2

  1. Create a Cloudflare account at cloudflare.com
  2. Go to R2 Object Storage and create a new bucket
  3. Generate an API token with R2 Read & Write permissions
  4. In Settings → Storage, enter your: Bucket name, Account ID, and API token
  5. Click Test Connection, then Enable R2
  6. Optionally run the migration tool to move existing local files to R2
💡R2 charges no egress (bandwidth) fees. It's ideal for channels producing 10+ videos per week.
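Because R2 speaks the S3 API, you can also inspect the bucket from outside the app with standard S3 tooling. A hedged example — the bucket name and <ACCOUNT_ID> are placeholders, and the credentials come from the API token created in step 3:

```shell
# List objects in an R2 bucket with the AWS CLI — R2 is S3-compatible.
# your-bucket-name and <ACCOUNT_ID> are placeholders for your own values.
aws s3 ls s3://your-bucket-name \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"
```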

Video Engine screenshot

Video Engine

The AI video generation layer powered by fal.ai. Enables short motion clips to be used in place of static images inside your Shorts.

What You Can Do

  • Generate 3–6 second motion clips using models: Kling, Luma, Minimax, Wan
  • Control clip count per video and individual clip duration
  • Enable Loop mode for seamlessly looping video segments
  • Configure Motion DNA: camera movement style, subject motion intensity
  • Activate video mode per content item via video_mode=true in the ContentDetail view

How to Use

  1. Ensure a FAL.ai API key is set in Settings
  2. Open a content item in the ContentDetail view
  3. Toggle Video Mode to enable AI video generation for that item
  4. Configure motion settings and select a model
  5. Send to render — video clips will be composited by Remotion
⚠️AI video generation costs significantly more per clip than static image generation. Monitor Cost Control closely when using Video Engine at scale.

Render Pipeline screenshot

Render Pipeline

The five-stage automated pipeline that takes a topic from raw input to a finished MP4. The stages run sequentially as background jobs via BullMQ.

Pipeline Stages

  1. Hook — The AI reads your Channel DNA and generates a high-scoring opening line for the Short. Hook candidates are scored using a hybrid of deterministic rules (length, readability, monotony penalty) and LLM evaluation. Only the top-scoring hook advances.
  2. Script — The winning hook is extended into a full 30–60 second script, structured for vertical video pacing and your category's tone of voice.
  3. Image — The Visual DNA is used to construct an image generation prompt. The prompt is sent to FAL.ai Flux 2 Pro, DALL-E 3, or Gemini Flash (depending on your configuration). Smart Quality mode generates multiple candidates and selects the best.
  4. Voice — The script is sent to ElevenLabs or OpenAI TTS for synthesis. The audio file is normalized and trimmed to match video duration.
  5. Render — Remotion CLI composites the image (or video clips), voiceover audio, Ken Burns motion, captions, grain/vignette overlays, and watermark into a final 9:16 MP4 file ready for upload.
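Stage 5 wraps the Remotion CLI. Conceptually it runs something like the following — the entry point, composition ID, and prop names are assumptions for illustration, not the engine's actual internals:

```shell
# Hedged sketch of the render stage — entry file, composition ID, and props
# are illustrative; the engine's real invocation may differ
npx remotion render src/index.ts ShortBase out/final.mp4 \
  --props='{"image":"frame.png","audio":"voice.mp3","template":"base"}'
```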

Monitoring

  • Every stage is visible in real time in Queue Monitor
  • Failed stages surface the exact error — API quota, timeout, or generation failure
  • Jobs can be retried from any failed stage without restarting the full pipeline
💡The entire pipeline from topic input to finished MP4 typically completes in 90–180 seconds depending on image model and voice length.