Installation
Content Engine runs entirely on your own infrastructure. Once you purchase a license, you will receive a private download link and a license key.
Requirements
- Docker and Docker Compose installed on your server
- VPS — minimum 2 GB RAM, 20 GB disk (4 GB RAM recommended for video rendering)
- A public IP or domain pointing to your server
How It Works
After purchase, you'll receive a setup package containing everything you need to get running. The installation process takes under 5 minutes — run a single setup script, enter your credentials, and the system starts automatically.
No build step required. Pre-built Docker images are pulled and all five services start with one command.
Need Help Getting Started?
After purchase, you have two options:
- We install it for you — Share your VPS credentials and we'll have everything running within 24 hours. Free of charge.
- Self-install — Follow the step-by-step setup guide included in your license package. Takes under 5 minutes. Run one script, answer a few prompts, done.
Either way, you'll be generating content the same day.
First Login
After installation, log in and get familiar with the system before generating your first piece of content.
How to Log In
- Open `http://your-server-ip` in your browser
- Enter the username and password you configured in `setup.sh` or `.env`
- You will land on the Dashboard
What You Will See
- The Dashboard shows a production overview — recent content, queue status, and cost summary
- The left navigation gives access to all pages: Generate, Batch Studio, Queue Monitor, and more
- Settings (bottom-left) is where you enter your API keys — do this first before generating content
API Key Setup
Content Engine orchestrates multiple AI services. You need to provide your own API keys for each service you intend to use.
Where to Enter Keys
- Navigate to Settings in the left sidebar
- Open the API Keys section
- Paste your keys for each service
- Click Test Connection to verify each key before saving
Required Keys
- OpenAI — hook, script, and image prompt generation (GPT-4o-mini). Get your key at `platform.openai.com`
- Anthropic — optional alternative LLM for script generation. Get your key at `console.anthropic.com`
- ElevenLabs — primary voice synthesis. Get your key at `elevenlabs.io`
- FAL.ai — image generation via Flux 2 Pro (recommended). Get your key at `fal.ai`
- Gemini — alternative image generation via Gemini Flash. Get your key at `aistudio.google.com`
Minimum Required
- At minimum, you need OpenAI + ElevenLabs to run the full pipeline
- For images: either FAL.ai, DALL-E 3 (via OpenAI), or Gemini — at least one required
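The minimum-key rule above can be expressed as a small check. This is an illustrative sketch, not Content Engine's actual validation code; the provider names used here are placeholders chosen for this example.

```python
# Illustrative sketch of the documented minimum-key rule. The provider
# identifiers below are assumptions for this example, not the engine's own.
REQUIRED = {"openai", "elevenlabs"}            # full pipeline needs both
IMAGE_PROVIDERS = {"fal", "openai", "gemini"}  # DALL-E 3 rides on the OpenAI key

def meets_minimum(configured: set[str]) -> bool:
    """True if the full pipeline can run with the given provider keys."""
    has_required = REQUIRED <= configured
    has_image = bool(IMAGE_PROVIDERS & configured)
    return has_required and has_image
```

Note that because DALL-E 3 uses your OpenAI key, OpenAI + ElevenLabs alone already satisfies both conditions.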
Dashboard
The main overview screen. Gives you a real-time snapshot of your production system without needing to navigate anywhere.
What You Can Do
- See your most recently generated and published content
- Monitor pending and active render jobs
- Track your total AI spend for the current period
- Jump directly to Generate, Queue Monitor, or Analytics via quick-access buttons
How to Use
- The Dashboard loads automatically after login
- Review the queue section to check if any jobs are stalled or failed
- Use the cost summary to stay on top of your API spend before it grows
- Click any content card to open the full content detail view
Onboarding
A step-by-step setup wizard that greets you after your first login. Designed to walk you through the essential configuration so the engine is ready to produce content as quickly as possible.
Wizard Steps
- Channel Info — Enter your channel name, niche, target audience, language, and tone of voice. This information forms the foundation of your Channel DNA and shapes every piece of content the engine generates.
- API Key Setup — Enter your API keys for OpenAI, Anthropic, ElevenLabs, and at least one image provider (FAL.ai or Gemini). Each key is tested live before you advance to the next step.
- First Category — Create your first content category. Define its Hook DNA, Script DNA, and Visual DNA to give the AI a clear creative direction for that category.
- Test Content — The wizard generates a sample piece of content using your configuration. This confirms the full pipeline is functional end-to-end before you commit to a real batch.
- Done — You are redirected to the Dashboard. Production can begin immediately.
Generate
The primary content creation interface. Enter a topic and configure the pipeline — the engine handles everything from hook to rendered video.
What You Can Do
- Input a topic and select a content category (pulled from your Channel DNA)
- Choose your voice engine: ElevenLabs or OpenAI TTS, and select a specific voice ID
- Select a Remotion template: Base, Slow, Pulse, Noir, Doc
- Set a Quality Strategy: Conservative, Balanced, Aggressive, or Smart
- Adjust Motion Intensity: Slow, Balanced, Dynamic, Aggressive
- Add a Creative Direction note to steer the AI
- Override visuals: AI Generate or pick from your Assets library
- Fine-tune Grain, Vignette, and Ken Burns sliders in the Customize panel
- Preview cost estimate and AI quality prediction in the right panel
How to Use
- Type your topic in the Topic field
- Select the appropriate category
- Adjust settings or leave defaults
- Click Generate — the job enters the queue immediately
- Open Queue Monitor to watch the pipeline progress in real time
Batch Studio
Produce multiple pieces of content simultaneously. Load a list of topics, configure shared settings, and send the entire batch into the queue at once.
What You Can Do
- Enter multiple topics at once (one per line or CSV import)
- Apply a shared category and pipeline settings to all topics in the batch
- Review the estimated total cost before launching
- Monitor batch progress in Queue Monitor
- Schedule automatic batch runs on a daily or custom interval
How to Use
- Paste or type your topics into the topic pool
- Select a category and configure quality/voice settings
- Review the cost estimate
- Click Send to Queue to start all jobs
Weekly Planner
Plan your content calendar for the entire week. Assign topics to specific days, generate AI topic suggestions, and dispatch everything to production in one click.
What You Can Do
- View a 7-day calendar grid and assign topics to specific time slots
- Generate AI-suggested topics for any day based on your Channel DNA
- Send individual days or the full week to Batch Studio for rendering
- Track which planned items have been generated, are in queue, or are complete
How to Use
- Open Weekly Planner and select the current or upcoming week
- Click any day slot to add a topic — manually or via AI suggestion
- Once your week is filled, click Send All to Queue
- Monitor progress per day in the planner status column
Motion Lab
Fine-tune visual motion for your videos. Customize Ken Burns effects, zoom direction, and apply motion presets before rendering.
What You Can Do
- Control Ken Burns effect intensity and direction (in/out, left/right, diagonal)
- Apply motion presets: Cinematic Slow, Dynamic, Pulse, and more
- Preview the motion pattern before committing to render
- Override per-video motion settings independently from global defaults
How to Use
- Open Motion Lab from the sidebar
- Select a content item or configure global motion defaults
- Adjust sliders and pick a zoom direction
- Preview the effect, then save and proceed to render
Drafts
A holding area for generated content that hasn't been rendered yet. Review, edit, or discard before committing to the render pipeline.
What You Can Do
- View all content items that completed generation but are pending render
- Edit the script, hook, or image prompt before rendering
- Send individual drafts or all drafts to the render queue
- Delete drafts you don't want to render
How to Use
- Open Drafts from the sidebar
- Review the generated hook and script for each item
- Make any manual edits needed
- Click Send to Render for items you approve
Archive
The full history of all content ever produced by the engine. Search, filter, and retrieve any item regardless of status.
What You Can Do
- Browse the complete content history with pagination
- Filter by category, date range, or status (draft, rendered, uploaded, failed)
- Search by topic keyword
- Open any item's detail view to inspect hooks, scripts, and render output
- Perform bulk actions: re-render, delete, or export metadata
How to Use
- Open Archive from the sidebar
- Use the filter bar to narrow down results
- Click any item card to open its full detail view
- Use checkboxes for bulk operations
Renders
View completed video renders, preview them inline, download the MP4 files, or send them to YouTube Machine for upload.
What You Can Do
- See all rendered videos with their status: processing, completed, or failed
- Preview a video inline without downloading
- Download the MP4 file directly to your machine
- Send a completed render to YouTube Machine for immediate or scheduled upload
- Re-trigger a failed render from this view
How to Use
- Open Renders from the sidebar
- Wait for the status column to show completed
- Click the preview icon to watch the video
- Click Upload to YouTube to hand it off to YouTube Machine
Queue Monitor
Real-time visibility into every active job in the pipeline. Track each stage of production and diagnose failures as they happen.
What You Can Do
- Watch all active jobs and their current pipeline stage in real time
- Track each of the five stages: Hook → Script → Image → Voice → Render
- View the full error log for failed jobs
- Cancel a running job before it advances to the next stage
- Retry a failed job with one click
How to Use
- Open Queue Monitor after triggering a Generate or Batch job
- Watch job cards update in real time as each stage completes
- If a job fails, expand the log to read the error message
- Click Cancel or Retry as needed
Analytics
Production performance metrics and trend data. Understand how much content you're generating, how it's distributed, and where things break.
What You Can Do
- View total content generated over daily, weekly, and monthly windows
- See category distribution — which categories produce the most content
- Track success and failure rates per pipeline stage
- Monitor average render times and stage-by-stage duration breakdowns
How to Use
- Open Analytics from the sidebar
- Use the date range selector to adjust the reporting window
- Review stage failure rates — a high Image failure rate usually signals an API key or quota issue
Cost Control
Track and manage your API spend across all services. Understand what each piece of content costs and set budget alerts to avoid surprises.
What You Can Do
- See a cost breakdown by service: OpenAI, Anthropic, ElevenLabs, FAL.ai, Gemini
- View daily and monthly spend totals
- Calculate the average cost per piece of content
- Set budget thresholds to receive alerts when spend approaches your limit
- Compare cost per category to identify expensive content types
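The budget-alert behavior described above amounts to comparing spend against a threshold. A minimal sketch, assuming a single warning level at 80% of budget (the actual threshold and alert names in Content Engine may differ):

```python
# Hypothetical sketch of budget-alert classification; the real thresholds
# and status names in Content Engine are assumptions here.
def budget_status(spend: float, budget: float, warn_at: float = 0.8) -> str:
    """Classify current spend against a budget: 'ok', 'warning', or 'over'."""
    if budget <= 0:
        return "ok"  # no budget configured: never alert
    ratio = spend / budget
    if ratio >= 1.0:
        return "over"
    if ratio >= warn_at:
        return "warning"
    return "ok"
```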
How to Use
- Open Cost Control from the sidebar
- Review the service breakdown chart to see where most of your spend goes
- Set a daily or monthly budget alert in the Budget section
A/B Testing
Generate multiple hook or script variants for the same topic and compare their performance to find what works best for your audience.
What You Can Do
- Create 2–4 variants of a hook or script for the same topic
- Render each variant as a separate video
- Compare view and engagement metrics after upload (requires YouTube Machine connection)
- Mark the winning variant to inform future Channel DNA prompts
How to Use
- Open A/B Testing from the sidebar
- Enter a topic and select the number of variants to generate
- Review and approve each variant before rendering
- After upload and data collection, mark the winner
Assets
Your channel's visual library. Upload your own images or pull from Unsplash to use instead of AI-generated visuals in your videos.
What You Can Do
- Upload your own photos and graphics (JPEG, PNG, WebP)
- Search and import from Unsplash directly within the panel
- Organize assets by tag or category
- Use the From Assets toggle on the Generate page to pick a specific image instead of generating one
How to Use
- Open Assets from the sidebar
- Click Upload and select your image files, or use the Unsplash search
- Tag your assets for easier retrieval
- On the Generate page, switch Visual Override to From Assets and select your image
Prompts
View and edit the prompt templates that power every stage of the AI pipeline. Full control over what the engine sends to each model.
What You Can Do
- Browse all prompt templates: Hook, Script, and Image Prompt
- Customize templates per category independently
- Track version history and roll back to a previous prompt version
- Test a prompt against the live AI without triggering a full pipeline run
How to Use
- Open Prompts from the sidebar
- Select the stage (Hook / Script / Image) and the category you want to edit
- Modify the template — use the provided variables (`{{topic}}`, `{{channel_dna}}`, etc.)
- Save and optionally test before deploying to production
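The `{{variable}}` placeholders work by simple substitution. A minimal sketch of that behavior, assuming the template engine replaces known names and leaves unknown ones intact (the real engine may support richer syntax):

```python
import re

# Minimal sketch of {{variable}} substitution as used by prompt templates.
# Assumption: unknown variable names are left in place rather than erased.
def render_template(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} with its value; leave unknown names intact."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        return variables.get(name, match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)
```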
Channel DNA
The foundational configuration for your channel's identity. Everything the AI generates — hooks, scripts, visuals — is shaped by the DNA you define here.
What You Can Do
- Define your Channel Identity: channel name, niche, target audience, language, and tone of voice
- Create and manage content Categories, each with its own Hook DNA, Script DNA, and Visual DNA
- Assign a visual style per category: Dark & Cinematic, Documentary, Standard, Slow & Emotional, Energetic
- Add example hooks and reference videos to guide the AI's output style
How to Use
- Open Channel DNA from the sidebar
- Fill in the Channel Identity section completely
- Create at least one Category and define its Hook DNA, Script DNA, and Visual DNA
- Save — the next Generate run will immediately reflect your DNA
YouTube Machine
Connect your YouTube channel and automate the upload process — from AI-generated metadata to scheduled publishing.
What You Can Do
- Authenticate your YouTube channel via Google OAuth
- Configure automatic upload settings: default visibility, category, and audience
- Generate AI-written titles, descriptions, and tag sets for each video
- Set a publishing schedule or upload immediately
- View upload history and YouTube performance data synced every 6 hours
How to Use
- Open YouTube Machine from the sidebar
- Click Connect YouTube Channel and complete Google OAuth
- Configure your default upload settings
- From the Renders page, click Upload to YouTube on any completed video
Settings
Global system configuration. API keys, storage backend, watermark, YouTube credentials, and render quality are all managed from here.
API Keys
- Enter and update keys for: OpenAI, Anthropic, ElevenLabs, FAL.ai, Gemini
- Use the Test Connection button next to each key to validate before saving
- Keys are stored encrypted — they are never exposed in the UI after saving
Image Engine
- Select the model used for image generation
- FAL.ai (Flux 2 Pro) — paid, high quality; Gemini Flash — free tier, daily limit
- Default: Auto — AI selects the best option based on cost and quality
Voice Engine
- Select the voice synthesis service: ElevenLabs or OpenAI TTS
- Set the default voice profile used when no override is specified
- ElevenLabs produces more natural results; OpenAI TTS is lower cost per character
Video Engine
- Select the fal.ai model for AI video clip generation: Kling, Luma, Minimax, Wan
- Configure default clip count, clip duration, and loop mode
Publishing
- Configure automatic YouTube upload settings
- Set default visibility: public, unlisted, or private
- Define default title and description templates for auto-generated metadata
Video Quality
- Output resolution: 1080p or 4K
- Frame rate: 24fps, 30fps, or 60fps
Watermark
- Upload a logo PNG to embed as a watermark on all rendered videos
- Adjust opacity and toggle on/off globally
Channel Language
- Select the language for content generation
- TTS voice and script prompts will follow this language setting
Storage
- Local disk (default): generated images and videos are stored on your VPS disk
- Cloudflare R2: unlimited cloud storage — enter Bucket name, Account ID, and API token
- Disk usage indicator: yellow at 80%, red at 90% capacity
- Option to migrate existing local files to R2 when switching storage backends
Storage & R2
By default, all generated media is stored on your VPS local disk. For production channels with high output volume, Cloudflare R2 provides a cost-effective unlimited storage alternative.
Local Disk
- Default mode — no configuration needed
- All images and videos are written to the `/data` volume inside the Docker containers
- Disk usage is monitored: yellow alert at 80%, red alert at 90%
- Recommended for: low-volume channels, development, testing
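The 80% / 90% alert thresholds map directly to a disk-usage check. A sketch of that logic, assuming the documented colors as status values (the engine's actual monitoring code is not shown in this guide):

```python
import shutil

# Sketch of the documented 80% / 90% disk alert thresholds; the status
# names mirror the alert colors described above.
def usage_level(percent_used: float) -> str:
    """Map disk usage to the documented alert levels."""
    if percent_used >= 90:
        return "red"
    if percent_used >= 80:
        return "yellow"
    return "ok"

def current_usage(path: str = "/") -> float:
    """Percent of the filesystem at `path` that is currently in use."""
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100
```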
Switching to Cloudflare R2
- Create a Cloudflare account at `cloudflare.com`
- Go to R2 Object Storage and create a new bucket
- Generate an API token with R2 Read & Write permissions
- In Settings → Storage, enter your: Bucket name, Account ID, and API token
- Click Test Connection, then Enable R2
- Optionally run the migration tool to move existing local files to R2
Video Engine
The AI video generation layer powered by fal.ai. Enables short motion clips to be used in place of static images inside your Shorts.
What You Can Do
- Generate 3–6 second motion clips using models: Kling, Luma, Minimax, Wan
- Control clip count per video and individual clip duration
- Enable Loop mode for seamlessly looping video segments
- Configure Motion DNA: camera movement style, subject motion intensity
- Activate video mode per content item via `video_mode=true` in the ContentDetail view
How to Use
- Ensure a FAL.ai API key is set in Settings
- Open a content item in the ContentDetail view
- Toggle Video Mode to enable AI video generation for that item
- Configure motion settings and select a model
- Send to render — video clips will be composited by Remotion
Render Pipeline
The five-stage automated pipeline that takes a topic from raw input to a finished MP4. The stages run sequentially in the background via BullMQ.
Pipeline Stages
- Hook — The AI reads your Channel DNA and generates a high-scoring opening line for the Short. Hook candidates are scored using a hybrid of deterministic rules (length, readability, monotony penalty) and LLM evaluation. Only the top-scoring hook advances.
- Script — The winning hook is extended into a full 30–60 second script, structured for vertical video pacing and your category's tone of voice.
- Image — The Visual DNA is used to construct an image generation prompt. The prompt is sent to FAL.ai Flux 2 Pro, DALL-E 3, or Gemini Flash (depending on your configuration). Smart Quality mode generates multiple candidates and selects the best.
- Voice — The script is sent to ElevenLabs or OpenAI TTS for synthesis. The audio file is normalized and trimmed to match video duration.
- Render — Remotion CLI composites the image (or video clips), voiceover audio, Ken Burns motion, captions, grain/vignette overlays, and watermark into a final 9:16 MP4 file ready for upload.
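The deterministic half of the hybrid hook score could look like the sketch below. The rule names (length, readability, monotony penalty) come from the text above, but the specific ranges, weights, and formulas here are assumptions, and the LLM-evaluation half of the score is omitted entirely.

```python
# Illustrative sketch of the deterministic hook-scoring rules named above.
# The 4-12 word window, the average-word-length cutoff, and all weights are
# assumptions for this example, not Content Engine's actual implementation.
def deterministic_hook_score(hook: str) -> float:
    words = hook.split()
    score = 1.0
    # Length rule: favor short, punchy openers (assumed sweet spot).
    if not 4 <= len(words) <= 12:
        score -= 0.3
    # Readability rule: penalize long average word length.
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    if avg_len > 7:
        score -= 0.2
    # Monotony penalty: repeated words suggest a monotonous hook.
    repeats = len(words) - len({w.lower() for w in words})
    score -= 0.1 * repeats
    return max(score, 0.0)
```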
Monitoring
- Every stage is visible in real time in Queue Monitor
- Failed stages surface the exact error — API quota, timeout, or generation failure
- Jobs can be retried from any failed stage without restarting the full pipeline
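Retrying from a failed stage without restarting the full pipeline implies that completed stages are recorded and skipped on the next attempt. A sketch of that execution model under assumed names (the engine's real BullMQ job structure is not shown in this guide):

```python
# Sketch of resume-from-failed-stage semantics. Stage names come from the
# pipeline above; the state format and handler signature are assumptions.
STAGES = ["hook", "script", "image", "voice", "render"]

def run_pipeline(handlers: dict, state: dict) -> dict:
    """Run stages in order, skipping any already marked done in `state`.

    `handlers` maps stage name -> callable(state); a callable may raise,
    leaving earlier completed stages intact for the next attempt.
    """
    for stage in STAGES:
        if state.get(stage) == "done":
            continue  # completed in a previous attempt: do not rerun
        handlers[stage](state)
        state[stage] = "done"
    return state
```

A retry simply calls `run_pipeline` again with the same state: stages that finished before the failure are skipped, and execution resumes at the failed stage.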