Video Composition with Background Removal

Automated n8n workflow to remove backgrounds and composite videos. Perfect for AI-generated actors, product demos, and talking heads.


What This Workflow Does

Background Removal

Remove backgrounds from any video automatically. Works with AI-generated videos from Sora, Veo, HeyGen, and more.

Video Composition

Composite over custom backgrounds with positioning templates: ai_ugc_ad, centered, picture_in_picture, or fullscreen.

Audio Mixing

Mix audio from foreground and background videos with adjustable volumes. Professional audio blending included.

Google Drive Upload

Automatically save processed videos to Google Drive with shareable links. No manual downloads required.

One-click import into n8n • Setup time: ~7 minutes

How It Works

1. Upload Videos

  • Provide foreground video URL (your AI-generated actor)
  • Provide background video URL (your custom scene)
  • Both must be publicly accessible
  • Supports any resolution up to 4K
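
In practice, the input for one job boils down to two publicly accessible URLs plus an optional template choice. Here is a minimal sketch; the field names are illustrative, not the workflow's exact schema:

```typescript
// Illustrative input shape for one composition job.
// Field names are assumptions; only the two URLs and the template
// choice are described by the workflow itself.
interface CompositionInput {
  foregroundUrl: string;  // AI-generated actor clip (publicly accessible)
  backgroundUrl: string;  // custom background scene (publicly accessible)
  template?: "ai_ugc_ad" | "centered" | "picture_in_picture" | "fullscreen";
}

const sample: CompositionInput = {
  foregroundUrl: "https://example.com/actor.mp4",
  backgroundUrl: "https://example.com/scene.mp4",
  template: "ai_ugc_ad",
};
```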

2. Background Removal

  • API removes the foreground video's background automatically
  • Preserves original quality and motion
  • Processing: ~20-60 seconds per video
  • Handles AI artifacts and edge cleanup
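
For orientation, the HTTP call behind this step looks roughly like the sketch below. The endpoint path and response shape are assumptions for illustration, not documented VideoBGRemover API routes; the API key comes from the same VIDEOBGREMOVER_KEY environment variable the workflow setup uses.

```typescript
// Sketch of a background-removal job submission (endpoint and response
// fields are assumed for illustration).
const API_BASE = "https://api.videobgremover.example"; // placeholder host

async function startRemoval(videoUrl: string): Promise<string> {
  const res = await fetch(`${API_BASE}/jobs`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.VIDEOBGREMOVER_KEY}`,
    },
    body: JSON.stringify({ video_url: videoUrl }),
  });
  if (!res.ok) throw new Error(`Removal request failed: ${res.status}`);
  const job = (await res.json()) as { id: string };
  return job.id; // job ID used later when polling for status
}
```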

3. Composition & Export

  • Composites with selected template (ai_ugc_ad default)
  • Mixes audio (30% background + 100% foreground)
  • Uploads to Google Drive automatically
  • Returns shareable link and metadata
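
A rough sketch of the composition request sent once the background has been removed; the endpoint and field names are illustrative, while the template and the 30% background / 100% foreground audio mix match the defaults described above.

```typescript
// Sketch of the composition request (endpoint and field names assumed).
const API_BASE = "https://api.videobgremover.example"; // placeholder host

async function startComposition(foregroundUrl: string, backgroundUrl: string) {
  const res = await fetch(`${API_BASE}/compositions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.VIDEOBGREMOVER_KEY}`,
    },
    body: JSON.stringify({
      foreground_url: foregroundUrl,  // background-removed actor clip
      background_url: backgroundUrl,  // custom scene
      template: "ai_ugc_ad",          // default positioning template
      background_audio_volume: 0.3,   // 30% background audio
    }),
  });
  if (!res.ok) throw new Error(`Composition request failed: ${res.status}`);
  return res.json(); // job metadata; the finished file is then pushed to Drive
}
```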

Video Composition Template FAQ

What AI video models does this work with?

This workflow is model-agnostic. It works with any AI video generation tool: Sora 2, Veo 3, Runway, Pika, HeyGen, D-ID, Synthesia, and more. As long as you can export the video to a URL, this workflow can process it. Just generate your video, get the URL, and run it through the workflow.

How do I install this workflow?

Installation takes about 7 minutes: 1) copy the workflow JSON URL; 2) in n8n, click Workflows → Import from URL; 3) paste the URL and import; 4) add your VideoBGRemover API key to the n8n environment variables as VIDEOBGREMOVER_KEY; 5) connect your Google Drive account; 6) test with the manual trigger using the sample URLs provided in the workflow.

Can I customize the composition positioning?

Yes! The workflow uses the ai_ugc_ad template by default (bottom-right corner, 50% canvas size), but you can change it to: centered (full frame), picture_in_picture (top-right corner, 25% size), or fullscreen (covers entire canvas). You can also adjust positioning manually with offset_x, offset_y, size_percent, and opacity parameters in the 'Start Composition' node.
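
For example, a manual override in the 'Start Composition' node might look like the sketch below. The parameter names are the ones listed in this answer; the units and value ranges shown are assumptions.

```typescript
// Illustrative positioning override for the 'Start Composition' node.
const customPlacement = {
  template: "picture_in_picture", // or ai_ugc_ad, centered, fullscreen
  offset_x: 40,       // horizontal offset (assumed to be pixels)
  offset_y: 40,       // vertical offset (assumed to be pixels)
  size_percent: 25,   // foreground size relative to the canvas
  opacity: 1.0,       // 0.0 (fully transparent) to 1.0 (fully opaque)
};
```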

How do I batch process multiple videos?

The workflow supports batch processing through n8n triggers. Connect it to: Google Sheets (process rows of video URLs), Google Drive (watch folder for new uploads), Airtable (process records), webhooks (trigger from external apps), or scheduled cron jobs. Each video is processed sequentially with proper error handling.
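
As a sketch, a Code-node-style transform that fans spreadsheet rows out into one workflow item per video pair could look like this; the column and field names are assumptions to adapt to your sheet.

```typescript
// Turn spreadsheet rows into one item per composition job
// (column names are assumed; adjust to match your sheet).
type SheetRow = { foreground_url: string; background_url: string };

function toWorkflowItems(rows: SheetRow[]) {
  return rows.map((row) => ({
    json: {
      foregroundUrl: row.foreground_url,
      backgroundUrl: row.background_url,
      template: "ai_ugc_ad", // default template; override per row if needed
    },
  }));
}
```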

What video formats and lengths are supported?

The VideoBGRemover API supports videos up to 120 minutes long and all common formats (MP4, MOV, WebM). Outputs are available as MP4 with alpha channel, transparent WebM, or ProRes/MOV for professional editing. Supports resolutions from 480p to 4K. Processing time is typically 3-5 minutes per minute of video.

Can I adjust the audio mixing levels?

Yes! The workflow mixes audio by default at 30% background + 100% foreground, but you can adjust these in the 'Start Composition' node. Change background_audio_volume (0.0 to 1.0) and foreground volume separately. You can also disable background audio entirely by setting background_audio_enabled to false.
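
For instance, the relevant settings in the 'Start Composition' node might look like the sketch below; background_audio_enabled and background_audio_volume are the names mentioned in this answer, while the foreground field name is an assumption.

```typescript
// Illustrative audio settings for the 'Start Composition' node.
const audioSettings = {
  background_audio_enabled: true, // set to false to mute the background track
  background_audio_volume: 0.3,   // 0.0–1.0; default mix is 30% background
  foreground_audio_volume: 1.0,   // assumed name; foreground stays at 100%
};
```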

Does this work with n8n Cloud or self-hosted only?

Both! The workflow works on n8n Cloud and self-hosted instances. No special setup required. The only requirements are: 1) VideoBGRemover API key, 2) Google Drive connection, 3) n8n environment variables configured. Everything else is handled by the pre-built workflow.

What if processing fails or times out?

The workflow includes built-in error handling and retry logic. It polls job status every 20 seconds and can detect failed jobs automatically. If a job fails, it captures error details and returns them in a structured response. For webhook triggers, errors are returned to the caller. For manual runs, check the execution log for details.
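
In outline, the polling behavior corresponds to a loop like the sketch below: check the job every 20 seconds and surface a structured error if it fails. The status endpoint and response fields are assumptions; the 20-second interval matches the workflow.

```typescript
// Minimal status-polling sketch (endpoint and fields assumed).
const API_BASE = "https://api.videobgremover.example"; // placeholder host

async function waitForJob(jobId: string): Promise<string> {
  while (true) {
    const res = await fetch(`${API_BASE}/jobs/${jobId}`, {
      headers: { Authorization: `Bearer ${process.env.VIDEOBGREMOVER_KEY}` },
    });
    const job = (await res.json()) as {
      status: string;
      result_url?: string;
      error?: string;
    };

    if (job.status === "completed" && job.result_url) return job.result_url;
    if (job.status === "failed") {
      // Mirror the workflow's structured error response.
      throw new Error(JSON.stringify({ jobId, error: job.error ?? "unknown failure" }));
    }
    await new Promise((resolve) => setTimeout(resolve, 20_000)); // wait 20 s
  }
}
```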

Works with Any AI Video Model

Sora, Veo, Runway, Pika & more

Start Compositing AI Videos Automatically

Import the workflow and process your first video in minutes