How to Use Runway Gen-4 Multi Motion Brush for Precise Character Animation

Runway Gen-4’s Multi Motion Brush is a breakthrough tool that lets you paint independent movement zones directly onto your generated or uploaded frames. Instead of relying on a single global motion prompt, you can isolate specific regions—a character’s arm, a flowing curtain, a patch of background—and assign a unique directional vector to each, then layer global camera motion on top. This guide walks you through the complete workflow, from setup to advanced multi-layer animation.

Prerequisites and Setup

  • Runway Account: You need a Runway Pro or Unlimited plan to access Gen-4 and the Multi Motion Brush feature.
  • API Access (Optional): If you want to automate generations via the Runway API, install the SDK.

Installing the Runway Python SDK

pip install runwayml

Authenticating via API

from runwayml import RunwayML

client = RunwayML(api_key="YOUR_API_KEY")

# Verify connection
print(client.account.retrieve())

CLI Quick Start

# Install Runway CLI
npm install -g @runwayml/cli

# Authenticate
runway auth login --token YOUR_API_KEY

# Check available models
runway models list --filter gen-4

Step-by-Step: Multi Motion Brush Workflow

Step 1: Upload or Generate Your Base Frame

Start by uploading a high-resolution still image (minimum 1280×768) or generate one using Gen-4's text-to-image mode. The base frame defines all the elements you'll animate independently.

# Generate a base frame via API
task = client.image_generation.create(
    model="gen-4",
    prompt="A dancer standing in a sunlit studio with flowing curtains in the background",
    width=1280,
    height=768
)
print(f"Image ID: {task.output.image_id}")

Step 2: Open the Multi Motion Brush Panel

In the Runway web editor, select your base frame and click **Motion Brush** in the right-hand toolbar. You'll see a canvas overlay with brush tools. Gen-4 supports up to **5 independent motion regions**, each color-coded (Region 1 = Blue, Region 2 = Green, Region 3 = Red, Region 4 = Yellow, Region 5 = Purple).

Step 3: Paint Your First Motion Zone (Subject Body)

  • Select Region 1 (Blue) from the brush palette.
  • Adjust the brush size to match your subject’s torso and legs.
  • Paint over the dancer’s body, excluding the arms and head for now.
  • In the region settings panel, set the motion parameters:
      • Direction: Horizontal = 0.0, Vertical = -0.3 (slight upward sway)
      • Intensity: 2.5 (scale 0–5)
      • Proximity Falloff: Soft (blends edges naturally)

Step 4: Isolate Secondary Movement (Arms)

  • Select Region 2 (Green).
  • Paint over both arms with a smaller brush.
  • Set directional vectors:
      • Direction: Horizontal = 0.6, Vertical = 0.4 (sweeping diagonal motion)
      • Intensity: 3.8
      • Proximity Falloff: Sharp (prevents bleed into torso region)

Step 5: Add Background Motion (Curtains)

  • Select Region 3 (Red).
  • Paint over the curtains in the background.
  • Configure: Direction Horizontal = -0.5, Vertical = 0.1, Intensity = 1.5, Falloff = Soft.
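
If you plan to drive the same setup through the API, the three painted regions above correspond to the region entries used by the motion_brush payload in Step 7 of this guide. This is an illustrative sketch; the mask strings are placeholders for your exported base64 brush masks.

```python
# Illustrative only: the three painted regions from Steps 3-5, expressed
# as entries for the motion_brush payload shown in Step 7. The mask
# values are placeholders for base64-encoded brush masks.
regions = [
    {"id": 1, "mask": "mask_body_b64", "direction": [0.0, -0.3], "intensity": 2.5, "falloff": "soft"},
    {"id": 2, "mask": "mask_arms_b64", "direction": [0.6, 0.4], "intensity": 3.8, "falloff": "sharp"},
    {"id": 3, "mask": "mask_curtains_b64", "direction": [-0.5, 0.1], "intensity": 1.5, "falloff": "soft"},
]

# Combined intensity is well under the 15-point budget discussed later.
print(sum(r["intensity"] for r in regions))
```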

Step 6: Set Camera Motion

Camera motion applies globally but composites with your painted regions. In the Camera Control panel:

| Parameter | Value | Effect |
| --- | --- | --- |
| Pan Horizontal | -1.2 | Slow left pan |
| Pan Vertical | 0.0 | No vertical shift |
| Zoom | 0.3 | Subtle push-in |
| Roll | 0.0 | No rotation |
| Motion Intensity | 2.0 | Moderate camera speed |
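
Expressed in the camera object format used by the Step 7 generation call in this guide (shown here purely for illustration), the table above becomes:

```python
# Camera settings from the table above, in the camera object format used
# by the Step 7 generation call in this guide (illustrative sketch).
camera = {
    "pan": [-1.2, 0.0],   # slow left pan, no vertical shift
    "zoom": 0.3,          # subtle push-in
    "roll": 0.0,          # no rotation
    "intensity": 2.0,     # moderate camera speed
}
```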
Step 7: Generate and Review

# Trigger generation via API with motion brush config
task = client.video_generation.create(
    model="gen-4",
    image_id="YOUR_IMAGE_ID",
    duration=5,
    motion_brush={
        "regions": [
            {"id": 1, "mask": "mask_body_b64", "direction": [0.0, -0.3], "intensity": 2.5, "falloff": "soft"},
            {"id": 2, "mask": "mask_arms_b64", "direction": [0.6, 0.4], "intensity": 3.8, "falloff": "sharp"},
            {"id": 3, "mask": "mask_curtains_b64", "direction": [-0.5, 0.1], "intensity": 1.5, "falloff": "soft"}
        ],
        "camera": {"pan": [-1.2, 0.0], "zoom": 0.3, "roll": 0.0, "intensity": 2.0}
    },
    text_prompt="Fluid dance movement with billowing curtains"
)
print(f"Video Task: {task.id} — Status: {task.status}")
# Poll for completion
import time
while task.status not in ("completed", "failed"):
    time.sleep(5)
    task = client.tasks.retrieve(task.id)
    print(f"Status: {task.status}")

print(f"Download URL: {task.output.video_url}")

Pro Tips for Power Users

  • Layer Intensity Balancing: Keep the sum of all region intensities below 15. Exceeding this can cause warping artifacts where regions overlap.
  • Use Ambient Motion Sparingly: The ambient motion slider (found under Advanced Settings) adds micro-movement to unpainted areas. Set it to 0.5–1.0 to keep static areas from looking frozen without introducing unwanted drift.
  • Directional Vector Math: Direction values use normalized coordinates where [1.0, 0.0] = full rightward, [0.0, -1.0] = full upward. Combine values for diagonal motion: [0.7, 0.7] creates a 45-degree downward-right sweep.
  • Mask Precision with Edge Detection: Hold Alt + Click on the canvas to activate auto-edge snapping, which constrains your brush strokes to detected object boundaries.
  • Extend Duration Without Quality Loss: Generate in 5-second segments and use Runway’s built-in Extend feature with the last frame as the new seed. Maintain the same motion brush configuration for continuity.
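
The vector convention and the intensity budget above can be sketched as small helpers. The function names here are my own illustration, not part of the Runway SDK.

```python
import math

def direction_from_angle(degrees, magnitude=1.0):
    """Convert an angle to a [horizontal, vertical] direction vector.

    Follows the convention described above: [1.0, 0.0] is full rightward
    and [0.0, -1.0] is full upward, so 0 degrees points right and
    positive angles rotate toward the bottom of the frame.
    """
    rad = math.radians(degrees)
    return [round(magnitude * math.cos(rad), 3), round(magnitude * math.sin(rad), 3)]

def total_intensity_ok(regions, limit=15.0):
    """Check that summed region intensities stay under the warping threshold."""
    return sum(r["intensity"] for r in regions) < limit

# A 45-degree downward-right sweep, roughly [0.7, 0.7] as noted above
print(direction_from_angle(45))
```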

Troubleshooting Common Issues

| Issue | Cause | Solution |
| --- | --- | --- |
| Regions bleed into each other | Overlapping painted masks with soft falloff | Switch overlapping edges to Sharp falloff; leave a 5–10px gap between regions |
| Subject warps or distorts | Intensity too high on small region | Reduce intensity below 3.0 for regions smaller than 15% of frame area |
| Camera motion overrides brush motion | Camera intensity competing with region vectors | Lower camera intensity to 1.0–1.5 when using 3+ motion brush regions |
| Generation fails with timeout | Complex multi-region + camera + long duration | Reduce to 3 regions or shorten duration to 4 seconds; retry |
| API returns 429 rate limit | Too many concurrent generation requests | Implement exponential backoff: wait 10s, 20s, 40s between retries |
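
The exponential backoff suggested for 429 errors can be sketched generically. The retry wrapper below is illustrative: in practice you would catch the SDK's specific rate-limit exception rather than a bare Exception.

```python
import time

def with_backoff(fn, max_retries=4, base_delay=10):
    """Retry fn with exponential backoff: 10s, 20s, 40s, ... between attempts.

    Illustrative sketch; substitute the SDK's rate-limit exception type
    for the bare Exception in real code.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            delay = base_delay * (2 ** attempt)
            print(f"Rate limited, retrying in {delay}s...")
            time.sleep(delay)

# Usage sketch (assumes a `submit_generation` callable you define):
# task = with_backoff(submit_generation)
```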
Combining Camera and Subject Motion: Best Practices

The key to natural-looking results is treating camera motion as the foundational layer and subject motion as the detail layer. Set your camera motion first at a low intensity (1.0–2.0), then paint subject regions at higher intensities (2.5–4.0). This mimics real cinematography, where the camera provides context and the subject provides action.

For parallax effects, paint foreground and background as separate regions with opposing horizontal directions. A foreground subject moving right at [0.4, 0.0] combined with a background moving left at [-0.2, 0.0] and a camera panning slightly right creates convincing depth.

Frequently Asked Questions

How many motion brush regions can I use simultaneously in Runway Gen-4?

Gen-4 supports up to 5 independent motion brush regions per generation. Each region can have its own directional vector, intensity, and falloff setting. For optimal results, use 2–3 regions for most scenes and reserve 4–5 regions only for complex compositions where distinct elements require truly independent movement paths.

Can I combine Multi Motion Brush with text prompts for additional control?

Yes. The text prompt works as a semantic guide that influences the style and nature of movement, while the motion brush controls the spatial direction and intensity. For example, you can paint upward vectors on a character’s hair while using the text prompt “wind blowing gently from the left” to add contextual realism. The two systems are complementary, not mutually exclusive.

Why does my character’s face distort when I paint motion directly over it?

Facial regions are highly sensitive to motion vectors because Gen-4’s model tries to maintain facial consistency. Avoid painting directly over faces. Instead, paint the head and neck as one region with very low intensity (0.5–1.0) and let the ambient motion handle subtle facial micro-expressions. If you need head turning, use a text prompt like “character slowly turns head to the right” alongside a gentle horizontal vector on the head region.
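
As a sketch, the combined approach from this answer might look like the following. Field names follow the motion_brush schema shown earlier in this guide, and the mask string and intensity choice are illustrative placeholders.

```python
# Illustrative only: gentle head turn via a low-intensity head/neck region
# plus a guiding text prompt, as recommended in the answer above.
head_region = {
    "id": 1,
    "mask": "mask_head_b64",        # placeholder for your exported mask
    "direction": [0.2, 0.0],        # gentle rightward vector
    "intensity": 0.8,               # keep within the 0.5-1.0 range advised above
    "falloff": "soft",
}
text_prompt = "character slowly turns head to the right"
```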
