How to Use Runway Gen-4 Motion Brush for Precise Camera and Subject Movement Control

Mastering Runway Gen-4 Motion Brush for AI Video Generation

Runway Gen-4 introduces an advanced Motion Brush tool that gives creators granular control over how subjects move and how the camera behaves in AI-generated videos. Unlike simple text-to-video prompts, the Motion Brush lets you paint movement directly onto specific regions of your frame, unlocking cinematic precision that was previously impossible with generative AI. This guide walks you through the complete workflow—from setup to export—so you can produce professional-quality AI videos with intentional, controlled motion.

Step 1: Set Up Your Runway Environment

Before using the Motion Brush, ensure you have the right account tier and API access configured.

  • Create or upgrade your account at app.runwayml.com. Motion Brush is available on the Standard plan and above.
  • Install the Runway Python SDK for programmatic workflows:

    pip install runwayml

  • Authenticate your environment:

    import runwayml

    client = runwayml.RunwayML(api_key="YOUR_API_KEY")

  • Verify your credits and quota:

    account = client.accounts.retrieve()
    print(f"Credits remaining: {account.credits}")

Step 2: Upload Your Source Image or Frame

The Motion Brush works on a reference image that serves as your first frame. For best results, use a high-resolution image (at least 1280×768) with clearly defined subjects.

    # Upload a source image for video generation
    image_upload = client.assets.create(
        file="./scene_reference.png",
        name="motion-brush-source"
    )
    print(f"Asset ID: {image_upload.id}")

Step 3: Define Motion Brush Regions

The core of the Motion Brush is region-based motion assignment. You paint masks over parts of your image, then assign directional vectors and intensity values to each region independently.
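If you think about region motion in compass angles rather than raw (x, y) components, a tiny helper can produce a direction vector that stays inside the documented -1.0 to 1.0 range. This is an illustrative sketch, not part of the Runway SDK, and it assumes screen-style coordinates (0° points right, angles increase clockwise, +y points down) — verify the actual axis convention with a short test render before relying on it.

```python
import math

def direction_from_angle(degrees: float) -> list[float]:
    """Convert a screen angle to a Motion Brush direction vector.

    Assumes 0 degrees points right and +y points down (typical screen
    coordinates). This convention is an assumption, not documented
    Runway behavior -- confirm it with a short test generation.
    """
    radians = math.radians(degrees)
    # cos/sin of an angle give a unit vector, so both components
    # are guaranteed to stay within the -1.0..1.0 range
    return [round(math.cos(radians), 3), round(math.sin(radians), 3)]
```

For example, `direction_from_angle(0)` gives `[1.0, 0.0]` (straight right), which you can drop directly into a region's `direction` field.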

Key Motion Brush Parameters

| Parameter | Type | Range | Description |
| --- | --- | --- | --- |
| direction | Vector (x, y) | -1.0 to 1.0 | Movement direction for the painted region |
| speed | Float | 0.0 to 10.0 | Velocity of the motion within the region |
| ambient_strength | Float | 0.0 to 1.0 | Organic micro-motion in unpainted areas |
| proximity_weight | Float | 0.0 to 1.0 | How sharply motion falls off at mask edges |
| camera_motion | Preset String | See list below | Global camera behavior for the entire clip |
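Out-of-range values are easy to catch client-side before they cost a generation attempt. Here is a minimal validation sketch against the ranges in the table above; the helper name and error messages are my own, and it assumes the region dict shape used in the generation examples later in this guide.

```python
def validate_region(region: dict) -> None:
    """Raise ValueError if a Motion Brush region violates the documented ranges.

    Assumes the region shape used in this guide's examples:
    {"mask": ..., "direction": [x, y], "speed": ..., "proximity_weight": ...}
    """
    x, y = region["direction"]
    if not (-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0):
        raise ValueError(f"direction components must be in -1.0..1.0, got {region['direction']}")
    if not 0.0 <= region["speed"] <= 10.0:
        raise ValueError(f"speed must be in 0.0..10.0, got {region['speed']}")
    if not 0.0 <= region["proximity_weight"] <= 1.0:
        raise ValueError(f"proximity_weight must be in 0.0..1.0, got {region['proximity_weight']}")
```

Run every region through this check before building the request payload; a ValueError locally is much cheaper than a failed or artifact-ridden render.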
Camera Motion Presets

  • pan_left, pan_right — Horizontal camera sweep
  • tilt_up, tilt_down — Vertical camera angle shift
  • zoom_in, zoom_out — Focal length simulation
  • orbit_cw, orbit_ccw — Circular movement around subject
  • dolly_in, dolly_out — Physical camera translation forward/backward
  • static — Locked camera, subject-only motion

Step 4: Generate Video with Motion Brush via API

Combine your source image, brush regions, and camera motion into a single generation call.

    task = client.image_to_video.create(
        model="gen4",
        image_asset_id=image_upload.id,
        duration=5,
        motion_brush={
            "regions": [
                {
                    "mask": "subject_upper_body",
                    "direction": [0.3, -0.1],
                    "speed": 2.5,
                    "proximity_weight": 0.7
                },
                {
                    "mask": "background_clouds",
                    "direction": [-0.5, 0.0],
                    "speed": 1.0,
                    "proximity_weight": 0.3
                }
            ],
            "ambient_strength": 0.15,
            "camera_motion": "dolly_in"
        },
        prompt="cinematic slow motion, golden hour lighting"
    )
    print(f"Task ID: {task.id}")

Step 5: Poll for Results and Download
    import time

    while True:
        status = client.tasks.retrieve(task.id)
        if status.status == "SUCCEEDED":
            print(f"Video URL: {status.output[0]}")
            break
        elif status.status == "FAILED":
            print(f"Error: {status.failure_reason}")
            break
        time.sleep(10)
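The loop above will block forever if a task stalls. A variant with an overall timeout, written against a status-fetching callable so it works with any client, might look like this (the helper and its defaults are illustrative, not part of the SDK):

```python
import time

def poll_until_done(fetch_status, interval: float = 10.0, timeout: float = 600.0) -> str:
    """Call fetch_status() until it returns SUCCEEDED or FAILED, or timeout expires.

    fetch_status should be a zero-argument callable returning a status string,
    e.g. lambda: client.tasks.retrieve(task.id).status  (client/task are assumed
    to exist in your surrounding code).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"task not finished after {timeout} seconds")
```

Using `time.monotonic()` rather than `time.time()` keeps the deadline immune to system clock adjustments during long renders.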

Step 6: Layer Multiple Motion Passes (Advanced)

For complex scenes, generate separate motion passes and composite them. Paint your foreground subject with fast lateral motion while keeping the background on a slow drift, then apply a counter-directional camera pan to create parallax depth.

    # Foreground-focused pass
    fg_task = client.image_to_video.create(
        model="gen4",
        image_asset_id=image_upload.id,
        duration=5,
        motion_brush={
            "regions": [
                {"mask": "character", "direction": [0.8, 0.0], "speed": 4.0, "proximity_weight": 0.9}
            ],
            "ambient_strength": 0.05,
            "camera_motion": "pan_right"
        },
        prompt="dynamic action sequence, shallow depth of field"
    )

Pro Tips for Power Users

  • **Combine opposing vectors**: Paint a subject moving right while setting the camera to pan_left. This creates dramatic speed illusion without cranking the speed parameter, which can introduce artifacts.
  • **Use low ambient strength**: Keep ambient_strength between 0.05 and 0.2 for professional results. Higher values cause jelly-like warping in static areas.
  • **Leverage proximity weight**: A value of 0.8–1.0 gives hard-edge motion isolation (ideal for a person walking). Values below 0.4 feather the motion outward, great for flowing fabrics or smoke.
  • **Prompt synergy**: Your text prompt should reinforce—not contradict—the brush vectors. If the brush pushes a subject left, avoid prompting "walking to the right."
  • **Frame rate control**: Request 24fps for cinematic feel or 30fps for smoother web content. Higher frame rates consume more credits per second of output.
  • **Iterate with short clips**: Test with 2-second generations before committing to full 10-second renders. This saves credits and accelerates creative iteration.

Troubleshooting Common Issues

| Problem | Cause | Solution |
| --- | --- | --- |
| Subject morphs or deforms | Speed value too high for the region size | Reduce speed to below 3.0 and increase proximity_weight |
| Background warps unnaturally | High ambient_strength with conflicting brush regions | Lower ambient to 0.1 and ensure no overlapping mask regions |
| Camera motion overrides brush | Strong camera preset competing with subtle brush vectors | Use static camera when brush precision matters most |
| Generation fails with timeout | Source image too large or complex scene | Resize source to 1280×768 and reduce region count to 3 or fewer |
| API returns 429 rate limit | Too many concurrent generation requests | Implement exponential backoff: time.sleep(2 ** retry_count) |
| Motion looks jittery | Conflicting direction vectors in adjacent regions | Ensure neighboring regions share similar directional tendencies at their borders |
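For the 429 case, the `time.sleep(2 ** retry_count)` advice generalizes to a small retry wrapper around any API call. This is a generic sketch: the exception type to catch and the retry limits are placeholders you should match to the SDK's actual rate-limit error, not documented Runway behavior.

```python
import time

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0,
                 retry_on: tuple = (Exception,)):
    """Retry fn() with exponential backoff, sleeping base_delay * 2**attempt
    between attempts. Re-raises the last error once retries are exhausted.

    retry_on should be narrowed to the SDK's rate-limit exception in real use.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Usage would look like `with_backoff(lambda: client.image_to_video.create(...))`, so a burst of concurrent generations degrades into spaced-out retries instead of hard failures.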
FAQ

Can I use Motion Brush with text-to-video or only image-to-video?

Motion Brush in Gen-4 is designed for image-to-video workflows. You need a source image to paint brush regions onto. For text-to-video, you can first generate a still frame from a text prompt, then use that frame as your Motion Brush source image for a two-step workflow.

How many motion brush regions can I define in a single generation?

Gen-4 supports up to 5 independent motion brush regions per generation. Each region can have its own direction, speed, and proximity weight. For scenes requiring more complexity, use the layered pass approach—generate separate clips with different region configurations and composite them in post-production.
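The five-region limit can be enforced client-side before any credits are spent. A sketch of a payload builder that applies it, and also clamps ambient_strength into the 0.05–0.2 band recommended in the pro tips above (the helper itself is my own, not part of the Runway SDK):

```python
def build_motion_brush(regions: list, ambient_strength: float = 0.1,
                       camera_motion: str = "static") -> dict:
    """Assemble a motion_brush payload, enforcing the limits discussed in this guide."""
    if len(regions) > 5:
        raise ValueError(f"Gen-4 supports at most 5 regions per generation, got {len(regions)}")
    # Clamp ambient motion into the 0.05-0.2 band recommended for clean results
    ambient = min(max(ambient_strength, 0.05), 0.2)
    return {
        "regions": list(regions),
        "ambient_strength": ambient,
        "camera_motion": camera_motion,
    }
```

The returned dict matches the `motion_brush` shape used in the Step 4 example, so it can be passed straight into the generation call.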

Does the Motion Brush work with Gen-4 Turbo or only the standard model?

Motion Brush is available on both the standard Gen-4 model and Gen-4 Turbo. The Turbo variant processes faster but may produce slightly less nuanced motion at extreme speed values. For maximum fidelity on critical shots, use the standard Gen-4 model with the model="gen4" parameter.
