Type a sentence, get a full 3D character animation. Motion Gen bridges natural language and physics-based motion synthesis directly inside Blender — no mocap studio, no keyframing, no pipeline juggling.
Motion Gen is a Blender addon that lets you generate realistic 3D humanoid animations from plain-English text prompts. Under the hood it orchestrates a multi-model AI pipeline: a large language model (Qwen 3-8B) interprets your prompt and translates it into a structured motion description, a CLIP vision-language encoder aligns the semantics, and a 1-billion-parameter diffusion network synthesises the actual joint rotations and root translations frame-by-frame.
The result is automatically mapped onto a built-in SMPL-X character armature and imported into your Blender scene — ready to render, retarget, or refine. The entire process runs locally on your machine; no cloud API, no subscription, no internet required after initial setup.
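The three stages described above can be pictured as a simple orchestration pipeline. The sketch below is purely illustrative — the function and class names are hypothetical stand-ins, not the addon's actual API, and the stage bodies are stubs that only mimic the data flow:

```python
from dataclasses import dataclass, field

@dataclass
class MotionClip:
    """Per-frame joint rotations and root translations (illustrative)."""
    frames: int
    joint_rotations: list = field(default_factory=list)
    root_translations: list = field(default_factory=list)

# The three helpers below are stand-ins for the real models.
def llm_describe(prompt: str) -> str:
    # Stage 1: the LLM (Qwen 3-8B) rewrites the prompt into a
    # structured motion description.
    return f"structured({prompt})"

def clip_encode(description: str) -> list:
    # Stage 2: the CLIP encoder maps the description to an embedding
    # the diffusion network can condition on.
    return [float(len(description))]

def diffusion_sample(embedding: list, frames: int) -> MotionClip:
    # Stage 3: the diffusion network denoises random noise into joint
    # rotations and root translations, frame by frame (stubbed here).
    clip = MotionClip(frames=frames)
    for _ in range(frames):
        clip.joint_rotations.append([0.0, 0.0, 0.0])
        clip.root_translations.append([0.0, 0.0, 0.0])
    return clip

def generate_motion(prompt: str, frames: int) -> MotionClip:
    """Orchestrate the three stages end to end."""
    return diffusion_sample(clip_encode(llm_describe(prompt)), frames)
```

In the real addon the resulting clip is then baked onto the SMPL-X armature; here it is just a data object.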
A complete prompt-to-animation pipeline, entirely offline and deeply integrated into Blender.
Describe any human motion in natural language — from "a person casually walking" to "a martial arts spinning kick" — and watch it materialise as keyframed animation data.
A fully standalone Python 3.11 environment lives inside the addon folder. Zero conflicts with Blender's built-in Python — install once, forget forever.
Generated motion is automatically applied to a bundled SMPL-X armature model. The character appears in your scene with all keyframes baked and ready.
Tune inference steps, CFG scale, seed, and duration. Use Draft Mode (20 steps) for rapid iteration, then switch to Production (50-60 steps) for final quality.
Optimised for NVIDIA GPUs with CUDA, but includes a "Force CPU" toggle for AMD, Intel, and Apple Silicon users — or anyone without a dedicated GPU.
After initial model download, everything runs locally. Your prompts and generated data never leave your machine.
Every animation below was generated entirely from a text prompt — no keyframing, no mocap data.
A person jumping
Picking up a heavy object
Picking up a small object
Running & kicking
Sitting down
Slow running
Uses a dual-hardware architecture: the language model runs in system RAM for text translation, while the diffusion network synthesises the 3D motion on the GPU.
Enable the Force CPU toggle to run the entire multi-model pipeline in system RAM instead.
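The fallback logic can be sketched as a small helper. The function name and parameters are illustrative assumptions, not the addon's internals; `force_cpu` mirrors the panel toggle and `cuda_available` stands in for however the backend probes for a usable NVIDIA GPU:

```python
def select_device(force_cpu: bool, cuda_available: bool) -> str:
    """Pick where the pipeline runs (illustrative sketch).

    force_cpu      -- the addon's "Force CPU" toggle
    cuda_available -- whether a CUDA-capable NVIDIA GPU was detected
    """
    if force_cpu or not cuda_available:
        # AMD, Intel, Apple Silicon, or no dedicated GPU: the whole
        # multi-model pipeline runs on the CPU in system RAM.
        return "cpu"
    return "cuda"
```

With the toggle off on an NVIDIA machine this returns `"cuda"`; in every other case it falls back to `"cpu"`.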
Three steps: install the Blender addon, set up the backend, then import the AI model weights.
Download the motion_gen.zip file (do not extract it). In Blender, open Edit → Preferences → Add-ons, select motion_gen.zip, then click Install Add-on.
Due to their large size, the AI models must be downloaded separately.
Press N to open the Sidebar, go to the Motion Gen tab, type a prompt, and click Generate & Load Motion.
Every control in the Motion Gen sidebar panel explained — from prompt input to the final generate button.
Describe the animation in plain English. Focus on concrete physical actions. Keep under 60 words.
Animation length in seconds. Longer = more frames but slower generation. Caps at 12s (360 frames).
Locks steps to 20 for fast rough previews. Turn off for final quality.
Diffusion iterations. More = smoother physics. 20–30 for drafts, 50–60 for production.
Prompt adherence vs. physics. Low (2–4) = natural flow. High (5–7) = strict prompt. Above 8 may jitter.
Set -1 for random variation each time. Use a fixed number to reproduce a specific result.
Runs the full pipeline — encodes your prompt, generates motion via diffusion, and imports a rigged character with the baked animation. Shows elapsed time and live status during generation.
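Taken together, the panel's controls map naturally onto a small settings object. The sketch below is illustrative, not the addon's internal API; the 30 fps figure is inferred from the documented 12 s = 360-frame cap:

```python
import random
from dataclasses import dataclass

FPS = 30            # inferred from the 12 s = 360-frame cap
MAX_SECONDS = 12.0  # documented duration ceiling

@dataclass
class GenSettings:
    prompt: str
    duration_s: float = 4.0
    draft_mode: bool = False
    steps: int = 50         # 20-30 for drafts, 50-60 for production
    cfg_scale: float = 4.0  # 2-4 natural flow, 5-7 strict, above 8 may jitter
    seed: int = -1          # -1 = new random seed each run

    def frame_count(self) -> int:
        # Duration is capped at 12 s (360 frames at 30 fps).
        return int(min(self.duration_s, MAX_SECONDS) * FPS)

    def effective_steps(self) -> int:
        # Draft Mode locks inference to 20 steps for fast previews.
        return 20 if self.draft_mode else self.steps

    def resolve_seed(self) -> int:
        # -1 draws a fresh random seed; any other value reproduces
        # a specific result on the same hardware.
        return random.randrange(2**31) if self.seed == -1 else self.seed
```

For example, `GenSettings("a person jumping", duration_s=12.0).frame_count()` yields 360 frames, and setting `draft_mode=True` overrides any `steps` value with 20.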
The Motion Gen panel lives in the 3D Viewport sidebar. Press N to toggle it.
Motion Gen generates animation onto a built-in SMPL-X armature. To transfer that animation to your own custom character, use Auto-Rig Pro's Remap tool. Here's a step-by-step walkthrough.
Use Motion Gen as usual — type your prompt, adjust settings, and click Generate & Load Motion. This creates a rigged SMPL-X character with baked keyframes in your scene. This will be the Source armature.
Clear the armatures' transforms with Alt+G and Alt+R (and Alt+S for scale), or manually set them to zero in the N-panel. Toggle Edit Mode (Tab) on each armature to verify the rest pose, and Alt+Click the rotation mode field to switch all bones to XYZ Euler before tweaking.
Getting great results starts with writing great prompts. Follow these guidelines to get the most out of Motion Gen's text-to-motion pipeline.
Please use English. For optimal results, keep your prompt under 60 words.
Focus on action descriptions or detailed movements of the limbs and torso. Concrete physical actions produce the best results.
Animations for animals or non-human creatures.
Descriptions of complex emotions, clothing, or physical appearance.
Descriptions of objects, scenes, or camera angles.
Motions involving two or more people.
Seamless loop or in-place animations.
A person performs a squat, then pushes a barbell overhead using the power from standing up.
A person climbs upward, moving up the slope.
A person stands up from the chair, then stretches their arms.
A person walks unsteadily, then slowly sits down.
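A quick pre-flight check of these guidelines might look like the hypothetical helper below. Only the 60-word limit comes from the documentation above; the other checks are illustrative heuristics, not rules the addon actually enforces:

```python
def check_prompt(prompt: str) -> list[str]:
    """Return a list of guideline warnings for a motion prompt (sketch)."""
    warnings = []
    words = prompt.split()
    if len(words) > 60:
        warnings.append("keep the prompt under 60 words")
    if not prompt.isascii():
        warnings.append("use plain English text")
    # Heuristic only: the model animates one humanoid at a time.
    if any(w.lower() in {"people", "two", "crowd"} for w in words):
        warnings.append("multi-person motions are unsupported")
    return warnings
```

A prompt such as "A person jumping" passes cleanly, while "two people dancing" is flagged for the multi-person limitation.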
Motion Gen is powerful, but understanding its boundaries will help you get the best results.
Currently supports a single built-in SMPL-X character template. Custom character retargeting is not yet available. The addon requires a standalone Python 3.11 runtime (~3 GB) in addition to the model weights.
The diffusion model generates humanoid motion only — no quadrupeds, no multi-character interaction, no finger/facial animation. Very long prompts or unusual descriptions may produce unpredictable results. Physics accuracy degrades past ~10 seconds of continuous motion.
NVIDIA GPUs with at least 8 GB VRAM are required for real-time generation. CPU-only execution works but is 40–60× slower. AMD and Intel GPUs are not directly supported for accelerated inference.
Results are highly dependent on prompt wording. Simple, descriptive sentences work best. The model responds better to concrete actions ("a person jumps over a box") than abstract concepts ("feeling happy").
The automated 1-click installer only works on Windows 10/11 (64-bit). macOS and Linux users must manually set up the Python environment via terminal.
Using a fixed seed produces consistent results on the same hardware. Different GPU architectures or CPU vs GPU may yield slightly different outputs for the same seed.
| Version | Date | Highlights |
|---|---|---|
| v1.0 | April 2026 | Initial release — prompt-to-motion generation, SMPL-X auto-rigging, GPU + CPU fallback, manual model import, isolated Python 3.11 runtime, Draft & Production quality modes. |
Motion Gen orchestrates several third-party AI models. Use of this addon requires explicit agreement to the following licenses via the in-Blender "Click-Wrap" agreement.