Everything you need to know about Motion Gen's internals, cross-platform setup, common questions, and how to fix issues when things go wrong.
When you click Generate & Load Motion, Motion Gen runs a multi-stage AI pipeline entirely on your local machine.
Qwen 3-8B (quantized GGUF) interprets your English prompt and converts it into a structured motion description the diffusion model understands. This runs on CPU / System RAM (~6 GB).
CLIP ViT-L/14 encodes the structured description into a latent vector, aligning text semantics with motion space. Lightweight; runs on whichever device is available.
The HY-Motion 1B parameter diffusion network synthesises joint rotations and root translations frame-by-frame. This is the most compute-heavy step and benefits enormously from GPU acceleration.
Generated motion data is mapped onto the bundled SMPL-X armature, imported into your scene with all keyframes baked, ready to render or retarget.
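Put together, the stages behave like a simple function chain. The sketch below is purely illustrative: the function names and data shapes are assumptions for clarity, not the addon's actual internal API.

# Illustrative only: hypothetical function names and shapes, not Motion Gen's real internals.

def interpret_prompt(prompt: str) -> str:
    """Stage 1 (Qwen 3-8B on CPU): turn free-form English into a structured description."""
    return f"<structured motion description for: {prompt}>"

def encode_text(description: str) -> list[float]:
    """Stage 2 (CLIP ViT-L/14): embed the description as a latent vector."""
    return [0.0] * 768  # CLIP ViT-L/14 text embeddings are 768-dimensional

def diffuse_motion(latent: list[float], frames: int) -> list[dict]:
    """Stage 3 (HY-Motion 1B): denoise joint rotations and root translation per frame."""
    return [{"joint_rotations": [], "root_translation": (0.0, 0.0, 0.0)}
            for _ in range(frames)]

def bake_onto_smplx(motion: list[dict]) -> None:
    """Stage 4: map the motion onto the bundled SMPL-X armature and bake keyframes."""
    print(f"Baked {len(motion)} frames onto the SMPL-X rig")

# Data flow for a 4-second clip at 30 fps
latent = encode_text(interpret_prompt("a person walks slowly while limping"))
bake_onto_smplx(diffuse_motion(latent, frames=4 * 30))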
Motion Gen is designed for Windows but works cross-platform with some manual setup.
x86_64 architectures are supported; ARM64 Windows is untested. On platforms without CUDA (macOS, AMD and Intel GPUs), enable Force CPU in the addon preferences.
macOS (manual setup):
# 1. Install Python 3.11 (if not installed)
brew install python@3.11
# 2. Create a venv inside the addon folder
cd ~/Library/Application\ Support/Blender/4.2/scripts/addons/motion_gen
python3.11 -m venv runtime
# 3. Activate and install dependencies
source runtime/bin/activate
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install transformers einops scipy llama-cpp-python
# 4. Download models manually and place them:
# HY-Motion-1.0/latest.ckpt
# GGUF/Qwen3-8B-UD-Q5_K_XL.gguf
Linux (manual setup):
# 1. Navigate to addon directory
cd ~/.config/blender/4.2/scripts/addons/motion_gen
# 2. Create venv
python3.11 -m venv runtime
# 3. Install with CUDA (NVIDIA) or CPU
source runtime/bin/activate
# For NVIDIA GPU:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# For CPU only:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install transformers einops scipy llama-cpp-python
# 4. Place model files in HY-Motion-1.0/ and GGUF/ directories
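After either manual setup, it can help to confirm the files landed where the addon expects them before launching Blender. A small check script, assuming the Linux addon path from the steps above (adjust for macOS or Windows):

from pathlib import Path
import sys

# Minimal sanity check for the manual setup; the path below is the Linux
# addon location from the steps above.
addon = Path.home() / ".config/blender/4.2/scripts/addons/motion_gen"

expected = [
    addon / ("runtime/python.exe" if sys.platform == "win32" else "runtime/bin/python"),
    addon / "HY-Motion-1.0/latest.ckpt",
    addon / "GGUF/Qwen3-8B-UD-Q5_K_XL.gguf",
]

for path in expected:
    print(("OK      " if path.exists() else "MISSING ") + str(path))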
The virtual environment must be named runtime and placed directly inside the addon root directory. The addon looks for runtime/python.exe (Windows) or runtime/bin/python (macOS/Linux).
Quick answers to the most common questions from users.
No. After you download the models and install the runtime, everything runs 100% offline. Your prompts and generated data never leave your machine.
Not currently. The diffusion pipeline requires CUDA, which is NVIDIA-only. AMD users should enable Force CPU in addon preferences. ROCm and Intel oneAPI are not supported at this time.
The addon installs a full standalone Python 3.11 environment with PyTorch, CUDA runtime libraries, Transformers, and other ML dependencies. This ensures zero conflicts with Blender's built-in Python. The AI models (latest.ckpt + Qwen GGUF) add another ~7.6 GB on top.
The addon currently imports motion onto its bundled SMPL-X armature only. However, you can use Blender's built-in retargeting tools or addons like Rokoko or Auto-Rig Pro to transfer the animation to your custom rig after generation.
The duration slider goes up to 30 seconds, but physics accuracy degrades noticeably past ~10 seconds. For best results, keep animations between 2 and 8 seconds and chain multiple generations together in Blender's NLA editor.
No. The HY-Motion model is trained exclusively on humanoid motion capture data. It can only produce single-person, bipedal human motion.
Draft Mode (20 steps) is great for quickly testing a prompt idea: the overall motion shape will be correct, but limb trajectories may be less smooth. For final renders, disable Draft Mode and use 50–60 steps.
No. The addon locks the Generate button while a generation is in progress. Running concurrent generations would exceed VRAM/RAM capacity and likely crash. Wait for the current generation to finish before starting another.
The addon requires Blender 3.0 or higher (4.2+ recommended). Older versions have incompatible Python APIs and may fail during addon registration or FBX import.
A fixed seed produces consistent results on the same hardware. Switching between GPU and CPU, or between different GPU architectures (e.g., RTX 3060 vs RTX 4090), may yield slightly different results for the same seed due to floating-point precision differences.
Step-by-step solutions for the most reported problems.
Cause: The standalone Python environment hasn't been installed, or the runtime folder was moved/deleted.
Fix: Reinstall the standalone runtime and make sure the runtime folder sits directly inside the addon root (see the setup steps above).
Cause: The bundled SMPL-X character FBX file is missing or corrupted.
Fix:
Check that assets/wooden_models/boy_Rigging_smplx_tex.fbx exists.
Cause: On 16 GB RAM systems, running Blender + the full ML pipeline can exhaust system memory.
Fix: Close other memory-heavy applications before generating and keep clip durations short.
Cause: Your GPU doesn't have enough VRAM for the diffusion model. Minimum is 8 GB.
Fix:
Enable Force CPU in the addon preferences.
Cause: Usually a result of CFG Scale set too high, or an overly complex/ambiguous prompt.
Fix: Lower the CFG Scale (4–5 is a balanced range) and simplify or rephrase the prompt.
Cause: The runtime environment was partially installed or corrupted.
Fix:
Delete the runtime folder entirely and reinstall from scratch, or reinstall the dependencies manually: pip install einops torch transformers scipy llama-cpp-python
Cause: The character may have been imported at a very small scale or placed at an extreme location.
Fix:
Press Numpad . to frame the selected object, and make sure you are not stuck in Local View (Numpad / to toggle).
Techniques for getting the most out of Motion Gen.
Generate multiple short (2–4 s) animations and blend them using Blender's NLA Editor. This produces better results than a single long generation, since physics degrade past ~10 s.
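If you want to script this, the clips can be stacked onto NLA tracks with a few lines of bpy. A minimal sketch, using placeholder object and action names (and assuming the armature already has animation data from a previous generation):

import bpy

# Stack previously generated clips as NLA strips on one armature.
arm = bpy.data.objects["SMPLX-armature"]          # hypothetical object name
clips = ["walk_cycle", "turn_left", "sit_down"]   # actions saved from earlier generations

frame = 1
for name in clips:
    action = bpy.data.actions[name]
    track = arm.animation_data.nla_tracks.new()
    track.name = name
    strip = track.strips.new(name, start=frame, action=action)
    frame = int(strip.frame_end) + 1              # queue the next clip after this one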
Use Draft Mode with different seeds to rapidly explore variations. Once you find a motion you like, note the seed, disable Draft, and regenerate at full quality (50–60 steps).
Low CFG (2–3): Natural, flowing motion with some creative freedom. Medium (4–5): Balanced adherence. High (6–7): Strict prompt following but may sacrifice fluidity. Above 8: Likely to introduce jitter; avoid.
Start with a base action ("a person walks"), then add modifiers one at a time ("a person walks slowly", "a person walks slowly while limping"). This iterative approach gives you fine control.
After generating, use Blender's Copy Transforms constraints or an addon like Auto-Rig Pro to transfer the SMPL-X animation onto your custom character rig.
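A minimal bpy sketch of the Copy Transforms route, assuming the two rigs share bone names (real rigs usually need an explicit name mapping, and the object names below are placeholders):

import bpy

# Drive a custom rig's pose bones from the SMPL-X armature via Copy Transforms.
source = bpy.data.objects["SMPLX-armature"]   # generated animation (placeholder name)
target = bpy.data.objects["MyCharacterRig"]   # your own rig (placeholder name)

for bone in target.pose.bones:
    if bone.name in source.pose.bones:
        con = bone.constraints.new(type='COPY_TRANSFORMS')
        con.target = source
        con.subtarget = bone.name             # follow the matching SMPL-X bone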
You can script batch generation via Blender's Python console by calling bpy.ops.hymotion.generate_and_load() with different scene property values; useful for generating motion libraries.
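A sketch of what such a batch loop could look like; the operator name is taken from the text above, while the scene property names are assumptions that need to be checked against the addon's actual properties:

import bpy

# Batch generation from Blender's Python console.
prompts = [
    "a person walks slowly",
    "a person walks slowly while limping",
    "a person jumps over a small obstacle",
]

scene = bpy.context.scene
for i, prompt in enumerate(prompts):
    scene.hymotion_prompt = prompt        # hypothetical property name
    scene.hymotion_seed = 1000 + i        # hypothetical property name
    bpy.ops.hymotion.generate_and_load()  # blocks until the clip is imported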
Key terms used throughout this documentation.
A parametric 3D body model that represents the human body with a skeleton and surface mesh. Used as the character template for generated animations.
A type of generative AI that creates data by iteratively removing noise from a random starting point. Motion Gen's core network generates motion data this way.
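As a generic illustration of the idea (not Motion Gen's actual implementation):

import random

# Start from random noise and repeatedly remove a predicted amount of it.
def predict_noise(sample, step):
    return [x * 0.1 for x in sample]                   # stand-in for the trained network

sample = [random.gauss(0.0, 1.0) for _ in range(8)]    # random starting point
for step in range(50, 0, -1):                          # e.g. 50 denoising steps
    noise = predict_noise(sample, step)
    sample = [x - n for x, n in zip(sample, noise)]    # a little less noisy each step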
A file format for quantized large language models. The Qwen 3-8B model is stored in this format to reduce memory usage while maintaining quality.
Classifier-Free Guidance: controls how strongly the generated output follows the text prompt versus allowing natural physics to dominate.
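In the common formulation (assumed here, not confirmed for Motion Gen), the guidance scale blends the conditional and unconditional predictions:

# Generic classifier-free guidance combination, not addon-specific:
def apply_cfg(cond_pred: float, uncond_pred: float, cfg_scale: float) -> float:
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

print(apply_cfg(cond_pred=0.8, uncond_pred=0.2, cfg_scale=4.0))  # -> 2.6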
Contrastive Language-Image Pre-training: an OpenAI model that bridges text and visual semantics. Used here to encode prompt meaning into a vector the diffusion model understands.
The number of denoising iterations the diffusion model performs. More steps = smoother, more accurate motion at the cost of speed.
Video RAM: dedicated memory on your GPU. The motion diffusion model requires at least 8 GB VRAM for GPU-accelerated generation.
Blender's Non-Linear Animation editor: allows you to blend, layer, and sequence animation clips.