📖 Detailed Guide

Under the Hood

Everything you need to know about Motion Gen's internals, cross-platform setup, common questions, and how to fix issues when things go wrong.

How Generation Works

When you click Generate & Load Motion, Motion Gen runs a multi-stage AI pipeline entirely on your local machine.

1. Prompt Translation

Qwen 3-8B (quantized GGUF) interprets your English prompt and converts it into a structured motion description the diffusion model understands. This runs on CPU / System RAM (~6 GB).

2. Semantic Encoding

CLIP ViT-L/14 encodes the structured description into a latent vector, aligning text semantics with motion space. Lightweight — runs on whichever device is available.

3. Motion Diffusion

The HY-Motion 1B parameter diffusion network synthesises joint rotations and root translations frame-by-frame. This is the most compute-heavy step and benefits enormously from GPU acceleration.

4. Rigging & Import

Generated motion data is mapped onto the bundled SMPL-X armature, imported into your scene with all keyframes baked — ready to render or retarget.

⚡
Performance tip: Steps 1 & 2 take ~5 seconds regardless of hardware. Step 3 is where GPU vs CPU makes the biggest difference — 10s on GPU vs 10+ minutes on CPU.
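
For orientation, the four stages amount to the following Python pseudocode. This is an illustrative sketch only; the helper names are hypothetical, not the addon's actual API:

# Illustrative sketch of the pipeline -- helper names are hypothetical.
def generate_motion(prompt: str, seed: int, steps: int):
    structured = qwen_translate(prompt)   # 1. LLM prompt translation (CPU, ~6 GB RAM)
    latent = clip_encode(structured)      # 2. CLIP ViT-L/14 semantic encoding
    motion = run_diffusion(latent, seed=seed, steps=steps)  # 3. HY-Motion denoising (GPU if available)
    return bake_to_smplx(motion)          # 4. map onto the SMPL-X armature, bake keyframes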

OS Compatibility Guide

Motion Gen is designed for Windows but works cross-platform with some manual setup.

🪟

Windows 10 / 11

Full Support
  • 1-click installer handles Python runtime, PyTorch + CUDA, and all dependencies automatically
  • NVIDIA GPU acceleration works out of the box
  • Tested on Windows 10 21H2+ and Windows 11
  • x86_64 is fully supported; ARM64 Windows is untested
✅ Recommended platform — zero manual steps required
🍎

macOS (Intel & Apple Silicon)

Partial — CPU Only
  • The automated installer does not run on macOS — you must set up the Python environment manually via Terminal
  • No CUDA support; must enable Force CPU in addon preferences
  • Apple Silicon (M1/M2/M3/M4) machines need at least 32 GB unified memory
  • MPS (Metal) backend is not yet supported — all inference runs on CPU
  • Generation takes 10–15 minutes per animation

Manual Setup (macOS)

# 1. Install Python 3.11 (if not installed)
brew install python@3.11

# 2. Create a venv inside the addon folder
cd ~/Library/Application\ Support/Blender/4.2/scripts/addons/motion_gen
python3.11 -m venv runtime

# 3. Activate and install dependencies
source runtime/bin/activate
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install transformers einops scipy llama-cpp-python

# 4. Download models manually and place them:
#    HY-Motion-1.0/latest.ckpt
#    GGUF/Qwen3-8B-UD-Q5_K_XL.gguf
🐧

Linux (Ubuntu / Fedora / Arch)

Partial — Manual Setup
  • Automated installer is Windows-only — manual terminal setup required
  • NVIDIA GPUs with CUDA are supported if you install PyTorch with the correct CUDA index
  • ROCm support for AMD GPUs is experimental and is not supported by this addon
  • Tested on Ubuntu 22.04 and Fedora 39

Manual Setup (Linux)

# 1. Navigate to addon directory
cd ~/.config/blender/4.2/scripts/addons/motion_gen

# 2. Create venv (requires Python 3.11 -- install it via your distro's package manager if missing)
python3.11 -m venv runtime

# 3. Install with CUDA (NVIDIA) or CPU
source runtime/bin/activate

# For NVIDIA GPU:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

# For CPU only:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu

pip install transformers einops scipy llama-cpp-python

# 4. Place model files in HY-Motion-1.0/ and GGUF/ directories
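
Verifying the Environment (macOS & Linux)

On either platform you can sanity-check the finished environment before launching Blender by running a short script with the venv's own interpreter (runtime/bin/python). A minimal check:

# verify_env.py -- run with: runtime/bin/python verify_env.py
import torch
import transformers  # raises ImportError if the install is incomplete

print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # expect False on macOS and CPU-only installs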
⚠️
Important: On all platforms, the Python venv folder must be named runtime and placed directly inside the addon root directory. The addon looks for runtime/python.exe (Windows) or runtime/bin/python (macOS/Linux).
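
For reference, the lookup amounts to something like this sketch (illustrative, not the addon's exact code):

# Sketch of how the bundled interpreter is located -- illustrative only.
import sys
from pathlib import Path

addon_root = Path(__file__).parent
if sys.platform == "win32":
    python_bin = addon_root / "runtime" / "python.exe"
else:
    python_bin = addon_root / "runtime" / "bin" / "python"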

Frequently Asked Questions

Quick answers to the most common questions from users.

Is an internet connection required after setup?

No. After you download the models and install the runtime, everything runs 100% offline. Your prompts and generated data never leave your machine.

Can I use an AMD or Intel GPU for acceleration?

Not currently. The diffusion pipeline requires CUDA, which is NVIDIA-only. AMD users should enable Force CPU in addon preferences. ROCm and Intel oneAPI are not supported at this time.

Why does installation download ~3 GB of data?

The addon installs a full standalone Python 3.11 environment with PyTorch, CUDA runtime libraries, Transformers, and other ML dependencies. This ensures zero conflicts with Blender's built-in Python. The AI models (latest.ckpt + Qwen GGUF) add another ~7.6 GB on top.

Can I retarget the generated animation to my own character?

The addon currently imports motion onto its bundled SMPL-X armature only. However, you can use Blender's built-in retargeting tools or addons like Rokoko or Auto-Rig Pro to transfer the animation to your custom rig after generation.

What's the maximum animation length?

The duration slider goes up to 30 seconds, but physics accuracy degrades noticeably past ~10 seconds. For best results, keep animations between 2–8 seconds and chain multiple generations together in Blender's NLA editor.

Can I generate animations for animals or non-human characters?

No. The HY-Motion model is trained exclusively on humanoid motion capture data. It can only produce single-person, bipedal human motion.

Does Draft Mode affect quality significantly?

Draft Mode (20 steps) is great for quickly testing a prompt idea — the overall motion shape will be correct but limb trajectories may be less smooth. For final renders, disable Draft Mode and use 50–60 steps.

Can I run multiple generations simultaneously?

No. The addon locks the Generate button while a generation is in progress. Running concurrent generations would exceed VRAM/RAM capacity and likely crash. Wait for the current generation to finish before starting another.

Will Motion Gen work with older Blender versions?

The addon requires Blender 3.0 or higher (4.2+ recommended). Older versions have incompatible Python APIs and may fail during addon registration or FBX import.

Does the seed guarantee identical output?

A fixed seed produces consistent results on the same hardware. Switching between GPU and CPU, or between different GPU architectures (e.g., RTX 3060 vs RTX 4090), may yield slightly different results for the same seed due to floating-point precision differences.
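
As a rough illustration, assuming the PyTorch backend described above: a fixed seed pins the random number stream, but not how each kernel rounds its floating-point math.

import torch

torch.manual_seed(42)
print(torch.randn(4))  # identical across runs on the same device;
                       # may drift slightly between GPU architectures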

Common Issues & Fixes

Step-by-step solutions for the most reported problems.

✕

"Python runtime not found" error

Cause: The standalone Python environment hasn't been installed, or the runtime folder was moved/deleted.

Fix:

  1. Open Edit → Preferences → Add-ons
  2. Find Motion Gen and expand the details
  3. Click Install Python Runtime
  4. Wait for installation to complete (check Window → Toggle System Console for progress)
✕

"No Armature returned from template" error

Cause: The bundled SMPL-X character FBX file is missing or corrupted.

Fix:

  1. Navigate to the addon folder and verify that assets/wooden_models/boy_Rigging_smplx_tex.fbx exists
  2. If missing, re-download the addon zip and reinstall
  3. Make sure the FBX Importer addon is enabled in Blender (Edit → Preferences → Add-ons → Import-Export: FBX format)
⚠

Blender freezes during generation

Cause: On 16 GB RAM systems, running Blender + the full ML pipeline can exhaust system memory.

Fix:

  1. Close all other applications (browsers, IDEs, game launchers)
  2. If running in CPU mode, ensure the machine has at least 32 GB of RAM
  3. Try reducing the animation duration
  4. Check Task Manager — if RAM is at 95%+, you need more memory or must use a machine with a dedicated GPU
⚠

CUDA out of memory

Cause: Your GPU doesn't have enough VRAM for the diffusion model. Minimum is 8 GB.

Fix:

  1. Close GPU-heavy applications (games, other 3D software)
  2. Reduce animation duration to 2–3 seconds
  3. If your GPU has less than 8 GB VRAM, enable Force CPU in preferences
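
If you're unsure how much VRAM your card has, the bundled runtime can report it. A quick check, run with the runtime's Python interpreter:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, round(props.total_memory / 1024**3, 1), "GB VRAM")
else:
    print("No CUDA device detected")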
⚠

Generation produces jittery or glitchy motion

Cause: Usually a CFG Scale set too high, or an overly complex or ambiguous prompt.

Fix:

  1. Lower CFG Scale to 3.0–5.0
  2. Simplify your prompt — focus on one clear action
  3. Increase inference steps to 50+ (disable Draft Mode)
  4. Try a different seed
✕

ModuleNotFoundError: No module named 'einops' / 'torch'

Cause: The runtime environment was partially installed or corrupted.

Fix:

  1. Go to addon Preferences and click Reinstall / Repair Environment
  2. If the problem persists, delete the runtime folder entirely and reinstall from scratch
  3. On Mac/Linux, manually activate the venv and run pip install einops torch transformers scipy llama-cpp-python
⚠

Animation imports but character is invisible

Cause: The character may have been imported at a very small scale or placed at an extreme location.

Fix:

  1. Press Numpad . to frame the selected object
  2. Check the Outliner for the imported Armature and mesh objects
  3. Ensure you're not in local view (Numpad / to toggle)

Power User Tips

Techniques for getting the most out of Motion Gen.

🔗 Chaining Animations

Generate multiple short (2–4s) animations and blend them using Blender's NLA Editor. This produces better results than a single long generation, since physics degrade past ~10s.
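
For example, after each generation you can push the current action onto its own NLA track from the Python console. A minimal sketch; the object name is hypothetical, and it assumes the armature has an active action:

import bpy

arm = bpy.data.objects["SMPL-X"]  # generated armature -- name is hypothetical
track = arm.animation_data.nla_tracks.new()
track.name = "walk_clip"
strip = track.strips.new("walk", start=1, action=arm.animation_data.action)
strip.blend_in = 10  # blend the first 10 frames into the previous strip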

🎲 Seed Hunting

Use Draft Mode with different seeds to rapidly explore variations. Once you find a motion you like, note the seed, disable Draft, and regenerate at full quality (50–60 steps).

🎚 CFG Scale Tuning

  • Low (2–3): natural, flowing motion with some creative freedom
  • Medium (4–5): balanced prompt adherence
  • High (6–7): strict prompt following, but may sacrifice fluidity
  • Above 8: likely to introduce jitter — avoid

📝 Prompt Engineering

Start with a base action ("a person walks"), then add modifiers one at a time ("a person walks slowly", "a person walks slowly while limping"). This iterative approach gives you fine control.

🦴 Retargeting Workflow

After generating, use Blender's Copy Transforms constraints or an addon like Auto-Rig Pro to transfer the SMPL-X animation onto your custom character rig.
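
As a starting point, a Copy Transforms constraint can be added per bone from the Python console. A sketch only; the object and bone names are hypothetical and depend on your rig:

import bpy

src = bpy.data.objects["SMPL-X"]       # generated armature -- name is hypothetical
dst = bpy.data.objects["MyCharacter"]  # your custom rig -- name is hypothetical

con = dst.pose.bones["hips"].constraints.new('COPY_TRANSFORMS')
con.target = src
con.subtarget = "pelvis"               # matching SMPL-X bone -- name is hypothetical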

💾 Batch Generation

You can script batch generation via Blender's Python console by calling bpy.ops.hymotion.generate_and_load() with different scene property values — useful for generating motion libraries.
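
A minimal sketch of such a loop; the scene property names below are assumptions and should be verified against the addon's actual properties (e.g., via autocomplete in the Python console):

import bpy

prompts = ["a person walks forward", "a person jumps in place"]
scene = bpy.context.scene
for i, prompt in enumerate(prompts):
    scene.hymotion_prompt = prompt   # assumed property name -- verify in your install
    scene.hymotion_seed = 1000 + i   # assumed property name -- verify in your install
    bpy.ops.hymotion.generate_and_load()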

Glossary

Key terms used throughout this documentation.

SMPL-X

A parametric 3D body model that represents the human body with a skeleton and surface mesh. Used as the character template for generated animations.

Diffusion Model

A type of generative AI that creates data by iteratively removing noise from a random starting point. Motion Gen's core network generates motion data this way.

GGUF

A file format for quantized large language models. The Qwen 3-8B model is stored in this format to reduce memory usage while maintaining quality.

CFG Scale

Classifier-Free Guidance — controls how strongly the generated output follows the text prompt versus allowing natural physics to dominate.

CLIP

Contrastive Language-Image Pre-training — an OpenAI model that bridges text and visual semantics. Used here to encode prompt meaning into a vector the diffusion model understands.

Inference Steps

The number of denoising iterations the diffusion model performs. More steps = smoother, more accurate motion at the cost of speed.

VRAM

Video RAM — dedicated memory on your GPU. The motion diffusion model requires at least 8 GB VRAM for GPU-accelerated generation.

NLA Editor

Blender's Non-Linear Animation editor — allows you to blend, layer, and sequence animation clips.