LTX 2.3 Prompt Generator & ComfyUI Workflow JSON
Match your GPU, generate ComfyUI workflows, and enhance prompts for LTX-2.3 and LTX-2.3 Distilled. Supports FP8 quantized variants that run on GPUs with 16GB+ VRAM.
VRAM Adapter
Select your GPU VRAM to see compatible LTX-2.3 models:
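The VRAM-to-model mapping the adapter applies can be sketched as a small helper. The thresholds and filenames mirror the download list on this page; the function itself is illustrative, not part of the tool.

```python
def recommend_ltx_model(vram_gb: int) -> str:
    """Suggest an LTX-2.3 checkpoint for a given VRAM budget (sketch)."""
    if vram_gb >= 32:
        # Full dev model: flexible and trainable
        return "ltx-2.3-22b-dev.safetensors"
    if vram_gb >= 16:
        # FP8 distilled by Kijai: 8 steps, CFG=1, needs an RTX 40-series+ GPU
        return "ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors"
    return "insufficient VRAM for LTX-2.3 (16GB minimum)"

print(recommend_ltx_model(24))
```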
LTX 2.3 Prompt Generator
Basic prompt enhancement — sign in for director-level cinematic prompts
ComfyUI Workflow JSON Generator
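The JSON the generator emits follows ComfyUI's API workflow format: a flat dict of node IDs mapping to a `class_type` and its `inputs`, where an input of the form `["<node_id>", <output_index>]` links to another node's output. The sketch below assumes this shape; `CheckpointLoaderSimple` is a real core node, but `LTXVSampler` is a placeholder name, so consult the example workflows shipped with ComfyUI-LTXVideo for the actual node graph.

```python
import json

# Minimal ComfyUI API-format workflow sketch. Node "2"'s class_type is a
# PLACEHOLDER; real LTX node names come from the ComfyUI-LTXVideo pack.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "ltx-2.3-22b-distilled.safetensors"},
    },
    "2": {
        "class_type": "LTXVSampler",  # placeholder node name
        "inputs": {
            "model": ["1", 0],  # link to node 1, output slot 0
            "steps": 8,         # distilled preset: 8 steps
            "cfg": 1.0,         # distilled preset: CFG=1
        },
    },
}

print(json.dumps(workflow, indent=2))
```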
LTX-2.3 Model Downloads
LTX-2.3 Dev
Official · ltx-2.3-22b-dev.safetensors · Full dev model, flexible and trainable. 32GB+ VRAM recommended.
LTX-2.3 Distilled
Recommended · ltx-2.3-22b-distilled.safetensors · Distilled version: 8 steps, CFG=1. Faster inference with comparable quality.
LTX-2.3 Dev (FP8, Kijai)
16GB VRAM · ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors · FP8 quantized by Kijai. Runs on 16GB VRAM; requires an RTX 40-series or newer GPU for FP8 matmuls. Place in models/checkpoints/.
LTX-2.3 Distilled (FP8 v3, Kijai)
16GB VRAM · ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors · FP8 distilled v3 by Kijai. Best for 16GB VRAM. 8 steps, CFG=1.
Spatial Upscaler x2
ltx-2.3-spatial-upscaler-x2-1.0.safetensors · Spatial upscaler (2x) for two-stage pipelines. Place in models/latent_upscale_models/.
LTX-2.3 VAE
Required · taeltx2_3.safetensors · VAE by Kijai. Required for ComfyUI workflows. Place in models/vae/.
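Since each download above must land in a specific ComfyUI subfolder, a quick placement check can catch misfiled models before a workflow fails to load. The ComfyUI root path is whatever your install uses; the folder layout below comes from the notes above, and the helper itself is an illustrative sketch.

```python
from pathlib import Path

# Expected locations per this page's download notes (16GB FP8 setup shown).
EXPECTED = {
    "models/checkpoints": [
        "ltx-2.3-22b-dev_transformer_only_fp8_input_scaled.safetensors",
    ],
    "models/latent_upscale_models": [
        "ltx-2.3-spatial-upscaler-x2-1.0.safetensors",
    ],
    "models/vae": [
        "taeltx2_3.safetensors",
    ],
}

def missing_files(comfy_root: str) -> list:
    """Return relative paths of expected model files not found on disk."""
    root = Path(comfy_root)
    return [
        f"{folder}/{name}"
        for folder, names in EXPECTED.items()
        for name in names
        if not (root / folder / name).is_file()
    ]
```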
Official Resources
LTX-2.3 on HuggingFace →
Official model weights by Lightricks
ComfyUI-LTXVideo →
Official ComfyUI nodes & example workflows
Kijai FP8 Models →
FP8 quantized variants for 16GB VRAM
LTX-Video GitHub →
Official LTX-Video model repository
ComfyUI →
Node-based UI for running diffusion models
ComfyUI Manager →
Install LTXVideo nodes via Manager