Experiments in generative visuals, custom workflows, and model training
What is AI Lab?
The AI Lab is an ongoing space for research, experimentation, and workflow development at the intersection of motion design and generative systems.
Here, I explore how AI-driven image and video tools can be integrated into professional motion design pipelines — focusing on control, repeatability, and visual consistency. The work ranges from abstract visual studies to system-based experiments, often built through multi-stage workflows combining generative models, animation, and traditional compositing.
Created using WAN combined with a custom Embroidery LoRA
This project started with an interest in treating embroidery as a moving material rather than a static texture, inspired by various commercials and Houdini simulations.
I trained a custom Embroidery LoRA to capture stitched patterns and thread animations. The goal was to preserve the tactile feel of embroidery while still allowing variation in form, movement, and composition.
The LoRA was integrated into an image-to-video (I2V) workflow: an LLM writes the prompts, Z-Image generates the still frames, and WAN animates each image into a final video.
Tools Used: AI Toolkit, ComfyUI
Models: Wan 2.2 (Video), Z-Image Turbo (Images), Gemini 2.5 (Prompts)
Created using FFLF WAN combined with a custom Morphing LoRA
This LoRA was inspired by classic hand-drawn morphing sequences, where shapes transform fluidly while maintaining visual continuity.
I set out to capture that feeling with a custom WAN 2.2 LoRA. Trained to emphasize continuous motion, smooth transitions, and liquid-like deformations, it generates frames that flow seamlessly into one another. The focus is on achieving fluid, intentional motion suitable for motion design.
Tools Used: AI Toolkit, ComfyUI
Models: Wan 2.2 (Video), Qwen Image, Gemini 2.5/Qwen VL 8B (Prompts)
Generated using first/last-frame conditioning in WAN 2.2 combined with the Morphing LoRA
This project grew out of a desire to recreate the imperfections and warmth of hand-drawn pencil animation, particularly the style of the award-winning short film "Dear Basketball", using generative tools.
A custom LoRA was trained to capture pencil-sketch line quality, cross-hatching texture, and shading variation. The main challenge was avoiding overly clean or “digital” results while still keeping enough consistency for motion.
Tools Used: AI Toolkit, ComfyUI
Models: Wan 2.2 (Video)
Generated using Text2Video WAN 2.2 with Pencil Sketch LoRA
Mechanical Transformation and Glitch Transformation LoRAs - WAN
Inflated Effect and Ripple Wave Effect LoRAs - WAN
360-degree camera rotation LoRA trained for WAN 2.2
Samples generated using one input image each.
Created using Qwen Image Edit combined with a custom LoRA
This project came out of the need to translate written scripts into visually consistent storyboards and animation-ready assets.
I trained a LoRA to transfer character features and overall visual style from one or more reference images, enabling consistent character representation across multiple storyboard frames. This approach allows rapid iteration while maintaining visual continuity throughout a sequence.
The generated images were then processed through a custom Python-based web app to convert them into clean, editable SVG files, making them directly usable within Illustrator and downstream animation pipelines.
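The raster-to-SVG step can be illustrated with a deliberately naive sketch: threshold the image to a dark/light mask, then emit one SVG `<rect>` per dark pixel. The actual app presumably uses a proper vectorizer (potrace-style tracing) to produce the clean, editable paths Illustrator expects; `pixels_to_svg` is a hypothetical name, and this stub only shows the overall shape of the conversion.

```python
# Naive raster -> SVG sketch (hypothetical; a real pipeline would trace
# outlines into smooth paths rather than emit one rect per pixel).

def threshold(pixels, cutoff=128):
    """Binarize a 2D grid of 0-255 grayscale values: True means dark."""
    return [[value < cutoff for value in row] for row in pixels]

def pixels_to_svg(pixels, cutoff=128, scale=10):
    """Emit one <rect> per dark pixel -- blocky, but valid, editable SVG."""
    mask = threshold(pixels, cutoff)
    height, width = len(mask), len(mask[0])
    rects = [
        f'<rect x="{x * scale}" y="{y * scale}" width="{scale}" height="{scale}"/>'
        for y, row in enumerate(mask)
        for x, dark in enumerate(row)
        if dark
    ]
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width * scale}" height="{height * scale}">'
            f'{"".join(rects)}</svg>')

# Tiny 2x2 "image": a single dark pixel in the top-left corner.
svg = pixels_to_svg([[0, 255], [255, 255]])
print(svg)  # one 10x10 rect at the origin inside a 20x20 canvas
```

Pixel rects make the output trivially verifiable, which is useful when wiring the conversion into a web app before swapping in a real tracer.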
Tools Used: AI Toolkit, ComfyUI
Models: Qwen Image Edit, Gemini 2.5 (Prompts), Gemini + Claude (Python App dev)