AI Lab

Experiments in generative visuals, custom workflows, and model training

What is AI Lab?

The AI Lab is an ongoing space for research, experimentation, and workflow development at the intersection of motion design and generative systems.
Here, I explore how AI-driven image and video tools can be integrated into professional motion design pipelines — focusing on control, repeatability, and visual consistency. The work ranges from abstract visual studies to system-based experiments, often built through multi-stage workflows combining generative models, animation, and traditional compositing.

Embroidery LoRA

Created using WAN combined with Embroidery LoRA

WAN Embroidery LoRA

This project started with an interest in treating embroidery as a moving material rather than a static texture, inspired by various commercials and Houdini simulations.

I trained a custom Embroidery LoRA to capture stitched patterns and thread animations. The goal was to preserve the tactile feel of embroidery while still allowing variation in form, movement, and composition.
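The dataset behind a style LoRA like this is essentially media files plus sidecar text captions. As a minimal, hypothetical sketch (assuming the common sidecar-`.txt` captioning convention used by LoRA trainers; `embroidery_style` is an invented trigger word, not the one actually used in training):

```python
from pathlib import Path

# Hypothetical trigger word -- not the one used in the actual training run.
TRIGGER = "embroidery_style"

def write_sidecar_captions(dataset_dir: str, descriptions: dict) -> list:
    """For each media file in `dataset_dir`, write a .txt caption next to it
    that leads with the trigger word, so the trainer can associate the style
    with a single token phrase."""
    written = []
    for media in sorted(Path(dataset_dir).iterdir()):
        if media.suffix.lower() not in {".png", ".jpg", ".mp4"}:
            continue  # skip anything that is not training media
        desc = descriptions.get(media.name, "stitched thread pattern")
        caption = media.with_suffix(".txt")
        caption.write_text(f"{TRIGGER}, {desc}\n", encoding="utf-8")
        written.append(caption)
    return written
```

Captioning every clip with the same leading trigger phrase is what lets a short prompt later invoke the learned style while the rest of the caption carries the per-clip variation.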

The LoRA was integrated into an image-to-video (I2V) workflow, combined with Z-Image and an LLM, taking generated images through to final videos.

Tools Used: AI Toolkit, ComfyUI
Models: Wan 2.2 (Video), Z-Image Turbo (Images), Gemini 2.5 (Prompts)
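Workflows like this are normally wired up visually in ComfyUI, but batches can also be queued against a local instance over its HTTP API. The sketch below is stdlib-only and illustrative: it targets ComfyUI's default local `/prompt` route, and the node fragment in the usage comment is a hypothetical placeholder, not the actual embroidery workflow.

```python
import json
import urllib.request

# Default address of a locally running ComfyUI server.
COMFY_URL = "http://127.0.0.1:8188/prompt"

def build_queue_payload(workflow: dict, client_id: str = "ai-lab") -> bytes:
    """Wrap an API-format workflow graph in the JSON body ComfyUI expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    """POST the workflow to a locally running ComfyUI instance and
    return its JSON response (which includes the queued prompt id)."""
    req = urllib.request.Request(
        COMFY_URL,
        data=build_queue_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server; hypothetical single-node fragment):
#   queue_prompt({"1": {"class_type": "LoraLoaderModelOnly",
#                       "inputs": {"lora_name": "wan_embroidery.safetensors",
#                                  "strength_model": 1.0}}})
```

Driving the server this way makes it easy to sweep LoRA strengths or seeds across a batch instead of re-running the graph by hand.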

Knitting LoRA — Visual Explorations (Z-image + WAN 2.2)

Created using FFLF WAN combined with Morphing LoRA

Seamless Liquid Morphing LoRA

This LoRA was inspired by classic hand-drawn morphing sequences, where shapes transform fluidly while maintaining visual continuity.

I set out to capture that feeling with a custom WAN 2.2 LoRA. Trained to emphasize continuous motion, smooth transitions, and liquid-like deformations, this LoRA generates frames that flow seamlessly while preserving visual continuity. The focus is on achieving fluid, intentional motion suitable for motion design.

Tools Used: AI Toolkit, ComfyUI
Models: Wan 2.2 (Video), Qwen Image, Gemini 2.5/Qwen VL 8B (Prompts)

Face Morphing - Liquid Motion LoRA experiment

Generated using first/last frames with WAN 2.2 combined with the Morphing LoRA

Sketch

Created using WAN T2V combined with Pencil Sketch LoRA

Pencil Sketch Style LoRA 

This project grew out of a desire to recreate the imperfections and warmth of hand-drawn pencil animation, particularly the style of the award-winning short film "Dear Basketball", using generative tools.

A custom LoRA was trained to capture pencil-sketch line quality, cross-hatching texture, and shading variation. The main challenge was avoiding overly clean or “digital” results while still keeping enough consistency for motion.
 
Tools Used: AI Toolkit, ComfyUI
Models: Wan 2.2 (Video)

Winter Hunt - Pencil LoRA exploration

Generated using Text2Video WAN 2.2 with Pencil Sketch LoRA

Trained style LoRAs for WAN

3D Animated Style WAN

A LoRA trained to capture a 3D animated style with painterly brush textures.

Comic Style WAN

A LoRA trained to capture expressive, comic-inspired illustration styles.
It focuses on exaggerated animation and flat 2D shading, with an emphasis on character and expression.

Half Illustration Style WAN

A LoRA trained to capture a style of 2D illustration overlaid on cinematic realism, a look that is extremely hard to achieve with default models.

Various transitions and effects LoRAs for WAN

A growing collection of LoRAs built specifically for motion transitions and effects.

Exploded effect LoRA - Link

Water Morphing effect LoRA - Link

Mechanical Transformation, Glitch Transformation LoRA - WAN

Inflated effect, Ripple Wave effect LoRA - WAN

360-degree camera rotation LoRA trained for WAN 2.2
Samples generated using one input image each.

Restyle LoRA

Created using Qwen Image Edit combined with a custom LoRA

Qwen Edit 2D Restyle LoRA

This project came out of the need to translate written scripts into visually consistent storyboards and animation-ready assets.

I trained a LoRA to transfer character features and overall visual style from one or more reference images, enabling consistent character representation and visual style across multiple storyboard frames. This approach allows rapid iteration while maintaining visual continuity throughout the frames.

The generated images were then processed through a custom Python-based web app to convert them into clean, editable SVG files, making them directly usable within Illustrator and downstream animation pipelines.
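The app itself isn't reproduced here, but the raster-to-vector step can be illustrated with a deliberately crude, stdlib-only sketch: threshold a grayscale bitmap and emit one SVG `<rect>` per horizontal run of dark pixels. A real pipeline would hand the thresholded bitmap to a proper tracer (e.g. potrace) to get clean, editable paths for Illustrator.

```python
def bitmap_to_svg(pixels, threshold=128, scale=10):
    """Toy raster-to-SVG conversion: `pixels` is a 2D list of grayscale
    values (0-255). Pixels darker than `threshold` are emitted as filled
    rectangles, merged along horizontal runs. A stand-in for real tracing."""
    h, w = len(pixels), len(pixels[0])
    rects = []
    for y, row in enumerate(pixels):
        x = 0
        while x < w:
            if row[x] < threshold:
                start = x
                while x < w and row[x] < threshold:
                    x += 1  # extend the run of dark pixels
                rects.append(
                    f'<rect x="{start * scale}" y="{y * scale}" '
                    f'width="{(x - start) * scale}" height="{scale}"/>'
                )
            else:
                x += 1
    body = "\n  ".join(rects)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{w * scale}" height="{h * scale}">\n  {body}\n</svg>'
    )
```

The run-merging keeps the output from exploding into one element per pixel; a tracer goes further and fits curves, which is what makes the result genuinely editable downstream.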

Tools Used: AI Toolkit, ComfyUI
Models: Qwen Image Edit, Gemini 2.5 (Prompts), Gemini + Claude (Python App dev)

© Copyright 2025 AshmoTV - All Rights Reserved