Over the last three years I have created three album-length visualisers to journal the growing capabilities of AI video creation. Honestly, the latest one already seems dated, as I made most of the videos earlier in the year, but that just goes to show how quickly things are evolving now.

In 2023 I used Stable Diffusion. If I remember correctly, it took me about 50 hours to create each individual video: breaking previously shot footage down into individual frames and then having AI reinterpret each one. It was clunky and massively time-consuming, but the aesthetic is really something else… Pain Clinic

In 2024 I moved to Runway, which made the process a whole lot quicker, but I wasn't using pre-shot video anymore. These videos are better described as AI-animated: I would use AI to dream up an image and then use brushes to map out the areas I wanted to move, stretch or sway… It's Relative

Most recently, in 2025, I dove into full video generation, still with Runway. I must admit that by this point the videos were becoming a bit of a money sink, especially if you wanted to be very specific about your output. Lucky for me, I was happy being experimental, and I kind of like the hallucinatory qualities of AI video going off the rails. At times this was exciting and a lot of fun to make; however, the censorship guardrails were often frustrating and limiting when trying to bring specific scenarios to life… Peak Mental Performance
