MAGICIAN
If I could go back and redo Episode 1 (The Magician), I probably would. Almost all of it.
Since I started posting “The Fool,” I’ve gotten a ton of questions about the process: what software I’m using and how I’m making it. I’ve also gotten a great deal of hate comments, back-handed compliments, and similarly intended feedback from people who likewise don’t understand the process.
I’ll be using this platform to explore ideas, headlines, and topics in AI; to be transparent about my process making AI animation; and, hopefully, to pass along a bit of esoteric wisdom, compliments of the Tarot, plus all the trials and tribulations of creative trial and error in between.
What is “The Fool”?
The Fool is a 22-episode animated vertical series where each 60-second episode is based on a card from the major arcana of the Tarot. It’s made mostly with AI tools, and I star in each episode, which unfolds in a rapid-fire, fever-dream-logic narrative style that’s absurd, experimental, and at times deeply personal. Moreover, I figure by the time I finish all 22 eps, I’ll have a very strong grasp on what these tools can and can’t do, and I can channel that into a more ambitious project.
Episode 1: The Magician
A month or two ago I ran across a Pitchfork article on “Italian brainrot.” I’d been wanting to experiment with AI animation and formulate a side project. The idea was something mindless and weird. I collected themes I was interested in and had a loose concept I liked. I knew I’d star in it. I knew it’d be weird, vibey - a show about nothing - with layers of esoteric and pop culture symbolism. But that was about it. Most of “The Magician” is that very concept taking shape before your eyes. All the shots you’re seeing are the first images I generated for the project.
Like most of my projects, it was also a speed test. Can I make something entertaining, fast, and cheap? There wasn’t time for trial and error, and I didn’t want to spend a lot of money on AI credits (they aren’t cheap). Throughout this one, I embraced the glitches and inconsistencies and kept moving without thinking much past the next shot. I honestly didn’t know if I’d ever finish it, much less show it to anyone.
As for the Magician showing up at the end: at first I thought it’d be fun to have him serving free samples in the aisle, but that turned out to be the moment I figured out that every episode could be about an archetype in the major arcana. Now that I had an assignment for every episode, the concept had taken shape.
If I could change one more thing, I would’ve made this Episode 2, not 1, so that the episode numbers would match the card numbers. But I didn’t backtrack. I wanted to keep the momentum, so I jumped into The High Priestess and kept moving. I’ll get to The Fool eventually. I think it’ll serve as an overture and give me a way to weave in recurring themes once I have the full arc in place. The theme song for the whole series.
AESTHETIC
Once I locked in the theme, my next concern was the look. I knew I wanted to avoid that sharp, clean, hyper-saturated, uncanny AI gloss that’s synonymous with the “slop” we all abhor more every day.
Taking a cue from verticals like “Backrooms,” I tried leaning into VHS textures and low-res grit. Runway ML is great for this in a few ways, and it’s what I used for the entire episode. It lets you upload up to three images, from which it will pull faces, colors, aesthetic, etc. I was thinking largely in texture, so for reference images I intentionally pulled low-quality photos directly from Google image search. For the video animation, I used prompts asking for analog glitch effects and grain filters. The effect is subtle, but it’s there if you look. (I found out Runway isn’t the best option for that exactly, but we’ll talk more about Midjourney’s superior handling of style later.)
VISUALS
Starring in the show was a practical choice at first. I didn’t want to run into any trouble using someone else’s likeness, and I didn’t want to invent a character. Putting myself in it also added a signature no one else had, and it says something about the process: I can’t just type “Seth Graves” into a text-to-video prompt and expect the model to know who I am. That said, there’s a learning curve to getting your own likeness right, which is why I look completely different in almost every shot. It hadn’t yet occurred to me to make a reference character for consistency (I wouldn’t figure that out until Episode 3).
The supermarket background and characters were all referenced with images as well: the soup can, the older women in produce, the unhoused man pushing his shopping cart. And yes, that is David Blaine as the Magician (the first cameo of many). That is to say, very little of this is entirely AI generated. I wanted almost every object to be something real, mimicked by the image model, to avoid the “AI slop” look.
ANIMATION
Everything was made using image-to-video on Runway. I created the images with reference tools and animated them on the same platform.
I didn’t care that the soup can floated out of my hand. I didn’t care that my knees dropped below the floor. I didn’t care that the Grim Reaper appeared in the background for no reason. Honestly, if he hadn’t popped in, I might not have used the shot at all.
This is also the only episode where I voice The Fool directly. From then on, I’ve only used nonsense dialogue or snippets from film or TV shows. It’s a stylistic thing.
One big lesson from this episode: the edit will reveal what’s missing. I ended up going back to create transitional shots I hadn’t planned for. Now I leave Premiere open while I animate. As soon as I get a clip I like, I drop it into the timeline to see how it plays. That alone has saved hours.
SOUND
The rough drafts are completely silent when I’m done animating. Yes, models like Veo and Kling are now experimenting with sound design, but I prefer to do my own. On this one, I did all the sound design in Ableton first and found that’s a terrible idea: it’s much easier to line everything up visually in Premiere. Now all sound effect placement happens in Premiere first, then I export to Ableton for scoring.
For sound effects, I use freesound.org and ElevenLabs. Both have their pros and cons. Freesound is hit or miss, but its library is vast and user-generated. These are almost always real sounds, but often poorly recorded. ElevenLabs can be surprisingly good with specific prompts (“a ham sandwich slapped with a rubber chicken”), but sometimes spits out nonsense words instead of sounds. Combine them, though, and you’ve got a Foley studio at your fingertips.
For synths, I mostly used Roland Cloud’s SH-101, Xeneology’s “Chill Keys” presets, and Serum 2. There’s also a diegetic supermarket song at the beginning that I made in Suno - the only time I’ve used Suno so far across the 10 episodes I’ve made (or am making).
PROMPTING
ChatGPT was both a blessing and a curse. It helped shape prompts but constantly overcomplicated things, especially with horror. I’d ask for “moody lighting” and get “zombie corpse in aisle 7.”
Lesson learned: don’t use conceptual or flowery language. AI image models don’t like it. Concrete beats conceptual: “empty supermarket aisle, fluorescent lights, VHS grain” will get you much further than “a liminal dreamscape of consumer despair.”
Also: never use negative prompts. If you say “no donuts or fireballs,” you’ll get donuts and fireballs. The model ignores the “no.” Just leave it out.
Weirdly, you’ll remember this better than GPT will.
ESOTERIC LAYER
This episode didn’t start out being about The Magician. That was a decision I made late in the process, almost as an afterthought. So of all the episodes, this one had the least intention behind its tarot interpretation.
But even in hindsight, the symbolism fits. A lone figure wandering a liminal space, collecting strange items with the potential to transform - that’s textbook Magician energy. Tools of alchemy, scattered across the mundane.
Even if it was unintentional, the archetype found its way in.
LESSONS LEARNED
The first thing you learn when trying to animate AI video is everything it can’t do. In fact, the hurdle really makes you doubt the doomsayers and the paranoia about AI taking over all media - it’s very much not there yet, and even what seem like the simplest movements and sequences can be very difficult to pull off.
For example: That last scene where I interact with the Magician was supposed to be a two-shot over-the-shoulder conversation in an aisle. I didn’t have the patience to finesse it, so I took the messy, incongruent version the model gave me.
In future posts I’ll be talking about prompt strategy, storyboarding, splitting work across platforms to exploit their strengths, and hopefully a little more philosophy and pontificating on the nature of AI animation itself, the ethics involved, and why I feel this is important to do. Until next time, enjoy the show.