4 ways in which AI improves our work processes at ZENTRALNORDEN

2024 | 11 min read

AI: Revolution or toolbox? How artificial intelligence is changing our workflows in motion design!

Artificial Intelligence (AI) is revolutionising countless industries, and the design industry is no exception. In fact, AI is having a significant impact on our motion design workflows at ZENTRALNORDEN. By automating various processes, providing advanced tools and even predicting future trends, AI enables us as motion designers to be more efficient, effective and innovative.

This post is also available in German over here. | Diesen Blogbeitrag gibt es hier auch auf Deutsch.

In this blog post, I will take a closer look at how AI is changing the motion design workflow at ZENTRALNORDEN, focusing on four key areas: Art Direction, automating manual tasks, implementing last-minute changes, and training. I will also discuss some of the potential AI implementations we see for the future at ZN. This is by no means an exhaustive list of all possible uses, but rather a quick taste of what is currently available and in use here at ZN.

AI-Directed Art Direction

As a motion design art director, I find AI tools such as Midjourney, Stable Diffusion and ChatGPT invaluable for exploring as many avenues as possible when developing a concept for a campaign or experimenting with different looks and feels. AI can simulate different design outcomes, allowing me to visualise the potential impact of my decisions and make the necessary adjustments. This not only improves decision-making, but also encourages innovation and creativity in shaping the overall visual style and tone of design projects.

Exploring different moods with Midjourney, checking storylines and dramaturgy with ChatGPT, and doing 'style transfers' from one design to another with ComfyUI and Automatic1111 are just some of the endless ways AI can enhance my work at ZN. It provides a safety net for ideas, making sure I haven't overlooked a potential direction that needs to be explored in the campaign. Often AI is simply a tool that helps me weed out the first obvious ideas to make way for more unique and creative concepts.

Exploration of one of our brand’s key icons, done by one of our designers using Midjourney.

A recent application I have found is the use of different trained models to achieve a particular illustrated look for a project. As a non-illustrator myself, I find it extremely useful to explore the trained models on civitai.com, plug a prompt into a model (checkpoint) and control the look with some additional smaller models (LoRAs), ControlNets, masks and prompt adjustments.

ComfyUI’s interface for generating images using Stable Diffusion
Civitai.com is a website that curates various trained models that can be used to produce images and videos with Stable Diffusion.
I was able to create 30 variations of coffee machine illustrations in a matter of minutes on a recent project.
Once I have the look I want, I can convert some of these illustrations into vector graphics using vectorizer.ai and paste them into Illustrator and then After Effects for further editing.

AI in Automating Manual Tasks

One of the most immediate and tangible benefits of AI in motion design is the automation of manual and repetitive tasks. This includes tasks such as rotoscoping, keyframing and object tracking, which are essential to any motion design project, but can take a significant amount of time and effort when done manually. AI can perform these tasks quickly and accurately, allowing us designers to spend more time on the creative aspects of our work.

As a motion designer, one of the most labour-intensive tasks is rotoscoping. In After Effects, this task can be semi-automated with the help of AI and a tool called Roto Brush. With a few brush strokes, the tool can 'recognise' the foreground object we want to keep, while removing the background frame by frame, quickly and accurately. More recently, other AI tools such as Runway ML have offered similar ways to remove backgrounds from footage.

Another task, essentially the flip side of the first, is removing an object from footage, which can now be easily achieved using Content-Aware Fill in After Effects. What used to take hours of painstakingly removing the object frame by frame can now be done by exporting a reference frame from the sequence, painting over the object we want to remove in Photoshop, and then letting After Effects automatically propagate the change to the rest of the footage. The recent addition of Generative Fill to Photoshop makes this task even easier, and other solutions are being developed, such as inpainting (replacing part of an image with generated artwork) using Stable Diffusion inside Photoshop.

In a recent project, we used AI and ChatGPT to build automated scripts for After Effects using its JavaScript-based expression language. This has allowed us to create custom scripts for specific actions, as well as smarter templates for our Essential Graphics workflow.
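
As an illustration, here is the kind of expression logic ChatGPT can draft for us. In After Effects, `time`, `inPoint` and `outPoint` are globals supplied by the layer; in this sketch they are function parameters so the logic can run standalone, and the function name and fade duration are our own inventions, not part of any AE API.

```javascript
// Hypothetical auto-fade sketch of the kind ChatGPT can draft for an
// Opacity expression: fade the layer in over the first `fadeDur`
// seconds of its lifetime and out over the last `fadeDur` seconds.
// In After Effects, `time`, `inPoint` and `outPoint` come from the
// layer itself; here they are parameters so the logic runs standalone.
function autoFadeOpacity(time, inPoint, outPoint, fadeDur) {
  const fadeIn = (time - inPoint) / fadeDur;   // ramps 0 -> 1 during fade-in
  const fadeOut = (outPoint - time) / fadeDur; // ramps 1 -> 0 during fade-out
  return Math.max(0, Math.min(1, fadeIn, fadeOut)) * 100; // opacity in percent
}
```

Pasted onto a layer's Opacity property, the same logic shrinks to a few lines that use the layer globals directly.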

Klutz GPT is a third-party plugin that lets After Effects users generate automated scripts

AI not only saves us time by performing these tasks faster than we could, it also reduces the risk of human error. This ensures a more accurate and higher quality end product. As AI continues to improve, we can expect even more tasks to be automated, allowing us to focus on what we do best: designing and animating.

AI in Unpredicted Design Challenges

As well as automating manual tasks, AI also plays a crucial role in managing project timelines and ensuring we deliver on time. As a design agency specialising in both 2D and 3D motion design, we need to be ready for anything. 3D projects are notoriously slower than 2D when it comes to last-minute changes, and this is where AI comes to the rescue.

Rendering is the most time-consuming task in a 3D workflow because it typically relies on the power of a single graphics card, and even with the latest developments in GPU-based rendering, even a short 10-second animation can take hours to render. At higher resolutions, render times climb steeply, making 4K rendering a luxury. AI tools such as Topaz Video AI and ComfyUI can turn a standard-resolution render into a detailed 4K render in a matter of minutes. The result may not always be perfect, but it can be a lifesaver when an unplanned resolution is needed at the last minute.
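
For intuition, here is a sketch of the naive baseline these AI upscalers improve on: a nearest-neighbour upscale simply repeats existing pixels, whereas AI models synthesise plausible new detail. This is plain JavaScript written for this post, not code from Topaz or ComfyUI.

```javascript
// Naive nearest-neighbour upscale of a greyscale frame, represented
// as a 2-D array of 0-255 values. Each source pixel is repeated
// `factor` times in both directions; no new detail is created, which
// is exactly the gap AI upscalers fill with generated detail.
function upscaleNearest(frame, factor) {
  const out = [];
  for (let y = 0; y < frame.length * factor; y++) {
    const srcRow = frame[Math.floor(y / factor)];
    const row = [];
    for (let x = 0; x < srcRow.length * factor; x++) {
      row.push(srcRow[Math.floor(x / factor)]);
    }
    out.push(row);
  }
  return out;
}
```

A factor of 2 per axis is what takes a 1920 x 1080 render to 4K (3840 x 2160), i.e. four times the pixel count.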

The same goes for frame rates. If part of a rendered animation is needed at a higher frame rate than the standard 25 fps, we can use AI tools such as Topaz Video AI and the free tool Flowframes to 'fill in' the missing frames and create a slowed-down version of the animation.
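
The idea of 'filling in' frames can be sketched in a few lines. The naive version below blends neighbouring frames (a cross-dissolve); tools like Flowframes instead estimate the motion between frames to synthesise sharp in-betweens, but the insertion pattern is the same. This is plain JavaScript written for this post, not Flowframes code.

```javascript
// Naive in-between frame: a per-pixel linear blend (cross-dissolve)
// between two greyscale frames, each a 2-D array of 0-255 values.
function blendFrames(frameA, frameB, t) {
  return frameA.map((row, y) =>
    row.map((a, x) => Math.round(a + (frameB[y][x] - a) * t))
  );
}

// Double the frame rate (e.g. 25 fps -> 50 fps) by inserting one
// blended frame between every pair of neighbouring frames.
function doubleFrameRate(frames) {
  const out = [];
  for (let i = 0; i < frames.length - 1; i++) {
    out.push(frames[i], blendFrames(frames[i], frames[i + 1], 0.5));
  }
  out.push(frames[frames.length - 1]);
  return out;
}
```

Played back at the original frame rate, the doubled sequence runs at half speed, which is exactly the slow-motion use case described above.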

Flowframes is a free tool that uses AI to generate the in-between frames of slowed-down footage

AI can also aid look development for 3D projects. A "clay" render of a 3D scene, without materials and textures, can later be "dressed up" with generated materials using AI workflows in ComfyUI. This allows for rapid iteration and extensive exploration of the many directions a scene's look can take.

AI as an educational tool

The recent emergence of custom ChatGPT bots, language models tailored to a particular topic, is proving invaluable for finding information and solutions and for learning about different aspects of the software we use. Bots such as AE GPT and C4D GPT dig into the documentation that comes with the software and find the right solution to your problem within seconds. This saves the time needed to read through hundreds of pages of manuals and is a more accurate way of applying the concept of "just-in-time learning". Furthermore, Google’s Gemini can search across YouTube videos to find the exact information we are looking for.

C4D GPT and AE GPT are bots trained on the training material for these two motion design applications, letting you quickly search through hundreds of pages of software manuals.

Future implementations

AI is a powerful tool that can significantly improve motion design workflows in design agencies. But at ZENTRALNORDEN we also see many possibilities for future use of the technology: in project management, in design projects, in training our own models for specific needs, in video-to-video workflows, and even in live installations. For the time being, I have chosen not to discuss AI as a tool for producing actual content, as the tools being developed for video or 3D animation creation are still in their infancy and are changing and developing at a rate that is impossible to keep track of. Tools such as RunwayML, Stable Video Diffusion, and OpenAI's latest addition, Sora, show great potential for creating short videos from image or text input, but are far from being reliable tools for our everyday workflows.

Feel free to contact us for solutions, advice, or just pop by our offices in Berlin.

Want to see more of our cool projects? Click here:

Portfolio
Tags: AI, KI, Midjourney, Motion Design, Animation, Workflow, Art Direction, Generative Images

Ilan Yona

Art Director Motion Design | ZENTRALNORDEN

A fan of karaoke and with a background in art and education, Ilan is art director for motion design at ZENTRALNORDEN and has various responsibilities ranging from storyboarding and concept development to animation in 2D and 3D. He is passionate about design, cinema, new technologies and is always up to date with the latest trends.