
Tone, Style and Technical Vision


We are going to use a hybrid workflow, combining human touch and computing power to produce a realistic, highly detailed animated end product.

By leaning on a hybrid workflow, we are able to tell stories imagined by humans and set in real-life places, in an animated style built from a combination of drawn art, AI, the movement of stunt actors, and the emotion of voice actors, all staged in real locations around the world captured by LiDAR scans.

It will look like a modern version of WAKING LIFE, a polished version of ÆON FLUX, utilizing the technology we have available today.  Beyond those references, incorporating fine art into the process, along with compute power for AI and other digital techniques, will allow us to arrive at a relatable end product that stands out and that people can connect to.

Here’s the trailer for WAKING LIFE.  It should be noted we’re referencing it purely for the physical look, not at all for the content.



Below is the ÆON FLUX opening, and here’s a link to the Internet Archive, which has the full series archived on its site.




LiDAR Locations

Since this is set in 2008, we can digitally redress the city back to how it looked in that time period using reference images, without having to physically redress a location.  This opens up unimaginable possibilities for exterior shots in locations we couldn’t normally get into without spending an obscene amount of money.  We can shoot on streets we would normally have to close off.

We can start to develop a private database of these scans so that we can lean on AI in the future, eventually telling stories set in any location or city and in any time period, with AI generating the exteriors.

This goes for interiors as well.

Here’s a video of a Leica LiDAR scanner being used to capture locations, which will better explain how this works.




We can use this technology to create items and props as well, taking the load off of animators and artists and adding to the realism.
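As a rough illustration of how a raw scan becomes a usable asset, here is a minimal Python sketch, assuming the scan has been exported to a standard point-cloud format like PLY and using the open-source Open3D library (not any particular scanner’s own software), that turns a captured point cloud into a mesh an artist can then refine.

```python
import open3d as o3d

# Load a location or prop scan exported from the scanner as a point cloud.
# "scan.ply" is a placeholder filename for illustration.
pcd = o3d.io.read_point_cloud("scan.ply")

# Drop stray points picked up during capture.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Surface reconstruction needs per-point normals.
pcd.estimate_normals()

# Poisson reconstruction produces a continuous mesh from the point cloud.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Export for cleanup and refinement in a 3D tool of choice.
o3d.io.write_triangle_mesh("scan_mesh.obj", mesh)
```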



Creating Characters

Visual artists are essential to this project.  Comic book artists and fine artists can all be considered for this role, which gives us a wide net to cast.  An artist can create 2D drawings of characters, and we can fairly easily turn those drawings into 3D models using AI, with several free tools available for DIY solutions.  A 3D sculptor then needs to refine the model.

There are more than a few ways to go about doing this.  Here’s a screenshot from my computer just to show how easily this task is done.
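To sketch what this step looks like in practice, here is a generic example of submitting a 2D character drawing to an image-to-3D service.  The endpoint, parameters, and response fields below are hypothetical placeholders, not the actual API of Meshy or any specific tool; each service has its own interface.

```python
import requests

# Hypothetical image-to-3D service; treat the URL and fields as placeholders.
API_URL = "https://api.example-image-to-3d.com/v1/jobs"
API_KEY = "YOUR_API_KEY"

# Submit the artist's 2D character drawing for 3D model generation.
with open("character_drawing.png", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        data={"output_format": "glb"},  # a mesh format a sculptor can refine
    )

job = response.json()
print("Model generation job submitted:", job.get("id"))
```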


Here’s another direction, from an IG reel, which can be used for capture as well but is much more expensive:




Character Movement

There are many ways to animate a character.  The best way to mimic human movement is to track a person’s movement in video and apply it to a 3D character.  There are many options for doing this; the most powerful one I am aware of is Wonder Dynamics.

https://wonderdynamics.com/
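To show the underlying idea in a DIY way (this is not Wonder Dynamics’ own pipeline, which is a hosted product), here is a minimal sketch using the open-source MediaPipe Pose library to pull per-frame joint positions out of a performance video; in a full workflow those landmarks would then be retargeted onto a rigged 3D character.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# "performance.mp4" is a placeholder for any video of an actor's movement.
cap = cv2.VideoCapture("performance.mp4")

with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB frames; OpenCV reads BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 body landmarks per frame, each with normalized x, y, z.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(frame_idx, round(nose.x, 3), round(nose.y, 3), round(nose.z, 3))
        frame_idx += 1

cap.release()
```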

Here’s a video of my niece juggling a soccer ball, overlaid with a character I quickly created on Meshy’s platform from a Grainger ad I copied from IG, then passed through Wonder Dynamics and rendered.


 


We can use stunt actors to perform different movements.  These actors should be similar in appearance to the animated character so that their movements make sense on the character.



Food Photography

Because this project is food driven, we really don’t want to lose the texture and the detail (which shows like CHEF’S TABLE on Netflix do such a beautiful job of showcasing in food and ingredients) to animation.  But there’s an opportunity to do something unique: taking LiDAR scans (this could be the work of a PA on an iPad) of specialized varietals of fruits and vegetables, sourced from the 14th Street Greenmarket, to insert into the story for scenes where cooking is occurring.  In order to animate the cooking, we’ll rig and capture the mechanics of cooking.

As we capture movements, we create a database that includes all the movements for our characters, with cooking being a unique asset that not a lot of AI platforms have the data to produce.  This data is an added byproduct we can monetize.
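As an illustration of what the simplest version of such a movement database could look like, here is a hypothetical SQLite catalog for captured clips with cooking-specific tags; the table and field names are placeholders, not an existing system.

```python
import sqlite3

# Hypothetical catalog of captured movement clips; names are placeholders.
conn = sqlite3.connect("movement_library.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS motion_clips (
        id          INTEGER PRIMARY KEY,
        file_path   TEXT NOT NULL,   -- exported motion data, e.g. BVH or FBX
        performer   TEXT,            -- stunt actor or cook who performed it
        action      TEXT,            -- e.g. 'dice onion', 'flip pan', 'plate dish'
        category    TEXT,            -- 'cooking', 'locomotion', 'gesture', ...
        duration_s  REAL,
        captured_on TEXT
    )
    """
)

# Example entry for a captured cooking movement.
conn.execute(
    "INSERT INTO motion_clips "
    "(file_path, performer, action, category, duration_s, captured_on) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("clips/dice_onion_take3.bvh", "stunt_cook_01", "dice onion",
     "cooking", 12.4, "2024-05-01"),
)
conn.commit()
conn.close()
```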

Here’s a reel of some innovative ways of capturing cooking.





Sonic Texture and Score

I won’t go into examples here, but this soundtrack is a mixture of techno beats and classical cello interspersed with licensed songs.

There is an emphasis on candid audio and sound design.  The sounds of the city and the sounds of the kitchen are an essential part of creating the texture.

The final mix should be done in Spatial Audio or something similar that will allow the sound to play really well in headphones, since I think most of our early viewers will watch this on their phones.



Conclusion

With AI-incorporated platforms and workflows on the market, we need to ensure that we’re creating content that, at its core, is still imagined by human minds from human experience and based on human thought, feeling, and movement.  We should lean heavily on technology and AI to help us tell more detailed, dynamic stories without spending money irresponsibly, improving the quality of the stories told while creating a solvent business model for making TV and film.

For SO DAMN TUFF, we want to take this story, produce it to a finished product ourselves, and license it to platforms worldwide, implementing a sales structure that allows different streaming companies around the world to own licensing rights for their territory.

When we’re successful, we’ll not only license the content but also be able to license the workflow, packaged as a SaaS solution for animation.  From SO DAMN TUFF there are more seasons and stories to tell, as well as spin-offs shaped by what our community asks for the most.  We can take this chance, as costs will come down the more content we create and the more data we add to our databases.

In this workflow, there is nothing that needs to be invented.  We are tapping into technology already out on the market and available to use and test, as I’ve been doing.  

To tell this story, we can begin now, gather our tools, and create a salable product for the world.