Quote Originally Posted by 4Dragons View Post
Quote Originally Posted by SrslySirius View Post
Then you're all set. Have fun with the hours of tedious lip syncing!
I lol'd. South Park went from Flash to AE; it's much easier to use, and you can put shit together much quicker.
False.

When the show began using computers, the cardboard cutouts were scanned and re-drawn with CorelDRAW, then imported into PowerAnimator, which was used with SGI workstations to animate the characters. The workstations were linked to a 54-processor render farm that could render 10 to 15 shots an hour. Beginning with season five, the animators began using Maya instead of PowerAnimator. The studio now runs a 120-processor render farm that can produce 30 or more shots an hour.
I have some experience with AE, but never really thought of it as a tool for this sort of thing. That's a very robust program, and my shitty computer can't handle rendering AE projects anyway.

As far as I know, there really aren't any good tools for automating lip syncing. If you're only using 2 mouth shapes (open and closed), maybe you can automate it by keying off the audio's loudness. But that's going to look shitty, like my Matt Marafioti animation. It would be cool if there were something that could detect which vowel sounds or sibilants are being spoken and insert the correct shapes. If I'm wrong about this, and such tools exist, please let me know.
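For what it's worth, the crude open/closed approach is easy enough to script yourself. Here's a rough Python sketch of the idea: chop the audio into animation-frame-sized chunks, measure each chunk's loudness (RMS), and flag the mouth "open" when it's above some threshold. The frame length and threshold values are just placeholder assumptions you'd tune for your own audio and frame rate, not anything standard.

```python
import math

def mouth_shapes(samples, frame_len=735, threshold=1000):
    """Return one 'open'/'closed' label per animation frame.

    samples: 16-bit mono PCM values as a list of ints.
    frame_len: 735 samples ~ one frame at 44.1 kHz / 60 fps
               (an assumption; use sample_rate // fps for your setup).
    threshold: loudness cutoff, tuned by ear per recording.
    """
    shapes = []
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        # Root-mean-square loudness of this chunk of audio
        rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
        shapes.append("open" if rms > threshold else "closed")
    return shapes

# Fake test signal: two frames of a loud sine wave, then two of silence
loud = [int(8000 * math.sin(i * 0.1)) for i in range(1470)]
quiet = [0] * 1470
print(mouth_shapes(loud + quiet))  # ['open', 'open', 'closed', 'closed']
```

You'd then walk the resulting list and swap mouth layers frame by frame (in AE you could get a similar effect with Convert Audio to Keyframes driving an expression). Detecting actual phonemes for more than two shapes is a much harder problem, which is why this looks as choppy as it does.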