Meta AI: Meet the scholar creating kid-friendly animations with AI


We’ve all watched kids create unique, amazing characters, and we’ve even overheard them discussing their drawings as if they were real. How wonderful would it be if those figures actually came to life?

Meta AI researchers have created a method to do just that. We’re introducing a groundbreaking AI-driven animation tool that can quickly and automatically animate children’s drawings of human figures.

Watch it in action: https://www.facebook.com/watch/?ref=external&v=1228598794216404

Parents only need to upload a picture of their child’s drawing to our demo and quickly verify the model’s predictions to watch the character wave, dance, or walk around the screen. We expect that parents and kids alike will enjoy watching their drawings come to life. It’s also a good way to help kids express their creativity and improve their motor skills.

We sat down with Jesse Smith, a postdoctoral researcher working in the Meta AI Research Creativity Pillar, to discuss this initiative, which was the main focus of his postdoctoral work. Jesse and Jessica Hodgins have been collaborating on AI-driven animation tools for kids’ sketches since 2020. Jesse obtained his PhD in Computer Graphics and Animation from the University of California, Davis, where he worked under Professor Michael Neff, before joining Meta AI as a scholar.

Jesse is a researcher for Meta AI’s Creativity initiative, which looks for ways to use AI to support and improve the human creative process. In addition to his research experience, Jesse has expertise in character animation, social computing, and animation-based educational tools. He is presently finishing the research paper that accompanies this work.

How did this fantastic undertaking get started?

Jesse Smith: Children are wonderful artists, and their depictions of human beings are abstract, interesting, and endearing. Seeing these illustrations “come to life” and move on the page has an undeniable allure. Historically, however, this has required using intricate computer graphics software or drawing the figure in a variety of positions (similar to a flip book). For many people, the required amounts of time, attention, and money are insurmountable barriers.

In recent years, human image detection and analysis have seen significant advancements thanks to machine learning and computer vision. In order to allow more parents and kids to see their drawings come to life, we modified these models for use with children’s drawings and combined them into a tool that can convert drawings into animations swiftly and easily.

Let’s take a moment to look back. How did you come to choose this field of study?

JS: Finding ways that AI can enable people to create novel and exciting things is one of the objectives of Meta AI’s Creativity Pillar; however, it’s crucial that people maintain a great deal of control throughout the creative process. This is consistent with how I see AI’s proper place as a facilitator of creativity.

I spoke with Jessica about a number of potential projects before I joined Meta AI, and when she mentioned animating kids’ drawings, I was immediately interested. I still clearly recall reading Harold and the Purple Crayon as a child, a book about a boy with a magic crayon whose drawings came to life.

When I was a kid, I would have wanted something like this to exist, and now, all these years later, I could contribute to making it happen. So I came to Meta AI.

What have you learned about this endeavour that has surprised you the most? Did you discover anything new about applying AI to kids’ drawings?

JS: I believe the amount of variety and individuality in children’s drawings has surprised me the most. I’ve made an effort to imitate them, but my work always comes out looking like an adult’s shoddy imitation of a child’s drawing. Kids create with this unrestricted, carefree flow.

I frequently observe it in my brothers (3 and 6). Most people, including myself, I believe, lose that as we mature, and it’s very challenging to regain.

It surprised me how difficult it is to get a model to predict a decent character segmentation map that’s appropriate for animation when applying AI to children’s drawings. One explanation is that many characters are drawn in a “hollow” way: part or all of the body is outlined by a stroke, but the inside is left unmarked. Because both the character’s inside and outside have the same colour and texture, you cannot use texture cues to deduce which pixels belong to the character. This is a key distinction between sketches and photos. We’re still experimenting with various model combinations, but so far, nothing consistently outperforms a segmentation method that isn’t based on deep learning.
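The interview doesn’t detail the non-deep-learning segmentation method used, but the “hollow character” problem can be illustrated with a minimal connectivity-based sketch: since texture cannot distinguish the blank interior from the blank exterior, flood fill from the image border and treat everything it cannot reach as part of the character. All names here are illustrative, not from the actual pipeline.

```python
from collections import deque

def segment_hollow_figure(img):
    """Classical flood-fill segmentation for a 'hollow' line drawing.

    img: 2D list where 1 = pen stroke, 0 = blank paper.
    Returns a mask of the same shape: 1 = character pixel (stroke or
    enclosed interior), 0 = background. Texture cannot separate the
    blank interior from the blank exterior, so we use connectivity:
    any blank pixel reachable from the image border is background.
    """
    h, w = len(img), len(img[0])
    background = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the flood fill with every blank pixel on the image border.
    for y in range(h):
        for x in range(w):
            on_border = y in (0, h - 1) or x in (0, w - 1)
            if on_border and img[y][x] == 0 and not background[y][x]:
                background[y][x] = True
                queue.append((y, x))
    # 4-connected flood fill over blank pixels only.
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and img[ny][nx] == 0 and not background[ny][nx]):
                background[ny][nx] = True
                queue.append((ny, nx))
    # Character = every pixel flood fill could NOT reach.
    return [[0 if background[y][x] else 1 for x in range(w)]
            for y in range(h)]

# A tiny 'hollow' square outline: the centre pixel is blank but enclosed,
# so by texture alone it is indistinguishable from the outside.
drawing = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
mask = segment_hollow_figure(drawing)  # centre pixel ends up labelled 1
```

The centre pixel is blank, yet it is correctly labelled as character because no flood-fill path from the border reaches it. A deep model relying on local appearance has no such shortcut, which is one way to understand why this problem is hard for learned segmenters.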

Are some animations more challenging to create than others?

JS: One guideline for character animation is that the motion should match the character’s quality and manner. Since most of these figures are rendered in flat 2D, we flatten the motion capture data before retargeting it onto the character. This works better for some movements than for others. Motions that follow one axis, like a boxer delivering a punch, and motions that lie within a plane, like a dancer doing the Charleston, work well. However, if the motion occurs in all three spatial dimensions, like Neo dodging bullets in The Matrix, it won’t look great when applied to the character.
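The article doesn’t specify how the flattening is done, but the intuition can be sketched as a simple projection: drop the depth coordinate of each 3D joint, then drive the drawn limbs from the resulting 2D bone orientations. In-plane motion survives this projection; out-of-plane motion collapses. The joint names and numbers below are hypothetical.

```python
import math

def flatten_pose(joints_3d):
    """Project 3D joint positions onto the frontal (x-y) plane by
    discarding depth (z). Motion within this plane survives; motion
    along the dropped axis collapses and is lost."""
    return {name: (x, y) for name, (x, y, z) in joints_3d.items()}

def bone_angle(parent, child):
    """2D orientation of a bone, usable to rotate the drawn limb."""
    return math.atan2(child[1] - parent[1], child[0] - parent[0])

# Hypothetical single frames: a punch extends along x (in the plane)...
punch = {"shoulder": (0.0, 1.5, 0.0), "fist": (0.8, 1.5, 0.0)}
# ...while a dodge moves the fist mostly along z (out of the plane).
dodge = {"shoulder": (0.0, 1.5, 0.0), "fist": (0.05, 1.5, 0.9)}

p2d, d2d = flatten_pose(punch), flatten_pose(dodge)
punch_len = math.dist(p2d["shoulder"], p2d["fist"])  # in-plane reach kept
dodge_len = math.dist(d2d["shoulder"], d2d["fist"])  # depth motion lost
```

After projection, the punch still reads as a full arm extension, while the dodge shrinks to almost nothing on screen, which is exactly why fully three-dimensional motions like Neo’s bullet dodge translate poorly to a flat 2D character.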

Have you had the chance to see any kids’ responses to this work?

JS: I have! Some parents sent me reaction videos of their kids watching the animations for the first time; it’s wonderful to see their enjoyment as they take in the results. I also watched a wonderful video of a 2-year-old enthusiastically imitating the character’s motion.

Honestly, witnessing those responses has been the project’s high point and has persuaded me that it was worthwhile to make this work available as a public demo for anyone to try.

Can you explain how work like this could eventually lead to new animation tools for children and adults? How might the shift from realistic to unconventional images carry over to other kinds of images?

JS: That’s a really good question. Since this project is just one example, I don’t want to overgeneralise about the transition from realistic images to abstract representations. I will point out, however, that our demo does an excellent job animating people in clip art images, even though we concentrated on children’s drawings. I’m greatly intrigued by what other kinds of non-traditional images people will attempt to animate and what they will do with the results.

Does extending the model’s scope to human figures drawn by both children and adults create any new difficulties? If so, what are they?

JS: It really depends on how sophisticated the adults’ drawings are. Adult amateur draughtsmen can create figures that resemble those of children. But if the drawings are done with realistic perspective and foreshortening, the animations might not function properly or look visually appealing. It might then be necessary to use a different animation pipeline, which would probably entail modelling the character in 3D.

What direction do you think this project will take? For instance, is there a way for people to add their own voiceovers or sounds to make the animations more complete?

JS: In many respects, animating children’s sketches is still a relatively new field. There are more intricate forms of animation that demand a deeper character analysis. This includes facial animation, inferring undrawn portions of the character, and adjusting motion to reflect the character’s personality. These methods, however, call for more data, and there aren’t many comprehensive, useful annotated datasets of kids’ drawings of human figures.

We are hoping that people will be willing to share their kids’ drawings through this demo so that we can create this crucial dataset of children’s drawings and make it available to other researchers.

But even without more data, I believe there are many opportunities to expand on this work. One is adding sound. Another is allowing users to control character motions with their own bodies rather than with existing motion capture clips. Personally, I think it would be really interesting to expand this work by developing a video game where you could draw the characters.

It might be an action-adventure game or a puzzle game where specific character drawings are required to solve challenges.

What is the project’s contribution to AI research? What more general outcomes do you expect your work will produce?

JS: At Meta AI, I work with brilliant people who are pushing the limits of what AI is capable of. However, I believe it’s also critical that we concentrate on finding the best applications for those AI advances. For me, this project is about figuring out how we can combine recent AI advances with more conventional animation techniques to delight people and make them happy.

I hope that our project encourages others to look back at their own childhoods and design the tools they wish they had access to when they were younger. I also hope that people will be encouraged to share their children’s drawings with us, so that we can develop this demo further and other researchers can consider how their work might be applied to the field of children’s drawings.

Sarthak Yadav

Sarthak is a freelance tech writer with well over 14 years of experience. He started his career writing feature content and has kept his focus there ever since. His work is published on sites like Futurefrog.net, Hotmantra, and Oradigicle.com. When not writing, he enjoys grooving to South Indian music.
