20 April 2022

Far Cry 6 – How AI Helped Animate Yara’s Hero

In a game like Far Cry 6, realistic animation is key to a believable and engaging experience, and the team at Ubisoft La Forge is developing the technology that's making Ubisoft's games even more immersive. Some of their latest work is a tool called Choreograph, which helped create the animations in Far Cry 6 through an AI-driven technique known as "motion matching." To better understand the technology, we spoke with Development Director Olivier Pomarez and Engine Programmer Raphael Saint-Pierre from Ubisoft Montreal about the team's work, and how AI supported animators in creating more realistic animations for Far Cry 6.

[Video: Choreograph Motion Matching Tech Interview with Raphael Saint-Pierre and Olivier Pomarez]

What exactly is motion matching?

Raphael Saint-Pierre: In our traditional animation system, for a character to move in a specific way, we would have an animation tree on one hand, and a set of input variables, such as speed and direction, on the other. The animation tree references a collection of animation files, each with a single objective – one file for walking straight at slow speed, another for jogging straight; each action needs a unique file to account for a character’s style of movement. When the character changes from a walk to a jog, for example, we change the input speed, leading the animation tree to select the jogging animation and apply various blending techniques to give the appearance of picking up speed without immediately snapping to the new animation. This means using considerable processing power and memory for each additional animation we play and blend, which can get costly.
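As a minimal sketch of that traditional setup (hypothetical Python, not Ubisoft's engine code), a state-driven system picks one whole clip from the input variables and cross-fades whenever the selection changes; every active clip in the fade adds processing and memory cost:

```python
# Hypothetical illustration of a state-driven animation tree: each locomotion
# state maps to one authored clip, and a cross-fade hides the switch when the
# input speed changes. Clip names and thresholds are invented for this sketch.

WALK_CLIP, JOG_CLIP, RUN_CLIP = "walk_fwd", "jog_fwd", "run_fwd"

def select_clip(speed_m_s: float) -> str:
    """Pick one whole clip purely from the input variable."""
    if speed_m_s < 2.0:
        return WALK_CLIP
    elif speed_m_s < 4.0:
        return JOG_CLIP
    return RUN_CLIP

class AnimationTree:
    def __init__(self):
        self.current = WALK_CLIP
        self.blend_weight = 1.0   # 1.0 = fully on the current clip

    def update(self, speed_m_s: float, dt: float):
        target = select_clip(speed_m_s)
        if target != self.current:
            # Start a cross-fade: both clips are sampled and blended while the
            # fade runs, which costs extra processing and memory per clip.
            self.current, self.blend_weight = target, 0.0
        self.blend_weight = min(1.0, self.blend_weight + dt / 0.3)  # 0.3 s fade
```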

Motion matching is a way for the animation system to pick and choose the best animations to play based not only on the input variables, but also on how characters are currently positioned, and how they are expected to be animated next. It’s like the editor of a movie, choosing the best shots and putting them all together, and creating clean transitions between the cuts to make them invisible. The result looks more natural and yields smoother transitions than a purely state-driven blending system. When that same character goes from a walk to a jog, all we do is change the requested speed, and the motion-matching system figures out by itself that it first needs a walking animation, then an acceleration animation, and finally a jogging animation, and switches from one to the other at the most appropriate time. This system lowers the complexity of decision-making and blending systems, and moves the focus from exact results to results that look and feel right.
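To illustrate the idea (a simplified sketch, not the Choreograph implementation; the function names and feature layout are assumptions), motion matching can be viewed as a nearest-neighbour search over a database of animation frames, where each frame is described by the character's current pose plus the short future trajectory it leads into:

```python
import numpy as np

# Simplified motion-matching query (illustrative only).
# Each database row describes one animation frame: pose features
# ("where the character is now") concatenated with the future root trajectory
# that frame leads into ("how it is expected to be animated next").

def build_feature(pose: np.ndarray, future_trajectory: np.ndarray) -> np.ndarray:
    return np.concatenate([pose.ravel(), future_trajectory.ravel()])

def motion_match(database: np.ndarray, current_pose: np.ndarray,
                 desired_trajectory: np.ndarray) -> int:
    """Return the index of the frame whose features best match the query."""
    query = build_feature(current_pose, desired_trajectory)
    distances = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(distances))

# The system re-queries every few frames; if the best match sits somewhere
# else in the data, it blends toward it, stitching short snippets of captured
# motion into a sequence that may never have been performed as a whole.
```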

If you perform motion capture and performance capture (MOCAP/PCAP), why do you need motion matching?

RSP: MOCAP and PCAP are complementary to motion matching in the same way they are to other animation systems: they are used to build directed sets of natural, organic animation data in an array of situations the game will need. Once obtained, that data still needs to be selected at runtime so that it matches the intent of the animated characters – sometimes in ways that have not been captured, because there are so many ways of moving. Realistically, we can’t have an actor perform them all. A motion-matching engine understands captured movement in a way that allows it to extract interesting, reusable sequences that can be reordered and stitched into a pattern that may or may not have existed at capture time.

Olivier Pomarez: Performance capture is necessary to provide the game with its unique signature in terms of how characters interact with the world. Think of Prince of Persia: The Sands of Time, for example, and the unique signature in its movements and animations that contributes to vivid memories for people who have played it. As you add more animations, the animation trees become very large and complex. Motion matching helps to organize and blend all that data together to create seamless motion in an efficient way that doesn't require actors to account for every one of potentially thousands of different motions.

How was Choreograph developed?

OP: We started the work on Choreograph by following up on the latest developments in academia. We used our colleague Daniel Holden’s work, which proposed revisiting motion matching by leveraging some machine-learning strengths, like efficient memorization and the ability to generalize.

From there, we focused on several questions we had to solve with the new tool, such as how to facilitate the labelling of the animation files so they can be used in the code, how to dynamically convert the files to a format that can be used in the game, and how to do it all in a way that allows the data to be quickly searched and used by the game for more fluid blending. One of the limits of the motion-matching approach is that the amount of memory needed grows as you increase the number and variety of animations you want to add.
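To make that pipeline concrete, here is a purely hypothetical sketch (file names, tags, and field names are invented for illustration) of how labelled animation files could be flattened into a feature matrix the runtime can search quickly:

```python
import numpy as np

# Hypothetical offline build step: convert labelled animation files into a
# flat, searchable feature matrix plus a parallel list of (file, frame, tag)
# records so a match can be traced back to its source clip.

def build_database(animation_files):
    features, records = [], []
    for anim in animation_files:   # e.g. {"name": "walk_loop", "tag": "walk", "frames": [...]}
        for i, frame in enumerate(anim["frames"]):
            feat = np.concatenate([frame["pose"], frame["future_trajectory"]])
            features.append(feat)
            records.append((anim["name"], i, anim["tag"]))
    matrix = np.asarray(features, dtype=np.float32)
    # Normalising each feature dimension keeps the distance metric balanced,
    # so no single feature dominates the search.
    matrix = (matrix - matrix.mean(axis=0)) / (matrix.std(axis=0) + 1e-8)
    return matrix, records
```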

This is where the team introduced machine learning, with learned motion matching. The idea was to add a system that learns the outputs of a specific query, like going from a walk to a jog, and by doing so compresses the amount of memory required by up to 30 times. This part is covered at length in our papers on the topic, and while it has not yet been integrated into a shipped game, the approach is proven, and we can get a high diversity of content with a smaller memory footprint and faster lookups using a motion-matching-based approach.
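As a rough, assumed sketch of that idea, loosely following the published learned-motion-matching approach rather than any shipped code, a small network can be trained to reproduce what the database search would have returned, so the runtime memory cost becomes the network's weights instead of the full animation database:

```python
import torch
import torch.nn as nn

# Sketch of the learned-motion-matching idea: instead of storing and searching
# every animation frame, a small network learns to map a query (current
# features plus desired trajectory) to the pose the search would have
# returned. The memory cost becomes the fixed-size network weights.

class Decompressor(nn.Module):
    def __init__(self, feature_dim: int, pose_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, query_features: torch.Tensor) -> torch.Tensor:
        return self.net(query_features)

# Training pairs come from running the classic search over the full database:
# (query features -> matched pose). Once trained, the database itself can be
# dropped at runtime, which is where the large memory savings come from.
```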

What does Choreograph do for development? How does it change things?

RSP: Firstly, Choreograph gives some control back to animators. Animation data is now seen as a continuous stream that is annotated by people who know which parts are most relevant to each situation, like labelling a certain animation as a walk, one as a jog, one as a run, and so on. This relaxes some of the restrictions of MOCAP data, and means we can get more situational variety, more control over what gets used and what does not, and less waste.
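Building on the hypothetical database sketch above, those annotations could constrain the runtime search, for instance by filtering candidate frames by tag before matching; the tag names and function below are illustrative, not Choreograph's API:

```python
# Hypothetical tag filter layered on top of the matching query: animators'
# annotations (walk / jog / run, and so on) restrict which frames may be
# selected, giving them direct control over what the matcher is allowed to use.

def motion_match_tagged(matrix, records, query, allowed_tags):
    best_idx, best_dist = -1, float("inf")
    for i, (name, frame, tag) in enumerate(records):
        if tag not in allowed_tags:
            continue
        dist = float(((matrix[i] - query) ** 2).sum())
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx   # index into records / matrix, or -1 if nothing is allowed
```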

Unlike our previous system, which could only play entire clips before switching to another one, Choreograph picks very short animation sequences – usually a few frames – making the data we have more versatile. That allows for better reactivity to unpredictable input, without things like input lag, the character jumping unrealistically to an unexpected animation, or animators having to resort to tricks to cover it up. Since it's engine-agnostic, it allows our teams to rely on a larger community for help and good practices, and means the burden of optimization and maintenance is lower, since it's shared with other development teams.

What does Choreograph do for players? What’s the effect for them?

RSP: We have only shipped Choreograph's first steps, and it has already helped us introduce new experiences to players, like a third-person camera mode for Far Cry 6. In my opinion, Choreograph's offer to players is twofold: on one hand, more expressive and immersive animations, along with opportunities for a richer game world thanks to the responsiveness, design language, and smoothness Choreograph enables. On the other hand, since it is reasonably frugal in terms of production time and end-user hardware resources, it frees up human time for a more polished result, as well as processing power for other systems.

What parts of the animation process still require human hands to accomplish?

RSP: We need humans to define exactly what kind of motion we want to be matched – what is allowed and what is not in the various animation contexts. I mentioned before how it’s a bit like a movie: You have Choreograph as an editor; animation data as the actors; programmers and writers as scriptwriters; and animators as directors. You want the scriptwriter to write a sensible script; the director to tell the actors what is expected of them and to give them relevant context; the actors to give their best performance; the editor to piece it all together in a pleasing and believable way; and then all of them to work together so that things don’t diverge from that intention.

While the player and the gameplay drive what on-screen characters do, the artists and programmers behind character behaviors are still required to determine the style of animations we are looking for, and the amount of blending needed from Choreograph to produce the desired result. For example, while the player may input a forward run, the character may run into a piece of scenery or another character, and need to be redirected to walk around it. So the code needs to work with Choreograph to produce the correct result. In the end, magic and art only occur once gameplay code, animation data, and Choreograph are properly connected.
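As a purely hypothetical toy example (not Far Cry 6 gameplay code), the gameplay layer might bend the requested trajectory around a blocking object before handing it to the matching query, so the animation picked fits the adjusted path:

```python
import numpy as np

# Toy example of gameplay code shaping the query before motion matching:
# if the requested path runs into an obstacle, steer the desired trajectory
# around it, then let the matcher pick animation that fits the new path.

def steer_around(desired_trajectory: np.ndarray, obstacle_pos: np.ndarray,
                 radius: float = 1.0) -> np.ndarray:
    adjusted = desired_trajectory.copy()
    for i, point in enumerate(adjusted):
        offset = point - obstacle_pos
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < radius:
            # Push the sample point out to the obstacle's boundary.
            adjusted[i] = obstacle_pos + offset / dist * radius
    return adjusted

# best_frame = motion_match(database, current_pose, steer_around(traj, rock))
```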

Is it true that teams outside Ubisoft developed their own similar systems to Choreograph based on a recent GDC talk from members of your team? How does it feel to inspire games across the industry?

OP: Yes, we’re very proud of the research that the team did at La Forge, and the extra effort they put in to share it internally and externally. But we are not the only ones working on systems like this. For instance, EA has shared quite generously about its own progress on animation systems at past GDC events. This is an industry where people inspire each other, and after all, creating games is not about technical prowess – it’s about creating recipes for amazing player experiences, something we can all share.

RSP: It seems that the work done on motion matching at Ubisoft Toronto and Montreal, which was presented at GDC 2016, has sparked interest around the game-development community, and led game makers to believe in, and improve on, that technique. The results from our own teams, and from others exploring similar techniques as well, are fantastic.

I think it says a lot about the passion that people in our industry have, yielding that interesting combination of competition and collaboration in the form of knowledge-sharing. Working alongside the masterminds who built and made those powerful systems is inspiring; having these people share their discoveries is empowering; and seeing what others have done by building upon that knowledge is truly stunning.

Choreograph is an ambitious project that is going to bring a lot of good stuff to our games. Its existence and promising future are a testament to the skills and dedication shown by the people behind Far Cry’s animation and motion-matching systems, behind Choreograph, and those who fed these systems in a way that allows us to immerse players so deeply into Yara. I know our research and development teams have a lot more tricks up their sleeves, and I’m pretty sure people all over the games industry do as well. I can’t wait to see the kind of immersion and aesthetics the future has in store for us.

Discover more about the technology and design behind Far Cry 6, and how the team elevated the stealth experience of the game, or check out the latest piece of free content with the Far Cry 6 Stranger Things crossover.
