How Virtual Production Worked on the Set of The Lion King

The Lion King

Disney’s The Lion King aimed to solve a classic VFX/animation problem, namely how to direct a story when the director can’t see all of the things they are directing. With the traditional, iterative approach it can be frustrating for a director to shape a scene when he or she is only directing one component at a time. With The Lion King, the creative team sought to take advantage of the revolution in consumer-grade virtual reality technology. Coupled with game engine technology, they advanced the art of virtual production, combining visual effects and animation with a traditional physical production approach. They succeeded in creating a system that allowed Jon Favreau to direct a movie with high quality, real-time, interactive components – ‘shot’ in context – while still making a completely computer-generated film.

Jon Favreau speaking at the UE4 User group at SIGGRAPH in LA

The Lion King is a technical marvel. The quality of the character animation, the look of the final rendered imagery and the innovation in making the film are just as impressive as the incredible box office success of the film. MPC was the visual effects and animation company that provided the stunning visuals. They worked hand in hand with the creative team, especially director Jon Favreau, DoP Caleb Deschanel, visual effects supervisor Rob Legato and the team at Magnopus, headed by Ben Grossmann, to innovate the art of virtual production.

Oscar-winner Andy Jones headed the Animation team at MPC, with Adam Valdez as the Visual Effects Supervisor. Elliot Newman was the other MPC VFX Supervisor. Oliver Winwood was the CG Supervisor, having started as the FX supervisor. Julien Bolbach was MPC’s first CG Supervisor working on sequence work, starting with the Buffalo Stampede sequence. They were joined by James Austin as another CG Supervisor.

A new way to work

Magnopus got involved just before October 2016, right around the time the idea of doing The Lion King came up. The project was always intended to improve upon the lessons learned from making Disney’s The Jungle Book. Grossmann explains, “We started by sketching out ideas around throwing out all the old visual effects based software and switching completely over to game engines. We then needed to figure out what we would have to write in order to shoot the entire movie in VR, and integrate this with a major visual effects pipeline”.

John Oliver as Zazu, and JD McCrary as Young Simba,

The original D23 sizzle reel, which was the first footage seen of the Lion King, was actually shot in the offices of Magnopus with the first prototype of their VR Virtual Production System or VPS. Magnopus is made up of both Oscar-winning visual effects artists and VR specialists. The company has made such landmark VR projects as CocoVR with Disney Pixar and the Mission: ISS in collaboration with NASA, which allows users to explore the International Space Station in VR.

While the team built on their experience with The Jungle Book, the backbone of that earlier film was Autodesk’s MotionBuilder. That technical approach had its roots in virtual production work that Rob Legato had pioneered for Avatar and The Aviator. “When we finished The Jungle Book we said ‘we now have all this new technology that we could take advantage of’ (for The Lion King). Actually the technology had been there, but it was never at a level that we could actually use”, Rob Legato commented at SIGGRAPH recently. “I had actually looked at the idea of using a VR headset on Avatar, but it was so crude. At that stage, it was not really ready for prime time”.

All the footage in the D23 reel was eventually replaced or updated, but even for those first few test shots, the team was operating virtual cranes over the herds of animals in real-time. Some of the animals in the D23 reel were assets reused from The Jungle Book before the full Lion King assets were ready. This is why some keen-eyed viewers could spot Asian elephants in the background rather than the African elephants that would be used later. Given how early this reel was animated and rendered by MPC, the D23 reel is jaw-droppingly good, and it fed into huge anticipation for the film’s release.

Interestingly, there was a technical bump or glitch in the Unity engine, caused by a feature that no user would normally have any control over, and it made it all the way from the D23 footage to the final film. Every once in a while the Unity engine clears out any unneeded data or assets. Unfortunately, when this function, deep in the code and known as ‘garbage collection’, ran, it could cause a tiny pause in the smooth movement of the master Unity camera. One such ‘bump’ happens in a shot in the D23 trailer. After the trailer was complete, the team discovered what the issue was and fixed it. But even when the shot was redone much later, this ‘bump’ is still in the final camera move, simply because Rob Legato liked the feel of the recorded natural move, even with the bump. For the creative team, ironically given its cause, it just felt natural.
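
For readers curious about the mechanics, the hitch described above is a generic engine problem: a garbage collector reclaiming memory mid-frame stalls whatever is being recorded at that instant. The sketch below is a minimal Python illustration of the usual mitigation, collecting before a take and deferring further collection until “cut”; it uses Python’s gc module as a stand-in for an engine’s collector and is not Magnopus’s actual fix inside Unity.

```python
import gc
import time

class TakeRecorder:
    """Toy stand-in for a real-time recorder, illustrating the usual way to
    keep a collector from hitching a camera move: collect before the take,
    defer collection during it, collect again after 'cut'. Python's gc module
    stands in for the engine's collector; this is not the production's fix."""

    def start_take(self):
        gc.collect()              # pay the collection cost up front
        gc.disable()              # no automatic collection mid-take
        self.samples = []
        self.t0 = time.perf_counter()

    def record_camera_sample(self, transform):
        # Called every frame with the current camera transform.
        self.samples.append((time.perf_counter() - self.t0, transform))

    def end_take(self):
        gc.enable()               # a pause here is harmless
        gc.collect()
        return self.samples


recorder = TakeRecorder()
recorder.start_take()
for frame in range(5):
    recorder.record_camera_sample({"pan": frame * 0.1, "tilt": 0.0})
print(len(recorder.end_take()), "samples recorded")
```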

The humanity of imperfect moves and imprecise framing gives the film a live-action aesthetic. “The reason to do it the way we did”, comments Legato, “is that that is how it has been done for a hundred years. You can’t improve on the filmmaking process”. Legato himself is almost as much a cinematographer as he is a VFX supervisor or second unit director. “As much as you’d like to, and as archaic and bizarre as it seems: actors on sets rehearsing with the cameraman, plus the crew and all this stuff – it works. Filmmaking is a collaborative art form. You need all the collaborators to help you make a movie… and without this sort of methodology of working, it all becomes a little more stifling and more sterile. It just doesn’t have this extra real life that comes from you continually changing and altering to what you see as you film it”, he clarifies. Legato is passionate about both collaboration and respecting the roles that have been honed over decades of filmmaking by professional storytellers.

Rob Legato

The new Lion King

From the outset, Jon Favreau had stated that he did not just want to remake a computer-animated version of the original animated classic. He believed one of the reasons why the Broadway musical version of The Lion King worked so well was that it was the same story but in a different context. It was important that it was a different presentation.

From the outset, the plan was to be faithful to the original story and not re-write the narrative, and yet do something to make the new film feel different. The team had been very pleased with the visual realism that MPC had delivered on The Jungle Book and so the team started work on envisioning a way to make a live-action production model for a completely computer-generated film. Unlike The Jungle Book, there would be no live actors or animals filmed. “It wouldn’t feel like an animated film. It would feel like something else,” Grossmann recalls. “In the very first meeting that we had with Jon, we all discussed that we couldn’t just do a knock off. It had to feel like something else. We needed to reach further into our toolkit to bring every technique to bear to make this film feel like a live-action movie”.

The second major issue to address was the balance of realism when the animals were going to be talking and singing. This had already been addressed in The Jungle Book, but for The Lion King, the team decided to make the animals even more realistic than in the previous film. King Louie and a few of the other animals in The Jungle Book were quite stylized and ‘humanized’. Naturally, it is easier to anthropomorphize a primate such as an orangutan than a lion. For the new film, the team decided to tweak the artistic vision or look of the animals. Oscar-winning animation director Andy Jones at MPC was once again in charge of the team that would deliver these even more nuanced animation performances. Jones and his team at MPC refined their animation approaches and character rigging to deliver an even more subtle set of speaking animals.

Ben Grossmann, Magnopus.

Unity

The difficulty of building the elaborate system that would allow the filmmakers to shoot a major Disney film in VR should not be underestimated. The process relied on game technology to provide real-time performance. But when the Magnopus team started building their system in Unity, the high-performance engine did not even have a timeline. “At the time that we started, there was no Unity timeline and so we had to go in and build a set of time-based functions”, recalls Grossmann. To do this the Magnopus team had to heavily modify the code and write something that effectively became the beginnings of a timeline in Unity.
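
Grossmann does not detail what those time-based functions looked like, but the core of any timeline is the same: named tracks of keyframes that can be sampled at an arbitrary playhead time. The Python sketch below is a hedged, minimal illustration of that idea; all class, track and value names are invented and it is not the Magnopus code.

```python
from bisect import bisect_right

class Track:
    """A single animated channel: sorted (time, value) keyframes with
    linear interpolation between them."""
    def __init__(self, keys):
        self.keys = sorted(keys)              # [(time_seconds, value), ...]

    def sample(self, t):
        times = [k[0] for k in self.keys]
        i = bisect_right(times, t)
        if i == 0:
            return self.keys[0][1]            # clamp before the first key
        if i == len(self.keys):
            return self.keys[-1][1]           # clamp after the last key
        (t0, v0), (t1, v1) = self.keys[i - 1], self.keys[i]
        a = (t - t0) / (t1 - t0)
        return v0 + a * (v1 - v0)

class Timeline:
    """Evaluates every named track at a given playhead time."""
    def __init__(self, tracks):
        self.tracks = tracks                  # {"camera.pan": Track(...), ...}

    def evaluate(self, t):
        return {name: trk.sample(t) for name, trk in self.tracks.items()}

tl = Timeline({"camera.pan": Track([(0.0, 0.0), (2.0, 90.0)])})
print(tl.evaluate(1.0))                       # {'camera.pan': 45.0}
```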

The team did not run Unity compiled, which was an unexpected decision. Magnopus didn’t write any executables and compile them. “We modified Unity’s editor mode to have the functionality that we needed, so that we were essentially always shooting the movie in editor mode,” explained Grossmann. “It was a weird thing and not many people could figure out why we would do it this way… but it was awesome”. This decision was related to the problem of changes.

If the team had compiled an application to make the movie, it would have worked, but once it was launched, whatever assets were in the ‘game’ would be the only assets the team could access. “If you compiled it, and then you were standing on Pride Rock with 10 people about to film in VR, and someone wants a new tree… you’d have to say ‘all right, I’m going to kick everybody out. I’m going to shut the program down and load new assets and then bring everybody back in, because everybody’s on distributed clients’,” he explains. By running Unity in editor mode, new assets could be added at any time, without restarting. This was a major difference in the practicalities of filming.
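
The practical payoff of staying live is easiest to picture as a data-flow question: a registry that can accept a new asset mid-session while every connected station stays in the world. The short Python sketch below illustrates only that idea; the class names, the ‘networking’ (reduced here to a method call) and the asset names are all invented.

```python
class LiveAssetRegistry:
    """Sketch of why editor mode mattered: new assets can be registered
    mid-session and every connected station is told to stream them in, with
    no shutdown/reload cycle. Names and structure are illustrative only."""
    def __init__(self):
        self.assets = {}        # asset name -> version
        self.clients = []       # connected VPS stations

    def connect(self, client):
        self.clients.append(client)

    def add_asset(self, name, version):
        # Called while the crew is still standing on the virtual set.
        self.assets[name] = version
        for client in self.clients:
            client.load(name, version)


class Station:
    def __init__(self, label):
        self.label, self.loaded = label, {}

    def load(self, name, version):
        self.loaded[name] = version
        print(f"{self.label}: streamed in {name} v{version}")


registry = LiveAssetRegistry()
for label in ("camera-station", "dp-lighting-monitor"):
    registry.connect(Station(label))
registry.add_asset("acacia_tree_03", 2)   # nobody gets kicked out of the session
```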

Latency

The system had to work with very low latency. It is nearly impossible to operate a virtual camera if it does not feel immediate. If an operator pans a camera and stops on exactly what they like in the viewfinder, but there is a latency lag, then when the operator stops, the system keeps tracking on for a beat longer and overshoots. Magnopus managed to achieve a latency of less than 4 milliseconds, while still ‘filming’ at a high enough frame rate to capture the detail of all the movements (keyframes).

Latency is directly related to scene complexity. The team therefore had to be very skilled at translating high-density assets into low poly count assets that would allow good performance on the soundstage. A series of tools was developed to both decimate assets and to convert any visually distant assets into a temporary cyclorama. It was important for the filmmakers to see off into the distance in the wide shots, but with a 200 square kilometer virtual set, only the assets within a few hundred meters needed to be fully 3D at any time.

The needs of high-performance game engine playback and rich visual film assets are competing goals. The solution ended up being an implementation of multiple levels of rendering. “We needed the camera station as much as possible to get 120 to 240 hertz or frames per second so that we could have plenty of keyframe data to draw upon,” explains Grossmann. “Since all of our computers were running in sync and networked together, we took another computer and said, ‘let’s put some really high-quality imagery in here, turn on ray tracing and add the very high-quality assets to only this one machine’”. The result was that almost all the Unity machines were using high-performance assets and running at 120 fps, but there was one machine that wasn’t able to keep up. That one machine could barely render 20 frames per second, but it looked a lot better, as it was using the high-quality assets. “We set up two monitors next to Caleb (DoP). One was the monitor he used to operate (running at 120+ fps), and the other was the monitor that he used to judge lighting”. This second lighting machine had soft shadows, better key lights and so on. Both of these computers were designed to aid in live production.

Additionally, whenever a clip was cleared to go to editorial, a separate machine would render the best version possible of that shot. Sometimes this ‘best version’ was as slow as 1 frame a second, due to all the ‘bells and whistles’ being turned on. This computer was not used for live production filming, but was placed in an automated sequence pipeline to produce the best imagery for editorial. A 10-second shot could be automatically re-rendered in roughly 4 minutes and loaded in the background, ready for the editing team whenever they needed it. A second per frame is very slow for a game engine but lightning fast compared to final VFX render speeds, which can take hours per frame to produce.
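
A concrete way to picture the decimation/cyclorama trade-off described above is a simple distance test: anything near the camera stays as (decimated) 3D geometry, and anything far away collapses onto a backdrop card. The Python sketch below is purely illustrative; the 300 m threshold is an assumption, not a figure from the production.

```python
def choose_representation(distance_m, full_radius_m=300.0):
    """Toy version of the trade-off described above: only assets within a few
    hundred meters of the camera stay fully 3D (as decimated proxies), while
    everything farther away collapses onto a cyclorama card. The 300 m
    threshold is an assumption, not a production figure."""
    if distance_m <= full_radius_m:
        return "full_3d_decimated_proxy"   # low-poly stand-in for the film asset
    return "cyclorama_card"                # baked backdrop, nearly free to draw


for d in (50, 250, 2000, 15000):
    print(f"{d:>6} m -> {choose_representation(d)}")
```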

The Virtual Process

Andy Jones felt strongly that the animation should have no motion capture component, so all the film’s animals are keyframe animated. This worked extremely well, but in the early stages there was provision to puppet a digital animal on set to explore blocking or trial a new idea. In the end, all that was required was the ability to occasionally slide a character so that, from the camera view, the action was clearer or the blocking was slightly adjusted.

The primary approach was:

  1. Keegan-Michael Key (Kamari) and Eric André (Azizi) in the BlackBox. The voice actors recorded dialogue individually, with the exception of some scenes, for example those between Billy Eichner as Timon and Seth Rogen as Pumbaa. These actors not only recorded some of their dialogue together, but the director filmed them in a ‘BlackBox theatre’ set environment where they could act and walk around in an empty space (with no computer use at all). As with all the animation, this was done to get good voice performances and was not recorded for motion capture. It was filmed for reference and later editorial discussion. “We’d basically take a Seth and Billy or some of the other actors and throw them in the middle of a giant rectangle surrounded by cameras and then they would act out a scene. They would have room to move around like actors and do their lines,” recalled Grossmann. “Jon would then direct those performances and say, ‘Okay, that’s great, we love that’. These clips would be the reference clips that we would send to the animation team and that audio could be cut into editorial.”
  2. James Chinlund and his team, led by Vlad Bina and Tad Davis, designed and built the scenes and reviewed them in VR in Unity using the VPS. The crew on the sound stage in LA would often use these scenes to scout in the VPS. When ‘locations’ were approved, they would be handed off to the MPC Virtual Art Department (VAD).
  3. The crew did not need to wear VR headgear, although they often did. The Director, DoP, VFX Supervisor and a special team worked on the custom sound stage in LA, designed by technical lead Lap Luu. This sound stage was designed to allow them to ‘film’ in VR. The ‘stage’ had traditional film gear, including tripods, dollies, geared heads, focus pulling remotes, cranes, and even drones, but the actual ‘cameras’ were all inside Unity. These devices were either physically encoded or optically tracked by 3D-printed mechatronics designed by hardware supervisor Jason Crosby. The actual stage was not that large, and even with a modest stage size of only about 70ft by 40ft, the team filmed most of The Lion King in just about 1/3 of the total space. The process proved so effective that the team did not need vast spaces. The final system was primarily the Magnopus VPS, developed further from the D23 test and constantly refined and added to as the production needed. Ben Grossmann designed and oversaw the software and hardware development for the VPS under Rob Legato’s supervision, as well as overall operations.
  4. Animation was in Autodesk Maya. The animation team at MPC would animate a scene and provide that animation to the LA sound stage. These assets, both environmental and character, were logged into the virtual stage management system. MPC maintained a database so all the assets were version numbered and every piece of data related back exactly to the correct version of the asset, take, scene and edit (see the sketch after this list). This automated process was vital as assets would round trip to and from the stage, and any changes needed to be automatically logged and recorded. The MPC team needed to have 100% confidence that any on-set timings or changes were seamlessly recorded and fed back into the next animation iteration. Headed by sets supervisor Audrey Ferrera, the MPC team in LA would import animation from Andy Jones’ animation team and adjust and optimize layouts for the game engine. MPC Virtual Production Supervisor Girish Balakrishnan would oversee the workflow, convert the scenes into a format ready to move to the stage and confirm that asset tracking for MPC’s asset system was functioning.
  5. When assets or animation were ready and approved by the MPC team, they would move to the MPC ‘Dungeon Master’ in the machine room of the LA sound stage. The MPC bridge between their work and the VPS on the sound stage was nicknamed the ‘Dungeon Master’ by Jon Favreau. The Director left two six-sided D&D dice on the table one day, next to the computer, and called the station the Dungeon Master, as this computer decided the ‘map’ that all the team would ‘play on today’. Virtual Production Lead John Brennan and Virtual Production Producer AJ Sciutto would get a scene hand-off from Balakrishnan. Once they confirmed it was ready to shoot, they would push the data to the computers on the stage and confirm that everything was ready. From that point forward the shoot day could begin.
  6. The creative team would then film that animation sequence with VR cameras in the VPS. Key to this process was that multiple people could join the same VR session and see each other. The various pieces of traditional camera equipment were geared and wired into this master setup. For example, a dolly pushed in the real world would move the virtual camera, matching it exactly. The VR world facilitated a skydome and cinema-style lights. For example, if the team was filming up on Pride Rock and someone ‘added’ a Redhead (500W Tungsten light), then a virtual light (looking like a Redhead) would appear and a virtual C-stand would extend down to the ground (no matter how far that was). The director could walk over (in VR) to the master camera and ‘tear off’ a copy of the VR video monitor from the top of the virtual camera. He could then walk to any spot he liked and build his own video village. This could be ‘miles’ away from the action, but of course, on the sound stage he was physically just a few feet away from the main crew. Faris Hermiz from the actual Art Department (not the virtual art department) would often be in VR during the shots as a ‘set decorator’, preserving continuity and making sure that the set was used as James Chinlund had designed it.
  7. The view from the Editorial section back to the main stage. The First AD, Dave Venghaus, would orchestrate the shooting day’s work. Caleb Deschanel and Rob Legato were assisted by Michael Legato, Key Grip Kim Heath, and various grips and crane operators. In VR they were helped by the Magnopus VPS Operations team of John Brennan, Fernando Rabelo, Mark Allen, and often engineers Guillermo Quesada, Michelle Shyr and Vivek Reddy. Once a scene was shot, it could be immediately reviewed on the editorial machines at the back of the stage. Every aspect of each take was recorded as individual channels, referenced with the day and time of the take, the take number and the relevant asset register of character and environment version numbers.
  8. All the recording was done in the VPS on stage. When the first AD would say “cut”, the VPS machines would collect their local recordings of all the changes and animations and send them back to the MPC database PC so they could confirm they had captured everything. If MPC wasn’t on the stage (e.g. during some reshoots or pickups, or re-camera-ing after principal photography), the VPS had all the same functions built in and could send the same packages back to MPC. Either way, complete shots were always returned to MPC.
  9. For the animators at MPC, the first thing they could choose to do was ‘enter’ their copy of the virtual set in VR and have a look at what was shot and how the team approached the scene. This step is theoretically unnecessary, but as most animators would agree, it is really advantageous to just have a look around on set as an omnipresent observer and get a feel for how the creative team was approaching each scene. It was also free to do, both in terms of setup and data wrangling.
  10. When the animation, lighting and fur sims were finalized, the LA creative team got one last chance to check the lensing of any shot. Again this step may seem redundant, but it allowed for the possibility that, with the various animals’ fur or secondary motion, such as a tail swipe, a slightly different blocking or framing might improve the shot.
  11. Once whole scenes were done, the team could also preview in VR what any scene might look like in stereo. Normally it is impossible to visually replicate an IMAX experience, as any monitor will always be closer and far smaller than an IMAX screen, relative to the fixed distance between someone’s actual eyes. But with the VR system, the team could simulate watching the material in a virtual IMAX theatre and satisfy themselves that the stereo convergence was correct. (Many of this team had previously worked on the Oscar-winning, natively stereo film Hugo by director Martin Scorsese.)
  12. MPC rendered the final imagery in RenderMan. For non-stereo reviews, there were two review theatres built at the LA sound stage so the shots could be reviewed in a standard Dailies environment.
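
As referenced in step 4, the glue holding this loop together was metadata: every take had to point unambiguously at the exact asset versions and recorded channels it used. The Python sketch below shows one plausible shape for such a per-take record; the field names and values are invented for illustration and do not come from MPC’s actual asset system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TakeRecord:
    """Illustrative shape of the per-take metadata described in steps 4 and 7:
    every recorded channel ties back to exact asset versions so a take can be
    round-tripped to MPC without ambiguity. Field names are assumptions."""
    scene: str
    take_number: int
    recorded_at: datetime
    asset_versions: dict = field(default_factory=dict)   # asset name -> version
    channels: dict = field(default_factory=dict)         # channel -> keyframe samples


record = TakeRecord(
    scene="pride_rock_patrol",
    take_number=7,
    recorded_at=datetime(2017, 9, 14, 10, 32),
    asset_versions={"simba_adult": 42, "pride_rock_env": 9},
    channels={"camera.dolly": [(0.0, 0.0), (4.0, 3.2)]},
)
print(record.scene, "take", record.take_number, "->", sorted(record.asset_versions))
```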
THE LION KING – Featuring the voices of Beyoncé Knowles-Carter as Nala and Donald Glover as Simba,

Hiding Computers

Magnopus engineered The Lion King sound stage to also look and feel very different from the earlier Jungle Book stage. Grossmann jokes that when he used to walk onto The Jungle Book set, “it felt like you were walking into a computer factory with a stage shoved in the back corner”. “There were tents full of people on computers and rows of desks with computers all over them. All of those people were doing really important stuff and essential to the process, but there’s something about walking onto a movie stage. There is a reason why art galleries have art on the walls and very little else in the gallery – it is because you don’t want to distract your attention from the thing that you’re supposed to be focusing on”. On The Lion King, there was no imposing presence of technology on the stage.

While in many respects the new film was vastly more complex than The Jungle Book, when the creatives walked onto the set, everything was simple and easy to set up, with most of the 20 VPS computers not even on display. “We certainly had tons of computers on The Lion King, but they were workstations that we pushed up against the walls on the outside of the stage. There weren’t people sitting at them doing stuff”. The stage was carefully staffed so there were not loads of people just sitting in front of computers. In addition to the core creative team (Director, DoP, VFX supervisor, first AD), the camera department (focus puller, etc.) and grips, The Lion King stage had just an editor, a representative from MPC who also handled asset control, and very few other people.

There are lessons (and even surprises) from this approach

Quality playback matters

Ben Grossmann noticed that the better the quality of the animation and the playback that the crew was seeing on set, the more engaged the crew were in the detail of their own work. It mattered that the animation from MPC was not rough, blocky, old-style previz, but rather refined and articulated animation. Some of The Jungle Book’s virtual production attempts had been with expressionless ‘stand-ins’. Grossmann, as an observer on set, commented that “the better things looked, the more seriously the crew considered their work when filming them.” He explained that “if we were shooting a scene file that had characters that looked good, something that someone had obviously had time to put a lot of care into, then there was a heightened sense of tension on the set. The level of engagement of the film crew always went up and people took it more seriously when the imagery was more refined. I feel like we did our best work when the scenes looked good”. This is why the production bothered to produce the material to the level they did. In theory, lighting or other things could have been much simpler, but the lower the quality of the footage that the team was looking at in VR, the “lower the quality of the engagement and connection that the film crew had to it”, he adds.

Need for Second Unit

One of the interesting aspects is that this project reduced the need for a second unit. The virtual production stage team in LA did not have great time pressures or schedule issues. Moving locations took only seconds. If a location needed to be revisited for a pickup shot, this could be done in minutes. “Shooting second unit at the same time as the main unit was a rare thing. We rarely had to, but if we wanted to, you could easily, because you could technically run three or four shoots that are totally independent, running completely different scenes, in that same stage at once. But this was never really needed as we typically shot very quickly,” Grossmann recalls, adding extraordinarily, “Caleb (the DoP) could shoot 110 setups in a day without going into overtime”.

Multi-track

This style of filmmaking, while designed for traditional filmmaking collaboration, also allows for a new form of multi-track filmmaking. For example, if there was a complex shot and just one person, perhaps the focus puller, was slightly off in their timing, it was possible to lock in everyone else (camera operation, dolly action, etc.) and just play the whole shot back while redoing just that one ‘track’ of focus pulling. It was possible to play everything at half speed to aid in pulling off an otherwise nearly impossible refocus, or at any other manner of playback speeds. This was done rarely, and when it was chosen as an option it was never decided upon lightly. “What’s funny is that we would take those times very seriously,” says Grossmann. “…And then we would comfort ourselves by acknowledging that all cinema is based on deception and we’re not documentary filmmakers!” he knowingly recounts. A production in the future could go even further (not that the Lion King team did this): the DoP could choose to do every role himself and just lay down the shot in multiple passes, first recording the camera move, then laying down the framing/camera operation and then the focus pull, one ‘track’ at a time. This is analogous to modern multi-track audio recording.
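
To make the ‘multi-track’ idea concrete, think of a take as a set of recorded channels where any single channel can be re-performed against playback of the locked ones. The Python sketch below is a toy version of that workflow; the class and channel names are invented and the real VPS was far richer than this.

```python
class MultiTrackTake:
    """Sketch of the 'multi-track' idea: keep every recorded channel, then
    re-record a single channel (e.g. focus) against playback of the rest.
    Purely illustrative; names are invented."""
    def __init__(self):
        self.tracks = {}   # channel name -> [(time, value), ...]

    def record(self, name, samples):
        self.tracks[name] = list(samples)

    def rerecord(self, name, perform):
        # Capture a fresh pass of one channel against the timestamps of the
        # locked tracks. (On stage this playback could also run at half speed,
        # with the new samples mapped back to take time afterwards.)
        locked_times = sorted({t for trk in self.tracks.values() for t, _ in trk})
        self.tracks[name] = [(t, perform(t)) for t in locked_times]


take = MultiTrackTake()
take.record("camera.pan", [(0.0, 0.0), (1.0, 20.0), (2.0, 40.0)])
take.record("focus.distance", [(0.0, 3.0), (1.0, 3.0), (2.0, 3.0)])   # missed pull
take.rerecord("focus.distance", perform=lambda t: 3.0 + 1.5 * t)       # redo just focus
print(take.tracks["focus.distance"])
```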

Reactive Virtual Cinematography

The opening shot is a very good example of how the filmmaking process lent itself to a form of natural, reactive digital cinematography. The film opens with a small mouse scampering through the long grass and along rocks and logs. The virtual camera operators were ‘filming’ the mouse running through the grass; they did not model the camera movement in a way that feels preplanned. Thus this opening sequence has a very live-action feeling, as the camera is a fraction behind in its framing in a way that feels completely natural. This respect for the traditional craft of filmmaking, dating back to earlier analog times, was key to making the audience read the highly realistic animation as live-action footage.

Rob Legato (left) discussing a shot on the LA sound stage with DoP Caleb Deschanel (centre)

Losing the Director

Crane shot in the virtual world

While the digital tools were modeled to capture the limitations of their real-world equivalents, they were also highly flexible. For example, a techno crane could be mounted on top of a dolly, on top of a cliff, in just minutes.

It was also possible to travel vast distances in the virtual world, and so the team quickly added a special feature in the VR menus that allowed the crew to immediately teleport to wherever the director was. The crew was always able to hear Jon Favreau talking to them, as in reality they were only a few feet away from him, but especially during virtual scouting of locations in VR he might end up virtual miles away.

Sometimes, while shots were being set up, Jon Favreau would “be sitting there in VR and there’d be these little rocks and he would pick up the rocks off the ground and he would make like one of those little sculptures of balanced rocks. He’d just start stacking things up and playing with bushes and trees, and because they were out of the shot it did not matter,” explained Grossmann. “But then at some point in the movie, we might be filming that area. At which point the art department was suddenly confused: ‘what the heck is going on with those rocks over there? Somebody made a snowman out of rocks’! Only to then find out, ‘oh no, Jon was sitting up there and he made a snowman out of rocks’”.

Puppets on set

While not used on The Lion King, the Magnopus team also had an option in the VPS where anyone could ‘be’ any character. Someone could ride along ‘on them’ or ‘be them’, in which case the creature would walk with its own walk cycle but move wherever the individual moved in VR; additionally, wherever the real person looked, the animal would also look in that direction. The idea was to allow anyone to mime out an action they might want from a lion without having to ‘control’ the rigged character with traditional tools. “We made it so that you could just walk around on the stage and your center of mass would drive the animal’s center of mass. And if the animal was walking over terrain, the system would automatically conform to the terrain even though you were walking on a flat stage in the real world,” explained Grossmann.
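
Grossmann’s description translates to a simple mapping: stage position drives the creature’s root, the virtual terrain supplies the height, and gaze drives the head aim. The Python function below sketches that mapping under those assumptions; none of the names or values come from the actual VPS.

```python
def drive_character(person_xy, terrain_height, look_direction):
    """Sketch of the 'be the animal' option: the performer's position on the
    flat stage drives the creature's centre of mass, the virtual terrain
    supplies its height, and the creature's head follows the performer's gaze.
    terrain_height is any callable (x, y) -> z; all names are illustrative."""
    x, y = person_xy
    return {
        "root_position": (x, y, terrain_height(x, y)),  # conform to the terrain
        "head_aim": look_direction,                     # look where the person looks
        "locomotion": "walk_cycle",                     # plays its own walk cycle
    }


# Flat stage in reality, but the virtual ground slopes upward toward Pride Rock.
slope = lambda x, y: 0.1 * x
print(drive_character((12.0, 3.0), slope, (0.0, 1.0, 0.0)))
```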

DoP Caleb Deschanel (left)

Not all the crew operated in VR all the time. While some of the crew could be working in VR, others would be seeing the ‘video’ split on monitors around the stage, able to do their jobs without the need to wear VR headgear and with both hands free to manipulate the film gear.

Stage Design and adding a TV Station

The basic stage was just an empty box when the team first moved in. First, the team soundproofed the facility ready for the BlackBox theatre. “We really trusted Kim Heath, the key grip; he just did a mind-blowingly phenomenal job of rigging that space with truss and old-fashioned ropes and pulleys. In the stage, you could swing down from the ceiling arms that had Valve lighthouse trackers on them for each of the different volumes”, explains Grossmann. The space needed to be divisible into different volumes for the VR gear to work. Each space had to have isolated infrared, with the walls painted matte black so they would not reflect any of the infrared beams. Infrared leakage would screw up the tracking if too much infrared light overlapped between different volumes. “Sometimes we’d have to use night vision goggles and fog up the stage, just to see where the infrared bleed was coming from”, Grossmann recalls. With the smoke and the special goggles, he could actually see the bleed: “That’s basically what night vision goggles allow. So you can imagine there were days when we had effectively a bunch of ‘black ops’ VFX people in there trying to make sure your virtual production tracking was on point!”.

The stage was then fitted out with one of the first OptiTrack active tracking systems. The OptiTrack Active Tracking solution allows for synchronized tracking of active LED markers. It mainly consists of a Base Station and a set of active markers. The active markers can either be driven via a Tag and/or mounted on an active puck, which can act as a single rigid body. A Tag receives RF signals from the Base Station and correspondingly synchronizes the illumination of the connected active LED markers. The active markers can never be mislabeled; each prop, controller or piece of camera gear gets a unique ID, so once set up, on any given day the team could “just turn things on and it would just work and we’d know what each thing on the stage was”, explains Grossmann, referring to all the cranes, tracks and items such as the other camera department gear.
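
The “it just works” behaviour comes from the fact that each active marker carries a persistent, unique ID, so identifying a piece of gear is just a lookup. The sketch below illustrates that lookup in Python; the IDs and prop names are made up, and this is not OptiTrack’s API.

```python
class StagePropRegistry:
    """Illustrative mapping from active-marker IDs to stage gear. Because each
    active marker carries a fixed, unique ID, a tracked object identifies
    itself the moment it is switched on. IDs and prop names are invented and
    this is not OptiTrack's API."""
    def __init__(self):
        self.by_marker_id = {}

    def register(self, marker_id, prop_name):
        self.by_marker_id[marker_id] = prop_name

    def identify(self, marker_id):
        return self.by_marker_id.get(marker_id, "unknown device")


rig = StagePropRegistry()
rig.register(0x0A01, "dolly A camera head")
rig.register(0x0A02, "techno crane remote")
print(rig.identify(0x0A02))   # -> techno crane remote
```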

All the computers were provided by HP with NVIDIA graphics cards. There were some custom-built machines the team made for unique or special uses. “We’d make them double water cooled and all that stuff because we were constantly trying to improve image quality. We were over cranking our computers and in some cases, we overclocked them by an additional two gigahertz!  Every once in a while you’d melt a computer or you’d melt a processor because you have a heavy scene and it just couldn’t take it anymore and it would smoke” he humorously recalls.

Blackmagic Design provided a large amount of the video gear the team needed for reference cameras and switchers for the editorial team. Grossmann points out that in addition to the data paths on set, the team had video playback and video assist with monitors everywhere on the stage. “Aside from having this digital network, we basically needed to build an entire video network as though this was a live broadcast television studio”. This was because every computer on the stage was a view into the world that the team wanted to record. “We ended up building a television-style control room. I used to work in broadcast television. So when we were designing the stage, I designed a broadcast control room on one end so that we could put all of the video equipment and all of the control panels in the broadcast control room”. This meant installing switchers, routers, color correctors and video recording decks. In order to do that, “we really needed Blackmagic Design’s equipment. We were really stressed out about that when we were trying to budget and plan the whole thing out, and Blackmagic simply said ‘you go worry about doing the hard stuff and we’ll worry about all the equipment’. They did just that, and we were able to build some pretty crazy stuff that worked brilliantly,” Grossmann comments. “Generally speaking, the big people who really kind of helped us out were NVIDIA, Hewlett Packard, OptiTrack and Blackmagic Design”.

The Virtual Production team on MPC’s animation

Ben Grossmann, an Oscar winner himself, was incredibly impressed with the work from MPC. “I think that the work that MPC has done on this film has just moved the bar forward for the whole industry. Not just in terms of the aesthetic quality, – I’ve never seen anything rendered to this quality before” he explains. “But also the nuances of the animation. They really dug deep on every single performance in this movie and pushed it further than I think anyone before. I was humbled because initially I only saw the virtual production most of the time. I didn’t sit in on the reviews of the visual effects. So when we first saw the final material come out it was shocking, … it was amazing.” Magnopus actually had professional researchers who saw some of the final imagery out of context and assumed that it was new reference footage that had somehow been found.

“MPC blew my mind with the quality of the work that they did. And I would like to think that one of our ambitions from the virtual production team wasn’t just for the filmmakers, it was for the visual effects artists,” adds Grossmann. “I think it was great that we gave freedom back to the animators – to avoid confusion, remove a lack of clarity and avoid endless re-work – because some of the time you can wander the desert in search of what a shot is ‘supposed to be’… and you want instead to have the time to produce the highest quality possible… and I feel like we contributed to that”.

MPC

The normal turnover on a film would involve a file from editorial to the animation teams with just the plate photography for the selected shot and its metadata. On The Lion King, MPC would get a full turnover package. This would contain the previous reference material that MPC themselves would have provided, including any previz passes. It would also have the references for the various sets and the reference cameras. It would contain a lighting package and all the on-set data, including a VR Unity setup.

Pipeline

“Stage one for us at MPC,” explains Oliver Winwood, CG Supervisor, “was to load the assets and ingest those back into our pipeline. We’d ingest the lights and cameras and then do a basic render through RenderMan.” This basic render would be delivered back to the stage. “This was like a ‘version 0.0’ if you like, and this would verify everything was correctly loaded. This gave us a very good idea of lighting, direction, framing and mood”, Winwood adds. Winwood himself would also always fire up the VR version of the assets and just ‘visit’ the set to get a feeling for the whole virtual production. The turnover material would have all the output reference cameras and the edit itself, but he still liked to “just get a sense of the layout and how all the elements in the location worked with each other”. MPC had a smaller but mirrored version of the LA sound stage permanently set up in their offices. The first scene the team did was the Elephant Graveyard sequence.
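
The ‘version 0.0’ step is essentially an ingest-and-verify round trip: pull the cameras, lights and asset versions from the stage package, produce a quick check pass, and send it back so both sides can confirm nothing was lost in translation. The Python sketch below outlines that flow under assumed package fields; it is not MPC’s pipeline code, and the render is reduced to a placeholder file name.

```python
def ingest_turnover(package):
    """Hedged sketch of the 'version 0.0' round trip: pull cameras, lights and
    asset versions from the stage package, produce a quick verification pass,
    and hand back something the stage can check. The package layout is an
    assumption, and the render is reduced to a placeholder file name."""
    scene = {
        "cameras": package["cameras"],
        "lights": package["lights"],
        "assets": {name: f"v{ver:03d}" for name, ver in package["asset_versions"].items()},
    }
    preview = f"{package['shot']}_v000.exr"   # placeholder for a quick RenderMan pass
    return scene, preview


scene, preview = ingest_turnover({
    "shot": "elephant_graveyard_0140",
    "cameras": ["cam_main"],
    "lights": ["sun", "bounce_01"],
    "asset_versions": {"scar": 17, "graveyard_env": 5},
})
print(preview, "->", sorted(scene["assets"].items()))
```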

The MPC team could then visit the set of the Elephant Graveyard in their own offices “to get a really good sense of the kinds of distances, and what we were putting the characters into, and the kind of scale of everything. It was generally a really good tool for us”. He goes on to add that, in his opinion, virtual production isn’t just useful for the actual shoot; it is also something that “we can revisit at any point and walk around it. It was really quite cool and very useful”.

A lot of the camera work was used exactly as shot, but a significant amount also had to be changed for animation adjustments, which required minor layout tweaks. If the required camera work was more than just a modest adjustment, then the material would be immediately exported back to the main stage in LA for new camera work.

Once the material was ingested at MPC, the team would very much stay in Autodesk Maya for all their animation work. Some members of the Layout team did use Unity to do an additional check on the environment work, but only when assets were sent back to the LA sound stage would Unity factor in again for MPC.

Florence Kasumba, Eric André and Keegan-Michael Key are the hyenas, and Chiwetel Ejiofor played Scar.

Adam Valdez, in his role as the Visual Effects Supervisor, and some other key MPC staff, such as members of the Environment team, were in LA, but most of the MPC team was in London. “Even the previz team was based at MPC in London, yet the system of asset management worked extremely well,” Winwood recalls.

One of the most complex animation scenes was the stampede. The shot had a huge number of characters, but the actual set was also very complex. All the cliffs and rocks were modeled, and even with instancing, the 3D scene was extremely demanding. A close contender for the most complex to render was the Cloud Forest scenes, due to the vast amount of organic detail. “The more we were trying to fit in a space at any one time would always push the render limits. Actually, the Cloud Forest was probably our heaviest sequence, now I think about it. You’ve got an environment plate that covers hundreds of plants, trees, grass, and vegetation. Definitely those would have to be our most complex shots to render!” Winwood estimates. Many of the scenes had their own complex challenges. The bugs sequence, where our heroes dine, was complex: “You’re looking at really highly detailed assets which you are actually trying to match all these little feet interactions on. There’s a lot of passes and you are always trying to make them collide with each other,” he explains. Perhaps the most difficult creative shot was the dung ball sequence with the tuft of hair.

The hair sequence is elaborate and covers vast sets that are otherwise not used. The tuft of hair needed to be blown and animated, interact with water simulations and rigid bodies, and always remain readable to the audience. “Even scenes we have seen before, like the desert, we now needed to go down to a macro level and see individual grains of sand. All the sand was individually simulated… and the same with the water,” he adds. “The film has plenty of waterfalls and rivers, but suddenly you are having to look at a tuft of hair that’s no more than maybe a centimeter in size and we had to fill a good chunk of the screen with the closeup water – that was particularly challenging”.

JD McCrary as Young Simba, note the complex hair

Simulation

The complexity of animating the adult lions, such as Simba, was made more difficult by the adult mane. Its volume and movement directly affect posing, readability, and performance, but fur simulations are traditionally costly. On The Lion King’s animals, all the grooms were split up by hair length, so they could be accessed separately. Winwood explains, “For example, the body fur on the main body was separate from the mane. Generally, the body fur was not simulated; we focused our simulation work on the longer fur. The body fur normally got enough movement from the animation and the underlying muscle simulation and skin simulation on top”. There was some simulation work done around the mouths of some characters, especially on Mufasa (James Earl Jones). The majority of the simulation was on the mane. The process started with the groom, using MPC’s in-house Furtility fur grooming software. “We moved to version 8 of Furtility for this film, and then from there to Houdini. This was because by going to Houdini we were not just restricted to guide curves”. In Furtility, most of the work uses guide curves, with the fur generated around those curves, but Houdini “allowed us to take as much or as little of that groom as we wanted. So for instance, on some shots we might be taking a smaller percentage,” explained Winwood. “We did some tests on certain hero shots where possibly even 40 or 50% of an input groom was actually being simulated, with the rest of the groom being mapped back on. Whereas on some more difficult shots, we would take as little as 1%, just to speed things up”.
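
The percentages Winwood mentions map to a simple idea: solve only a subset of a groom’s guide curves and let each unsolved curve follow its nearest simulated neighbour. The Python sketch below illustrates that selection and mapping in a toy one-dimensional form (integer IDs stand in for curve positions); it is not MPC’s grooming software or their Houdini setup.

```python
import random

def map_guides_to_sim(guide_ids, percent_simulated):
    """Toy 1D illustration of simulating only part of a groom: a chosen
    percentage of guide curves is solved, and every other curve follows its
    nearest simulated neighbour. Integer IDs stand in for curve positions."""
    count = max(1, int(len(guide_ids) * percent_simulated / 100.0))
    simulated = set(random.sample(guide_ids, count))
    mapping = {}
    for gid in guide_ids:
        if gid in simulated:
            mapping[gid] = gid                                          # driven by the solver
        else:
            mapping[gid] = min(simulated, key=lambda s: abs(s - gid))   # follows a neighbour
    return mapping


guides = list(range(100))                        # stand-ins for guide curves
mapping = map_guides_to_sim(guides, 40)
print(sum(src == dst for src, dst in mapping.items()), "of 100 curves actually simulated")
```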

The mane simulations went beyond secondary motion. For example, when Mufasa is sitting on Pride Rock in the Patrol sequence, the team worked hard to simulate the wind blowing through his mane. “We did a lot of work on getting wind settings correct, to get the correct amount of occlusion from the hair and reaction. These were very heavy simulations; for some of the long shots, we could have the system simulating for the best part of a day at a time”, Winwood recalls.

Rigging

A good deal of work went into improving the existing rigging systems and making the rigs faster for the animators to use, while also expanding the amount that could be previewed at a reasonable playback speed. A lot of time was spent looking at footage shot in Kenya in order to build rigs that highlighted the animals’ mechanics. For example, the team focused on what happens to the skin and fur when a lion retracts its claws. Another example of these animal nuances involved Zazu the hornbill. MPC used the puffing of his feathers to emphasize certain words and expressions. The rig puppet had to have a representation of that puffing, which flowed as closely as possible through to MPC’s feather system so the same puffing was seen in the render of the final character.

As for muscles and skin, the skin-sliding set-up was more sophisticated than MPC had ever previously used and allowed for complex movement across the entire muscular structure. The muscle simulation tools also had even greater connectivity with each character’s skeleton, resulting in a more realistic and anatomical result. “Each muscle would be simulated automatically when the animation was baked out, while also giving the Techanim team the ability to go in and make adjustments to the simulation if needed,” explained Winwood.

Animation

While the animals were all hand-animated, the flocking and herding animation was handled by MPC’s ALICE software. ALICE stands for Artificial Life Crowd Engine. It is MPC’s in-house crowd software, created originally for 10,000 BC in 2008. ALICE has been steadily and continuously updated and allows artists to manage herds or crowds, with customized scripting for large groups of agents. For some time it has been one of MPC’s flagship in-house software tools, and the team have been slowly transitioning ALICE to Houdini.
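
ALICE itself is proprietary, but the kind of per-agent logic a crowd engine evaluates can be pictured with a generic boids-style update, in which each agent steers by its neighbours. The Python below is only that generic sketch, with made-up coefficients; it says nothing about how ALICE actually works.

```python
import random

def flock_step(agents, dt=0.04, neighbour_radius=5.0):
    """Generic boids-style update (cohesion plus separation). This is only a
    sketch of the kind of per-agent rule a crowd engine evaluates; it is not
    ALICE, whose internals are proprietary."""
    new_agents = []
    for x, y, vx, vy in agents:
        near = [(ax, ay) for ax, ay, _, _ in agents
                if 0 < (ax - x) ** 2 + (ay - y) ** 2 < neighbour_radius ** 2]
        if near:
            cx = sum(a for a, _ in near) / len(near)   # neighbourhood centre
            cy = sum(b for _, b in near) / len(near)
            vx += 0.1 * (cx - x) + 0.05 * sum(x - a for a, _ in near)  # cohesion + separation
            vy += 0.1 * (cy - y) + 0.05 * sum(y - b for _, b in near)
        new_agents.append((x + vx * dt, y + vy * dt, vx, vy))
    return new_agents


herd = [(random.uniform(0, 20), random.uniform(0, 20), 1.0, 0.0) for _ in range(50)]
for _ in range(100):
    herd = flock_step(herd)
print("first agent now near", tuple(round(c, 1) for c in herd[0][:2]))
```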

Final Images

The use of physically plausible lighting via RenderMan, set up in Katana, added to the validity of the imagery at the end of the virtual production approach. Rob Legato commented, “It was ‘real light’ on a properly groomed animal; as soon as you see the real light, via a ray-traced simulation, it just comes to life. It just looks like the real thing,” he explains. “We were still blown away every time we saw it, because our choices were correct as cameramen, and with all the various things MPC did to make something look good… when you finally see it with every different hair structure built on the animal, catching light the way real fur catches light – even for me, it becomes very impressive”.

Genesis

MPC has now developed its own on-set production tool called Genesis, which was first demoed at SIGGRAPH 2018.
