Why The Campanile Movie, 25 years on, is still so important.

In 1997, Paul Debevec presented The Campanile Movie at the SIGGRAPH Electronic Theater. The short film was made as the capstone to Paul’s Ph.D. work at the University of California, Berkeley, and in an entertaining way it unveiled a landmark new approach to generating realistic computer graphics. The film shows a young Debevec walking up to the University’s Sather bell tower in the center of the Berkeley campus. The camera then engagingly flies around the four sides of the Campanile, swooping in ways that were impossible to film in 1997.

Cut to 25 years later, and it is fitting that Dr. Paul Debevec has been awarded the Television Academy’s Charles F. Jenkins Lifetime Achievement Award, which honors a living individual whose ongoing contributions have significantly affected the state of television technology and engineering.

Below is the official Television Academy video honoring Paul’s work with the Lifetime Achievement Emmy.


Dr. Paul Debevec today, with the original Sony VX-1000 camera and the real Campanile model (taken at USC)

“It’s incredibly gratifying to receive this award from the Television Academy for the things I’ve done in photogrammetry, HDRI, and Image-Based Lighting, and I’m especially happy that it recognizes how our Light Stages pioneered the technique of surrounding actors with LEDs to light them with images of virtual locations,” Paul commented. “The Light Stage 3 lighting reproduction system I presented at SIGGRAPH 2002 brought everything I’d worked on since The Campanile Movie full circle, and it’s a thrill to have this be connected to the virtual production shows that millions of people have gotten to enjoy.”

The Campanile student project and its significance

The student-made Campanile Movie immediately influenced and informed the team at Manex, where The Matrix VFX team, including industry legends such as John Gaeta and Kim Libreri, was trying to work out how to pull off the now-famous ‘bullet time’ shots that would become one of the most iconic signature visual effects of the late 90s. “When I saw Debevec’s movie, I knew that was the path,” VFX Supervisor John Gaeta told Wired in 2003. The Matrix raised the bar for action films by introducing new levels of realism into stunt work and visual effects.

Despite his ‘VFX Supervisor’ title on The Matrix and many other major award-winning films, John Gaeta comments, “I’m not an engineer, I’m a designer. Actually, my title on What Dreams May Come was ‘reality capture supervisor’. From that time, Kim (Libreri, now CTO at Epic Games) and I worked together on the ways that we can capture the world, not just simulate it.”

At the start of The Matrix’s production, the VFX team did not yet have the solutions for virtualizing everything the script needed: “we needed image-based rendering, which didn’t really exist before Paul started experimenting, and so this is how we all came together.” While huge advances in animation and rendering were happening at Pixar with RenderMan in the late 90s, the results were still far from photoreal. What marked such a turning point for the industry was how the Manex team was able to produce such incredibly realistic visual effects by focusing on capturing (or sampling) reality and then ‘screwing’ with it.

John Gaeta sees a direct line from the impactful implementation of the Wachowskis’ vision in The Matrix, built on the research from Paul Debevec’s team, to today’s volumetric capture, virtual production, holographic and ML NeRF concepts, and so many other derivatives of reality captured from the real world. While the film was about simulating reality for Neo, the VFX crew built the Wachowskis’ cyberpunk world by sampling reality and then providing a fresh and original perspective on it.

At the same SIGGRAPH in ’97, SGI showed a real-time version of The Campanile Movie on their stand. The SGI demo was also shown during the SIGGRAPH 97 paper presentation “InfiniteReality: A Real-Time Graphics System” and for several years was used internationally as an SGI InfiniteReality demo. “In fact, the first time that I met Kim (Libreri) was when my grad student George Borshukov brought me over to visit MANEX at the Alameda naval air station in early 1998,” recalls Paul. “And Kim was twirling around The Campanile model in real-time on his SGI RealityEngine2.” It is not without a sense of irony and closure, then, that 25 years later Kim Libreri worked with John Gaeta again on The Matrix Awakens: An Unreal Engine 5 Experience tech demo.

“When we saw The Campanile Movie, we realized that this was a totally different path forward in computer graphics, and that CG would no longer be the domain of purely artisanal pursuits. You could actually take science, photography, and the understanding of lighting, and start to get something from a computer that would make a human believe it was real,” recalls Kim Libreri.

That philosophy went on to be a core approach for all The Matrix movies, building on what the team learned and “being inspired by working with Paul,” he adds. “I think we owe our philosophy of how we achieved the visual effects in The Matrix, and all the pioneering that followed from it, to his original approach.”

Kim Libreri goes on to add, “Paul is very unusual in academic circles. He is quite artistic, and I think that’s another reason that he was able to inspire all these filmmakers to adopt these new techniques. He has intuition. When you’re trying to pioneer new approaches, it’s good to have an intuition of what problems would actually be useful to solve. Coupled with the fact that he loves films, and that he has a ‘good eye’. Paul’s a visual guy who always wants to make imagery that is amazing. It is a reflection on Paul in that he’s not just a scientist, he’s not just a researcher, he’s an artist.”

So how did they do it in 1997?

The research behind the Campanile Movie was realized by Paul Debevec, Camillo J. Taylor, George Borshukov, and Yizhou Yu.

In simplest terms, the shape of the Campanile tower and the surrounding landscape was triangulated from a set of still photographs. The team then generated 3D models based on this, but instead of applying UV textures to the models, they projected the photographs of the buildings themselves onto the geometry. The process pioneered view-dependent image-based rendering using projective texture mapping. The effect worked spectacularly well.
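In code, the core of projective texture mapping is just a camera projection followed by a texture lookup, with each photo weighted by how closely its viewpoint matches the novel camera. Below is a minimal Python/NumPy sketch under a simple pinhole camera assumption; the function names are illustrative and are not Facade’s actual API.

    # A minimal sketch of projective texture mapping (pinhole camera model).
    import numpy as np

    def project_to_photo(vertices, K, R, t):
        """Project Nx3 world-space vertices into a photograph.
        K: 3x3 intrinsics; R, t: the photo camera's rotation and translation,
        as recovered by photogrammetry. Returns Nx2 pixel coordinates, i.e.
        the texture lookup for each vertex."""
        cam = R @ vertices.T + t[:, None]   # world -> camera space
        pix = K @ cam                       # camera -> image plane
        return (pix[:2] / pix[2]).T         # perspective divide

    def blend_weight(photo_dir, render_dir, sharpness=8):
        """View-dependent weighting: favor photos whose viewing direction
        best matches the novel (rendered) camera's direction."""
        cos_angle = np.dot(photo_dir, render_dir) / (
            np.linalg.norm(photo_dir) * np.linalg.norm(render_dir))
        return max(cos_angle, 0.0) ** sharpness

Rendering a novel view then amounts to projecting each visible surface point into the nearest photographs and blending the sampled colors with these weights.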

Given the height of the tower and lack of drones in 1997, several of the photographs used to produce the Campanile model were taken from above the Campanile using aerial kite photography by UC Berkeley Prof. Charles Benton.

Prof. Charles Benton (left) flying the kite to photograph the tower from above.

Kite photography is a risky approach at the best of times, but Prof. Benton was an expert and extremely experienced kite photographer. For this shoot, he used a 60ft custom kite with a camera that Paul could remotely control from the ground (see right). However, the system predated live video feeds, so Paul had to judge by eye whether he was pointing the camera in the correct direction.

The system worked extremely well, but “actually the shoot was complex because we had to deal with the wind dying down every so often,” recalls Debevec. “Chris (Benton) was desperately trying to fly between this building and that building, and I swear, if we’d had a video feed from that camera, I’m sure the kite probably came within 15 feet of getting stuck on the spikes at the top of the Campanile, which would have taken quite some explaining!”

Most of the photos of the campus environment were taken from the lantern at the very top of the Campanile. To take some of the images, Paul Debevec got ‘creative’: after climbing to the top of the tower, he took out a pane of glass to get an unobstructed view to shoot through. This ‘delicate’ screwdriver operation also allowed him to take a definitive selfie (see below).

Paul Debevec today believes that one of the interesting aspects of an incredible career in academic and industry research is how film narratives directly affect new technology adoption. From Berkeley, through his time as a professor at USC developing the Light Stage, to his time as a senior researcher at Google and now Netflix, he has developed new visual technology. Paul believes there is a direct correlation between spending the time to make an entertaining narrative demonstration film and how quickly the research then appears in some form in a Hollywood film. For projects that adopted the Campanile model of producing an engaging short film, he believes the adoption time is far less than for similar work published only as a technical video limited to scientific examples and tests.

Beyond the capture of the tower, George Borshukov, a Masters student at the time, wrote the custom rendering engine for the movie and assisted with the execution of the project. The rendering engine was written in C/C++ and OpenGL and ran on an SGI RealityEngine that was gifted to the lab by SGI’s Carl Korobkin, a co-author of the original projective texture mapping paper. In the fall of 1996, Prof. Jitendra Malik suggested Borshukov talk to Paul Debevec and CJ Taylor, who were already Ph.D. students of his doing exciting research. Paul Debevec already had a strong vision for the Campanile movie, and he invited Borshukov to join him for the five months it took to finish the project.

What is surprising, given the huge significance history now bestows on The Campanile Movie, is the reaction from some at SIGGRAPH 1997. While most of the reaction was positive, and a lot of people stopped the team after the screening to express admiration for the project, there was also skepticism from certain VFX professionals about whether such an approach would have any impact on an industry that, at the time, was moving in a very different technical direction. This new approach seemed to be in ‘conflict’ with the traditional CGI way of doing things. It was not clear then just how influential it would be, but it was clear that it was a unique and extremely powerful new set of techniques.

The Matrix

The story Borshukov remembers is that John Gaeta and some of his team had seen The Campanile Movie at SIGGRAPH in the Electronic Theater and immediately ‘connected the dots’ about using the approach for the backgrounds of the Bullet Time shots on the upcoming movie The Matrix, which was then in pre-production. Someone told John Gaeta that they had seen the resume of a guy (Borshukov) who had actually worked on the short, so John reached out to Kim Libreri, Nick Brooks, and Pierre Jasmin, who were on the West Coast shooting What Dreams May Come, to connect. “Kim Libreri reached out to me to schedule an interview. I went over to the Alameda Naval base. They showed me what they were working on, and I showed them The Campanile Movie, and we really hit it off!” he recalls.

The Matrix script provided a visual problem that needed to be solved. “We were working on how to cheat time and space by being able to imply a God’s eye limitless perspective,” comments John Gaeta. “We wound up creating bullet time but our focus was principally on the subject at the center of a scene and appearing to capture it as both volumetric and virtual.”

The ‘bullet time’ shots needed to solve two problems. The first was to refine the ‘temps mort’ process of filming something simultaneously from multiple cameras and turning that into one smooth move. If the cameras each fired a moment apart, then when the stills were compiled into one clip, the action would appear in super slow motion while the camera spun around the subject. The Manex team did not invent this multi-camera approach, but they did refine it enormously.

The second problem was to remove the visible cameras from behind the actors and replace the green screen with a matching spinning background. The Matrix was shot in Sydney, Australia, but it was not possible to film a sweeping 360° background that matched the camera rig and gave the directors what they wanted visually. Without a direct in-camera solution, the background of the bullet time shots would need to be CG, but critically, it needed to look completely real.

The central action

To make the move around the main actor appear smooth, the team needed to solve the physical problems presented by the actual size of the cameras. For a smooth pan around Keanu Reeves’ character Neo, the required spacing between cameras was too tight to fit real cameras. Today the concept of interpolating or virtualizing the camera is straightforward, but in 1997 it was near impossible. Kim Libreri contacted Snell & Wilcox in the UK, who had developed optical flow technology for motion-compensated standards conversion, which allowed 24fps footage to be interpolated to 30fps. The Snell & Wilcox management team in turn dumped the problem on a young Ph.D. graduate, Bill Collis, to solve. “I had no idea what the actual project was, let alone the story, but we got this letter from this strange fellow (Kim Libreri) saying ‘we hear you are the best in the world at motion estimation for standards conversion. I got some retiming. Would you be able to help us?’” recalls Collis. Not only did Kim send the frames, but he also needed to FedEx a data tape machine for Collis to use. “And I spent probably about the next six months playing with it, not having a clue what I was working on.” Collis succeeded, and in addition to becoming a lifelong friend of Kim Libreri, he went on to run The Foundry in London and engineer the NUKE deal with DD. But Collis’ motion retiming was done in near-complete isolation from the work the Manex team did to solve the backgrounds.
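To illustrate the general idea of motion-compensated retiming (this is the generic technique, not Snell & Wilcox’s proprietary algorithm), here is a minimal sketch that synthesizes an in-between frame from two stills using OpenCV’s Farneback dense optical flow:

    # A rough sketch of motion-compensated frame interpolation: estimate
    # dense optical flow between two camera stills, then warp a fractional
    # amount of the way along the flow to synthesize an in-between frame.
    import cv2
    import numpy as np

    def interpolate_frame(frame0, frame1, t=0.5):
        """Synthesize a frame a fraction t of the way from frame0 to frame1."""
        gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
        gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
        # Dense flow such that frame1(p) ~ frame0(p + flow(p)).
        flow = cv2.calcOpticalFlowFarneback(gray1, gray0, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray0.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Warping frame0 by the full flow reproduces frame1; a fractional
        # flow lands in between (t=0 gives frame0, t=1 gives frame1).
        map_x = (grid_x + t * flow[..., 0]).astype(np.float32)
        map_y = (grid_y + t * flow[..., 1]).astype(np.float32)
        return cv2.remap(frame0, map_x, map_y, cv2.INTER_LINEAR)

Production retiming also has to handle occlusions and blend warps from both neighboring frames; this single-direction warp shows only the core concept.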

Manex

Steve Demers was a technical lead at Manex during the bullet time work on The Matrix. “The ‘guts’ of the technique involves image-based rendering of a simplified geometric scene with still photographs taken from a physical set, in order to produce a ‘virtual backdrop’ in which to place the live action,” Demers comments. The research from Berkeley dealt with the two main requirements of this approach: generating the geometry in the computer, and then photo-realistically rendering the set geometry using the photographs. “The techniques and software he developed at Berkeley – such as Facade – were used to recover the pertinent geometry of sets for which the survey data was sparse or not very good (which amounted to quite a lot),” he recalls. The Matrix’s photo-realistic backgrounds were image-based renderings using the projective-texturing method developed and used in The Campanile Movie. “Basically, the locations of the still cameras used to photograph the physical set were surveyed and entered into the computer model of the set.” The pictures taken of the physical Matrix location at these respective positions were then projected onto the computer model of the set, creating a virtual backdrop that could be rendered with almost any camera move imaginable, as long as the still-photo coverage was adequate. “I do not believe that the Bullet Time sequences would have been possible without using the approach pioneered by Professor Debevec with The Campanile Movie.”

George Borshukov was the technical designer at Manex for the bullet time work on The Matrix. The Bullet Time shots used Softimage plugins that exported data from the digital pre-visualization scenes in Softimage 3D for accurately placing the 120+ real Canon SLR cameras on the set. After the completion of the physical Bullet Time photography, the team developed the approach for the image-based backgrounds as a set of mental ray shaders that performed camera projection mapping, with visibility calculations to texture the reconstructed digital sets with actual photos of the physical sets. This early work led to the more comprehensive Manex Virtual Cinematography pipeline, which was awarded a Sci-Tech award by the Academy of Motion Picture Arts and Sciences in 2001.
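Those visibility calculations follow the same logic as shadow mapping: a surface point should only receive a photo’s projected texture if nothing sits between it and that photo’s camera. A minimal sketch of the test, with illustrative names rather than the actual mental ray shader code:

    # Shadow-map style visibility test for occlusion-aware camera projection.
    def visible_from_photo(point_depth, uv, depth_map, eps=1e-2):
        """point_depth: the surface point's depth in the photo camera's space;
        uv: its (u, v) pixel coordinates in that photo;
        depth_map: 2D array of nearest-surface depths pre-rendered from the
        same camera."""
        u, v = int(round(uv[0])), int(round(uv[1]))
        h, w = depth_map.shape
        if not (0 <= u < w and 0 <= v < h):
            return False                 # the point falls outside the photo
        # Occluded if the camera already sees a nearer surface at this pixel.
        return point_depth <= depth_map[v, u] + eps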

What would it look like for real?

To celebrate the original film, Paul Debevec recently went north again to the Berkeley campus and oversaw a drone shoot, providing footage that was impossible to film 25 years ago, but which today serves as ground truth for what The Campanile Movie was trying to recreate.

A lot of developments in photogrammetry and radiance fields have happened since The Campanile Movie, which makes one wonder how one would go about making the film today. “These new scene reconstruction techniques require a lot more photos than the 20-odd views we originally loaded into Facade,” Paul explained. “Instead, we leveraged 4K video frames from a modern flyover of the campus with a DJI Mini 3 drone, shot in about the same cloudy lighting conditions as The Campanile Movie.”

Greg Downing from HyperAcuity used modern photogrammetry tools to build a per-vertex textured 3D model of the tower and the campus from the drone footage, with some manual touchups to add in a sky dome. “The model is amazingly detailed and realistic, especially of the tower, though it doesn’t have the same sharp straight edges as the original Facade model, which modeled architecture as low-polygon geometric primitives,” Paul explains.

“Jingwei Ma, a Ph.D. student at the University of Washington and a Netflix summer intern, ran several hundred frames from the drone video through three recent radiance field modeling techniques, all of which represent the scene as an opacity volume with spatially-dependent directional radiance,” explains Paul Debevec. The three approaches Ma tested were NVIDIA’s Instant-NGP, Berkeley’s Plenoxels, and Google’s Mip-NeRF 360, all of which have their code available. Each technique produced promising results, with a few shortcomings, as listed below; a sketch of the volume rendering they share follows the list.

  • Instant-NGP has trouble keeping the faraway parts of the scene in focus since it is designed for finite scenes. It also makes the tower appear transparent when seen from below the altitude captured in the drone video (as if the background were projected onto it).
  • Plenoxels does well with faraway regions, but the tower comes out blurry, perhaps due to the maximum resolution of the implementation.
  • Mip-NeRF 360 has no trouble representing the far regions, but the tower is blurry, and there are some issues extrapolating directional reflectance, as with Instant-NGP. “Jingwei is continuing to optimize the parameters of the algorithms to improve the models and they have already noted that turning off directional reflectance improves the results significantly,” he adds.
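For readers wondering what “an opacity volume with spatially-dependent directional radiance” means in practice, here is a minimal sketch of the volume rendering all three methods share. The field function is a hypothetical stand-in for a trained model; it returns a density and a view-dependent RGB color for each sample along a camera ray.

    # Numerical volume rendering of one pixel, NeRF-style: march a ray
    # through the volume, query density + directional radiance, and
    # alpha-composite front to back.
    import numpy as np

    def render_ray(field, origin, direction, near, far, n_samples=128):
        t_vals = np.linspace(near, far, n_samples)         # sample depths
        points = origin + t_vals[:, None] * direction      # 3D sample points
        sigma, rgb = field(points, direction)              # density, (n,3) radiance
        delta = np.diff(t_vals, append=t_vals[-1] + 1e10)  # segment lengths
        alpha = 1.0 - np.exp(-sigma * delta)               # per-sample opacity
        # Transmittance: how much light survives to reach each sample.
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
        weights = trans * alpha
        return (weights[:, None] * rgb).sum(axis=0)        # final pixel color

Turning off directional reflectance, as Jingwei Ma tried, amounts to making field ignore the ray direction, trading view-dependent highlights for more stable geometry.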

Jingwei Ma used COLMAP, a general-purpose Structure-from-Motion and Multi-View Stereo pipeline, to match-move the second sequence of the original Campanile animation, making it possible to render comparison results of all five techniques in the video below.
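As a rough idea of what that involves, a typical COLMAP sparse reconstruction (feature extraction, matching, and incremental mapping to recover camera poses) looks like this via the pycolmap bindings; the paths are illustrative, and the article does not specify the exact settings used:

    # A sketch of a typical COLMAP structure-from-motion run via pycolmap.
    import pycolmap

    database = "campanile.db"     # feature/match database (illustrative path)
    images = "drone_frames/"      # frames pulled from the drone video
    output = "sparse/"            # recovered cameras + sparse point cloud

    pycolmap.extract_features(database_path=database, image_path=images)
    pycolmap.match_exhaustive(database_path=database)
    reconstructions = pycolmap.incremental_mapping(
        database_path=database, image_path=images, output_path=output)
    # The recovered camera poses can then drive matching renders of the
    # photogrammetry and radiance field models for side-by-side comparison.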

The work done at Berkeley academically became a part of Paul Debevec’s Ph.D. thesis, Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach, and the real-time SGI work was presented the following year (1998) at the 9th Eurographics Workshop on Rendering in Vienna as Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping, authored by Paul Debevec, George Borshukov, and Yizhou Yu.

The Campanile Movie was created by Paul Debevec, George Borshukov, Yizhou Yu, Jason Luros, Vivian Jiang, Chris Wright, Sami Khoury, Charles Benton, Tim Hawkins, and Charles Ying, with help from Jeff Davis, Susan Marquez, Al Vera, Peter Bosselman, Camillo Taylor, Eric Paulos, Jitendra Malik, Michael Naimark, Dorrice Pyle, Russell Bayba, Lindsay Krisel, Oliver Crow, and Peter Pletcher, as well as Charlie and Thomas Benton, Linda Branagan, John Canny, Magdalene Crowley, Brett Evans, Eva Marie Finney, Lisa Sardegna, and Ellen Perry.
