In the nascent days of event staging, we focused on the essentials: making sure the speakers could be heard throughout the hall, that the slides (and later PowerPoint and video vignettes) could be seen clearly by most attendees, with a bit of IMAG (image magnification) thrown in for emphasis. These are still the foundation of any successful presentation or meeting; if the audience cannot hear or see the information, there is no point in gathering folks together. The modern presentation, though, has an additional element of drama and envelopment. One of our primary tasks is to enrapture an audience while simultaneously pinpointing their focus on the message. To achieve this we use a number of tools, including audio shaping, lighting and the newly resurgent technique of projection mapping via media servers.
Projection mapping is the process, and the art, of projecting light and imagery onto objects such as sets, cars and even buildings to create visual and dimensional effects, all without having light spill off the items. You may have seen it used in car reveals; two examples are a recent Volkswagen show and presentations projected onto buildings. Other examples spread video and presentations across multiple wide or unusually shaped projection screens, where the visuals appear to pop off the stationary object and give it a sense of motion and depth. Creating these dramatic visuals requires several projectors aligned in conjunction with one another, but alignment alone does not make for a projection mapping show.
Mapped video is not a new concept; it goes back to the early 1970s and, of course, Disney, which reportedly used 16mm projectors with specially shot imagery and physical masking techniques. Mapping is establishing a relationship between the content, the output devices and the element onto which it is to be projected. The process is, as you would imagine, fairly complex, using specialized software and techniques. In the first mapping shows, the content was made specifically to fit the object being projected on, and only from the perspective of the projector. It is a complicated and time-consuming approach that creates a great final product but leaves very little flexibility.
A second method is to use software, usually proprietary suites provided by the server manufacturer, to align, modify, warp, bend and morph the “flat” content or multiple elements to fit the destination surfaces. This method provides a good deal of flexibility but is also time-consuming, and the results are never quite a perfect fit. A third method is to build a model of the projection surface, such as a building, with UV coordinates mapped to it. Content creators build the images based on these coordinates, which can look a bit odd pre-projection to folks who do not understand the process, as the imagery looks like a flattened-out structure. This third method includes a final step of taking into account the physical projector positions in relation to the object being projected on and making any minor adjustments for lens distortion and any original site survey errors.
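At its simplest, the warp in the second method is a corner-pin: solve for the 3x3 homography that carries the four corners of the flat content frame onto the four corners of the destination surface, then push every pixel through it. A minimal sketch of that math, assuming NumPy is available; the function names are our own invention, not any vendor's API:

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve the 3x3 homography mapping four source corners (the flat
    content frame) onto four destination corners (where the content
    should land on the projection surface), via the direct linear
    transform. src and dst are lists of four (x, y) pairs."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        rhs.append(v)
    h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix the scale with h33 = 1

def warp_point(H, x, y):
    """Apply the homography to one content pixel (homogeneous divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Real mapping suites do this per output, with sub-pixel adjustment handles and edge blending layered on top, but the underlying transform is the same.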
The Media Server
Today, much of the process is handled inside media servers. For many, the term conjures up boxes in the home theater world that go by the same name and deliver instant streaming access to stored movies and music on a single screen. Aside from sharing the name and the act of providing an instant output of media, the boxes used in event staging are light years removed from their consumer cousins. Physically, media servers are just computers: fairly hefty and powerful computers, but computers all the same. The units have more powerful processing, high-end output cards and connectors, usually DVI, and specially designed software to manage the content. The power is required for larger shows, but you can play with the concepts and some techniques with low-cost software or freeware and standard off-the-shelf computers.
While the size and complexity of a show or installation will alter the final details of a system, we can discuss the basic methodologies and metaphors to get an understanding of how it all works. Media servers are not created equal. This is not to say that, for the purposes of this article, one brand or type of server is superior, or inferior, to another; these units come in a variety of setups, options and software based on market focus and system concept intent. As with all systems in a live event, it is wise to build in a backup to ensure your show can go on if the unexpected happens. Broadly, the system metaphor breaks down into two camps. The first is a single machine which, using a master-mode-only setting, gives you one output to your projectors or destinations along with a preview/programming view. The second uses multiple machines in a “cluster mode.”
As stated above, it is possible to produce a show on most media servers using just one unit, although there is a distinct advantage to splitting a show across a number of machines. Using more than one machine ensures redundancy and helps spread the workload by giving specific servers limited tasks. There is also a cost benefit to using more machines, and while it may seem counterintuitive to save by adding, it is your best method for doing so. As we discussed above, the performance specifications for what we call the display machine, that is, the computer with the processors, busses and video cards to output an HD-quality image and render effects, are quite high. By using the client-server model, one can add master, or control, machines, which manage the show flow and cue triggers via the network connection. These control machines can be far less expensive boxes, as they are relieved of the arduous tasks of generating or playing back the content and special features.
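The division of labor can be pictured with a small sketch: the control machine fires cue triggers over the network, and each display machine reacts by starting the matching clip or effect. The UDP port and the JSON wire format below are invented purely for illustration; every real server family defines its own control protocol.

```python
import json
import socket

CUE_PORT = 7000  # hypothetical port; real media servers define their own

def send_cue(cue_name, hosts, port=CUE_PORT):
    """Controller side: send a cue trigger to every display machine."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = json.dumps({"cmd": "go", "cue": cue_name}).encode()
    for host in hosts:
        sock.sendto(payload, (host, port))
    sock.close()

def serve_cues(handle, port=CUE_PORT, host="0.0.0.0"):
    """Display-machine side: block waiting for cue triggers and hand
    each one to a callback that starts the matching clip or effect."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(4096)
        msg = json.loads(data.decode())
        if msg.get("cmd") == "go":
            handle(msg["cue"])
```

Because the controller only ships a few bytes per cue, it needs none of the GPU horsepower of the display machines, which is exactly where the cost savings come from.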
The Big Picture
Media servers can seem like the main focus of a show, but as anyone involved in event staging can vouch, it takes many interconnected systems to make a show. It is not just about the outward control of projector functions, such as setup parameter calls or scrolling the units to black; the units need to be aware of the big picture as well. Video must be in sync with lighting, moving sets, audio and even the actions of the talent on stage. That is a lot of moving parts the media server must keep track of, react to and stay in lockstep with while delivering dynamic content. With this in mind, media servers have the ability to incorporate MIDI, DMX from lighting consoles, User Datagram Protocol (UDP) streams from scenic automation control systems and SMPTE timecode to sync with audio playback and other video elements. These are not jobs for the easily rattled, and the programmers who oversee the servers must have strong multidisciplinary technical depth and a natural understanding of live show flow relationships.
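The timecode chase at the heart of that sync is, underneath, simple arithmetic: convert an "HH:MM:SS:FF" stamp into an absolute frame count and seek to it. A sketch for non-drop-frame timecode (drop-frame 29.97 adds frame-skipping rules this example deliberately ignores):

```python
def timecode_to_frames(tc, fps=30):
    """Convert a non-drop-frame SMPTE timecode string "HH:MM:SS:FF"
    into an absolute frame count, the number a server seeks to when
    chasing audio playback."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames, fps=30):
    """Inverse: render an absolute frame count back as HH:MM:SS:FF."""
    ff = frames % fps
    ss = frames // fps  # whole seconds
    return "%02d:%02d:%02d:%02d" % (ss // 3600, ss // 60 % 60, ss % 60, ff)
```

Shows conventionally start program timecode at 01:00:00:00 rather than zero, which at 30 fps is frame 108,000.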
Content management also requires a decent understanding of how media servers differ from playback devices, which focus only on playing a single file at any given time. Think of a playback device as a Beta tape deck whose only task is to play the unaltered video from beginning to end, and only that. Media servers have many more tasks to be concerned with than the straight playback of video. These devices manage not just the main video feed, the incoming control commands and dozens of supporting files at once; they are also rendering images on the fly. The stress on the CPU is not insignificant.
The reason this all matters is that expecting the server to handle uncompressed, ultra-high-bitrate ProRes files is akin to asking a carpenter to frame a floor, run the electrical wires and sweat the plumbing all at the same moment; some may be game enough to attempt it, but we know the quality will suffer. The answer? Compression, my friend, compression. The prosumer trade magazines may wail that compression is the root of all video evils, leading to harsher, lower-quality video, like watching movies on an iPod nano, but here it is a necessity. It is important to get show content to the programmers as soon as possible so it can be transcoded into a digestible format like WMV, MPEG or H.264. It is, of course, best practice to hand the event production folks the highest quality at the start, but understand that the limitations of the hardware, and frankly physics, will require it to be altered, squeezed and even have its frame size modified.
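The numbers make the case on their own. A quick back-of-the-envelope calculation of raw video bandwidth shows why even a hefty server appreciates a compressed file:

```python
def uncompressed_mbps(width, height, fps=30, bytes_per_pixel=3):
    """Raw video bandwidth in megabits per second: every pixel of
    every frame carried at full 8-bit RGB precision, no compression."""
    return width * height * bytes_per_pixel * 8 * fps / 1_000_000

# A single 1080p30 feed is roughly 1,493 Mbps raw, i.e. about 1.5 Gbps,
# versus the tens of Mbps a typical H.264 show file needs.
raw = uncompressed_mbps(1920, 1080)
```

Multiply that by several simultaneous layers plus on-the-fly effects and the argument for transcoding early is settled.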
If you are considering taking that next step toward enveloping and enrapturing your audience with projection mapping, know that it is complicated and process-driven, so allow ample time and budget to get it right. The results will be remembered by those watching long after the show is done and the gear is packed away.