As I work more and more on realtime animation and video mapping projects, I keep running into the same problem: the bandwidth needed for video in this kind of application.
The space taken up by a video on a hard drive is an economic problem, not a technical one. If the project needs 10 synchronized videos playing at the same time and sent to 10 different projectors, for instance to project an animation onto a building, you can buy 10 more hard drives and include them in the project budget. So you can encode your 10 videos in RAW (RGB or YUV), using containers like AVI, QuickTime, MXF, MPG or MKV and codecs like YUV422 or QuickTime Animation. The heavy weight of each file is not, in itself, a technical problem.
The moving truck and the cardboard boxes
Depending on the codec we use, there are two main problems:
Reading a FullHD video of 3.5 gigabytes with a duration of about 1 minute tells us that the bitrate in Mbit/s is going to be very high. It's like trying to empty a moving truck in less than 5 seconds when you're alone in the truck. Your friends, standing at the back of the truck, are going to wait a long time for the cardboard boxes. For your hard drive, it's exactly the same (even for an SSD). Most of the time, the components behind the drive (processor, memory, etc.) are waiting for data. In that case, the animation freezes: not because the processor is overwhelmed, but because it is waiting for data to arrive.
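To put numbers on the truck analogy, here is a quick back-of-the-envelope calculation in Python, using the file size and duration from the example above:

```python
# Back-of-the-envelope bitrate for the example above:
# a 3.5 GB FullHD file lasting about 60 seconds.
size_bytes = 3.5 * 1024**3          # 3.5 gigabytes
duration_s = 60                     # about 1 minute

bitrate_mbits = size_bytes * 8 / duration_s / 1e6
print(f"{bitrate_mbits:.0f} Mbit/s")  # 501 Mbit/s

# Compare with raw 1920x1080 @ 25 fps, 8-bit RGB:
raw_bytes_per_s = 1920 * 1080 * 3 * 25
print(f"{raw_bytes_per_s * 8 / 1e6:.0f} Mbit/s")  # 1244 Mbit/s
```

Half a gigabit per second, sustained, for a single file. Now multiply by the 10 synchronized videos of our building projection.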
If you build a RAID 0+1, RAID 5 or RAID 6 array, you are no longer alone in the truck, and you can throw the boxes out in less than 5 seconds, like rockets. Now the problem is that your friends waiting at the back can't carry the boxes into the apartment at the same speed (because of the stairs, etc.). The processor, who is one of those friends, has a lot of other things to do while climbing the stairs. It has to manage the whole computer and ask the hard drive to send data to the RAM through the memory bus. And the RAM is not necessarily fast enough to handle all of these boxes per second before sending them on to the graphics card's memory, and so on.
So, even if you have a big budget, you can't build this ridiculous workflow. We are now facing a technical problem, not an economic one. Even with an insanely powerful computer, managing the bitrate remains a complex problem.
Reducing the bitrate for the disk, but not for the CPU
Of course, you can reduce the bitrate of the video source. Since the birth of MPEG, exploiting temporal redundancy has been THE key to reducing the size of a file on the hard drive. We send fewer boxes, but they are packed in a cleverer way.
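As a toy illustration of the idea (nothing like real MPEG, which adds motion estimation, transforms and entropy coding), here is a Python sketch that stores the first frame whole and then only the pixels that changed:

```python
# Toy temporal redundancy: store the first frame whole (a keyframe),
# then, for each following frame, only (index, new_value) pairs for
# the pixels that differ from the previous frame.

def encode(frames):
    deltas = [frames[0]]  # keyframe, stored whole
    for prev, cur in zip(frames, frames[1:]):
        deltas.append([(i, c) for i, (p, c) in enumerate(zip(prev, cur)) if p != c])
    return deltas

def decode(deltas):
    frames = [list(deltas[0])]
    for delta in deltas[1:]:
        frame = list(frames[-1])
        for i, value in delta:
            frame[i] = value
        frames.append(frame)
    return frames

frames = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 9, 9, 0]]
encoded = encode(frames)
assert decode(encoded) == frames  # lossless round trip
```

Only the changed pixels travel for frames 2 and 3; the cost is that decoding frame N now requires having decoded frame N-1 first.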
But the counterpart is that the unboxing takes more time: after pushing the data into memory, you have to decompress it before sending it to the graphics card. The CPU has to do this decompression alone, and it takes a lot of time; the CPU has so many other things to do at the same time.
It would be interesting to ask somebody else (a bunch of friends, once again) to work on this process before the data is sent to the screen.
GPU for the win
In a computer, the processor uses the RAM to work on data. The graphics card (the GPU) also has its own memory (VRAM), used to store textures, buffers and, of course, the final image displayed on the screen. Transferring data between these two memories is a slow process, at the scale of a computer. Too much data travelling between the two can freeze the video again.
So, it's advisable to reduce the amount of data travelling over the graphics bus (PCI Express, for instance), which is the real highway between the two components. It's better to decompress the data after the transfer, on the graphics card side rather than on the processor side.
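This is exactly what HAP does: frames are stored as DXT-compressed textures that the GPU can sample natively (the base HAP flavor uses DXT1, which packs each 4x4 pixel block into 8 bytes). A rough Python estimate of what has to cross the bus each second for one FullHD stream:

```python
# Rough estimate of what must cross the graphics bus per second
# for one 1920x1080 @ 25 fps stream: uncompressed RGBA frames
# versus DXT1-compressed frames (8 bytes per 4x4 pixel block).
width, height, fps = 1920, 1080, 25

rgba_bytes = width * height * 4 * fps                 # uncompressed RGBA
dxt1_bytes = (width // 4) * (height // 4) * 8 * fps   # DXT1 blocks

print(f"RGBA: {rgba_bytes / 1e6:.0f} MB/s")      # 207 MB/s
print(f"DXT1: {dxt1_bytes / 1e6:.0f} MB/s")      # 26 MB/s
print(f"ratio: {rgba_bytes / dxt1_bytes:.0f}x")  # 8x less data on the bus
```

Eight times fewer boxes on the highway, and the GPU never needs the frame unpacked into raw RGBA at all.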
Graphics cards have evolved a lot over the last ten years. We can now program their behaviour and define what to do with the data, using concepts called shaders or GPGPU. The difference between a GPU and a processor (CPU) is that a graphics card doesn't contain a single processing unit but thousands of them (called cores), each able to work on a specific little area of an image, or on multiple images at the same time in the case of an animation. Going back to the unboxing of our cardboard boxes, we can compare the cores to a bunch of friends working together on the decoding process, each friend handling a small part of the image.
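A loose sketch of that idea in Python, with a handful of threads standing in for thousands of GPU cores and a simple byte inversion standing in for real decompression:

```python
from concurrent.futures import ThreadPoolExecutor

# Each "friend" (worker) decodes one small tile of the frame,
# mimicking how GPU cores each handle a small area of the image.
def decode_tile(tile):
    # stand-in for real decompression work: just invert the bytes
    return bytes(255 - b for b in tile)

frame = bytes(range(16)) * 4  # a fake 64-byte "compressed" frame
tiles = [frame[i:i + 16] for i in range(0, len(frame), 16)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() keeps the tiles in order, so the frame reassembles cleanly
    decoded = b"".join(pool.map(decode_tile, tiles))

assert len(decoded) == len(frame)  # every tile came back, in order
```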
So, to optimize the usage of disk/CPU/memory/bus/GPU, we have to encode our video with a codec designed around these principles:
- Reduce the size of the video on the hard drive, using the usual compression techniques (spatial and temporal redundancy, entropy coding, etc.).
- This optimization reduces the bitrate on the hard drive, the memory bus and the CPU.
- The decompression is no longer handled by the CPU but by the GPU cores, which are extremely efficient when they can work in parallel.
HAP, the open source codec designed for live use
Here we are! As usual in my posts, it's time to talk about open source, and so it's time to talk about HAP, a codec built around these concepts.
In the world of real time and VJing, VidVox is a well-known actor, mostly for developing its software VDMX. In the same way that OpenEXR came from a wish shared by several studios to make image interchange easier, VidVox realized that no existing codec could handle all the problems we face in live situations. Most of the time, the computer used to run the show is a simple, not-so-powerful laptop. Selling codecs is not VidVox's business. That's why they chose the open source route, giving everyone the ability to encode any kind of video using HAP. This way, a software like VDMX works better with videos and looks more attractive. It makes a lot of sense, and we have to thank them for that!
The HAP family is composed of three children, all using the QuickTime container:
- HAP: the base codec, which encodes the RGB channels.
- HAP Alpha: which can handle an additional Alpha channel (I recommend encoding the RGB channels in Straight mode instead of Premultiplied, to reduce the load on the GPU by avoiding a division of the RGB channels by the Alpha channel during the process).
- HAP Q: the same as HAP, but with a higher bandwidth for the intra frames, providing better quality. Be careful: the Alpha channel is not supported by HAP Q.
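Why Straight rather than Premultiplied? With premultiplied alpha, the stored RGB values have already been multiplied by the Alpha, so getting the original color back costs a per-pixel division; with straight alpha, the color is stored as-is. A minimal numeric illustration in Python:

```python
# One pixel: pure red at 50% opacity.
r, g, b, a = 1.0, 0.0, 0.0, 0.5

# Straight alpha: color stored as-is, alpha kept separately.
straight = (r, g, b, a)

# Premultiplied alpha: color stored already multiplied by alpha...
premultiplied = (r * a, g * a, b * a, a)

# ...so recovering the original color costs a division per channel.
recovered = tuple(c / a for c in premultiplied[:3])
assert recovered == (r, g, b)
```

With Straight mode, that division never has to happen on the GPU during playback.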
You can find the HAP codec here:
After installation, you can encode video with any application that can export QuickTime. No parameter is required except the choice of codec. From my experience, the base HAP codec doesn't produce visible compression artefacts; at least, not much more than a video encoded with x264 at a quality quantizer value of 20. For the end user, the only thing to do is choose the right codec.
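If you prefer the command line, recent FFmpeg builds also include a HAP encoder. Here is a small sketch of driving it from Python (the file names are placeholders, and you should check that your FFmpeg build includes the encoder):

```python
import subprocess

# The -format option of FFmpeg's hap encoder selects the flavor:
# "hap", "hap_alpha" or "hap_q".
def hap_command(src, dst, flavor="hap"):
    return ["ffmpeg", "-i", src, "-c:v", "hap", "-format", flavor, dst]

def encode_hap(src, dst, flavor="hap"):
    subprocess.run(hap_command(src, dst, flavor), check=True)

# Example (placeholder file names):
# encode_hap("animation.mov", "animation_hap.mov", flavor="hap_q")
```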
Using HAP with TouchDesigner
To test this codec in live conditions, I imported ten 1920x1080/25fps animations with an Alpha channel into a TouchDesigner project. The HAP Alpha codec passed all the tests without any problem or freeze, reducing the CPU load by limiting its work to handling the node network and the outputs. With a Core i7 4770K, CPU usage peaked at 10%.
Of course, the graphics card plays a major role in the process. I'm more of an NVIDIA user, so I can't vouch for AMD cards. But remember that because the process is mainly done on the GPU side, you can also use multiple GPUs in the same computer at the same time to boost your decoding capabilities.
So, don't hesitate to integrate this codec into your live and realtime applications, video mapping or VJing. With HAP, you can integrate more and more video streams into your show!
Last but not least: because the video is decompressed on the GPU side, the final image is already in VRAM! So you can use it as a texture on 3D surfaces or tweak it with shaders, using languages like Cg, HLSL or GLSL.