by Richard Lackey | 6th April 2017
HDR is the future, and the future is now. Find out why 10-bit color and higher bit-rates will come to consumer and professional imaging devices.

The highlight of the new Panasonic GH5 has to be its internal 10-bit 4:2:2 video specs. Despite some issues, Panasonic is the first camera manufacturer to bring 10-bit 4:2:2 4K video to a consumer device, and will soon raise the bit-rate to a respectable 400Mbps. It's about time, and I hope and predict that this signals a baseline shift in the minimum accepted (and expected) video standards for other manufacturers moving forward. A move to enable and encourage video acquisition in wider color gamuts with higher dynamic range is a necessary evolutionary step.

What is 10-bit color?

Whenever you hear a reference to 10-bit color depth, it refers to video which carries 10 bits of information per channel, or 1024 total steps per channel, per pixel. Those channels can be R, G, B or, in most cases, Y, Cb, Cr. Chroma subsampling is something different, but also very important. If you want to know more about chroma subsampling (Y, Cb, Cr 4:2:0, 4:2:2, etc.) check out my article Getting to Grips with Chroma Subsampling.

To be honest, 10-bit color, and even HDR (High Dynamic Range), is nothing new. It has been considered the minimum requirement for color grading and finishing since the first DPX film scans. Color bit-depth has to do with the number of steps that can be assigned to levels that make up the image in each color channel. A bit has one of two states: on or off, a binary 1 or 0, white or black.

1-bit: 2 values (0 – 1)
2-bit: 4 values (0 – 3)
4-bit: 16 values (0 – 15)
8-bit: 256 values (0 – 255)

For example, a binary word made up of 8 bits offers 256 possible combinations of those eight ones or zeros, and can therefore represent any value between 0 and 255. In terms of luminance information, this translates to a maximum of 256 steps (counting 0 as a value) between black (0%) and white (100%).
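The relationship between bit depth and the number of available levels is simply a power of two, which a few lines of Python make concrete:

```python
# Number of distinct levels per channel for common bit depths: 2 ** bits.
for bits in (1, 2, 4, 8, 10, 12):
    levels = 2 ** bits
    print(f"{bits:>2}-bit: {levels:>5} values (0 - {levels - 1})")
```

Each extra bit doubles the count, which is why the jump from 8-bit to 10-bit quadruples the values per channel from 256 to 1024.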
I went into this a bit further in the context of non-linear gamma encoding in my last article about the new flat / log gamma profile in the latest FiLMiC Pro beta. Actual black point, however, is usually not 0 but 16, and white point is 235, not 255, for Rec709 legal video levels.

I haven't included an image of a 10-bit gradient because you wouldn't be able to see the difference on the device you are using to read this. It would look identical to the 8-bit gradient, because your device's screen and GUI graphics pipeline are probably only capable of displaying 8-bit color depth, as most devices still are. The differences to the end viewer between 8-bit and 10-bit color depth are minimal to unnoticeable when viewing a traditional Rec709 gamut on 99% of our various screen types and devices.

However, when expanding dynamic range, not only at acquisition but at the end-user display – whether a TV, a smartphone, a laptop or in cinema – the bit-depth of the original camera files, the post workflow and the final deliverables for playback becomes very important. When you grab the white end of that gradient and stretch it out to an even brighter white value (extending dynamic range), the steps obviously also get stretched out, so you want more steps to keep it smooth. Keep in mind that just as non-linear gamma encoding in camera assigns more of these steps to the lows and mids and fewer to the highlights, with the curve reversed when color grading back to Rec709, the same principle applies to HDR, only the range stays stretched out, since Rec2020 covers a much greater gamut.

Beyond Rec709

Up until now, within the fairly limited color gamut and range of the Rec709 standard set for HDTV, 8-bit has been the norm, particularly for delivery, but also for acquisition at a consumer and prosumer level.
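A rough sketch of the stretched-gradient idea, under the simplifying (and unrealistic) assumption that code values are spread linearly across brightness – real HDR systems use non-linear transfer functions such as PQ or HLG – shows how each step gets coarser when the same bit depth must cover a brighter range:

```python
# Illustrative only: average brightness increment per code value if the
# available codes were spread linearly from black to the display's peak.
def step_size_nits(bits, peak_nits):
    """Average nits per code value, assuming a (hypothetical) linear spread."""
    return peak_nits / (2 ** bits - 1)

for peak in (100, 1000):          # SDR-like vs HDR-like peak brightness
    for bits in (8, 10):
        print(f"{bits}-bit at {peak} nits: {step_size_nits(bits, peak):.3f} nits/step")
```

Stretching the range tenfold makes every 8-bit step roughly ten times larger, which is exactly the banding risk the extra two bits help avoid.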
We've even pushed the limits further by implementing log or "flat" non-linear gamma profiles in camera, which assign more of these values to the shadows and mid-tones and fewer values to the highlights of the image. However, 256 values between 0% and 100% luminance is not nearly enough when expanding the total dynamic range of luminance – or brightness – that we want to faithfully record, or ultimately deliver to an HDR-capable display. By adding two bits of data to each channel, we quadruple the total number of values per channel to 1024, which means we can now assign levels 0 – 1023. The result is the ability to record a higher overall range of luminance values, or contrast ratio (dynamic range), and a wider color gamut.

A Consumer Shift

What makes this different from the last major shift, from SD to HD, is that it is being led far more aggressively and quickly by consumer television manufacturers. Prices of UHD HDR-capable televisions have dropped and will soon fit almost every budget. UHD HDR televisions are already in many living rooms and home theatres, even though content creators, broadcasters and technical delivery standards are trailing far behind consumer adoption. Consumer television sales, and the associated marketing of UHD and HDR as the next major leap in home entertainment, are driving demand ahead of the entertainment industry at large. We may well see HDR-capable imaging and display arrive in consumer devices such as tablets and smartphones, enabling user-generated HDR content on a consumer scale, before many professional content creators, their tools, workflows and delivery channels have caught up.

Challenges and Solutions

Many digital cinema cameras have been shooting files and formats that meet and surpass the requirements for UHD and HDR finishing and delivery for many years.
However, the champions of the "DSLR" video revolution, and subsequently mirrorless cameras, as well as prosumer-level video cameras and camcorders, have been left behind in an 8-bit H.264 limbo.

The challenges facing camera manufacturers are multi-faceted: some are commercial in nature, some technical. Many manufacturers are reluctant to bring what they consider more advanced, professional imaging capabilities to entry or mid-level products that could threaten a higher-end, more expensive line of cameras. It is no surprise that Panasonic are first with the GH5: they have nothing to lose and no other products to protect. However, all the other big names in our industry will soon have no choice if they want to remain competitive in the market.

Some challenges are technical. More image information – whether in the form of resolution, color depth, frame rate, or any combination of these – means nothing if it is not recorded and reproduced faithfully. This extra information has to be processed, requiring more powerful processing in camera. It must also be recorded, and whether or not spatial and/or temporal compression is involved (raw recording, for instance), recorded bit rates must be higher and recorded files larger. This requires higher-bandwidth (faster), higher-capacity recording media. All of this often results in more heat that must be dissipated from the camera, and higher power consumption.

Inevitable Compromises

Sony have just announced an update that expands HDR production capabilities to existing products by implementing HLG (Hybrid Log Gamma). The update brings the HLG picture profile to the FS5 and Z150 camcorders, though still at only 8-bit color depth with 4:2:0 chroma subsampling in XAVC at 4K internally. This is all they could do within the internal recording limitations of these cameras, but it's a compromised, transitional solution at best.
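To put the recording-media side of those technical challenges in numbers, here is a quick sketch of how bit rate translates into storage, using the 400Mbps figure mentioned earlier (decimal gigabytes assumed):

```python
# Rough storage maths for high bit-rate recording, e.g. 400 Mbps all-intra.
def gigabytes_per_hour(mbps):
    """Convert a video bit rate in megabits/s to (decimal) gigabytes per hour."""
    return mbps * 3600 / 8 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

print(f"400 Mbps ~= {gigabytes_per_hour(400):.0f} GB per hour")
print(f"100 Mbps ~= {gigabytes_per_hour(100):.0f} GB per hour")
```

At 400Mbps an hour of footage needs about 180GB, four times what a typical 100Mbps consumer codec consumes, which is why faster, larger media goes hand in hand with better internal codecs.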
Using an external recorder will be the best way to take advantage of this HLG workflow. What manufacturers need to do is adopt 10-bit 4:2:2 all-intra encoding at suitably higher bit rates as a minimum for video in all new camera platforms, from consumer mirrorless cameras and camcorders up through the entire product range.

Keep Your Eyes on Smartphones

FiLMiC Pro v6 LOG curve option

As unlikely as it may sound, it's worth keeping an eye on the new camera technology and video features being developed for smartphones. With absolutely nothing to lose, and everything to gain, smartphone manufacturers are fighting harder and more aggressively than ever for a greater share of the mass consumer market. This is already resulting in capabilities matching many dedicated professional video cameras. I would not be surprised to see robust 10-bit, high bit-rate, HDR-ready codecs come to mobile devices before some of the big traditional players bite the bullet and increase the internal recording abilities of their camcorders.

High-end digital cinema camera manufacturers aside (a different market), the entry-level and mid-level professional large-sensor camera manufacturers may lose out by responding too slowly – or not at all – to increasing market demand for higher resolutions, higher frame rates, advanced high bit-rate codecs, high color bit-depth, and HDR capabilities at consumer-friendly prices. If the established players we know and love (or love to hate at times) don't shift their thinking and act quickly, it's not impossible to imagine the consumer and prosumer video camera going the way of the digital compact camera: a victim of the Darwinian evolutionary dominance of the multi-camera, computational-imaging-equipped smartphone.
by Richard Lackey | 25th August 2015
In an industry full of obscure acronyms, ACES probably ranks among the most obscure. The Academy Color Encoding System attempts to standardize how color is handled from acquisition through post and delivery. ACES is not even that difficult to understand. First of all, ACES is a workflow: a means of interpreting and processing image data in post in a scene-referred linear color space, with input and output transforms that relate to specific non-linear devices. We'll first take a look at how ACES handles color information, and why.

So what is a scene-referred linear color space?

The essence of a scene-referred linear color space is far simpler than you may imagine. It is a direct digital representation of linear luminance levels as they appear in front of the camera lens. Worded differently, it is a one-to-one relationship between real-world brightness and the data that represents it in an image file.

Why don't we work this way? Imaging sensors actually see light exactly this way; they have a linear response to light. One reason we don't record image data values linearly in camera, but employ a curved or log gamma function (aka log, flat, film look), is that we can reduce the required color bit depth and file size by reassigning values to describe finer increments, or steps, in luminance at the darker end of the luminance scale than at the brighter end. In other words, when we shoot "log" we are assigning a larger number of smaller "steps" to the shadows and mids, and fewer, larger steps to the highlights. This is a clever way to squeeze a higher total range of brightness (dynamic range) into a limited bit depth in a way that is visually unnoticeable. A non-linear gamma function allows for a more efficient assignment of values relative to the perception of human vision. Log encoding also better suits some grading functions, which may not behave as expected with linearly encoded files.
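A minimal sketch of this code-value redistribution, using a generic log2 curve over a hypothetical 14-stop range (a stand-in, not any real camera's log curve): compared with straight linear encoding, the log curve takes code values away from the brightest stop and spends them evenly across the range, including the shadows:

```python
import math

def encode_log(x, stops=14):
    """Toy log curve: map linear scene light (0..1) to a 0..1 signal, log2 over `stops` stops."""
    floor = 2.0 ** -stops                       # clamp to the darkest representable stop
    return (math.log2(max(x, floor)) + stops) / stops

codes = 1023  # 10-bit
for lo, hi in [(0.001, 0.002), (0.5, 1.0)]:     # one stop in the shadows vs the top stop
    lin_span = (hi - lo) * codes                # codes that stop gets with linear encoding
    log_span = (encode_log(hi) - encode_log(lo)) * codes
    print(f"stop {lo}->{hi}: linear ~{lin_span:.0f} codes, log ~{log_span:.0f} codes")
```

With linear encoding the top stop alone swallows about half of all code values while a shadow stop gets almost none; the log curve gives every stop the same share, which is the "smaller steps in the shadows" behaviour described above.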
How ACES works

The IDT

So if our camera files are not encoded as scene-referred linear, but ACES works in a scene-referred linear space, how does ACES handle camera files? The answer is simple: the 10-bit or 12-bit log-encoded values in the camera files are transformed into scene-referred linear space using an IDT, or Input Device Transform. You can think of this almost as a type of LUT. When stored or rendered to file, the results are 16-bit half-float EXR files. Because every camera is different, each camera requires a specific and dedicated ACES IDT.

The ODT

Once our images are successfully transformed into the ACES space, we need to make sure we are seeing them correctly. This is where an ODT, or Output Device Transform, comes in. There is no such thing as a perfect, or completely unbiased, monitoring device, and you can't monitor scene-referred linear image information directly. Every display technology has limits and can only reproduce a limited color gamut. A display device expects to receive input data encoded with a non-linear gamma response according to a standard video color space, and needs to be calibrated to, for example, Rec709 or DCI-P3. Just as every camera needs a specific and dedicated IDT, the same is true for display devices and for rendered file outputs from ACES into standard delivery color spaces.

Preserving your look

The last piece of the puzzle worth mentioning is a unified, platform-independent method of retaining your intended look once graded. This is another transform, called the RRT, or Reference Rendering Transform. The RRT helps ensure that no matter what new output devices and color spaces come out in the future, your intended grade will be preserved. As HDR and true Rec2020 UHDTV display technology becomes a consumer reality, demand will increase for content with full, rich colors, encoded in a color space with a far wider gamut than anything we are currently used to.
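The overall flow can be sketched in a few toy functions. These are simplified stand-ins for the real ACES transforms, just to show the order of operations: camera code values go through an IDT into scene-referred linear, the grade happens there, and the RRT plus ODT produce display code values:

```python
# Hypothetical sketch of the ACES flow; the curves below are illustrative
# stand-ins, not the actual ACES transforms.

def idt(camera_log_value, stops=14):
    """Toy Input Device Transform: undo a log2 camera curve back to scene-linear."""
    return 2.0 ** (camera_log_value * stops - stops)

def grade(linear, exposure_stops=0.5):
    """A grade applied in scene-referred linear space (here, a half-stop push)."""
    return linear * 2.0 ** exposure_stops

def rrt_odt(linear, gamma=2.4):
    """Toy RRT + Output Device Transform: clamp and gamma-encode for a display."""
    clamped = min(max(linear, 0.0), 1.0)
    return clamped ** (1.0 / gamma)

display_value = rrt_odt(grade(idt(0.5)))
print(f"display code value: {display_value:.3f}")
```

The key point the sketch illustrates is that the grade lives in the middle, device-independent stage; swapping the display only means swapping the final transform, not redoing the grade.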
Although it is still early days and ACES does have some technical issues, many believe ACES is the future of digital color.
by Richard Lackey | 23rd July 2015
So what exactly is a LUT anyway? Of all the cryptic terminology and acronyms we throw around, the "LUT" is the most commonly misunderstood. A LUT, or "Lookup Table", is in fact a very simple device used to transform RGB input values into different RGB output values. LUTs are often used technically, to transform image data from one color space to another. LUTs are also used to describe and apply the customized color transforms we often refer to as "looks" when it comes to color grading. A custom LUT can be used on set to preview a desired "look" or grade directly on a live camera monitor.

There are 1D and 3D LUTs, the difference being that a 1D LUT applies to a single color channel only, whereas a 3D LUT is a cube-like matrix covering transforms for all three color channels across any and all combinations. We'll refer to 3D LUTs from this point forward, as this is the most common type, and is most often what is meant when the term "LUT" is used. Before we go any further, it's also important to understand what is meant by color space.

Color Spaces, Color Models and Mapping

A color space is a specific organization of colors, often defined by the limitations of a particular device, such as a display or image acquisition device. It can be an industry standard defined by the capabilities and limitations of the image processing chain as a whole. What we commonly refer to as a "color space", however, is not just an arbitrary organization of colors like a Pantone color chart or crayons with cute names; it refers to a particular color model and a mapping function referencing an absolute color space. The reference absolute color space includes the entire spectrum of visible colors, against which a particular color model and mapping function will have a footprint, known as a "gamut". The wider the gamut, the more of the visible spectrum can be represented in that color space.
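The single-channel case is easy to show in miniature. This is a hypothetical 5-entry 1D LUT (real LUTs use many more entries) that simply darkens the midtones of one channel:

```python
# A 1D LUT in miniature: output code values for evenly spaced input values
# 0, 64, 128, 192, 255 on a single channel. This table darkens the midtones.
lut_1d = [0, 40, 110, 190, 255]

def apply_1d_lut(value, lut):
    """Look up `value` (0-255) in `lut`, linearly interpolating between entries."""
    pos = value / 255 * (len(lut) - 1)   # position in table space
    i = min(int(pos), len(lut) - 2)      # lower neighbouring entry
    frac = pos - i                       # distance towards the upper entry
    return lut[i] + (lut[i + 1] - lut[i]) * frac

print(apply_1d_lut(128, lut_1d))         # a midtone input comes out darker
```

Black and white stay pinned while values in between are remapped, which is all a LUT ever does: input value in, output value out.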
A LUT can be used to transform image data from one color space to another by re-assigning values from the source color space to the correct values in the destination color space.

Mapping and Transforming RGB Values

Whether dealing with LUTs or color spaces, hopefully you have noticed one thing in common so far: we are essentially dealing with the same thing, either mapping or transforming RGB values. A 3D LUT would contain a huge amount of data if it held corresponding input and output combinations for every single coordinate set, so instead it employs a fixed number of coordinate points, usually 17 x 17 x 17, with the other points interpolated between them. Essentially, a LUT is nothing more than a reference table that specifies an RGB output value for any given RGB input value. Of course, if you dig deeper it gets more complicated, but this basic understanding of its function will set you up to use LUTs correctly in your workflow, on set or in post.
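The 17 x 17 x 17 structure can be sketched directly. For brevity this toy lookup snaps to the nearest cube node; real implementations interpolate (trilinearly or tetrahedrally) between the surrounding nodes, as described above:

```python
# Sketch of a 3D LUT lookup on a 17x17x17 cube (nearest-node, not interpolated).
N = 17

def identity_cube(n=N):
    """Build an identity 3D LUT: every node maps to its own RGB position."""
    return [[[(r / (n - 1), g / (n - 1), b / (n - 1))
              for b in range(n)] for g in range(n)] for r in range(n)]

def lookup(cube, r, g, b, n=N):
    """Map RGB in 0..1 through the cube, snapping to the nearest node."""
    idx = lambda v: min(round(v * (n - 1)), n - 1)
    return cube[idx(r)][idx(g)][idx(b)]

cube = identity_cube()
print(lookup(cube, 0.5, 0.25, 1.0))  # identity cube: colors pass through unchanged
```

A grading "look" or color space transform is just a cube whose nodes hold different output colors; 17³ = 4913 nodes cover the whole RGB volume, with interpolation filling in everything between them.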