In an industry full of obscure acronyms, ACES probably ranks among the most obscure. The Academy Color Encoding System attempts to standardize how color is handled from acquisition through post and delivery. Yet ACES is not that difficult to understand. At its core, ACES is a workflow: a means of interpreting and processing image data in post in a scene-referred linear color space, with input and output transforms that relate it to specific non-linear devices.

We'll first take a look at how ACES handles color information, and why. So what is a scene-referred linear color space? Its essence is far simpler than you may imagine: it is a direct digital representation of linear luminance levels as they appear in front of the camera lens. Worded differently, it is a one-to-one relationship between real-world brightness and the data that represents it in an image file.

So why don't we record this way? Imaging sensors actually do see light this way; they have a linear response. One reason we don't record linear values in camera, but instead employ a curved or logarithmic gamma function (aka "log", "flat", or "film look"), is that we can reduce the required bit depth and file size by reassigning values to describe finer increments, or steps, of luminance at the darker end of the scale than at the brighter end. In other words, when we shoot log we assign a larger number of smaller steps to the shadows and mids, and fewer, larger steps to the highlights. This is a clever way to squeeze a higher total range of brightness (dynamic range) into a limited bit depth in a way that is visually unnoticeable, because a non-linear gamma function distributes values more efficiently in relation to the perception of human vision. Log encoding also better suits some grading functions, which may not behave as expected with linearly encoded files.
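The shadow-favoring behavior of log encoding can be illustrated with a toy example. The curve below is a generic logarithmic function chosen purely for illustration; it is not any real camera's log format. It counts how many 10-bit code values end up describing the darkest 10% of scene light under each encoding:

```python
import math

BITS = 10
CODES = 2 ** BITS  # 1024 code values at 10-bit

def encode_linear(x):
    """Quantize a linear light value (0..1) directly to a 10-bit code."""
    return round(x * (CODES - 1))

def encode_log(x, a=500.0):
    """Toy logarithmic encoding that spends more codes on the shadows.
    (Illustrative only -- not any real camera's log curve.)"""
    y = math.log1p(a * x) / math.log1p(a)  # normalized to 0..1
    return round(y * (CODES - 1))

# How many distinct codes describe the darkest 10% of scene light?
shadow_codes_linear = encode_linear(0.1) - encode_linear(0.0)
shadow_codes_log = encode_log(0.1) - encode_log(0.0)

print(shadow_codes_linear)  # roughly a tenth of the codes
print(shadow_codes_log)     # the majority of the codes
```

With straight linear quantization, the bottom 10% of the brightness range gets only about 10% of the code values; the log curve devotes well over half of them to that same shadow range, which is why log footage holds up so much better when shadows are lifted in the grade.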
How ACES works

The IDT

So if our camera files are not encoded as scene-referred linear, but ACES works in a scene-referred linear space, how does ACES handle camera files? The answer is simple: the 10-bit or 12-bit log-encoded values in the camera files are transformed into scene-referred linear space using an IDT, or Input Device Transform. You can think of this almost as a type of LUT. When stored or rendered to file, the results are 16-bit half-float EXR files. Because every camera is different, each camera requires its own dedicated ACES IDT.

The ODT

Once our images have been successfully transformed into the ACES space, we need to make sure we are seeing them correctly. This is where an ODT, or Output Device Transform, comes in. There is no such thing as a perfect or completely unbiased monitoring device, and you can't monitor scene-referred linear image information directly. Every display technology has limits and can reproduce only a limited color gamut. A display expects to receive data encoded with a non-linear gamma response according to a standard video color space, and needs to be calibrated to a standard such as Rec. 709 or DCI-P3. Just as every camera needs a dedicated IDT, every display device (and every rendered file output from ACES into a standard delivery color space) needs a dedicated ODT.

Preserving your look

The last piece of the puzzle worth mentioning is a unified, platform-independent method of retaining your intended look once graded. This is another transform, the RRT or Reference Rendering Transform. The RRT helps ensure that no matter what new output devices and color spaces appear in the future, your intended grade will be preserved. As HDR and true Rec. 2020 UHDTV display technology becomes a consumer reality, demand will increase for content with full, rich colors, encoded in a color space with a far wider gamut than anything we are currently used to.
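The overall signal flow (camera log → IDT → scene-referred linear working space → RRT → ODT → display) can be sketched as a chain of functions. Everything below is a placeholder: the curves and constants are invented for illustration and are not the real ACES transforms, which are published per device by camera and display vendors:

```python
# Hedged sketch of the ACES signal flow. All three transforms below are
# toy placeholders, NOT the real published ACES math.

def idt(code_value):
    """Toy Input Device Transform: undo a hypothetical camera log curve
    so the value becomes scene-referred linear."""
    return (10 ** code_value - 1) / 9  # placeholder inverse-log

def rrt(linear):
    """Toy Reference Rendering Transform: one fixed filmic-style tone
    mapping, applied identically no matter the output target."""
    return linear / (linear + 0.18)    # placeholder tone curve

def odt_rec709(rendered):
    """Toy Output Device Transform: encode for a Rec. 709 display
    (placeholder gamma, not the actual Rec. 709 transfer function)."""
    return rendered ** (1 / 2.4)

camera_code = 0.5                        # a log-encoded camera sample
scene_linear = idt(camera_code)          # working space: scene-referred linear
display_value = odt_rec709(rrt(scene_linear))  # what the monitor receives
```

The point of the structure, not the math, is what matters here: grading happens on `scene_linear`, and swapping `odt_rec709` for a different ODT retargets the same grade to a different display standard.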
Although it is still early days and ACES does have some technical issues, many believe ACES is the future of digital color.
So what exactly is a LUT anyway?

Of all the cryptic terminology and acronyms we throw around, the "LUT" is the most commonly misunderstood. A LUT, or "lookup table", is in fact a very simple device used to transform RGB input values into different RGB output values. LUTs are often used technically, to transform image data from one color space to another, but they are also used to describe and apply the customized color transforms we refer to as "looks" in grading. A custom LUT can be used on set to preview a desired look directly on a live camera monitor. There are 1D and 3D LUTs, the difference being that a 1D LUT applies to each color channel independently, whereas a 3D LUT is a cube-like matrix covering transforms for all three color channels in any and all combinations. We'll refer to 3D LUTs from this point forward, as this is the most common type and is usually what is meant when the term "LUT" is used. Before we go any further, it's also important to understand what is meant by color space.

Color Spaces, Color Models and Mapping

A color space is a specific organization of colors, often defined by the limitations of a particular device, such as a display or an image acquisition device. It can also be an industry standard defined by the capabilities and limitations of the image processing chain as a whole. What we commonly refer to as a "color space", however, is not just an arbitrary organization of colors like a Pantone chart or crayons with cute names; it refers to a particular color model together with a mapping function into an absolute reference color space. The absolute reference color space encompasses the entire spectrum of visible colors, against which a particular color model and mapping function have a footprint known as a "gamut". The wider the gamut, the more colors of the visible spectrum can be represented in that color space.
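The 1D case is simple enough to sketch in a few lines. The five-entry table below is hypothetical, chosen only to show the mechanism: each channel is looked up in the same table independently, with linear interpolation between entries:

```python
# Sketch of a 1D LUT: a per-channel lookup with linear interpolation
# between entries. The table values are hypothetical.

lut_1d = [0.0, 0.1, 0.35, 0.7, 1.0]  # maps input 0..1 to output 0..1

def apply_1d_lut(value, table):
    """Look up `value` (0..1) in `table`, interpolating linearly
    between the two nearest entries."""
    pos = value * (len(table) - 1)       # position along the table
    i = min(int(pos), len(table) - 2)    # lower entry index
    frac = pos - i                        # distance into the interval
    return table[i] * (1 - frac) + table[i + 1] * frac

# A 1D LUT touches each channel on its own -- it cannot make output
# red depend on input green or blue:
r, g, b = (apply_1d_lut(c, lut_1d) for c in (0.5, 0.25, 0.9))
```

That per-channel independence is exactly the 1D LUT's limitation: it can reshape contrast and per-channel balance, but cross-channel operations like hue rotation need a 3D LUT.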
A LUT can be used to transform image data from one color space to another by re-assigning values from the source color space to the correct values in the destination color space.

Mapping and Transforming RGB Values

Whether dealing with LUTs or color spaces, hopefully you have noticed one thing in common so far: we're essentially dealing with the same operation, mapping or transforming RGB values. A 3D LUT would contain a huge amount of data if it stored an output for every possible input combination, so instead it uses a fixed number of lattice points, usually 17 x 17 x 17, with values between them interpolated. Essentially, a LUT is nothing more than a reference table that specifies an RGB output value for any given RGB input value. Of course, if you dig deeper it gets more complicated, but this basic understanding of its function will set you up to use LUTs correctly in your workflow, on set or in post.
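The interpolation between lattice points can be sketched concretely. The cube below is a tiny 2 x 2 x 2 identity LUT (output equals input), built just to show the sampling mechanism; real grading LUTs are typically 17 x 17 x 17 or 33 x 33 x 33, and commonly use this same trilinear interpolation:

```python
# Sketch of sampling a 3D LUT: the RGB input picks a cell in the cube,
# and the output is trilinearly interpolated from the cell's 8 corners.

def identity_cube(n):
    """Build an n*n*n LUT whose output equals its input."""
    return [[[(r / (n - 1), g / (n - 1), b / (n - 1))
              for b in range(n)] for g in range(n)] for r in range(n)]

def sample_3d_lut(cube, rgb):
    """Sample `cube` at an RGB point (each channel 0..1)."""
    n = len(cube)
    idx, frac = [], []
    for c in rgb:                        # locate the lattice cell
        pos = c * (n - 1)
        i = min(int(pos), n - 2)
        idx.append(i)
        frac.append(pos - i)             # fractional position in cell
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):                    # blend the 8 surrounding corners
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0]) *
                     (frac[1] if dg else 1 - frac[1]) *
                     (frac[2] if db else 1 - frac[2]))
                corner = cube[idx[0] + dr][idx[1] + dg][idx[2] + db]
                for k in range(3):
                    out[k] += w * corner[k]
    return tuple(out)

cube = identity_cube(2)
result = sample_3d_lut(cube, (0.2, 0.5, 0.8))  # identity: returns the input
```

Note that each output channel is blended from full RGB corner triples, which is what lets a 3D LUT express cross-channel transforms a 1D LUT cannot.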