Lizard Head Pass, CO.
Sunny Day in January; Lizard Head Pass, San Juan Mountains, CO. Wratten #9 (yellow) filter, Super-XX Pan film.



Darkening Blue Skies in B&W Photography

Perhaps one of the first things many learn about B&W photography is that it's possible to darken blue skies by using colored filters over the lens at the time of exposure. Moreover, the effect is variable, increasing as one goes from yellow to orange to red filters.

But this is only qualitative, and the question naturally arises as to how much a given filter darkens the sky. For example, in the widely used Zone System (1 zone = 1 stop), is it a reduction of ½ zone for a regular yellow filter (say a Wratten #8, also sometimes known as a K2), or is it closer to a whole zone? For a red filter, is it 1 zone, or two?

I was somewhat surprised to find that there wasn't a standard reference where these values were tabulated. So I constructed one.

To do this I used a spreadsheet program to simulate the exposure of a somewhat randomly chosen panchromatic B&W film to daylight, with and without a series of long-pass filters ranging from hard-UV (very pale yellow) up to deep red. All these filters absorb wavelengths shorter than some value and transmit all longer wavelengths.

The film used for this experiment was Ilford Delta 100 Pro. Its datasheet conveniently includes a spectral sensitivity curve (at right), which was easily digitized at 10 nm intervals. This film is still being manufactured in sheet film sizes like 4"x5" (2018), so the results have current relevance. And it seems typical of "pan" films generally; that is, it doesn't have "extended" red sensitivity, like Kodak's Tech Pan 2415 film. Still, it's difficult to say offhand how the results would compare with those obtained with other commonly used pan films. One would expect any differences to appear at the margins, and to be more pronounced with deeper orange and red filters.

An old spectral sensitivity curve I have for Super-XX Pan (#4142) film shows it having a slightly reduced red sensitivity, peaking at ~600 nm before falling off at longer wavelengths, compared to a peak at ~625 nm for Ilford Delta 100 Pro. This probably explains why Super-XX was especially recommended for studio and darkroom work (like making color separations from slides), where a tungsten lamp was commonly used to expose it: the abundance of long-wavelength light would "cancel out" the film's reduced red sensitivity, giving a response across the spectrum that was rather level.

For a daylight illuminant I chose the CIE standard D55 illuminant, which has a correlated color temperature of 5500°K, that being the color temperature of "photographic daylight". Even though the series of CIE Dnn illuminants are purely synthetic, and thus have no counterpart in nature, they are easily constructed (also at 10 nm intervals) and are designed to be representative of the types of daylight illumination photographers typically encounter in a wide variety of circumstances. As with the choice of film, changing this to standard illuminant CIE C (with a correlated color temperature of 6775°K) or D65 would change things subtly but not substantially.

In order to define an "unfiltered" spectral sensitivity for baseline comparison purposes I added a standard UV filter to the system, since most photographers use these on their lenses at all times, even if just for protective purposes. The filter chosen was a B+W #415 since a) I had its spectral transmission curve, and b) it is about as "hard" a UV absorber as you can get without getting into filters that have a distinctive, if slight, pale yellow look to the eye, which means they absorb some light in the most violet part of the visual spectrum. The #415 has a 50% absorption level right at 400 nm (whereas for the more common B+W UV 010 this wavelength is closer to 375 nm -- which means it passes some blacklight UV, the bright i-line of mercury being at 365.4 nm). The closest Kodak Wratten filter to the #415 is probably the #2C, while the closest Tiffen filter is their Haze-1; both are slightly stronger filters than the #415, meaning their 50% transmission points are at slightly longer wavelengths (~410 nm for the Haze-1).

For the main part of the experiment I used a series of seventeen Kodak Wratten filters, starting with a very pale yellow (#2E) and going in steps all the way up through the yellows and oranges to a very deep red (#92). One of the seventeen consists of a combination of two filters -- a magenta (#32) plus a #16 (orange-yellow) to block its blue passband, leaving the #32's red passband as the effective filter, which turns out to be intermediate between a #26 and a #29 (both deep red filters); any filter from about a #15 up to a #26 can be used as the second filter to give this passband with the #32.

In addition I also went in the other direction and ran the numbers for a series of sky lightening filters, which include mainly blue and bluish-green (cyan) filters, but also some deep blue and violet filters, as well as some green filters (and combos) which are meant to yield a spectral response closely matching that of the eye.

The first step then was to "expose" the film through each filter directly to the D55 daylight illuminant, which yields the filter's filter factor. This effectively determines the amount the exposure would need to be increased to keep the exposure to a white (or neutral) card constant.
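This "exposure" is just a weighted sum over wavelength, and the filter factor falls out as a ratio of two such sums. Here is a minimal sketch; the spectra below are crude placeholders, NOT the actual Delta 100, D55, or Wratten data:

```python
import numpy as np

# Wavelength grid at 10 nm intervals, matching the digitization described above.
wl = np.arange(400.0, 660.0, 10.0)

# Placeholder spectra (not the real curves): relative illuminant power,
# film spectral sensitivity, and filter transmission.
d55 = np.ones_like(wl)                               # flat stand-in for D55
film = np.exp(-((wl - 550.0) / 80.0) ** 2)           # crude pan-film-like bump
yellow = 1.0 / (1.0 + np.exp(-(wl - 480.0) / 10.0))  # long-pass "yellow" filter

def expose(illuminant, sensitivity, transmission=None):
    """Sum illuminant x filter x film sensitivity over the wavelength grid."""
    t = np.ones_like(illuminant) if transmission is None else transmission
    return np.sum(illuminant * t * sensitivity)

# Filter factor: how much exposure must increase to hold a neutral card constant.
factor = expose(d55, film) / expose(d55, film, yellow)
stops = np.log2(factor)
print(f"filter factor = {factor:.2f} ({stops:.2f} stops)")
```

Any long-pass filter absorbs some of the light the film would otherwise see, so the factor always comes out greater than 1.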

I then exposed the film+filter combos to a blue sky illuminant, generated by applying pure Rayleigh scattering (a λ⁻⁴ weighting) to an above-the-atmosphere (ATA) spectrum for sunlight I had (it comes from satellite data). This illuminant has coordinates of x=y=0.2348 in the standard CIE chromaticity diagram. It represents the deep, pure, "cobalt" blue skies one sees in deserts and at high altitudes, where there is neither moisture nor dust reducing the color saturation of the sky. It is thus an extreme example of a blue sky, but since I live at high altitude and in a very dry climate it's not irrelevant. It does yield maximum amounts of darkening (or lightening) for a given filter.
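The darkening number then comes from a second exposure against the Rayleigh-scattered illuminant, compensated by the filter factor. A sketch with placeholder spectra (only the λ⁻⁴ weighting is taken directly from the text; the film and filter curves are made up):

```python
import numpy as np

wl = np.arange(400.0, 660.0, 10.0)

# Placeholder spectra (not real data): ATA sunlight and film sensitivity.
sun_ata = np.ones_like(wl)
film = np.exp(-((wl - 550.0) / 80.0) ** 2)
red = 1.0 / (1.0 + np.exp(-(wl - 590.0) / 8.0))  # hypothetical long-pass red

# Pure Rayleigh scattering: sunlight weighted by lambda^-4 (normalized at 550 nm).
sky = sun_ata * (wl / 550.0) ** -4

def expose(illum, trans):
    return np.sum(illum * trans * film)

# Zones of sky darkening: the filtered sky exposure, after boosting by the
# filter factor determined from the neutral (direct-sun) exposure.
clear = np.ones_like(wl)
factor = expose(sun_ata, clear) / expose(sun_ata, red)
zones = -np.log2(factor * expose(sky, red) / expose(sky, clear))
print(f"sky darkening ≈ {zones:.2f} zones")
```

Because the sky spectrum is weighted toward short wavelengths, a long-pass filter removes proportionally more sky light than neutral-card light, so the result is always a positive number of zones of darkening.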

The diagram below shows the results, with the series of lightening and eye-matching filters on top and the darkening filters on the bottom:

The point marked "Clear" uses the B+W #415 UV absorbing filter to define the visual passband.

The first important result is that the filter which supposedly yields the correct blue sky tone rendering is not necessarily the traditionally recommended #8/K2 filter, but the next step weaker yellow filter, the #4. This has an effective mean wavelength (a quantity which also drops out of the calculations) of 561 nm with this film and illuminant, while with the #8 it is up at 569½ nm. The CIE visual response curve peaks at 555 nm, and has a mean wavelength of 560.2 nm, so a #4 results in a better mean wavelength match than a #8. The #4 is thus somewhat arbitrarily chosen as the zero reference point for the series of blue sky darkening filters, the filter which least alters the tonal rendition of this synthetic sky relative to the way the eye would see it. A #56 (light green) filter in combo with either a #8 or #9 to remove what blue light it otherwise would pass will produce an equivalent natural sky rendition, and a better overall match for other colors to the way the eye sees tones in B&W, but with a higher filter factor. If the latter is not an issue it is superior to using a #4.
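The effective mean wavelength quoted here is simply the response-weighted average of wavelength. A sketch, with a placeholder response curve standing in for the actual illuminant x filter x film product:

```python
import numpy as np

wl = np.arange(400.0, 660.0, 10.0)

# Placeholder effective response (illuminant x filter x film); not real data.
response = np.exp(-((wl - 560.0) / 60.0) ** 2)

# Effective mean wavelength: the response-weighted average of wavelength.
mean_wl = np.sum(wl * response) / np.sum(response)
print(f"mean wavelength = {mean_wl:.1f} nm")
```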

The following graph shows how the filters and filter combos plot, with the correlation coefficient versus the CIE y tristimulus curve in the y direction and mean wavelength in the x direction:

As can be seen, the #8 has a higher correlation coefficient than the #4, but once one is up above 0.90 the differences are minimal. The reason why one filter can have a better CIE y correlation coefficient but be further off in mean wavelength is a result of the spectral sensitivity curves not being symmetric.

The second interesting result is that most of the desired sky darkening effect is produced early in the sequence, with rather modest yellow filters. This makes sense in retrospect, because most of the sky's light is in the blue part of the spectrum; one gets diminishing returns with stronger orange and red filters because there is less sky light left to absorb once one is out in the orange and red parts of the spectrum. A #4 filter produces ½ stop (or Zone) of sky darkening relative to no filter (the #415), while it takes a solid orange #21 or #22 to get up to 1 stop above no filter (one of these is a tiny bit above 1.0, the other a tiny bit below), or double the effect. A #25 (A) red filter gives 1¼ stops, while getting to 1½ stops takes a very deep red #92 -- with a nearly 5 stop filter factor (4.8), which is impractical for most purposes.

The third thing to note is that it's easier to lighten blue skies than it is to darken them. Again, in retrospect this makes sense, since there's more shortwave blue and violet light in sky light. Relative to a #4 filter one can get only slightly less than one zone of darkening at most, but almost two zones of lightening using deep violet filters. Relative to no filter this is about ±1½ zones in either direction, for a total available range of about 3 zones (or a little less).



The photo at the top of the page suggested an interesting, simple experiment. This involved writing computer code to process the sky from the top of the picture down to the top of the highest peak, Vermillion Peak, at right.

For each line I first did a least squares fit of the pixel values as a function of x along the row. From high school math, the equation for a line is y = m⋅x + b where m is the line's slope and b its y-intercept. (The result, y, would represent the pixel value.) So the entire row is reduced to just two numbers.

Because the sky darkens from left to right along each row, meaning the pixel values decrease, we expect these slopes (the m's) to be negative. The y-intercepts (the b's) give the expected pixel values along the left edge of the sky (x=0) based on the entire rest of that row. Due to noise and any non-linearity (curvature) in the run of values, this will usually differ from the actual pixel value in the image at x=0 by a small amount. (More on this later.)
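The per-row step can be sketched with numpy's least squares polynomial fit. The row below is synthetic (a made-up linear trend plus noise), not data from the scan:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sky row (placeholder, not scan data): pixel values falling
# left to right, plus a little grain-like noise.
x = np.arange(6000)
row = 180.0 - 0.002 * x + rng.normal(0.0, 2.0, x.size)

# Least squares fit of pixel value vs x: value = m*x + b,
# reducing the whole row to just two numbers.
m, b = np.polyfit(x, row, 1)
print(f"slope m = {m:.5f}, y-intercept b = {b:.1f}")
```

As expected for a darkening sky, the recovered slope is negative and the intercept lands near the row's left-edge value.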

Next, the slopes and y-intercepts were themselves least squares fit to lines, two of them, as a function of y, that is, from top (y=0) to bottom, reducing the entire sky area down to just four numbers. In math-ese, m(y) = mm⋅y + bm and b(y) = mb⋅y + bb, so the four quantities the program determines are mm, bm, mb, and bb.

Because the rate of left-to-right sky darkening is more or less the same across the sky, we expect the slope of the slopes (mm) to be a very small number near zero. Also, since the sky gets lighter as one goes from top to bottom, meaning the pixel values increase, we expect the slope of the fit to the y-intercepts (mb) to be positive. The y-intercept of the y-intercepts (bb) corresponds to the expected pixel value for the top-left-most pixel (at x,y=0,0) based on all the calculations.

From the four quantities the pixel value for any point in the sky (any x,y) can be calculated: first, y is plugged into the fits for both the slope and y-intercept, m(y) and b(y), yielding the equation for the line for that row; second, x is plugged into this to yield the pixel value at that point along the row.
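Putting the two levels together, any point's value is recovered from the four fitted numbers like this (the values plugged in at the end are hypothetical):

```python
def sky_value(x, y, mm, bm, mb, bb):
    """Evaluate the fitted sky at pixel (x, y).

    m(y) = mm*y + bm gives the slope of row y; b(y) = mb*y + bb gives its
    y-intercept; that row's line then gives the pixel value at x.
    """
    m = mm * y + bm   # slope of this row
    b = mb * y + bb   # y-intercept of this row
    return m * x + b

# Hypothetical fit results: near-zero mm, negative row slope (sky darkens
# left to right), positive mb (sky lightens top to bottom).
value = sky_value(1000, 500, mm=0.0, bm=-0.002, mb=0.01, bb=150.0)
print(value)  # → 153.0
```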

The four quantities essentially define a tilted plane representing the best fit to the entire sky area above the peaks. It can be extrapolated down the additional 30% distance to the lowest gap in the ridgeline, and then used to replace the original sky in the image with a perfectly flat version of it.

The process also has a couple of handy controls. For example, cranking bb down to a lower value darkens the entire sky. The key advantage here is that doing this with a stronger filter at the time the original exposure was made would also have darkened the shadow areas, since they are illuminated almost entirely by skylight, whereas turning bb down affects only the sky and leaves the shadows unchanged.

Another thing that can be done is to increase the contrast of just the sky in either the x or y directions, increasing the range in values between lighter and darker regions. Increasing mb increases the slope (range) in pixel values from top to bottom, while raising bm does this in the orthogonal direction. The last of the four values, mm, is generally not altered; doing so would skew the axes of the darkening/lightening gradient with respect to the x and y directions, giving a very unnatural effect. Maybe someone wants to do this for some reason.
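In code, these controls are just arithmetic on the four fitted numbers. A sketch with hypothetical starting values:

```python
# Hypothetical fitted values, not from the actual image.
mm, bm, mb, bb = 0.0, -0.002, 0.01, 150.0

bb -= 10.0    # darken the whole sky without touching the shadows
mb *= 1.5     # steepen the top-to-bottom gradient (more y contrast)
bm *= 1.2     # steepen the left-to-right gradient (more x contrast)
# mm is left alone: changing it would skew the gradient's axes.
print(mm, bm, mb, bb)
```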

There are two flies in this otherwise perfect ointment, as mentioned above: grain noise and non-linearity (curvature).

The first is most easily dealt with. In fact I had initially run my code on a version of the image in which the entire sky area had been treated with a "salt-and-pepper" (SAP) filter. This was not my own programming, so I'm not sure exactly what it does or how it works. It's in the suite of noise filtering algorithms in the program Paint Shop Pro 7.04 (JASC Software; Anniversary/Y2K Edition; 2000). It basically seems to find pairs of black and white specks of a certain size (or smaller) and replaces them with an average value. It's very slow but reduces any resulting mottling better than a median (or averaging) filter. Highly enlarged example at right.

For my B&W film scans at 2400 DPI, and a filter size setting of 7 or 9 pixels -- roughly 80 microns, or 1/12th mm -- this flattens out the pixel values to the point where the RMS error of the least squares fit versus the original data is less than 2½ for a typical row of pixels. The least squares "coefficient of determination" (usually called r²), which ranges from zero (bad) up to one (perfect), was running ~0.92 for the fit on the rows in the filtered image, which is pretty good. The remaining deviation from a straight line turns out to be due to curvature, not grain noise. When I instead ran the fit on the original, un-filtered image sky, the coefficient of determination dropped into the ¼ range, and the RMS of the fit versus the (noisy) data jumped by a factor of ~6x, to something like 14 pixel values.
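The RMS error and r² quoted here come straight from the fit residuals. A sketch on a synthetic noisy row (the numbers will not match the scan's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic row (placeholder): a linear trend plus grain-like noise.
x = np.arange(6000)
row = 180.0 - 0.002 * x + rng.normal(0.0, 2.0, x.size)

m, b = np.polyfit(x, row, 1)
fit = m * x + b

# RMS error of fit vs data, and the coefficient of determination r².
rms = np.sqrt(np.mean((row - fit) ** 2))
r2 = 1.0 - np.sum((row - fit) ** 2) / np.sum((row - row.mean()) ** 2)
print(f"RMS = {rms:.2f}, r² = {r2:.3f}")
```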

The main cause of the deviation from a better fit is the data's curvature. The graph shows two line tracings and their fits:

The top trace is the lighter one (higher pixel values), and in fact is the last row before the sky runs behind the top of Vermillion Peak. The bottom trace is from higher up in the middle of the sky, where it's a little darker. The vertical scale is such that the tick marks at left and right represent steps of 10 in the pixel values; the values drop by ~14 from the left end of the row to the right end. The non-linearity in both rows is obvious, with the data running above the fit in the middle of the lines and then falling below it at either end. Again, this is for the SAP filtered version, not the original, noisy data.

At the far left you can see a dip, or darkening, in both sets of data. This is due to uneven developer processing along the edge of the film, where the sheet is held down in the processing drum; it extends for ~300-400 pixels, which is ~1/7" (3½ mm). I'm not sure why the same processing unevenness isn't easily visible at the other end of the row, because when looking at the picture it's actually more obvious on the right side than the left. This might have to do with the size scale of the unevenness: the artifact at left is broader and more spread out, so it's less eye-catching than the smaller artifacts at the right edge, which perhaps stand out more against the darker sky there.

The main cause of the curvature is likely the standard geometrical cos⁴ off-axis light fall-off of the lens. This would be consistent with the curvature peaking near the middle of both rows, which are the points closest to the optical axis.

Correcting this is problematical because, for one thing, we don't in general know the location of the optical axis in the picture, though in this case it's likely in the stand of pines on the far side of the expanse of snow in the foreground.

But even if this is correct (or close enough), the magnitude of the adjustment one would want to make is difficult to determine. For a typical point in the sky, the distance from the optical center is ~5000 pixels, which translates at 2400 DPI into a distance of a little more than 2", say 53 mm. For a 135 mm lens the angle is then ~21½°, so the cos⁴ factor is ~¾, or 0.4 stops. In principle this diminution in lens illumination away from the optical axis would get transformed through the film's characteristic curve into a lower density in the developed film, and then through the scanner's density-to-pixel-value curve into a lower 8-bit pixel value. The magnitudes of these last two transformations are difficult to quantify without careful calibration of both, which we generally don't have.
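The cos⁴ arithmetic in this paragraph can be checked directly; the 5000-pixel distance, 2400 DPI, and 135 mm focal length are the numbers from the text:

```python
import math

dist_mm = 5000 / 2400 * 25.4        # off-axis distance on the film: ~52.9 mm
theta = math.atan(dist_mm / 135.0)  # off-axis angle for a 135 mm lens
falloff = math.cos(theta) ** 4      # cos^4 illumination factor
stops = -math.log2(falloff)         # fall-off expressed in stops
print(f"angle = {math.degrees(theta):.1f} deg, "
      f"cos^4 = {falloff:.2f}, {stops:.2f} stops")
```

This reproduces the ~21½°, ~¾ (0.4 stop) figures above.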

In the absence of a good a priori way to correct the pixel values, the best approach would probably be trial-and-error, minimizing the RMS error of the fit versus the data using some sort of adjustable "amount" scaling parameter. Fortunately, we don't have to go to the trouble: by symmetry, effecting this correction wouldn't change the slope of the fit line substantively, though it would raise the y-intercepts (and the whole curves) by several pixel values. We've already seen above that we can do this simply by cranking the "bb" parameter up, lightening the entire sky area.


©2018-19, Chris Wetherill. All rights reserved. Display here does NOT constitute or imply permission to store, copy, republish, or redistribute my work in any manner for any purpose without prior permission.

Tip Me Please!


Your support motivates me to add more diagrams and illustrations!




