"Mathematics is the poetry of the universe."
- Jonathan David Farley

Adaptive Center Weighted Median Filtering

Adaptive Center Weighted Median (ACWM) filtering is a highly non-linear image smoothing and enhancement technique. It's especially effective at reducing both signal-dependent and random noise, and, to some extent, moderate amounts of impulsive noise as well.

To understand what ACWM filtering is all about, one first needs to understand what a median filter is and what it does. In many different kinds of digital picture processing, the basic operation is as follows: at each point (or pixel) in a digital image we place a window around that point, analyze the values of all the pixels in the window (or neighborhood) according to some algorithm, and then replace the original pixel's value with one based on the analysis performed on the pixels in the window. The window then moves successively over every point in the picture, repeating the process.

Median filtering follows this basic prescription. The median is simply the middle value of all the pixel values in the window. Note that this is not the same as the average (or mean); instead, the median has half the values in the window larger and half smaller. (Though, as described near the end, one needn't actually sort the pixels to find it.) The median is a stronger "central indicator" than the average. In particular, the median is hardly affected by a small number of discrepant values among the pixels in the window. ("Small" is relative to the total number of pixels in the filtering window, and "discrepant" depends on the distribution of pixel values in the window...) Consequently, median filtering is very effective at removing various kinds of noise. In this respect, median filtering is superior in many circumstances to simply averaging the values in the window, since an average is pulled toward the outlying value(s) to a much greater degree than the median is.
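To make the difference concrete, here's a minimal sketch in Python (the standard library's statistics module; the window values are made up) comparing the median and mean of a 3×3 window containing a single discrepant value:

```python
import statistics

# Nine values from a 3x3 window of a smooth region, with one
# impulsive outlier (255) among them.
window = [10, 11, 12, 10, 255, 11, 12, 10, 11]

print("median:", statistics.median(window))  # 11: barely affected by the outlier
print("mean:  ", statistics.mean(window))    # 38: dragged toward the outlier
```

The median lands on 11, right in the middle of the smooth values, while the mean is pulled well above every "real" pixel in the window.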

[PNG picture, 33.6k]

In smooth, uniform areas of the picture, the median and the average will differ by very little. To over-simplify a little, the median filter removes noise (by ignoring it), while the averaging filter just spreads it around evenly. The median will typically change by much less than the average does from one window to the next in uniform areas of an image.

The disadvantage of the median filter is that it destroys fine detail in the picture. Anything relatively small compared to the size of the window will have minimal effect on the value of the median, and so will be filtered out. In fact, median filtering is often used for just this purpose. Astronomically, one can practically remove stars from pictures with it. In other words, the median filter can't distinguish fine detail from noise, and it's such a strong smoother that it doesn't take any chances: it removes both the noise and the fine detail, since it can't tell the difference between the two. This is an obvious disadvantage if one is interested in the fine detail. In technical jargon, a median filter is a low-pass filter, meaning it lets the low spatial frequencies pass through while filtering out the high spatial frequencies. Again, "low" and "high" depend on the size of the filtering window.

While there are many different operations that could be attempted on the pixel values in the window, it's proven worthwhile to deal first with relatively simple modifications to the straight median filter. A straightforward way to bias the filter's output so that the pixel at the center of the window is more strongly represented is to give that pixel's value more weight than the others in the window before determining the median, meaning we count it as if it were N pixels ( N > 1 ) rather than just one. This has the effect of preferentially preserving that pixel's value, so both fine detail and noise are better preserved. (But we haven't done anything yet to distinguish the two.) In the extreme, one could give the center pixel a weight equal to the total number of pixels in the window, so that it outweighs all the rest combined, in which case the value of the center pixel is assured of being the output of the median operation. As it turns out, this is the simplest filter of all, called the identity filter, since output = input. (It's the non-filter filter, since no change takes place, but it can be good for testing software.)

So, in general, one can devise a Center Weighted Median (CWM) filter which can be varied over the range from the median filter to the identity filter just by varying the central weight. This corresponds to going from strong noise and detail removal (much smoothing) to none. In straight CWM filtering, the central weight is constant over the entire image.
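As a sketch of the idea (in Python; the function name and the flattened-window representation are my own), giving the center pixel a weight of N just means counting its value N times before taking the median:

```python
import statistics

def cwm(window, center_weight):
    """Center Weighted Median of a flattened, odd-sized window.
    The center pixel's value is counted center_weight times (an odd
    integer >= 1).  A weight of 1 gives the plain median filter; a
    weight equal to the number of pixels in the window guarantees
    the center value is the output, i.e. the identity filter."""
    center = window[len(window) // 2]  # middle element of the flattened window
    return statistics.median(list(window) + [center] * (center_weight - 1))

window = [10, 11, 12, 10, 255, 11, 12, 10, 11]  # 3x3 window, center value 255
print(cwm(window, 1))   # weight 1: plain median, outputs 11
print(cwm(window, 9))   # weight = window size: identity, outputs 255
```

Varying `center_weight` between those two extremes slides the filter continuously between full median smoothing and no filtering at all.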

The Adaptive Center Weighted Median (ACWM) filter varies the center's weight from one point to the next by a technique designed to distinguish noise from fine detail. When the center pixel in the window is deemed to be more likely to be noise than signal, we want to lean towards outputting the median. Conversely, we want the output biased towards the identity filter when we think we've got signal.

A simple scheme which can be employed to distinguish noise from detail is to compare the variance of the pixel values in the window (computed about the window's mean) to a base (or reference) variance for the image as a whole. (Recall that the variance is equal to the RMS squared; one could just call it the MS...) For many kinds of images the value of this base variance may be hard to determine, but for many astronomical images of the night sky (especially where most of the picture is just blank sky) a base variance is easily estimated from statistics on the empty sky background.

The ACWM filter, then, works like this: in smooth, slowly varying parts of the image, the window's variance is similar to the base variance, and the output is the median of the window's values -- the average would probably do almost as well -- noise being strongly suppressed. When the window's variance is large compared to the base value, we're in a strongly varying part of the image, which we presume to be due to the presence of signal such as a star; the center pixel's value is then strongly weighted (more or less, depending on how far the window variance is above the base variance), and the output tends toward the identity filter.

In technical terms, we want to weight the center pixel (i,j) in the W×W window (W is an odd integer) by 2·K(i,j) + 1, where K(i,j) = L·R(i,j) is rounded to a non-negative integer, before computing the resulting median. L is a convenient measure of the size of the window for this purpose: 2·L + 1 = W² is the number of pixels in the window; and

R(i,j) = [ Vx²(i,j) - Vn² ] ÷ Vx²(i,j)

is a normalized measure of the noise level in the window around pixel (i,j). Vn² is a sort of base level for the variance in areas of the image that are random noise (rather than signal) dominated. Consequently,

0 ≤ R(i,j) < 1.

This can be assured by judicious choice of Vn or just by re-setting negative values to zero. R=0 is the straight median filter; R=1 is the identity filter.
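A sketch of this weighting rule in Python (function and parameter names are mine; `statistics.pvariance` gives the window variance Vx² about the window's mean):

```python
import statistics

def acwm_weight(window, base_var, k_cap=None):
    """Adaptive center weight 2*K + 1 for one flattened W x W window,
    following the definitions above: 2*L + 1 = W*W, K = round(L * R),
    R = (Vx^2 - Vn^2) / Vx^2 with negative values reset to zero."""
    n = len(window)                     # W*W pixels in the window
    L = (n - 1) // 2                    # so that 2*L + 1 = W*W
    vx2 = statistics.pvariance(window)  # Vx^2: variance about the window mean
    r = 0.0 if vx2 <= base_var else (vx2 - base_var) / vx2
    k = round(L * r)
    if k_cap is not None:               # optional ceiling on K, so the filter
        k = min(k, k_cap)               # never becomes the pure identity filter
    return 2 * k + 1

smooth = [10, 11, 12, 10, 11, 11, 12, 10, 11]   # variance ~ base
star   = [10, 11, 12, 10, 200, 11, 12, 10, 11]  # variance >> base
print(acwm_weight(smooth, statistics.pvariance(smooth)))  # 1: plain median
print(acwm_weight(star, 1.0))                             # 9: identity
```

The smooth window gets a center weight of 1 (straight median), while the window containing the bright pixel gets the full weight of 9 = W² and passes its center through unchanged.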

A method for getting a first estimate of Vn² when one has no a priori value at hand is to find the median of the entire image and then compute the RMS (=V) about that median. A slightly refined value can be had by restricting the RMS estimate to values within ±A·RMS of the median, where in principle A depends on the percentage of the image occupied by signal.

In the usual astronomical case, this strategy works well only if the image has been flat-fielded; otherwise, the width of the central hump in the histogram representing the background might depend more on the degree to which the field is not flat -- a global property of the image -- than on the degree of noise in the image -- the local property we want to reference. In that case, which may still be of some relevance since no flat-fielding is perfect, we end up over-estimating Vn², causing R(i,j) to be under-estimated, especially in smooth (non-signal) areas of the image, and leading to stronger median smoothing. A helpful diagnostic is the minimum Vx²(i,j) encountered in the real image, to which Vn² can be set for the subsequent actual application of the filter.
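A sketch of that first-pass estimate in Python (the clipping loop, its iteration count, and the test data are my own choices):

```python
import statistics

def estimate_base_variance(pixels, a=3.0, iterations=3):
    """First estimate of Vn^2: the RMS about the global median, refined
    by keeping only pixels within +/- a*RMS of the median (a crude clip
    to reject signal-dominated pixels) and re-computing."""
    med = statistics.median(pixels)
    rms = (sum((p - med) ** 2 for p in pixels) / len(pixels)) ** 0.5
    for _ in range(iterations):
        kept = [p for p in pixels if abs(p - med) <= a * rms]
        if not kept:
            break
        rms = (sum((p - med) ** 2 for p in kept) / len(kept)) ** 0.5
    return rms ** 2

# Mostly blank sky around 100 counts, plus a few bright "star" pixels:
pixels = [99, 100, 101] * 30 + [5000] * 3
print(estimate_base_variance(pixels))   # ~0.67: the star pixels are clipped out
```

Without the clip, the three star pixels would inflate the variance estimate by several orders of magnitude; after one pass they're rejected and only the sky scatter remains.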

In the limit of an image containing nothing but empty sky, and having noiseless electronics in the image producing system, the noise due to the sky background is

Vs² = S ÷ [ N · G ],

where S is the sky counts, G is the system gain (e⁻'s per ADU), and N is the number of pictures being averaged over. The theoretical base variance would then be

Vn² = Vs² + Ve²,

where Ve is the electronics noise. Using this value for Vn² should result in about half the R(i,j)'s being > 0 and about half being reset to 0 for being negative. As Vn² is raised, we expect more resets and fewer R(i,j)'s > 0. When we add stars to the image, both populations are diminished in favor of the number of R(i,j)'s ~ 1; this latter number increases with increasing window size, since there's then a greater likelihood of a star being in a given window. At any rate, the R(i,j) are at the center of what the filter does, and much can be learned about the ACWM filter's behavior by studying their statistics.
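Putting the two formulas together (a sketch; the numbers below are made up for illustration):

```python
def base_variance(S, G, N, Ve):
    """Theoretical base variance Vn^2 = Vs^2 + Ve^2, with the sky-noise
    term Vs^2 = S / (N * G).  Symbols as defined above: S = sky counts,
    G = gain in electrons per ADU, N = number of frames averaged,
    Ve = electronics noise (in the same units as the pixel values)."""
    return S / (N * G) + Ve ** 2

# e.g. 400 counts of sky, gain of 2 electrons/ADU, one frame, Ve = 3:
print(base_variance(400, 2, 1, 3))   # 400/2 + 9 = 209.0
```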

To prevent impulsive (one- or two-pixel) noise from passing through the filter unchanged, a ceiling can be put on the center's weight so that the filter never actually becomes the identity filter. This is the converse of putting a floor of 0 on R(i,j): instead, its maximum is limited to a value < 1 .

Thus, the ACWM filter can both remove noise and preserve detail, continuously varying to suit the local conditions. One of the Bisque Bros. refers to this kind of filter as a "smart filter", which is an appropriate description. It's not the best filter to remove impulsive noise, though it does a tolerably good job of this in conjunction with its other functions.

One distinct disadvantage of this type of filter is that it is not what might be called flux preserving. In short, it can change the brightness of stars -- and anything else, too. So any photometry derived from ACWM filtered images should be treated as approximate only. One could in principle generate an image showing where the filter is acting like the identity filter (or close to it), and thus preserving pixel brightnesses, and only use those areas for photometry, but this might in general be such a small fraction of the image as to be next to worthless. It might be possible with some experimentation to determine photometric corrections as a function of measured brightness if one were processing a large number of images taken and processed the same way, but I'll leave that for others.

Finally, I'm sorry I don't have any before-and-after images to show. I had some working code running back around the time of Comet Hyakutake in 1996, but it was on a friend's 486 laptop that he'd borrowed from work and briefly loaned to me, a computer he could check out for a couple of days when he travelled on business. (Strictly speaking my use was unauthorized and he could have gotten in trouble.) Both the code and the images have long since evaporated.
Transferring code and/or pictures between computers was a non-trivial task back then, and my home computer was an Amiga 1000, which not only didn't run QBASIC (thank lucky stars) but had only a 4-bit display, meaning 16 levels of grey, so everything looked posterized anyway. It ran AmigaBASIC, probably one of the best programs Microsoft ever made (especially compared to QBASIC), but being an interpreted BASIC on a slow processor, it was hopelessly slow for image processing.
Anyway, I've always wanted to re-create the code, as it was really spiffy: it automatically adjusted the filtering window size at all four edges until the window was up to full size, so the images didn't have to be padded by half the window size. It calculated the median by maintaining a histogram of pixel values within the window (rather than by sorting), a trick taught to me by Richard Kron back in grad school days. One just sums the bins of the histogram up from zero until half the total number of pixels has been reached; this is much faster than sorting. As the filter moves along a row of the image, one is subtracted from the appropriate bin for each pixel in the column leaving the window, and one is added for each pixel in the column entering it. Even though the coding is a bit more complex this way, it runs very fast.
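A sketch of that histogram trick in Python for 8-bit pixel values (names are mine, and edge handling is omitted: the row is assumed to lie at least half a window from the top and bottom of the image):

```python
def histogram_median(hist, count):
    """Median from a histogram of 8-bit pixel values: sum the bins up
    from zero until half the total count is reached (no sorting)."""
    half = (count + 1) // 2
    running = 0
    for value, n in enumerate(hist):
        running += n
        if running >= half:
            return value

def running_median_row(image, row, w):
    """Slide a w x w window (w odd) along one interior row of an 8-bit
    image, updating the histogram incrementally: subtract the column
    leaving the window, add the column entering it."""
    r = w // 2
    hist = [0] * 256
    # Build the histogram for the first window position in this row.
    for y in range(row - r, row + r + 1):
        for x in range(w):
            hist[image[y][x]] += 1
    medians = [histogram_median(hist, w * w)]
    for x in range(r + 1, len(image[0]) - r):
        for y in range(row - r, row + r + 1):
            hist[image[y][x - r - 1]] -= 1   # pixel in the column leaving
            hist[image[y][x + r]] += 1       # pixel in the column entering
        medians.append(histogram_median(hist, w * w))
    return medians

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12]]
print(running_median_row(image, 1, 3))   # [6, 7]
```

Each shift of the window costs only 2·W bin updates plus the bin summation, instead of a fresh O(W² log W²) sort.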


Reference (technical): Ko, Sung-Jea, and Lee, Yong Hoon (1991), "Center Weighted Median Filters and Their Applications to Image Enhancement," IEEE Transactions on Circuits and Systems, Vol. 38, No. 9, Sept. 1991, pp. 984-993.

See also: Image Processing Algorithms


------------------------------- -------------------------------

Thanks for visiting.
Last modified: 5 Jun 2002
[some additional notes added 15+ yrs later]