Artifacts in Digital Images

 

Artifacts in digital images are unwelcome and unnatural elements or distortions. While they abound in reproductions of all kinds, they often go unnoticed, perhaps because we are so used to them. Problems that we perceive in images are often the result of several causes combined, but let's try to look at some of the more prominent ones individually.

Noise
 
We are all familiar with the idea of noise in sound, but the same concept applies in the visual and electronic context. The Charge Coupled Devices (CCDs) used in just about every digital camera and desktop scanner today characteristically produce several different types of noise. Electronic fixed-pattern and "dark" noise is present all the time in these devices, so just before an exposure is made, a digital camera will take what's known as a dark reading to ascertain the signal from each sensor in its CCD array. When the actual exposure is made, the camera then subtracts the dark-noise signal from the exposure signal. This is one of the reasons why there is always a momentary delay between the moment you press the shutter button and the moment the picture is actually made.
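In code, that dark-reading correction amounts to a per-sensor subtraction. Here is a minimal conceptual sketch with NumPy, using synthetic 8-bit data; real cameras do this in firmware on raw sensor values.

```python
# Dark-frame subtraction as described above: read the sensors with the
# shutter closed, then subtract that reading from the actual exposure.
import numpy as np

def subtract_dark_frame(exposure: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Subtract per-sensor dark noise, clipping so values stay valid."""
    corrected = exposure.astype(np.int32) - dark.astype(np.int32)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Synthetic example: a faint fixed-pattern dark reading contaminating
# a flat mid-gray exposure.
rng = np.random.default_rng(0)
dark = rng.integers(0, 12, size=(480, 640), dtype=np.uint8)   # dark reading
exposure = np.clip(128 + dark.astype(np.int32), 0, 255).astype(np.uint8)
clean = subtract_dark_frame(exposure, dark)                   # uniform 128
```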

The dark-noise subtraction method is never perfect, particularly when the sensors are hot. For this reason some professional-level and astronomical cameras use built-in refrigeration systems to keep the sensor cool. The signal-to-noise ratio is somewhat dependent on the "fill factor" of the pixel, that is, the percentage of the sensor element that gathers photons. Smaller elements are inherently noisier, and a very low signal-to-noise ratio, or a very weak signal, leads to problems that are hard to solve. Noise is also introduced during the analog-to-digital (A/D) conversion of the image data. Much research has been done in the field of CCD noise suppression, and enough progress has been made that modern CCD cameras and scanners are much better than their predecessors.

CCD sensors have a bias toward the red and infrared end of the energy spectrum. If you look at the sensor you will usually see a bluish filter or filtration layer over it, designed to mask out sensitivity to unwanted infrared wavelengths and thereby increase the relative signal from the blue end of the visible spectrum. Because of this insensitivity at the blue end, it's in the blue channel that you will almost certainly find the most noise in a digital photograph or scan. In some cases it can be helpful to open the blue channel in software such as Adobe Photoshop and apply a moderate amount of softening. Expect more noise to appear in low-light situations, where the noise-to-signal ratio is at its highest. Perhaps surprisingly, a lack of noise can also be a problem. One reason that computer-generated images, such as those from 3D rendering programs, look so artificial is that they lack noise: gradations are perfect in tone and color, and edges are unnaturally abrupt. Keep this in mind if you are retouching photos and applying gradations or tone fills; it pays to add a little noise.
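The blue-channel softening suggested above can also be scripted. A minimal sketch using Pillow; the filename and blur radius are placeholders to adjust by eye.

```python
# Soften only the blue channel, where CCD noise tends to concentrate.
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("RGB")
r, g, b = img.split()

# A small Gaussian blur on the blue channel alone; red and green keep
# their full sharpness, so overall detail is largely preserved.
b_soft = b.filter(ImageFilter.GaussianBlur(radius=1.0))

result = Image.merge("RGB", (r, g, b_soft))
result.save("photo_blue_softened.jpg", quality=95)
```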


Blooming

Blooming, or light spill-over, is a problem caused by photons spilling from one sensor element to another, creating what can be a whole region of overfill and resulting in highlight blowout and/or strange color in the affected areas. Larger sensor elements can collect and contain photons better than the smaller ones found in most of today's consumer-level digital cameras. CMOS sensors, which have drawbacks of their own, are actually better in this regard. If this effect bothers you, it's best to avoid subject matter with bright reflections.


Pixelation

When a relatively small sensor array is used to create an image, pixelation becomes very apparent. Larger sensor arrays are more expensive but supply enough information to produce a more lifelike picture. Pixelation is most noticeable as stair-stepping on diagonal lines, which come out jagged instead of straight and smooth. On type this effect is sometimes called the "jaggies," but it's essentially the same problem in photo imaging.
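You can reproduce the effect synthetically by enlarging a tiny image without any smoothing. A minimal sketch with Pillow, drawing a diagonal line and blowing it up with nearest-neighbor resampling.

```python
# Demonstrating the "jaggies": enlarging a small image with nearest-neighbor
# resampling exposes the stair-stepping on diagonals that pixelation produces.
from PIL import Image, ImageDraw

small = Image.new("L", (32, 32), color=255)
draw = ImageDraw.Draw(small)
draw.line((0, 31, 31, 0), fill=0, width=1)  # one diagonal line

# Blow it up 10x; NEAREST preserves hard pixel edges, so the
# stair-stepping is plainly visible.
jaggy = small.resize((320, 320), resample=Image.Resampling.NEAREST)
jaggy.save("jaggies_demo.png")
```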


Interpolation

[Photo: The lipstick is red and the highlights are white, but they display an effect known as "Christmas tree lights" due to the color filter mosaic over the sensor array.]

A variation on this theme is "Christmas tree lights," or color aliasing: artifacts that are a function of the way color filtration is laid down on the sensor, and that are particularly apparent when an image is greatly enlarged. On many sensors, Red (R), Green (G) and Blue (B) filtration is applied in a mosaic pattern such as RGBGRGBG..., with twice as many green pixels as red or blue. This is because human vision, which we want to emulate, is most sensitive to the green wavelengths, and also because the camera uses the green reading to compute luminance. The resulting pattern, blown up, particularly on diagonal lines, is an unreal mosaic of colors produced by averaging between adjacent pixels. A less-than-perfect workaround for this problem is to desaturate the color in the specific areas of an image where fringing is apparent.
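That desaturation workaround is easy to script. A minimal sketch with Pillow; the filename and region coordinates are hypothetical placeholders.

```python
# Knock the color out of a region showing "Christmas tree lights"
# fringing, leaving luminance detail intact.
from PIL import Image, ImageEnhance

img = Image.open("photo.jpg").convert("RGB")
box = (100, 100, 200, 180)  # hypothetical region with color aliasing

region = img.crop(box)
# 0.0 would be fully grayscale; a partial value keeps a hint of color.
desat = ImageEnhance.Color(region).enhance(0.2)
img.paste(desat, box)
img.save("photo_defringed.jpg", quality=95)
```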

There are many different ways to expand an image's size. Some well-known examples are the Nearest Neighbor, Bilinear and Bicubic methods, which can be chosen in Adobe Photoshop's Preferences; another, from Live Picture, is a mixture of concatenation and pixel decimation. In the Photoshop methods the basic tradeoff is between speed and quality, but while Bicubic interpolation is widely regarded as the best method, it may not always give the most pleasing results.

If you are experiencing "ghosting" on diagonal lines, for instance, it may be better to change your software's preferences and try Bilinear instead. The Live Picture concatenation algorithms, which work so well on continuous-tone image sections, fall down somewhat on hard-edged lines, particularly when the lines are not exactly vertical or horizontal. Camera manufacturers create their own interpolation systems, specific to the task and secret unto themselves. Unfortunately, if you don't like their interpolation regime, you're stuck with it.
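If you want to compare methods on your own images, most imaging libraries expose the same family of filters. A minimal sketch with Pillow; its filters parallel the Photoshop options in spirit, though the implementations are not identical, and the filename is a placeholder.

```python
# Enlarge the same image with three interpolation methods side by side.
from PIL import Image

img = Image.open("photo.jpg")
target = (img.width * 3, img.height * 3)

for name, method in [
    ("nearest", Image.Resampling.NEAREST),    # fastest, jagged edges
    ("bilinear", Image.Resampling.BILINEAR),  # smoother, can look soft
    ("bicubic", Image.Resampling.BICUBIC),    # usually best, can ring on edges
]:
    img.resize(target, resample=method).save(f"enlarged_{name}.png")
```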

 
Compression

Data file compression can be divided into two obvious camps. "Non-lossy" (lossless) compression implies that there is no loss of image quality in the process, but usually doesn't afford much decrease in data size. "Lossy" compression, as the name suggests, involves shedding data and therefore implies a loss of image quality, particularly at highly compressed settings. The most common, ubiquitous even, lossy file format is JPEG, so called because it was proposed, and is maintained, by the Joint Photographic Experts Group. Just about every digital camera on the market can save to this format. Note that there are a number of JPEG variants, but those originating from cameras are all readable by common imaging software.

A radical amount of compression produces a very small file, but the cost is in clarity. Mushiness, blocking and color jumps are typical problems with over-compression.

What varies most obviously is the amount of compression applied to the image data. This can range from, say, 1:4 (great quality) to 1:28 (rather poor), with each camera manufacturer deciding what compression options to offer and what mathematical formulas will be used to achieve them. The worst results come from high compression of small data sets, such as you would get from cameras with small sensor arrays. So what are JPEG compression artifacts likely to look like? It depends to some extent on the algorithms used, but generally speaking, heavier compression is likely to produce "mushy" areas that lack sharpness, especially obvious in the flat areas of an image. Overemphasized edges and unnatural color distribution are other common artifacts, and you are also likely to see random pixels that are quite different from those surrounding them. Note too that because compression is done last, artifacts from earlier processing, such as sharpening and color saturation, are likely to be compounded.

For some applications, such as displaying thumbnail images on a web page, high ratios can be quite acceptable, but for best results use compression sparingly. Wondering just how much compression to use? There is no set rule apart from: try it and see. It's important to see the results as your viewer would, in final form, such as on a print or on the computer screen. Be aware also that most image-manipulation software will show you the image at its original quality, BEFORE compression was applied. You have to close the file and reopen it in its new compressed form to see exactly what it looks like.
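"Try it and see" is easy to automate. A minimal sketch with Pillow that saves the same image at several arbitrary quality settings, prints the resulting file sizes, and reopens each file, since the in-memory original still shows the pre-compression data.

```python
# Save at a range of JPEG quality settings and reopen each result.
import os
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

for quality in (95, 75, 50, 25, 10):
    path = f"test_q{quality}.jpg"
    img.save(path, quality=quality)
    reopened = Image.open(path)  # inspect this, not the in-memory original
    print(f"quality={quality:3d}  size={os.path.getsize(path):>8,} bytes")
```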

 
Sharpening and Unsharp Masking

Some loss of perceived sharpness occurs at the capture stage with any sensing device. In film capture there is some chemical compensation for this, due to increases in edge contrast. Digital techniques that produce similar effects have been used with scanners since the inception of digital imaging. Various digital sharpening routines are available, but most depend on the addition of edge contrast in one form or another. In the days when all graphic-arts procedures were film based, a technique was developed for increasing edge contrast by first making an unsharp monochrome negative and then generating a soft but contrasty positive, which was then sandwiched with the original. This type of analog unsharp masking can be emulated very well by digital sharpening filters, which today carry the anomalous description "Unsharp Masking."

Because of the nature of current CCD image capture, consumer-level digital cameras apply Unsharp Masking routines after the image data has been interpolated. You, the user, have no choice as to how this is done or how much is applied, and unfortunately there is also no fix for overuse of Unsharp Masking, other than buying another brand of camera.

You can see from our example, done in software, what the effects look like. As more Unsharp Masking is applied, and as it is applied over a wider and wider edge radius, a characteristic shadow/halo effect becomes apparent. It will often appear as ghosting or a black line around hard edges. You will not see radical effects like this in original camera images, but now that you know what it looks like you may well recognize its presence in small amounts, particularly in images from cameras with small sensor arrays.
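The film-era procedure translates directly into the digital filter: blur a copy of the image, take the difference from the original, and add a scaled portion of that difference back. A minimal sketch with NumPy and Pillow (which also ships a ready-made ImageFilter.UnsharpMask); the radius and amount values are illustrative only.

```python
# Unsharp masking: sharpened = original + amount * (original - blurred).
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(img: Image.Image, radius: float = 2.0, amount: float = 0.8) -> Image.Image:
    """Sharpen by adding back the edge detail lost to a Gaussian blur."""
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    orig = np.asarray(img, dtype=np.float32)
    soft = np.asarray(blurred, dtype=np.float32)
    # Pushing radius and amount too high produces the halo and ghosting
    # artifacts described in the text.
    sharpened = orig + amount * (orig - soft)
    return Image.fromarray(np.clip(sharpened, 0, 255).astype(np.uint8))

result = unsharp_mask(Image.open("photo.jpg").convert("RGB"))
result.save("photo_usm.jpg", quality=95)
```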

 
24-Bit Inadequacies

Is more than 16 million colors enough? You'd think so, wouldn't you? There are cases, however, when more would be better. Consider, for instance, photographing a rose with vivid red colors. Your 24-bit palette has, in effect, just 256 levels of pure red with which to portray the flower's color subtlety, and this number isn't nearly enough. That's why a capture system that's 30-bit or 36-bit is better: more color and tone choices. Even if the end system, such as your computer screen, is only 24-bit, more bits in the beginning mean that you can choose the BEST 24 bits.
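The arithmetic behind this is simple: the number of levels per channel doubles with each added bit. A small worked example in plain Python.

```python
# Tonal resolution per channel at different total RGB bit depths.
for total_bits in (24, 30, 36):
    bits_per_channel = total_bits // 3   # RGB: three channels
    levels = 2 ** bits_per_channel       # distinct values per channel
    print(f"{total_bits}-bit RGB -> {bits_per_channel} bits/channel, "
          f"{levels:,} levels of pure red")

# 24-bit RGB -> 8 bits/channel, 256 levels of pure red
# 30-bit RGB -> 10 bits/channel, 1,024 levels of pure red
# 36-bit RGB -> 12 bits/channel, 4,096 levels of pure red
```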


Stepping

A problem peculiar to scanners and line-scanning cameras, where a line of sensors moves steadily across a gate, is that of stepping. Causes vary from harmonics with pulsing in the light source, to electrical signal noise, to more obscure reasons. It's most common and noticeable in the shadow regions, where the sensors are pushed to their limit, but it can appear in any part of the image. Here's a little hint when evaluating these devices: create a scan, then in a program such as Photoshop view it at 100%, go to Curves or Levels, radically lighten the image, and look for stepping artifacts. These will appear as lines of differing density and/or color.
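The same shadow-stretching test can be done in code. A minimal sketch with NumPy and Pillow, assuming a scan saved as "scan.png"; the strong gamma lift stands in for a radical Levels adjustment.

```python
# Radically lighten a scan so faint shadow-region stepping becomes visible.
import numpy as np
from PIL import Image

scan = np.asarray(Image.open("scan.png").convert("RGB"), dtype=np.float32) / 255.0

# A strong gamma lift, roughly what dragging a Levels midpoint slider far
# to the left does: shadows are stretched apart, so any banding (lines of
# differing density or color) stands out.
lifted = scan ** (1.0 / 4.0)

Image.fromarray((lifted * 255).astype(np.uint8)).save("scan_lifted.png")
```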
