The Kell factor, named after RCA engineer Raymond D. Kell,[1] is a parameter, usually taken to be about 0.7, used to limit the bandwidth of a sampled image signal so as to avoid beat-frequency patterns when the image is shown on a discrete display device. Kell and his associates first measured the value in 1934 as 0.64, but it has been revised several times since, because it is based on image perception, and is therefore subjective, and because it depends on the type of display.[2] It was later revised to 0.85, and it can exceed 0.9 when fixed-pixel scanning (e.g., CCD or CMOS sensors) and fixed-pixel displays (e.g., LCD or plasma) are used, or be as low as 0.7 for electron-gun scanning.
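To illustrate how the factor enters a bandwidth calculation, consider a rough worked example using assumed round figures for an NTSC-like 525-line system (about 480 active lines, a 4:3 aspect ratio, and roughly 52.6 µs of active line time; these figures vary slightly between sources and are assumptions here). Requiring equal horizontal and vertical resolution per picture height gives approximately

\[
\begin{aligned}
\text{vertical resolution} &\approx K \cdot N_\text{active} = 0.7 \times 480 \approx 336 \text{ TV lines},\\
\text{cycles per active line} &\approx \tfrac{1}{2} \cdot K \cdot N_\text{active} \cdot \tfrac{4}{3} \approx 224,\\
\text{luminance bandwidth} &\approx \frac{224}{52.6\ \mu\text{s}} \approx 4.3\ \text{MHz},
\end{aligned}
\]

which is of the same order as the roughly 4.2 MHz luminance bandwidth actually used in NTSC.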
Viewed another way, the Kell factor defines the effective resolution of a discrete display device, since its full nominal resolution cannot be used without degrading the viewing experience. The actual sampled resolution depends on the spot size and its intensity distribution: for electron-gun scanning systems the spot usually has a Gaussian intensity distribution, whereas for CCDs the distribution is roughly rectangular and is also affected by the sampling grid and the inter-pixel spacing.
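As a simple illustration of effective resolution (the 1080-line count below is an arbitrary example, not a figure from the original measurements), the usable vertical resolution can be estimated as the scan-line count multiplied by the Kell factor:

\[
N_\text{eff} = K \cdot N_\text{lines}, \qquad
0.7 \times 1080 \approx 756 \text{ lines (electron-gun scanning)}, \qquad
0.9 \times 1080 \approx 972 \text{ lines (fixed-pixel capture and display)}.
\]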
The Kell factor is sometimes incorrectly described as accounting for the effects of interlacing. Interlacing itself does not change the Kell factor, but because interlaced video must be low-pass filtered (i.e., blurred) in the vertical dimension to avoid spatio-temporal aliasing (i.e., flickering effects), the Kell factor of interlaced video is often said to be about 70% of that of progressive video with the same scan-line count.
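For example, taking these figures at face value (the 1080-line count is again only illustrative), a 1080-line interlaced picture with a progressive Kell factor of 0.7 would have an effective vertical resolution of roughly

\[
1080 \times 0.7 \times 0.7 \approx 529 \text{ lines},
\]

compared with about 756 lines for 1080-line progressive video.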