First of all, why can’t a CCD detect colors? What happens inside a CCD is rather simple: when a photon hits the CCD’s substrate, it “generates” electrons. More precisely, the photon transfers its energy to an electron in the valence band, promoting it to the conduction band. This happens only if the photon carries enough energy: there is a minimum energy (the band gap) below which an incoming photon generates no electrons at all. A photon’s energy is tied to its wavelength (E = hc/λ) and hence to its “color”, but above that threshold electrons are generated regardless of the color of the incident light. For this reason the CCD is said to be “panchromatic”, although it is usually, and improperly, referred to as “monochromatic”.
So, how come we obtain colored pictures from an inherently panchromatic sensor? The trick consists of covering the photosensitive area with a color filter array (CFA), so that each pixel is covered with a colored glass according to a certain pattern. The most widely used pattern is the “Bayer CFA pattern”, which consists of a 2×2 matrix repeated both horizontally and vertically:

G R
B G

where R=red, G=green, B=blue. The repeated pattern yields:

G R G R
B G B G
G R G R
B G B G
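As a concrete illustration (a sketch, not code from any particular sensor driver), the tiling can be reproduced in a few lines of NumPy. The G R / B G orientation is assumed here; real sensors differ in which corner of the tile comes first, but the idea is identical:

```python
import numpy as np

def bayer_mask(rows, cols):
    """Label map of the Bayer CFA, tiling the assumed 2x2 G R / B G pattern."""
    tile = np.array([["G", "R"],
                     ["B", "G"]])
    reps = ((rows + 1) // 2, (cols + 1) // 2)
    return np.tile(tile, reps)[:rows, :cols]

# A 4x4 sensor crop: half the pixels are green, a quarter red, a quarter blue.
print(bayer_mask(4, 4))
```

Note that green occupies half of the pixels (two corners of each 2×2 tile), which matches the higher sensitivity of the human eye to green light.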
We are getting to the gist. As an example, consider a pixel covered with a blue filter. It only carries information about the intensity of the blue light hitting it. However, it is surrounded by 4 green pixels and 4 red pixels. Hence, although the blue pixel under consideration has no information about either the green or the red light hitting it, those values can be estimated by interpolation, exploiting the neighboring pixels.
This is done off-chip: the sensor simply outputs the rows GRGR… and BGBG… alternately. This output format is known as “sequential RGB” (sRGB, not to be confused with the sRGB color space).
Note that by covering all the pixels with the color filter array, each filter letting only a fraction of the electromagnetic spectrum pass through, we reduce the light received by each pixel to roughly one third of what it would otherwise collect. That’s why low-light applications (e.g. astronomy) use panchromatic sensors, resulting in black-and-white images.
The off-chip interpolation is far from trivial, for several reasons.
First of all, it is inherently inaccurate, because it can only ever be an (educated) guess. This is especially true near edges and fine details, that is, where the color of the original image changes abruptly on the scale of the filter pattern. Moreover, the interpolation cannot be a simple linear average: each colored pixel’s response is affected by the glass transmission (different for each color) and by the quantum efficiency (again, different for each color). Besides, the human eye does not perceive colors the way a sensor does, so a further correction is needed.
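As a rough sketch of that further correction, one can apply per-channel gains after interpolation, compensating for the color-dependent transmission and quantum efficiency (in practice this is part of white balance and color correction). The gain values and function name below are purely illustrative assumptions, not measured sensor data:

```python
import numpy as np

def apply_channel_gains(rgb, gains):
    """Rescale each channel of an (H, W, 3) float image in [0, 1].

    gains: (gR, gG, gB) -- illustrative per-channel correction factors.
    """
    return np.clip(rgb * np.asarray(gains), 0.0, 1.0)

# Example: boost red and blue relative to green (made-up gains).
balanced = apply_channel_gains(np.full((2, 2, 3), 0.4), (1.9, 1.0, 1.6))
```

Real pipelines use a full 3×3 color correction matrix rather than independent gains, precisely because the eye’s color response mixes the channels; the diagonal-gain version above is only the simplest possible form of the correction.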