One half of the Nobel Prize in Physics this year was awarded to Willard Boyle and George E. Smith, who developed charge-coupled devices (CCDs) at Bell Labs in the late 1960s. Their work combined attempts to make 'picture phones' with Bell's focus on bubble memory, which used thin films of magnetic materials to hold individual bits of information in ordered arrays.
A brief review: Light can be modeled as photons, which are characterized by a wavelength λ and a frequency f. Those quantities are connected by the speed at which the wave travels (which, for electromagnetic waves, is the speed of light). c = f λ, which means that the wavelength decreases as the frequency increases.
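That relation is easy to check numerically. A minimal Python sketch (the 540 THz example for green light is mine, not from the post):

```python
# Convert between frequency and wavelength for an electromagnetic wave
# using c = f * lambda.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(frequency_hz):
    """Wavelength in meters for a given frequency in hertz."""
    return C / frequency_hz

def frequency(wavelength_m):
    """Frequency in hertz for a given wavelength in meters."""
    return C / wavelength_m

# Green light at about 540 THz has a wavelength near 555 nm.
lam = wavelength(540e12)
print(f"{lam * 1e9:.0f} nm")
```

Doubling the frequency halves the wavelength, exactly as the formula says.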
Even though humans can see only a very small portion of the electromagnetic spectrum, we are somewhat obsessed with transforming what we see into a format that allows us to share it with other people, or with putting it in giant piles in the back of a closet that we really do intend to organize someday. Really.
The camera obscura, a system of lenses used to project images, was known by the 1000s CE, but it was an aid for drawing – there was no way to save the images. Daguerre developed a process in 1839 that employed copper plates and mercury vapor; that technique captured images, but the images were very unstable and had to be treated extremely carefully. Also, the process was extremely unlikely to ever be approved for installation in your local Walgreens due to OSHA issues.
Before film, photographs were taken on glass plates, which produce much more durable images, but are very difficult to carry in your wallet or purse. Regardless of the base, all of these image-capture technologies rely on the selective chemical transformation of silver halides (silver bromide, silver iodide and silver chloride) into silver.
Eastman Kodak is credited with the first flexible (although not transparent) film in 1885. The photographic film we use today starts with a layer of transparent plastic about 0.004-0.007 inches thick. The image, however, is not captured in the plastic, but rather on top of it in a layered structure that is usually less than a thousandth of an inch thick. The layers are made of gelatin into which silver halide particles (usually in the size range of tenths of a micron to tens of microns) have been mixed. Given the right growth conditions, you can grow silver halides so that they form flat crystals, usually triangular or hexagonal in shape. This mix of crystallites and gelatin is usually called an emulsion; however, it's actually a dispersion. An emulsion technically requires two liquids that don't mix.
The silver halide grains are important because a photon of light hitting a grain can give an ion pair in the crystal (a positively charged Ag+ ion and a negatively charged Br- ion) enough energy to free an electron from the bromide ion, leaving behind a silver ion (Ag+), a neutral bromine (Br) atom and a mobile electron.
The electron then combines with the silver ion to form a neutral silver atom.
Silver halides are sensitive only to blue and ultraviolet light, which on its own would make for some sort of gothic-looking pictures. To capture the rest of the spectrum, molecules sensitive to other wavelengths of light are attached to the silver halide grains. Wavelengths that normally wouldn't affect the silver halide create photoelectrons in the attached molecules. Once promoted to the conduction band of the molecule, the photoelectrons can cross over into the conduction band of the silver halide particle and proceed as if the silver halide particle had created them itself.
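A back-of-the-envelope calculation shows why only blue and UV photons can do the job on their own: a photon's energy E = hc/λ has to exceed the energy needed to create a photoelectron. A Python sketch, assuming a roughly 2.7 eV threshold for silver bromide (an approximate literature value, not from the post):

```python
# Why plain silver bromide responds only to blue/UV light: a photon's
# energy E = h*c/lambda must exceed the ~2.7 eV (assumed) needed to
# free an electron in the crystal.

H = 6.626e-34       # Planck constant, J*s
C = 2.998e8         # speed of light, m/s
EV = 1.602e-19      # joules per electron-volt
THRESHOLD_EV = 2.7  # assumed photoelectron threshold for AgBr

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for name, wl in [("UV", 350), ("blue", 450), ("green", 550), ("red", 650)]:
    e = photon_energy_ev(wl)
    verdict = "absorbed directly" if e > THRESHOLD_EV else "needs a sensitizing dye"
    print(f"{name:5s} {wl} nm -> {e:.2f} eV, {verdict}")
```

Green and red photons fall short of the threshold, which is exactly why the sensitizing molecules are needed.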
We need three layers of different types of particles to deal with the three primary colors (red, blue and green) required to make all the other colors. Much like multi-pass printing, there is a portion of the image in the red-sensitive layer, a portion in the blue-sensitive layer and a portion in the green-sensitive layer.
In addition to color, we also have to worry about sensitivity - how little light can be used to activate the particle and what range of light can be detected. Modifying silver halide particles during and after growth (controlling the imperfections in the crystal, controlling size, adding gold and sulfur impurities during heating, etc.) makes the grains more or less sensitive to light. The sensitivity is related to the speed of the film. The finer the crystal the 'faster' the film. Larger grains are more sensitive with a bigger dynamic range, but larger grains make the picture, well, grainy-er. Finer grains produce higher resolution, but with less sensitivity. Really high-quality film and movie film often have multiple layers for a single color - there may be three red layers with three different types of silver halide particles (one fast, one medium and one slow) so that all of the subtleties of the scene are captured. There also are other layers in the film, such as a UV filter layer (since silver halide can be reduced by UV light), and a protective topcoat for the gelatin dispersion.
The image at this point is called a latent image because the image is stored in the film, but you can't see it. The developing process reduces silver-halide grains to pure silver; the key is that those grains that already have some silver in them (from the exposure process) convert to silver faster. Kodak says you need four silver atoms to nucleate the developing process in a grain. The grains that contain the image should convert before you start converting other grains. If you overexpose a piece of film, you've started reducing silver halide grains that weren't part of the original image.
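The idea of a nucleation threshold can be illustrated with a toy simulation. This sketch takes the four-atom figure at face value and invents the exposure statistics (photon counts, grain counts); real grains are far more complicated:

```python
import random

# Toy model of the latent image: each absorbed photon leaves a silver
# atom in a grain, and only grains that collected at least
# NUCLEATION_THRESHOLD atoms develop quickly. All numbers here are
# illustrative, not measured values.

NUCLEATION_THRESHOLD = 4

def develops(silver_atoms):
    """A grain develops quickly if enough silver atoms nucleate it."""
    return silver_atoms >= NUCLEATION_THRESHOLD

def expose(grains, mean_photons, rng):
    """Silver-atom count for each grain after a simulated exposure.

    mean_photons models how bright that part of the scene is; each of
    10 chances per grain absorbs a photon with the matching probability.
    """
    return [sum(rng.random() < mean_photons / 10 for _ in range(10))
            for _ in grains]

rng = random.Random(0)
bright = expose(range(1000), 7.0, rng)  # grains in a brightly lit region
dark = expose(range(1000), 0.5, rng)    # grains in a dim region

print(sum(map(develops, bright)))  # nearly all bright grains develop
print(sum(map(develops, dark)))    # very few dark grains develop
```

The gap between the two counts is the contrast in the final image; overexposure amounts to pushing the dim grains over the threshold too.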
During the developing process, the gelatin swells up by taking in the developer like water going into a sponge. This allows the chemicals to get to the grains, but holds the grains in place so that your image remains intact. Color films take about 20-60 photons per grain to produce a latent image - but that latent image isn't in color yet. The developing process reduces the silver halide and oxidizes the developer. The oxidized developer reacts with chemicals called couplers to produce dyes (usually cyanine based) in each layer. The couplers in the blue-sensitive layer make yellow dye, those in the red-sensitive layer make cyan dye and the couplers in the green-sensitive layer make magenta dye. When you're ready to stop developing, you use water to rinse away the chemicals, or you use a stop bath, which chemically halts the reaction. The fixing solution removes silver halide, but not silver or the dyes. Finally, a bleaching chemical is used to remove the silver, leaving only the dyes that were produced during development.
Why cyan, yellow and magenta? Film works by subtractive color mixing. Additive mixing is what happens when light sources combine directly; the red, green and blue emitters in your television add their light together to make all the other colors. Subtractive mixing is what happens when dyes or pigments absorb part of the light passing through (or reflecting off) them, as with paints or the negative produced by film. Yellow absorbs its complementary color, blue: when light shines through something yellow, only red and green light is allowed through. When you shine light through magenta, only red and blue (but not green) are allowed through, and only green and blue pass through cyan.
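The subtractive logic is easy to sketch in code. Here each dye is an idealized filter that transmits some channels completely and blocks the rest entirely:

```python
# Subtractive color mixing: each dye transmits a fraction of the red,
# green and blue light; stacked dyes multiply their transmissions.
# The 0/1 values are an idealization of real dye spectra.

DYES = {
    "yellow":  (1.0, 1.0, 0.0),  # absorbs blue
    "magenta": (1.0, 0.0, 1.0),  # absorbs green
    "cyan":    (0.0, 1.0, 1.0),  # absorbs red
}

def through(light, *dye_names):
    """Light (r, g, b) remaining after passing through the named dyes."""
    r, g, b = light
    for name in dye_names:
        tr, tg, tb = DYES[name]
        r, g, b = r * tr, g * tg, b * tb
    return (r, g, b)

white = (1.0, 1.0, 1.0)
print(through(white, "yellow"))                      # red + green pass
print(through(white, "yellow", "cyan"))              # only green passes
print(through(white, "yellow", "magenta", "cyan"))   # nothing: black
```

Stacking all three dyes blocks everything, which is why a fully exposed, fully developed negative is black.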
Film photography converts an optical image through chemical means. The Nobel Prize recognizes the technology that allows us to do the conversion electronically: the charge-coupled device, or CCD. Instead of randomly distributed grains of silver halide in layers, the detectors that form a CCD image are MOS (metal-oxide-semiconductor) capacitors. A MOS capacitor consists of a bottom semiconducting layer (doped so as to have excess holes, i.e. missing electrons), a middle insulating layer (often silicon oxide, which is where the 'O' comes from), and a top layer of conductive metal called a gate.
These capacitors are arranged in an array, with each individual stack constituting a pixel, as shown in the drawing. A photodetector (usually a material like silicon that generates photoelectrons easily) sits on top of the CCD. Photoelectrons are generated in proportion to the number of photons hitting each pixel. The photoelectrons accumulate in the interface between the silicon and the oxide. Filters over the devices make each one sensitive to the red, green or blue components of the image, just like the layers in film. The human eye is more sensitive to green, so most common filtering schemes have two parts of green to one part each of red and blue.
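The common filter layout is a repeating 2x2 tile with two greens for every red and blue, often called a Bayer pattern. A small sketch of the layout:

```python
# The common Bayer filter mosaic: a repeating 2x2 tile with two green
# filters for every red and blue one, matching the eye's greater
# sensitivity to green.

TILE = [["G", "R"],
        ["B", "G"]]

def bayer_pattern(rows, cols):
    """Filter color over each pixel of a rows x cols sensor."""
    return [[TILE[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

pattern = bayer_pattern(4, 4)
for row in pattern:
    print(" ".join(row))

# Green pixels outnumber red and blue two to one.
flat = [p for row in pattern for p in row]
print(flat.count("G"), flat.count("R"), flat.count("B"))  # 8 4 4
```

Any even-sized sensor keeps exactly the 2:1:1 green:red:blue ratio, whatever its dimensions.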
Boyle and Smith's challenge was figuring out how to read how much light was deposited in each pixel. They applied a sequence of voltage pulses to the gates (the metal parts) of the capacitors in a row, forcing the charge to pass down the row like a fire bucket brigade. Oxide 'channel stops' are grown between rows to keep the electrons from crossing into other rows. The amount of charge is measured at the end of each row, so you can determine how much charge - and therefore how much light - came from each pixel. The last device in the row converts the electrons into a voltage, and the time dependence of the voltage corresponds to where in the row the signal originated.
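The bucket-brigade readout behaves like a shift register. A toy sketch (real CCDs move actual charge packets under clocked gate voltages; this just mimics the bookkeeping):

```python
# Toy model of CCD row readout: each clock tick shifts every pixel's
# charge packet one cell toward the output, and the charge arriving at
# the end of the row is read out in sequence.

def read_out_row(charges):
    """Shift charge packets out of a row, one clock tick per pixel.

    Returns the sequence of charges seen at the output amplifier; the
    arrival order tells you which pixel each packet came from.
    """
    row = list(charges)
    output = []
    for _ in range(len(row)):
        output.append(row.pop())  # last cell reaches the amplifier
        row.insert(0, 0)          # an empty cell enters at the far end
    return output

# Charge collected by four pixels during an exposure (arbitrary units).
print(read_out_row([5, 0, 12, 3]))  # pixel nearest the output arrives first
```

Because arrival time maps directly to position, measuring the output voltage over time reconstructs the whole row.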
Some of today's cameras are moving toward CMOS (Complementary Metal Oxide Semiconductor) technology, which uses similar capacitor structures but reads information directly from each pixel instead of shifting it out by rows. CMOS sensors use less power, but the extra per-pixel circuitry makes the chips physically larger, with lower sensitivity and higher noise than CCDs. CMOS is increasingly popular in areas where price is an issue (like low-end consumer cameras). CCDs are still being used in more sensitive applications like astronomy and medical imaging.
The reduction in size is what makes CCDs ideal for consumer cameras. Most cameras have a 4:3 width-to-height aspect ratio, so if you buy a 10 megapixel digital camera, you are likely getting a 3648 x 2736 pixel CCD (about 9.98 million pixels; the marketing number is rounded). Detector sizes are specified as fractions of an inch, a convention left over from video camera tubes. My camera has a 1/2.7 inch sensor, which means the array itself is somewhere around 5.3 mm x 4.0 mm in size. Of course, it takes multiple pixels to get each color, and there is some very sophisticated software engineering going on that improves resolution by interpolating between the different colors.
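The pixel arithmetic is easy to reproduce. A quick sketch (the rough 5.3 mm sensor width is from the post; the function just inverts the 4:3 aspect-ratio relation, so 9.98 megapixels recovers 3648 x 2736):

```python
import math

# Pixel-count arithmetic for a 4:3 sensor: given a megapixel count,
# recover the width x height that the aspect ratio implies.

def dimensions_4x3(megapixels):
    """Approximate (width, height) in pixels for a 4:3 sensor."""
    height = math.sqrt(megapixels * 1e6 * 3 / 4)
    return round(height * 4 / 3), round(height)

w, h = dimensions_4x3(9.98)  # a "10 MP" camera's actual pixel count
print(w, h)                  # 3648 2736

# Approximate pixel pitch on a 1/2.7-inch sensor about 5.3 mm wide
# (the "inch" designation is a naming convention, not a physical size).
print(f"{5.3 / w * 1000:.1f} um per pixel")
```

Pixel pitch comes out around a micron and a half, which is why lens quality and noise, not pixel count, usually limit small-sensor cameras.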
There is always controversy following the awarding of the Nobel Prize. While I find the inevitable discussion of 'should so-and-so have been included?' rather pointless, it is a good thing to remind ourselves that even though a couple of people get the credit for "inventing" something, other people make very significant contributions to getting any invention into the marketplace. If you like the anthropology of science, there are also some interesting stories about how they invented the CCD in an hour (although it took significantly longer to get a working model), under the threat of budget cuts. I'm not encouraging any administrators to read that story. They might think (in true Dilbertian fashion) that there is a causal link between productivity and budget cuts.
(activate chemist pedant mode)
1. The color dyes that form during development of photographic film are azine dyes, not cyanines. Cyanine dyes are the "molecules sensitive to other wavelengths of light" that are attached to the silver halide grains during manufacturing.
2. "The finer the crystal the 'faster' the film." Backwards, as your next few sentences imply. Indeed, fast film (big grains) is "grainier" than slow film. The reason has to do with the critical number of photons that have to hit a grain to activate it, as you mention. Size matters here - with a given number of photons available in the image, more of them will hit a big grain than a small one. Similarly, that's why flat grains are more sensitive than cubic ones - more surface area to collect photons. The atoms of silver (created by the reactions you show) act as hyper-efficient catalysts for the subsequent development of the grain.
3. "Kodak says you need four silver atoms to nucleate the developing process" and "Color films take about 20-60 photons per grain to produce a latent image" You seem to be mixing up two different processes here. In the developing bath, it takes four Ag+ ions to drive the most common coupler + developer image dye forming reaction, although there are some souped-up couplers that need only two. For the latent image on the film in the camera, tens of photons per grain are typically required, although the theoretical minimum is less than 10.
4. "If you overexpose a piece of film, you've started reducing silver halide grains that weren't part of the original image." Not strictly correct. If you overexpose your film, you've left the shutter open too long, and those few photons coming from the darker regions of your image have added up and activated grains that should not have been. So you lose contrast between the light and dark regions, and your photo looks washed out.
(deactivate chemist pedant mode)
It's cool that you described chemical photography in this context, because very few people (even among chemists) know much about how it actually works.
Posted by: Richard Blaine | October 21, 2009 at 11:26 PM
PS. Speaking of "'should so-and-so have been included," Boyle & Smith owe a big debt of gratitude to Steve Sasson, who took their CCD chip and a bunch of scrap parts and fashioned the first digital camera in 1975. See http://www.youtube.com/watch?v=RGoCL1F-xVw for an interview with Steve that shows the actual (8 pound!) camera. http://www.blurtit.com/q985613.html gives more of the story.
I suppose it's some consolation that Steve has received a bunch of other awards.
Posted by: Richard Blaine | October 21, 2009 at 11:50 PM
Great summary of film chemistry, a little short on the digital end. But never mind that. Here's my question. Standard color print film seems to have an infinitely higher resolution than even the best digital images, or it used to anyway. Nowadays I'm not so sure. My Uncle Ed had a tiny imported film camera that fit in his pocket, and for a while we had 110 film, but the photos from these were never quite as good. So how much resolution do you need to make a perfect picture?
Posted by: Charles Pergiel | October 27, 2009 at 11:57 PM
Well, of course there's no such thing as a "perfect picture." But here's some pertinent information:
A typical 35mm negative is estimated to have the equivalent of tens of megapixels of information. 110 film is much smaller.
As discussed in Diandra's post, each pixel in your digital camera is only sensitive to one color of light (red, green or blue), so divide your megapixels by three to approximate its true resolution. (The "missing" colors in each pixel are filled in by software after the picture is taken, by interpolating the data from neighboring pixels.)
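A toy version of that fill-in step, far simpler than any real camera's algorithm:

```python
# Simplified demosaicing: fill in a missing green value at a pixel by
# averaging the green samples of its horizontal and vertical neighbors.
# Real cameras use much more sophisticated, edge-aware methods.

def fill_green(green, r, c):
    """Average the known green neighbors around pixel (r, c).

    green is a 2D list where None marks pixels without a green sample.
    """
    neighbors = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(green) and 0 <= cc < len(green[0]):
            if green[rr][cc] is not None:
                neighbors.append(green[rr][cc])
    return sum(neighbors) / len(neighbors)

# A 3x3 patch where the center pixel sits under a red filter (no green).
patch = [[None, 100, None],
         [96,  None, 104],
         [None, 92,  None]]
print(fill_green(patch, 1, 1))  # average of 100, 96, 104, 92 -> 98.0
```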
If you don't enlarge the photo too much, a digital camera with about 5 megapixels will give a satisfactory image for the average person's eyesight. Higher resolution is better, but the quality of your optics increasingly becomes the limiting factor. There's an analogous situation with film - a cheap point-and-shoot camera will give an inferior photo compared to a professional SLR with superb lenses, even though both may use the same film.
So, as usual, it depends on what you pay. An amateur photographer taking family snapshots and viewing them on a computer screen will be perfectly satisfied with an inexpensive 8 megapixel digital camera, while a photographic aficionado would sneer at such a setup.
One important advantage of a high pixel count is the ability to crop unwanted peripheral parts of the image afterwards without sacrificing much quality. For example, if you're a birder and want to prove to the world that you've seen an authentic Ivory-Billed Woodpecker, you will need to expand that little white and red spot in your photo while still having plenty of pixels left.
Posted by: Richard Blaine | October 31, 2009 at 08:53 AM