In recent years, image-capture devices such as cameras, lenses, and digital backs have continued to improve at a very fast pace, and post-capture software tools have shown equally dramatic improvements. It appears to me that as a result of these two trends, more and more photographers are becoming lazy, developing a laissez-faire attitude about the capture process. There is a tendency to set the camera to autofocus, auto exposure, auto image stabilization, auto everything, and just point and release the shutter.
I constantly hear that no matter what the problem is with the original capture, it can always be fixed later in software. Wrong exposure? Image blurred or out of focus? Bad framing? Who cares, you can always fix it later in Photoshop or another image-editing program.
The real question is: Is this a wise way to operate? My answer to this is a resounding No!
Call me weird or unlucky, but when using the automatic settings on a professional-grade DSLR such as a Canon 1Ds MKIII or a Nikon D3 (or any other camera for that matter), approximately 99% of the time the camera does the wrong thing for me, and I either entirely miss the shot or my capture is suboptimal. I’m not exaggerating. I just returned from a trip to Botswana where I took two Canon 1Ds Mark III bodies and a variety of lenses with me. I shot roughly 4,000 images. I don’t think there is a single image from this trip where I did not override the auto settings in my cameras to capture the image better.
Let us start with exposure. (I assume that readers are interested in quality and therefore shoot in Raw mode.) I encourage you to perform a simple exercise. Take a photograph and walk into a room or a closet where the light is fairly dim. Look at the photograph carefully. Now, take the photograph and put it in front of a good strong light. You will find that you can see a lot more detail and color when you look at the image in front of a bright light. In fact, the stronger the light the better, so you could keep increasing the light intensity. At some point, however, the light will be so bright that it blinds you, and you can no longer look at the photograph.
Digital sensors are very much like our own eyes in this respect. The more light they receive, the better they can “see” and record the image. You should always increase the exposure as much as possible without increasing it so much that you “blind” the sensor. This is called “expose to the right” (ETTR); the name comes from the fact that the histogram of every image should sit as far to the right as possible without clipping. Clipping means that you have “blinded” the sensor; you have gone beyond its capabilities and lost highlight detail. (Note: Film shooters will probably recognize this as not too different from the Zone System methodology of giving enough exposure to have full detail in the shadows while being careful with exposure and development to place the highlights in the right zone.)
Exposing to the right (ETTR) will obviously make many of your images look overexposed. Therefore, when you convert from Raw, you have to dial down the exposure in your Raw converter so that the image looks exactly the way you want it.
Let me emphasize that dialing down the exposure preserves all the data that you captured using ETTR. In other words, exposing to the right and then bringing the exposure down in conversion always retains more information than giving the sensor less exposure at capture time. The bottom line is that setting the camera to auto exposure with no exposure compensation produces either suboptimal or unacceptable exposures close to 100% of the time.
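One way to see why ETTR pays off is to look at how a raw file allocates its levels. A raw sensor records light linearly, so each stop down from saturation has only half as many discrete levels as the stop above it; the brightest stop alone holds half of all the levels in the file. The short Python sketch below is my own illustration, not from the article, and the 14-bit depth is an assumed value typical of recent DSLRs:

```python
# Illustrative sketch: why ETTR preserves more tonal data.
# A raw sensor records light linearly, so each stop down from
# saturation has half as many discrete levels as the stop above it.

def levels_per_stop(bit_depth: int, num_stops: int) -> list[int]:
    """Approximate raw levels available in each stop, brightest first."""
    total = 2 ** bit_depth
    return [total // (2 ** (s + 1)) for s in range(num_stops)]

stops = levels_per_stop(bit_depth=14, num_stops=6)
for i, n in enumerate(stops, start=1):
    print(f"Stop {i} (counting down from the highlights): ~{n} levels")
```

Underexposing by even one stop therefore discards half of the available tonal values before editing even begins, which is exactly the data ETTR tries to keep.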
In view of this, I set my DSLRs to +2/3 f-stop exposure compensation as the default position. I find this to be a good starting point. But it is just a starting point—each image is different and requires different compensation.
I also have my DSLRs configured so that if I am working fast in the field, I can swiftly change the exposure compensation with a simple turn of a dial. This system quickly gets me to a good exposure.
I find that most people who look at their captures on their cameras’ LCD do so to see if the exposure, color, and composition look good. My view is that this is unwise. The screens built into cameras are neither good enough nor calibrated to judge exposure, color, or contrast. The main reason for checking the LCD should be to look at the histogram. Forget the rest—pay attention to the histogram and get it right.
Finally, many believe that digital cameras tend to underexpose; this is not always the case. Figure 1 was taken in Aperture Priority mode with Matrix Metering and with Highlight Protection enabled on a Canon 1Ds Mark III. The exposure is a disaster; the highlights are completely blown out. No amount of software wizardry can ever recover the detail or the proper skin tone in the face. This happened because the camera gave too much exposure importance to the dark green ivy occupying a good part of the scene and simply ran out of dynamic range to protect the highlights in the face. The camera had no way of knowing that the face was the critical item, even though it occupied a much smaller part of the frame than the ivy.
So if you want to capture an image with the maximum amount of detail and quality, you have to think before you release the shutter. You then have to look at your histogram to see if you need a correction, and if so, make the correction and reshoot.
Some folks erroneously think that if you clip your histogram either on the shadow side or on the highlight side, you can fix it later. Nothing could be farther from the truth. Imaging software cannot perform miracles; detail that is lost during capture can never be recovered, period.
Other folks feel that it is safer to center the histogram and leave lots of room on both sides. Well, if you want mediocre captures that are far worse than what your camera is capable of, you can certainly do this. That, in turn, also guarantees that your final images and prints look pretty mediocre compared to what you should have gotten out of your camera system.
I have tried just about every camera on the market that offers autofocus. Many of them offer multiple focusing points and all kinds of sophisticated hardware and algorithms. Yes, autofocus is wonderful and can be very accurate. Unfortunately, I find that a large percentage of the time the camera does not focus on what I want it to focus on.
We all have seen the typical failures, such as the camera focusing on the background instead of the people in front of it. Other autofocus failures make you lose the shot entirely; a typical example is trying to capture a bird in flight. Oftentimes, the autofocus system either focuses on the background instead of the bird, or wanders back and forth for so long that by the time it focuses on the bird, the shot is gone. I have a much higher rate of success using manual focus in situations like these.
I constantly see cameras make a huge number of subtler focusing errors that many people ignore. I cannot even tell you how many times I have been shooting a portrait, and because of the lighting or the subject angle, the camera decides to focus on the nose instead of the eye, or on the eyelashes instead of the lips. The same thing happens when shooting landscapes; well over 90% of the time the camera focuses on the wrong object. Similarly, when shooting wildlife, the camera often focuses on the vegetation or the branches instead of the animal, and if it focuses on the animal it is usually the wrong part of the body. Many people let these subtler errors go, thinking that they can fix them later with software sharpening. The problem is that after-the-fact sharpening is never as good as shooting an image in perfect focus in the first place, and I am tired of seeing so many images that are overly sharpened, with unacceptable amounts of noise and haloes, all because people are trying to “fix” focusing problems.
I have therefore personally given up on multiple sensors. I need control, and I need it fast when I shoot. I either use the center sensor only, point it where I want, and then reframe; or I use manual focus. Other photographers may have good luck with multiple sensors.
Depth of field
I continue to be amazed that the depth-of-field scales on lenses today are the same as they were many decades ago—these scales are totally obsolete. The criteria for depth of field were developed a long time ago, when lenses were not nearly as sharp as they are today, when the image was placed on a roll or a sheet of film that was never completely flat, and before modern coatings existed, so lenses had much more flare and other aberrations.
What was “sharp enough” back then does not look acceptable by today’s standards. Today, we deal with extremely flat sensors and with lenses that have much higher contrast and resolution than envisioned when the acceptable circle-of-confusion standards for depth of field were developed.
A full technical report on these issues is beyond the scope of this article, but I will give the reader a few practical guidelines. In most cases, I have found that you need to close the lens at least one more f-stop than indicated by the depth-of-field scales.
So if the scale tells you that you have adequate depth of field from, say, eight feet to infinity at ƒ/5.6, you really need to close your lens to at least ƒ/8 to obtain that depth of field.
I also have found that diffraction is much more noticeable with digital cameras and modern lenses. The onset of diffraction is typically visible at larger apertures (one to two f-stops) than we were used to in the days of film, so the balancing act between depth of field and diffraction is much more delicate with digital capture.
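To get a feel for where diffraction begins to bite, you can compare the diameter of the Airy disk, roughly 2.44 × λ × N for wavelength λ and f-number N, against the sensor's pixel pitch. The sketch below is my own back-of-the-envelope illustration; the 6.4 µm pitch is an assumed value typical of full-frame DSLRs of that era, not a figure from the article:

```python
# Rough sketch: the Airy-disk diameter d ≈ 2.44 * λ * N grows with the
# f-number; once it spans noticeably more than a pixel or two,
# diffraction starts to soften the image visibly.

WAVELENGTH_MM = 0.00055      # green light, ~550 nm
PIXEL_PITCH_MM = 0.0064      # assumed ~6.4 µm pitch (illustrative value)

def airy_diameter_mm(f_number: float) -> float:
    """Approximate Airy-disk diameter in millimetres."""
    return 2.44 * WAVELENGTH_MM * f_number

for n in (4, 5.6, 8, 11, 16, 22):
    d = airy_diameter_mm(n)
    print(f"f/{n}: Airy disk ~{d * 1000:.1f} um "
          f"({d / PIXEL_PITCH_MM:.1f} pixels)")
```

Once the disk spans several pixels, stopping down further trades depth of field for overall softness, which is the balancing act described above.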
A few simple tests with your own camera and lenses can give you some good working guidelines. I suggest that besides experimenting with lens apertures, you also experiment with the settings for hyperfocal distances. The ideal setting for hyperfocal distances with modern lenses and digital sensors is farther than indicated by traditional scales. Some experienced digital shooters use the following guideline: Focus on or near the farthest distance where sharpness is required, then close the lens one more f-stop than the scales indicate for the nearest distance where you still require sharpness.
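For those who want to run the numbers alongside their field tests, the classic hyperfocal formula H = f²/(N·c) + f shows the same trend described above: tightening the circle of confusion c to match modern sensors pushes the hyperfocal distance farther out than the engraved scales suggest. The values in this Python sketch are my own illustrative assumptions, not the author's test data:

```python
# Worked example: the classic hyperfocal formula, evaluated for a
# traditional full-frame circle of confusion and for a stricter one
# that better matches modern sensors.

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

focal, aperture = 24.0, 8.0   # assumed 24 mm lens at f/8
for coc in (0.030, 0.015):    # traditional criterion vs a stricter one
    h = hyperfocal_mm(focal, aperture, coc)
    print(f"c = {coc} mm -> hyperfocal ~{h / 1000:.2f} m")
```

Halving the circle of confusion roughly doubles the hyperfocal distance at the same aperture, which is why focusing farther out and stopping down beyond what the scale indicates is the safer bet.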
The above guidelines work reasonably well, but are not the ultimate exact science on these matters. I suggest that you run some tests with your own equipment to see what works best for you.
Everyone praises image stabilization, and it works well most of the time, but it can lead to disaster at other times. Let me share a very frustrating example. In August last year, I had the privilege of witnessing one of nature’s most amazing moments: I saw a leopard capture and kill an antelope in broad daylight at a very short distance from me. This is an extremely rare event to witness, let alone in full daylight, as leopards are extremely shy and hunt mostly at night. There was a magic moment when the antelope jumped and then the leopard jumped right behind it and grabbed it in midair. My timing for releasing the shutter was perfect, and I thought I had captured the exact moment when the leopard grabbed its prey. A quick look at the screen in the back of my DSLR confirmed that my timing had been perfect, and that my histogram was close to ideal. I was ecstatic and could hardly wait to get home to print what I thought was an incredible shot.
When I opened the image in my computer, I nearly fainted. The image was in perfect focus, but it was blurred. In spite of a shutter speed of 1/750 second, the image had motion blur. I tried everything I could to make it into a good image; I even tried to make it a more impressionistic rendition by adding even more motion blur in Photoshop, but in the end I could not turn it into something acceptable. Figure 2 is the best rendition I could get out of this image. This once-in-a-lifetime shot turned into a reject.
This is one more example of the fact that many improperly captured images cannot be fixed after the fact, no matter how much editing magic one tries to apply to them. Garbage in usually does turn into garbage out.
I was devastated and miffed, but eventually figured out what had happened. At the time of the shot, I had image stabilization turned on. Because the event was a complete surprise, and because the action was so fast, it required me to move the camera quickly to frame the subject and shoot without fiddling with any settings.
Unfortunately, the camera detected a lot of motion, and right at the moment of exposure, image stabilization kicked in. As the lens elements shifted during the exposure, they caused the image to blur. Had image stabilization been turned off, this would not have happened.
The issue is that when shooting wildlife, image stabilization is often a good thing, but sometimes something unexpected happens very quickly. Given that most professional DSLRs already have built-in microphones, and that speech-recognition software (as in cell phones) is very good and essentially free, I find it unacceptable that there are no spoken commands available for these cameras. Had I been able to say “stabilization off” and have the camera obey, it would have saved the day.
A properly captured image will always look better in the end than an image with suboptimal capture and heavy editing afterward. A very small amount of effort and care in having better captures not only gives you better results, but can be a huge time saver.
We need to remember that as great as cameras are today, it is still the eyes and brain of the photographer that determine the ultimate result. Setting a camera on “auto everything” is likely to lead you only to mediocrity and to many frustrating hours on the computer editing suboptimal files.
A little bit of thought and finesse during the capture process goes a long way toward producing outstanding images.