From a recent post I made in the Digital Photography School Forums explaining High Dynamic Range (HDR) photography:
The camera has a limited dynamic range. It can capture more exposure steps than the -1 to +1 shown on the meter, but noticeably fewer than the human eye.
For the sake of argument, let's say the camera can capture 7 different stops of light. I don't have exact figures to hand, but it gives a reasonable idea. At a centred exposure setting, that is everything from -3 to +3 (including 0). An area that is very dark (< -3) will come out black, and a bright area (> +3) will show as white. By combining that shot with over- and underexposed shots (assuming -2, 0 and +2 as the settings), you now have information covering -5 to +5. That is as much as, or more than, the human eye can deal with.
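That bracketing arithmetic can be sketched in a few lines of Python. This is a toy model, not real HDR merging: the ±3-stop camera range and the -2/0/+2 bracket settings are the figures assumed above, not actual sensor specs.

```python
CAMERA_RANGE = 3  # stops either side of the exposure centre (assumed figure)

def capture(scene_stops, exposure):
    """One shot: record a scene value only if it falls within the
    camera's range around the chosen exposure; otherwise it clips
    (None stands in for pure black or pure white)."""
    return [s if abs(s - exposure) <= CAMERA_RANGE else None
            for s in scene_stops]

def merge(brackets):
    """Combine bracketed shots: for each point in the scene, keep the
    first shot that recorded it without clipping."""
    return [next((r for r in readings if r is not None), None)
            for readings in zip(*brackets)]

# Scene values in stops, from deep shadow to bright highlight.
scene = [-5, -4, -2, 0, 2, 4, 5]

single = capture(scene, 0)                      # one centred shot: -5, -4, 4, 5 all clip
shots = [capture(scene, ev) for ev in (-2, 0, 2)]
combined = merge(shots)                         # recovers everything from -5 to +5
```

Running this, `single` loses the extremes to black and white, while `combined` comes back identical to `scene`: the three brackets together cover the full -5 to +5 range the paragraph describes.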
The process is actually analogous to what the eye does naturally. As you look round, you build up a picture of the scene around you. Your eyes adjust automatically to varying light levels, and the brain composites this information into what you “see”. Step from a dark room into sunlight and it takes a noticeable time to adjust. Look into the shadows and shapes will gradually resolve themselves.
Therefore HDR can be a naturalistic process, which is why it can look like it has been painted (artists spend a long time observing a scene and so include details that a glance would miss). That, of course, has nothing to do with the very heavily processed images that are also presented as HDR which are more about garish colours and sharpening halos than seeing more fully into what is there.