One of the discussions I am taking part in at the moment on the Digital Photography School forums concerns the question of what is meant by macro photography. The standard definition is 1:1 reproduction: 1mm in real life is represented by 1mm on your film, which necessitates a lens that allows you to focus on things that are very close. While that may have been a decent rule of thumb when most people were shooting on 35mm film, I find it inadequate for defining macro in the digital world.
Cameras come with a range of different sensor sizes (my Nikon D40 has a 23.7 x 15.6mm sensor) and, as if that were not enough, it is not really size that matters but the process by which incoming light is captured and stored as a series of discrete points that create the illusion of a recognisable image at a range of resolutions. Zoom in close enough on the smoothest gradient or the sharpest edge and it breaks down into a collection of coloured squares.
If I take a picture of my hand (about 20cm long) so that it fills the full width of the frame, that is clearly not a macro image by the traditional definition. However, when I view it on my monitor (about 40cm wide) it appears perfectly sharp, and 1mm in real life is represented by 2mm on the screen. At one image pixel per screen pixel, I would need to scroll across a couple of screen widths to see the whole thing, and 1mm in real life would be represented by about 4mm on the screen, while the individual pixels are still too tiny to discern and the illusion holds up.
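The arithmetic behind this can be made explicit. A short sketch, using the 20cm hand and 40cm monitor from the text; the monitor's pixel count is an assumed figure chosen to give a typical pixel pitch, and the Nikon D40's horizontal resolution of 3008 pixels is taken from its 6.1-megapixel spec:

```python
# Worked example of the on-screen magnification argument.
subject_width_mm = 200    # the hand, about 20cm
monitor_width_mm = 400    # the monitor, about 40cm
image_width_px = 3008     # Nikon D40 horizontal resolution
monitor_width_px = 1520   # assumed; gives ~0.26mm per screen pixel

# Fit-to-screen view: 1mm of hand becomes 2mm on screen.
fit_magnification = monitor_width_mm / subject_width_mm

# 1:1 pixel view: each image pixel occupies one screen pixel,
# so the image is displayed a couple of screen widths wide.
screen_pixel_mm = monitor_width_mm / monitor_width_px
displayed_width_mm = image_width_px * screen_pixel_mm
actual_magnification = displayed_width_mm / subject_width_mm

print(fit_magnification)                 # 2.0
print(round(actual_magnification, 1))    # 4.0
```

Changing the assumed monitor resolution shifts the second figure, which is precisely the point: the apparent magnification depends on viewing conditions, not just on the lens.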
I propose that we need a new definition of macro photography based not on the variables of sensor size, pixel density and final viewing size but instead on an absolute measurement, namely the field of view that fills the frame in the optimal zone of focus. As a starting point, I suggest the following scale:
- 10cm = close-up
- 5cm = macro
- 2.5cm = true macro (for the purists)
- <2cm = super macro
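The proposed scale can be sketched as a simple classifier. This is a minimal sketch, not a settled definition: the boundary handling (inclusive upper limits, and reading the last tier as fields of view narrower than 2cm, since the scale tightens as magnification increases) is my own interpretation:

```python
def classify(field_of_view_cm: float) -> str:
    """Classify a photograph by the real-world width that fills
    the frame in the optimal zone of focus (proposed scale)."""
    if field_of_view_cm < 2:
        return "super macro"
    if field_of_view_cm <= 2.5:
        return "true macro"
    if field_of_view_cm <= 5:
        return "macro"
    if field_of_view_cm <= 10:
        return "close-up"
    return "not macro"

# The 20cm hand shot from earlier falls well outside the scale.
print(classify(20))   # not macro
print(classify(4))    # macro
```

The virtue of this formulation is that it takes a single absolute measurement as input, so the same shot gets the same label regardless of sensor size, pixel density or how large it is eventually viewed.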