I've been doing some reading on Lippmann and I think I can see the source of the confusion.
In a Lippmann system, there is an emulsion on one side of a substrate and a mirror on the other (in the original system, the mirror was mercury poured into a receptacle behind the plate). When exposing a Lippmann photograph, the object/scene is imaged onto the emulsion side of the system by means of a lens; it is essentially a box camera with a "Lippmann sandwich" in place of a standard emulsion. In such a situation, the image being captured is flat and so has no phase variation: there is no variation of phase along the z axis for any particular colour, because there is no variation of the object itself along the z axis. Also, being projected as a flat image, all the light enters the system (almost) perpendicularly. The result is a set of standing-wave planes that are parallel to each other and to the front and back of the Lippmann system. This is what you see in any description of the Lippmann technique.

However, each colour component of the scene/object creates its own set of planes. Say, then, that the object being recorded has only red and green components. The components need not be broadband, but it helps to assume so, since any coloured object seen in white light probably reflects a fairly broad band: let's say the green is 520 - 540 nm and the red is 620 - 640 nm. When this object is projected onto the front surface of the film via the lens, with the light entering (almost) perpendicularly, the result is two sets of planes, separated by 265 +/- 5 nm and 315 +/- 5 nm respectively. These planes are parallel to the front and back surfaces of the system, and the +/- 5 nm variation gives the requisite bandwidth - assuming, of course, perfect recording, ie ignoring scatter within the emulsion. The planes are also assumed to be infinitely thin (I doubt they really would be, since the standing wave has a sinusoidal variation, but the standard description of the technique does not ascribe any width to them).

Now, the system apparently needs to be viewed in "diffuse parallel light". Since light cannot be both diffuse and parallel at the same time, I assume this means a multi-component collimated source, such as sunlight. Let me therefore call a collimated, multi-spectral source such as the sun the "reconstruction source". This source reconstructs the colour components of the scene - the red and green - through constructive interference from the stacks, ie each stack behaves much like the AR coating on a pair of glasses. Just as the green sheen of an AR coating is seen on the surface of the glasses, so the image of the green and red objects is seen on the surface of the Lippmann system. If you observe away from normal incidence, the colours shift towards the blue, because the planes present a shorter effective spacing (2d cos(theta)) to the incoming light.

Note, however, that the object is represented by a series of planes that are parallel to each other and to the front and back surfaces of the system. No twisted, tilted or curved "planes". Planes that are not parallel will not, in general, act as an interference stack; there may be parallel components, but only if the curvature of the surfaces is low.
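To put some numbers on this, here is a minimal Python sketch (my own illustration, not part of any standard Lippmann treatment). It computes the half-wavelength plane spacing for the two bands above, and the replay wavelength at oblique incidence from the Bragg condition lambda = 2 n d cos(theta). I have set the refractive index n to 1 so the output matches the 265 +/- 5 nm and 315 +/- 5 nm figures quoted above; a real gelatin emulsion has n of roughly 1.5, which shrinks the spacing to lambda/(2n).

```python
import numpy as np

def plane_spacing(lam_vac_nm, n=1.0):
    """Standing-wave plane spacing in the emulsion: lambda / (2 n)."""
    return lam_vac_nm / (2.0 * n)

def replay_wavelength(spacing_nm, theta_deg=0.0, n=1.0):
    """Wavelength reconstructed by the stack at internal angle theta,
    from the Bragg condition: lambda = 2 n d cos(theta)."""
    return 2.0 * n * spacing_nm * np.cos(np.radians(theta_deg))

# The green (520-540 nm) and red (620-640 nm) bands from the example above.
for lo, hi in [(520, 540), (620, 640)]:
    print(f"{lo}-{hi} nm band -> planes {plane_spacing(lo):.0f}-"
          f"{plane_spacing(hi):.0f} nm apart")

# Off-normal viewing: the cos(theta) factor shifts the replay to the blue.
d_green = plane_spacing(530)
for theta in (0, 15, 30):
    print(f"theta = {theta:2d} deg -> green stack replays at "
          f"{replay_wavelength(d_green, theta):.0f} nm")
```

The cos(theta) factor is the same reason the sheen of an AR coating changes colour as you tilt the glasses.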
In a Denisyuk (or reflection) hologram, this is not the case. In a reflection holographic system, the light entering the system is (usually) not imaged onto the medium; it arrives as a complex wavefront. The shape of this wavefront carries with it the variation of phase of the object being recorded along all three axes. Of course, if the x,y extents are small wrt the z extent, it is predominantly the phase variation along the z direction that is recorded. The result is that the object's phase wavefront is captured relative to, and with respect to, the shape and form of the reference wavefront. That is, the reference wavefront may distort the capture of the actual phase wavefront of the object, but this distortion is reversed if the hologram is presented with the conjugate of the reference wavefront. In any case, what's recorded in the emulsion is a complex set of curved, twisted and tilted surfaces, whose separation depends not only on the recording wavelength but also on the direction of the reference beam. These surfaces represent the z-axis phase variation of the object. The reconstruction is again accomplished by constructive interference, just as in the Lippmann case, but now the interference stack is replaced by complex, curved surfaces, almost none of which are parallel to the surface of the emulsion. In order to recreate the "interference stack" effect, the reconstruction source must have the same curvature as the reference beam that created those surfaces in the first place. Due to the complex nature of the curvature of the surfaces, any beam with a different curvature, ie a different phase distribution at the plate, would produce the wrong "interference stack" effect.
If, however, you were to make a reflection hologram of two collimated, counter-propagating beams (or one beam and a mirror), then you'd have a set of parallel planes, just like a Lippmann photograph.
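To tie the two cases together, here is a hedged plane-wave sketch (again my own illustration, in Python as above). Locally, the light from any one object point is roughly a plane wave, so the fringe surfaces near any point in the emulsion lie normal to the difference of the object and reference wavevectors; that difference fixes both their spacing and their tilt. The counter-propagating case gives exactly the Lippmann-style lambda/2 stack, while an off-axis object beam tilts the recorded surfaces - the plane-wave seed of the curved, twisted structures described above.

```python
import numpy as np

def fringe_planes(lam_nm, theta_obj_deg, theta_ref_deg, n=1.0):
    """Local fringe spacing (nm) and tilt (degrees from the plate normal)
    for two plane waves interfering inside a medium of index n. Angles
    are propagation directions in the x-z plane, measured from the
    z axis (the plate normal)."""
    k = 2.0 * np.pi * n / lam_nm  # wavenumber in the medium
    def kvec(theta_deg):
        t = np.radians(theta_deg)
        return k * np.array([np.sin(t), 0.0, np.cos(t)])
    K = kvec(theta_obj_deg) - kvec(theta_ref_deg)  # grating vector
    spacing = 2.0 * np.pi / np.linalg.norm(K)
    tilt = np.degrees(np.arctan2(abs(K[0]), abs(K[2])))
    return spacing, tilt

# Reference entering at 0 deg, object beam counter-propagating at 180 deg:
# planes parallel to the plate, lambda/2 apart - the Lippmann case.
print(fringe_planes(633, 180, 0))   # -> (316.5, 0.0)

# Object light arriving 20 deg off the counter-propagating axis: the
# local fringe surfaces tilt and their spacing stretches slightly.
print(fringe_planes(633, 160, 0))   # -> (about 321 nm, about 10 deg tilt)
```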
Martin wrote: When did the term "phase" (and, linked to that, "coherence length") come up first? Was it connected to the invention of the laser?
No, the terms are considerably older. Descartes was the first to come up with a concept of the structure of light, as opposed to the effects of light: he modeled light as a pressure in a universal, elastic material that pervaded the universe (the famous "Cartesian vortices"). Newton proposed that light consisted of particles (no, not photons - photons are completely different). Because of the authority of Newton, wave theories of light were slapped down, including those of Huygens and Euler. The wave theory was brought back by Young (of fringes fame) and further refined by Fresnel. Etienne Malus discovered polarisation in 1808 from studying crystals. However, they all still held to the Cartesian system, in which light was a pressure variation and therefore a longitudinal wave, like sound. Finally, it dawned upon Young in 1825 that light might be a transverse wave. Then all the results achieved by Young, Fresnel and Malus made more sense. Once Maxwell synthesised electric and magnetic effects into an oscillating electromagnetic system and showed that light was such a system, all the power of the wave equation could be brought to bear on the properties of light. In particular, the Abbe theory of image formation in 1874 brought diffraction, phase relationships and coherence into imaging theory. The theory of the laser rests on a 1917 paper by Einstein, but by then, everything known today about diffractive wave optics was already known.
Martin wrote: Yes, the question is where a "narrow band source" begins...
Well, in terms of imaging, I'd say what counts as "broad band" is the spectrum of the light from the image weighted by the eye's (photopic) luminous efficiency at each wavelength. So an extremely powerful source like a laser can be "narrow band" and still look bright, simply because the product of the laser power and the photopic value at 633 nm (for a HeNe) is so large.
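As a rough worked example (my own figures - the V(lambda) values below are approximate photopic numbers quoted from memory, not tabulated CIE data), the luminous flux of a monochromatic source is its radiant power times 683 lm/W times V(lambda):

```python
# Approximate CIE photopic luminous efficiency V(lambda); the peak is
# 1.0 at 555 nm. Rough values, for illustration only.
V_PHOTOPIC = {532: 0.88, 555: 1.00, 633: 0.25}

def luminous_flux_lm(power_watts, wavelength_nm):
    """Perceived luminous flux of a monochromatic source, in lumens:
    683 lm/W at the photopic peak, scaled by V(lambda)."""
    return 683.0 * V_PHOTOPIC[wavelength_nm] * power_watts

# A 5 mW HeNe at 633 nm versus a 5 mW green DPSS laser at 532 nm:
print(f"HeNe 633 nm: {luminous_flux_lm(5e-3, 633):.2f} lm")
print(f"DPSS 532 nm: {luminous_flux_lm(5e-3, 532):.2f} lm")
```

Even though the eye's efficiency at 633 nm is only about a quarter of its peak, a few milliwatts concentrated into a single narrow line still yields a very visible result.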