Just realised something!

Light and its behaviour and properties
Dinesh

Just realised something!

Post by Dinesh »

Right now I'm in the process of studying digital holography (Computer Generated Holography - CGH). Part of this study is reading up on the Whittaker-Shannon theorem, the well-known theorem that if you have a band-limited function you have to scan it at twice its highest frequency to completely recover the signal. Well, OK, but as I peruse the subject, I see hearing given as an example of a band-limited function, since the human ear only responds to the range 20 Hz to 20 kHz. But it suddenly occurred to me that vision is also a band-limited function: the response of the retina is limited to frequencies between c/(700×10^-9 m) and c/(400×10^-9 m), roughly 4.3×10^14 Hz to 7.5×10^14 Hz. OK, it's obvious in hindsight, but it never occurred to me that the colour signal that enters the eye is band limited and so susceptible to the Whittaker-Shannon theorem!
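Just to put numbers on that, here is a quick back-of-the-envelope sketch (in Python, nothing more than the arithmetic implied above) of the sampling rates the theorem would demand for the two bands:

```python
# Back-of-the-envelope Nyquist rates for the two band-limited "signals" above.
c = 3.0e8                                  # speed of light, m/s (approximate)

# Hearing: roughly 20 Hz - 20 kHz
f_ear_max = 20e3                           # Hz
print(f"Ear: sample at >= {2 * f_ear_max:.0f} Hz (hence 44.1 kHz on audio CDs)")

# Vision: wavelengths roughly 400 - 700 nm, i.e. optical frequencies c/700nm to c/400nm
f_eye_min = c / 700e-9                     # ~4.3e14 Hz
f_eye_max = c / 400e-9                     # ~7.5e14 Hz
print(f"Eye: band {f_eye_min:.2e} Hz to {f_eye_max:.2e} Hz, "
      f"Nyquist rate ~{2 * f_eye_max:.2e} samples/s")
```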
Joe Farina
Posts: 804
Joined: Wed Jan 07, 2015 2:10 pm

Just realised something!

Post by Joe Farina »

Birds would be less susceptible to the Whittaker-Shannon theorem since they resolve more visual events per second, compared to humans*

*At least I assume so, since this is the first time I've heard of the Whittaker-Shannon theorem.

By the way Dinesh, if I remember correctly, you worked at Physical Optics Corporation? If so, I would like to ask a question. Today I saw a paper called "Constructive use of high order harmonics in holographic Lippmann mirrors" by Chris Rich and George J. Vendura, Jr. (both of Physical Optics Corporation of Torrance, CA). This was in SPIE 1212, Practical Holography IV (1990). They used DCG as the recording material.

I wanted to ask, if you are familiar with this paper, was there anything useful that could be applied to reflection display holograms in DCG? I'm not able to understand it. If you don't have (or remember) this paper, I will be happy to scan and send it to you.
Dinesh

Just realised something!

Post by Dinesh »

Joe Farina wrote:By the way Dinesh, if I remember correctly, you worked at Physical Optics Corporation? If so, I would like to ask a question. Today I saw a paper called "Constructive use of high order harmonics in holographic Lippmann mirrors" by Chris Rich and George J. Vendura, Jr. (both of Physical Optics Corporation of Torrance, CA). This was in SPIE 1212, Practical Holography IV (1990). They used DCG as the recording material.

I wanted to ask, if you are familiar with this paper, was there anything useful that could be applied to reflection display holograms in DCG? I'm not able to understand it. If you don't have (or remember) this paper, I will be happy to scan and send it to you.
Joe, I'm not familiar with the specific paper, but I am familiar with the concept. I'm still in touch with Chris (he's now at Wavefront, just down the road from me), but I have no idea who George J. Vendura, Jr is (sounds like a character in a mystery play!). However, I believe there is an interesting back story and not a little irony in this paper.

I've talked about this before, in regard to that efficiency curve you showed for eta in DCG. You remember the efficiency curve you showed from a paper by the University of Arizona group? Remember that it showed a rise, then a fall, then a rise again? Well, the reason for the fall and then the rise is harmonics in the modulation profile. Ideal recording would produce a perfect sinusoidal modulation profile, but no material can track the exact values of a sinusoid, so the real profile of a real emulsion will have harmonics - Fourier components due to the deviation of the modulation profile from a strict sinusoid. You can also create harmonics deliberately, by developing the emulsion to create a specific modulation profile which will generate specific harmonics.
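To see the harmonics appear, here is a minimal numerical sketch - my own illustration with a made-up saturating material response, not the actual DCG processing:

```python
# Minimal sketch: a fringe profile that deviates from a pure sinusoid acquires harmonics.
import numpy as np

N = 1024
x = np.linspace(0.0, 1.0, N, endpoint=False)    # depth into the emulsion, arbitrary units
f0 = 8                                          # fundamental fringe frequency, cycles per unit

ideal = np.sin(2 * np.pi * f0 * x)              # what ideal recording would write

# Made-up asymmetric material response: saturates on the peaks, tracks the troughs.
# Asymmetric distortion like this puts energy into the even harmonics (2*f0, 4*f0, ...).
real = np.where(ideal > 0.0, np.tanh(2.0 * ideal) / np.tanh(2.0), ideal)

spectrum = np.abs(np.fft.rfft(real)) / (N / 2)  # amplitude of each Fourier component
for k in range(1, 5):
    print(f"harmonic {k} (at {k * f0} cycles/unit): amplitude {spectrum[k * f0]:.4f}")
# A strict sinusoid would show only the k = 1 component; the distorted profile does not.
```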

The irony and back story to this is that sometime in 1987, we (POC) were asked to create a reflective filter at UV wavelengths in the 200nm range simultaneously with a reflective filter in the vis range. At that time, we had no UV lasers, so we could not do the UV part of this. However, it occurred to me that there were two ways of doing this: pseudo-colour and harmonic manipulation. In the pseudo-colour technique, you shrink the emulsion to generate a blue-shifted image, but no one had achieved pseudo-colour in DCG. So, I gave it a try, developed a few methods and successfully created a UV grating by shooting at 488 and shrinking it. Then I shot a plate at 488 and shrunk it to create two gratings: one at 488nm and one at about 350nm. It then occurred to me that pseudo-colour would never get me below 350nm odd, so perhaps I could get lower using a harmonic technique. I then worked out some techniques to develop the plate to a given modulation profile to generate a filter at 244 (half of 488). Well, to cut a long story short, the methods I came up with worked and I got a grating at both 488 and 244. Now, this was all done in between working on real POC projects (both Chris and I would often chat about unconventional methods to create unconventional holograms and then try these out when we were in "downtime"). Anyway, I showed what I'd done to Joanna and she got mad at me, told me not to waste my time on all this and to throw all the plates out. About 6 months later, she asked me if I still had those plates at 244 and I mentioned that she'd told me to throw them all out. By 1990 I'd left POC, so it's interesting that they then wrote a paper about exploiting the use of harmonics!

Is it useful for display? I don't know offhand. I suppose if you had a need for UV display holograms it might be. Then again, you could exploit the different efficiencies to create an image where a difference in efficiencies might be beneficial, for example a bright foreground with a (controlled) dimmer background. If you overmodulated the lighting for the background, you'd end up with a more square-wave modulation profile for the "background" fringes. I must admit, off the top of my head I can't think of any particular way of exploiting harmonics.
Joe Farina wrote:Birds would be less susceptible to the Whittaker-Shannon theorem since they resolve more visual events per second, compared to humans*

*At least I assume so, since this is the first time I've heard of the Whittaker-Shannon theorem.
The Whittaker-Shannon theorem says that if you have a band-limited function - given a function g(x), its Fourier transform G(f) = 0 for all f > f_c - you can scan that function with a series of sharp lines, Dirac delta functions. This gives you a set of numbers at specific positions on the curve, which defines a new function. Reconstructing from this new function (which is basically just a set of numbers) amounts to placing a sinc function at each of the points at which you scanned. So long as the scan frequency is at least twice the highest frequency in the original function, i.e. f_c, and the function is band limited, you can recover the original function exactly. If the function is not band limited, you get what's called "aliasing". What I found interesting is that the ear is capable of Fourier transforming - a good ear can make out the harmonics in a tune - but the eye cannot Fourier transform: your brain cannot decompose a given colour into primaries. But if the colour signal that enters your retina is band limited, you should be able to scan the colour signal and isolate its harmonics. Why can't the eye do that? Perhaps because the retina integrates, whereas the cochlea is composed of tiny hairs that move under the influence of sound, and those tiny hairs have lengths that correspond to the harmonics.
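As a toy check of that statement, here is a small sketch that reconstructs a band-limited test signal by sinc interpolation at two sampling rates, one above and one below the Nyquist rate (the test signal and the rates are arbitrary choices):

```python
# Toy Whittaker-Shannon reconstruction: sinc interpolation above and below the Nyquist rate.
import numpy as np

f_max = 5.0                                              # highest frequency in the test signal, Hz
signal = lambda t: np.sin(2*np.pi*2.0*t) + 0.5*np.cos(2*np.pi*f_max*t)

def reconstruct(t, samples, T):
    """Whittaker-Shannon interpolation: a sinc centred on every sample point."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc((t[:, None] - n*T) / T), axis=1)

t_fine = np.linspace(1.0, 3.0, 2000)                     # evaluate away from the record edges

for fs in (2.5 * f_max, 1.2 * f_max):                    # above and below 2*f_max
    T = 1.0 / fs
    t_samp = np.arange(0.0, 4.0, T)
    rec = reconstruct(t_fine, signal(t_samp), T)
    err = np.max(np.abs(rec - signal(t_fine)))
    print(f"fs = {fs:.1f} Hz: max reconstruction error {err:.3f}")
# Above the Nyquist rate the error is small (and shrinks as the record grows);
# below it, the 5 Hz component aliases and the reconstruction is badly wrong.
```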
Joe Farina
Posts: 804
Joined: Wed Jan 07, 2015 2:10 pm

Just realised something!

Post by Joe Farina »

Thanks for the detailed reply, Dinesh. I sent the paper, in case it has something interesting.
Dinesh

Just realised something!

Post by Dinesh »

Got it, Joe. Notice they don't actually say how they do it. All they say is: "The film was developed by conventional water and alcohol processes. An efficient second harmonic can only be obtained by the proper balance of exposure and development to achieve high modulation" (second para in section 4: "Experimental Results"). This is not quite right; you need to create a particular modulation profile, not just a "high modulation" with the "proper balance" of developing agents. But yes, this is pretty much as I explained: you need to control the modulation profile by processing. They started by shooting in the IR and then created a harmonic in the vis, while I shot in the vis and created a second harmonic in the UV.

Still can't think of any display advantages, I'm afraid!
Dinesh

Just realised something!

Post by Dinesh »

OK, while staring at the ceiling, one application in display occurs to me.

Let's say you want to make an H1/H2 of an image, but you want the exact same H1 to produce two or more H2s, each one with a different reference, so that each one reconstructs with a different recon beam. I have no idea why you'd want to do this, but let's say you did. Perhaps two people want your image, but they each have different requirements for the recon angle. You shoot an H1, but deliberately overmodulate. Now you have more than one sinusoidal component in the emulsion. If you reconstruct the H1 at the right angle, you'll get two or more reconstructions ("orders"). Now you place an H2 plate at each H1 reconstruction, so each order serves as the object beam for a different H2, each with its own reference.
Joe Farina
Posts: 804
Joined: Wed Jan 07, 2015 2:10 pm

Just realised something!

Post by Joe Farina »

Dinesh wrote:Notice they don't actually say how they do it. All they say is: "The film was developed by conventional water and alcohol processes. An efficient second harmonic can only be obtained by the proper balance of exposure and development
Well, at least "the proper balance of exposure and development" is a good aphorism for successful DCG holography in general ;)

Thanks for reading the paper, and thinking about a possible display application.
lobaz
Posts: 280
Joined: Mon Jan 12, 2015 6:08 am
Location: Pilsen, Czech Republic

Just realised something!

Post by lobaz »

Hi, Dinesh,
I would not bring Whittaker-Shannon into the ear/eye mechanism, as there is no sampling in the time domain, of course. Both the inner ear and a single rod/cone in the retina in fact perform a windowed Fourier transform. In the inner ear there are several hundred "detectors", one for each FT window length (dependent on the local width of the basilar membrane), i.e. we have nice frequency resolution for sound. The retina has just three types of detectors for color vision (dependent on the pigment of the cone), so in effect we sample the whole extent of visible-light frequencies with just three samples. Although I used the word "sample" here, it is still dangerous to use Whittaker-Shannon - its simplest form does not account for aperture effects, it just works with Dirac-delta sampling. But if you want, you can call metamerism the aliasing error of the eye. :)
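If you want to see the "three samples" picture numerically, here is a toy sketch with made-up Gaussian cone sensitivities (nothing physiological): any change of the spectrum that lies in the null space of the 3 x N cone matrix is invisible to the eye - metamerism as aliasing:

```python
# Toy "metamerism as aliasing": made-up Gaussian cone curves, 3 samples of the spectrum.
import numpy as np

wl = np.linspace(400e-9, 700e-9, 61)                         # wavelengths, metres

def cone(centre, width=40e-9):                               # assumed Gaussian sensitivity
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

M = np.vstack([cone(c) for c in (440e-9, 540e-9, 570e-9)])   # 3 x N detector matrix

spectrum1 = np.exp(-0.5 * ((wl - 550e-9) / 60e-9) ** 2)      # some smooth test spectrum

# Any spectral change lying in the null space of M leaves all three cone outputs unchanged.
_, _, Vh = np.linalg.svd(M)
null_dir = Vh[3]                                             # a null-space direction (M has rank 3)
spectrum2 = spectrum1 + 0.03 * null_dir / np.max(np.abs(null_dir))  # kept small so it stays non-negative

print("cone responses 1:", M @ spectrum1)
print("cone responses 2:", M @ spectrum2)                    # identical to numerical precision
print("spectra differ by up to", np.max(np.abs(spectrum1 - spectrum2)))
```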

By the way, welcome to CGH!

Petr
Dinesh

Just realised something!

Post by Dinesh »

Hi Petr
Thanks for the comments. I think I understand that you cannot use Whittaker-Shannon on the eye because of the limited number of elements in the spanning set. The Dirac function did not occur to me. As I understand it, the Whittaker-Shannon theorem relies on scanning with shah functions (must it be so?), and clearly the signal into the eye is not scanned by Dirac functions. I was struck by the fact that the signal into the eye is band limited, so I assumed it would be possible to use Whittaker-Shannon to recreate the signal entering the eye, even if biology does not actually do this. However, the biological system of three cones is tremendously redundant, since there is so much overlap between the cone sensitivities. So I was wondering: did nature choose redundancy over precision? Why? Could it be that speed was more important than absolute precision? I remember Isaac Asimov writing that (over)intelligence was not a good characteristic for survival: if a tiger was approaching you and you were trying to figure out all the possible strategies to evade it, you'd probably get eaten; if you just ran, you'd probably survive. However, if nature can do W-S on the eye but chose not to, is it possible in machine vision? Do they already do this?
lobaz wrote:By the way, welcome to CGH!
Thanks. The problem with self-study is that it's difficult to come up with good, original problems. Reading all the papers, it seems that faster algorithms are what's desired.
lobaz
Posts: 280
Joined: Mon Jan 12, 2015 6:08 am
Location: Pilsen, Czech Republic

Just realised something!

Post by lobaz »

Hi, Dinesh,
Dinesh wrote:I think I understand that you cannot use Whittaker-Shannon on the eye because of the limited number of elements in the spanning set.
Whittaker-Shannon states that a signal in the time domain (let's stick with the time domain) can be reconstructed from samples taken in the time domain - under certain conditions. A single cone does, in a sense, take samples in the time domain by 1) prefiltering the input with a bandpass filter to let just (e.g.) "blue" come in, and 2) integrating the incoming signal over a really long time window (compared to the period of the signal). Step 2 is the main limitation - it is not possible to undo this integration, as the window spans many periods of the incoming signal. The fact that we have three types of detectors does not help much - all of them perform step 2 and, moreover, their responses are not time correlated.
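A scaled-down sketch of why step 2 cannot be undone (with the frequencies lowered by many orders of magnitude so they can be simulated; all numbers are illustrative only):

```python
# Scaled-down model of step 2: a detector that integrates energy over a long window.
import numpy as np

fs = 100_000                                     # simulation rate, Hz
t = np.arange(0, 0.1, 1/fs)                      # a 0.1 s window: many periods of the signal

def detector(signal):
    # "step 2": integrate the signal's energy over the whole window
    return np.sum(signal**2) / fs

# Two inputs inside the same passband, with different frequency, phase and history
a = np.cos(2*np.pi*1000*t)
b = np.cos(2*np.pi*1300*t + 0.7)

print(detector(a), detector(b))                  # essentially identical outputs
# The fine temporal structure (frequency, phase, order of events) does not survive
# the integration, so it cannot be reconstructed afterwards.
```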
Dinesh wrote:The Dirac function did not occur to me. As I understand, the Whittaker-Shannon theorem relies on scanning by Shah functions (must it be so?) and clearly the signal into the eye is not scanning by Dirac functions.
The shah distribution is in fact just a sum of equally spaced Dirac delta distributions. Sampling a signal f(t) is in fact just multiplication with shah, i.e. sampled_signal(t) = f(t)*III(t). In more advanced versions of the Whittaker-Shannon theorem it is assumed that a real detector measures the signal over a non-zero time interval. This is taken into account in both the sampling and the reconstruction of the signal. In this version, you can assume either that the sampling process is different from simple multiplication, or that the sampling process is the same but the sampled function is not. So - yes, the common use of Whittaker-Shannon relies on shah.
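One way to see that aperture effect numerically (a toy sketch with arbitrary numbers): averaging over a window of width w before sampling just multiplies a tone's amplitude by sinc(f·w) relative to ideal Dirac sampling:

```python
# Toy aperture effect: averaging over a window of width w before sampling attenuates
# a tone by sinc(f*w) compared to ideal Dirac-delta sampling.
import numpy as np

f = 3.0                                           # tone frequency, Hz
T = 0.05                                          # sampling period, s
w = 0.03                                          # detector aperture (integration window), s
t_n = np.arange(40) * T                           # sampling instants

ideal = np.cos(2*np.pi*f*t_n)                     # Dirac-delta samples

# finite-aperture samples: average the tone over [t_n - w/2, t_n + w/2]
tau = np.linspace(-w/2, w/2, 501)
aperture = np.array([np.mean(np.cos(2*np.pi*f*(tn + tau))) for tn in t_n])

print("predicted attenuation sinc(f*w) =", np.sinc(f*w))
print("max deviation from that prediction =",
      np.max(np.abs(aperture - np.sinc(f*w) * ideal)))   # essentially zero
```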
Dinesh wrote:I was struck by the fact that the signal into the eye is band limited, so I assume it is possible to use Whittaker-Shannon to recreate the signal entering the eye, even if biology does not actually do this.
Sadly, it is not possible. Imagine a short pulse of red light that can just be perceived, say 1/400 s long, immediately followed by a short pulse of green light. As the reaction of the eye is so slow (due to step 2 above), you cannot tell which one came first. I still assume we are talking about the time domain!
Dinesh wrote:However, the biological system of three cones is tremendously redundant, since there is so much overlap between the cone sensitivities. So, I was wondering: Did nature choose redundancy over precision? Why? Could it be that speed was more important than absolute precision? I remember something Isaac Asimov wrote that (over)intelligence was not a good characteristic for survival. If a tiger was approaching you, and you were trying to figure out all the possible strategies to evade the tiger, you'd probably get eaten. If you just ran, you'd probably survive. However, if nature can do W-S on the eye but choose not to do so, is it possible in machine vision? Do they already do this?
I think it has a lot to do with the speed of the processing, or, better said, the affordable resources that can be used for visual signal processing. A fast system has to use a small processing unit (a small brain), and this also implies reduced input bandwidth (the eye has to filter out a lot of information so that the processing unit is not overloaded). Maybe this is the reason that at least some predators lack color vision, while some oysters have a great infra-to-UV range of vision and can tell CW polarization from CCW!
Dinesh wrote:Thanks. The problem with self study is that it's difficult to come up with good, original problems. Reading all the papers, it seems that faster algorithms is what's desired.
CGH is in a similar situation to computer graphics in the early 1970s. We are still struggling with the most basic problems, such as effective hidden-surface removal or realistic material modelling. Fast algorithms are nice at any time, but I don't think they are essential now. For example, the ubiquitous computer-graphics algorithm, the z-buffer, was invented in the 1970s but was dismissed then - who could spare one megabyte of memory for a temporary buffer in 1970? Today, any graphics card supports this algorithm. So I think that for now we should concentrate on how to produce quality high-resolution holograms in a reasonable time, and keep in mind that the algorithms may be implemented in hardware some day.