*This article on the science of telescopes is a guest article by Vishnu Unni C., a research trainee in optics at the Indian Institute of Astrophysics (IIA).*

**Introduction**

The telescope is an important instrument that has helped mankind position itself in the vastness of the universe and learn about our own celestial family, the solar system, our own galaxy, and various other galactic and extragalactic objects. Most of us have some idea of its history and etymology, but what we often do not know is its underlying principle.

From the lenses and mirrors inside a tubular structure to measuring the size of a star far away, let us learn the science behind a telescope.

**Pinhole Camera**

Before getting started with telescopes, let us understand the simplest form of an imaging device: a pinhole camera. A piece of paper with a hole about half a millimetre across will easily form an image of a light source on the wall opposite it, if the paper is held roughly 5 cm away from the wall. But how does that work?

Essentially, a pinhole is an aperture, and the reason behind the formation of an image is an important and interesting property of light called diffraction (fundamentally defined as the bending of light around corners). But how does that help in forming the image?


**The Interference Pattern**

Let us revisit one of the basic diffraction experiments, one of historical importance in revolutionizing modern science by demonstrating the wave nature of light: Young's double-slit experiment, carried out by Thomas Young in 1801. Prior to this experiment, it was believed that light was made up of particles. The picture below shows a diagram of the experiment and its result: uniformly spaced fringes.

The given image shows the result of the experiment when the two sources have the same phase. If the phase of one source lags or leads the other, the fringes shift left or right accordingly. Those interested in the mathematics of the derivation will find that the angular fringe width is the ratio of the wavelength of light to the separation between the slits (lambda/d). Most interestingly, we will find that the *fringe pattern is the magnitude squared of the Fourier transform of the slits!* This is easy to verify with the typical example of square-wave pulses, with two pulses representing the two openings of the slit set-up.


The image on the left represents two square-wave pulses, and the one on the right represents the magnitude squared of the Fourier transform of the pulses.
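You can check this relation numerically yourself. The sketch below (with illustrative sizes, not taken from the article) builds two narrow square pulses standing in for the double slit, takes the magnitude squared of their Fourier transform, and verifies that the fringe spacing matches the lambda/d prediction:

```python
import numpy as np

# Two square pulses standing in for the double slit
N = 4096                      # samples across the aperture plane
aperture = np.zeros(N)
slit_width = 8                # width of each slit, in samples
separation = 200              # centre-to-centre slit separation, in samples
c = N // 2
for centre in (c - separation // 2, c + separation // 2):
    aperture[centre - slit_width // 2 : centre + slit_width // 2] = 1.0

# Far-field (Fraunhofer) pattern: |Fourier transform|^2 of the aperture
pattern = np.abs(np.fft.fftshift(np.fft.fft(aperture)))**2

# Locate the bright fringes (local maxima above a threshold) and measure
# their spacing; it should equal N / separation, the discrete analogue of
# the lambda/d angular fringe width.
peaks = np.where((pattern > np.roll(pattern, 1)) &
                 (pattern > np.roll(pattern, -1)) &
                 (pattern > 0.1 * pattern.max()))[0]
spacing = np.diff(peaks).mean()
print(f"measured fringe spacing: {spacing:.1f} samples, "
      f"expected N/separation = {N / separation:.1f}")
```

Note that the fringe spacing depends only on the slit separation; the slit width instead sets the broad envelope that makes the outer fringes fade away.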

**Image From Fringes**

Now consider the pinhole aperture as a collection of a multitude of such point sources. Each pair of such sources produces one series of fringes, and the pairs span various separations within the aperture. All of these fringe patterns add up to produce the image. So the image formed by the pinhole is nothing but the magnitude squared of the Fourier transform of the electromagnetic field at the pinhole aperture. Now that is interesting!

Who knew the example problem we did in class would help us understand the science of image formation? But wait, why doesn't the same happen for a larger aperture? Well, it does, but the Fourier-transform relation in the double-slit theory relies on the approximation that the screen is at a distance far greater than the size of the aperture. A distance of 5 cm is large compared to a half-millimetre aperture.

**The Role of Lens**

But how does this help in understanding our telescopes with lenses? What is the use of forming a teeny tiny image? To get a large image, should we keep a 10-metre aperture in the Himalayas and hold the screen at the Bay of Bengal? No, we don't do that; we use lenses. But isn't that related to image formation by refraction of light? The answer is no. *A lens is nothing but an aperture with a phase shifter.* It has a varying thickness.

The varying thickness adds different phase values to the incoming waves. We know the relation: phase added = 2*pi*d*n/lambda, where n is the refractive index and d is the local thickness. The varying value of d at different locations bends a planar wave into a spherical one: owing to its spherical shape, the lens adds a quadratically varying (x^2) phase to the light. This brings the interference pattern that would otherwise form at an infinite distance to the focal plane of the lens. Again, the image formed is the magnitude squared of the Fourier transform of the field incident on the lens.
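The quadratic phase can be checked numerically. The sketch below (with illustrative values for the wavelength, glass index, and curvature, none of which come from the article) computes the phase added by the paraxial thickness profile of a thin plano-convex lens, fits its quadratic coefficient, and recovers the focal length f = R/(n - 1) expected from the lensmaker's relation:

```python
import numpy as np

lam = 550e-9          # wavelength (green light), metres
n = 1.5               # refractive index of the glass
R = 0.1               # radius of curvature of the curved surface, metres
x = np.linspace(-5e-3, 5e-3, 1001)   # positions across the lens, metres

# Paraxial thickness of a plano-convex lens: thickest at the centre
d0 = 2e-3
d = d0 - x**2 / (2 * R)

# Total phase: 2*pi*n*d/lambda inside the glass, plus the phase picked up
# in the air gap d0 - d(x) so every ray is compared over the same span d0
phase = 2 * np.pi * (n * d + (d0 - d)) / lam

# Fit the quadratic coefficient: phase ≈ const - pi * x^2 / (lam * f)
coeff = np.polyfit(x, phase, 2)[0]
f = -np.pi / (lam * coeff)
print(f"recovered focal length: {f*100:.1f} cm, "
      f"lensmaker's value R/(n-1) = {R/(n-1)*100:.1f} cm")
```

The fit recovers f = R/(n - 1): the spherical surface really does act as a quadratic phase shifter, which is exactly what moves the far-field pattern to the focal plane.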


**The Resolving Power**

Now we need to know how good the telescope is. This comes from the resolving power of the telescope. Resolving an image means being able to distinguish one object from another: when one object lies next to the other, they should appear as two separate sources in the image plane.

The angular resolution criterion comes from the width of the finest fringes in the double-slit analogy we used before. The fringe width is lambda/d, where d is the separation between the slits. The maximum possible value of d inside an aperture is its diameter D, so the finest fringe width is lambda/D. This is the smallest angular separation at which we can differentiate one object from another. 'Just resolved' images are shown below:
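To get a feel for the numbers, here is a back-of-the-envelope sketch of lambda/D for a few apertures (the telescope names and sizes are illustrative examples, not from the article):

```python
import math

lam = 550e-9                              # visible light, metres
rad_to_arcsec = 180 / math.pi * 3600      # radians -> arcseconds

# Smallest resolvable angle lambda/D for a few aperture diameters
for name, D in [("10 cm amateur scope", 0.1),
                ("2.4 m Hubble mirror", 2.4),
                ("10 m Keck mirror", 10.0)]:
    theta = lam / D * rad_to_arcsec
    print(f"{name}: {theta:.3f} arcsec")
```

Doubling the aperture diameter halves the smallest resolvable angle, which is one reason astronomers keep building bigger telescopes.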

But is that all there is to the telescope? No, we have just seen the tip of the iceberg. There is a lot more to consider. The above is an ideal case; practical set-ups rarely reach this level of performance. The next article in the series will shed light on that.

If you have any questions, you can contact me at **vishnu.unni.c@gmail.com**

Aravind: Great article, nice simple explanation 👍🏻 Waiting for the next one in the series.

Shahar Gibly: I didn't understand what you meant by "A lens is nothing but an aperture with a phase shifter. It has a varying thickness," or by "The varying thickness adds different phase values to the incoming waves."

I mean, isn't a lens's purpose to change the size or shift the direction (upper to lower, etc.) of an "image" / the rays from a light source?

Thanks!

Vishnu: Yes. The change in phase is

dφ = 2pi*(path_length)/λ'

where λ' is the wavelength inside the glass material. The lens has a varying thickness from the centre to the edge, so it causes a different change in phase for rays hitting different places on the lens.

Ray optics is not actually capable of explaining many phenomena, like interference, so wave optics has to be used to explain them.
