This article on Hubble images is a guest article by Vishnu Unni. C, a research trainee in optics at the Indian Institute of Astrophysics (IIA).
If you are a space enthusiast, you must be in awe of the beautiful images captured by the Hubble Space Telescope. The colorful shots show us that nature is much more of an artist than a scientist or an engineer. But in reality, the images the telescope sends us are black and white. Colors are added to these raw black-and-white images during post-processing. So why is that? Why does the most advanced telescope, one that can capture galaxies millions of light-years away, take black-and-white shots? To answer this question, let us first familiarize ourselves with some basic concepts.
In the previous article, we saw that the image is the squared magnitude of the Fourier transform of the electromagnetic field at the opening of the telescope. This is known as the Wiener–Khinchin theorem. Most telescopes have a circular opening. So, when we image an object at a very large distance, one that is effectively a point but gives a good amount of light, we will have a planar wavefront incident on the opening. This wavefront has the same phase everywhere.
So, its image will simply be the squared magnitude of the Fourier transform of the aperture of the telescope, which has a constant phase value. Let’s assume that value is one and do the Fourier transform. We will get the image as follows; this is the instrument response (Point Spread Function).
We asked the telescope to capture a point source, and this is the best the telescope can do. So a Point Spread Function, or PSF, is simply a mathematical function that describes how a point source is spread out in an image.
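The recipe above can be sketched numerically: take a circular aperture with unit amplitude and constant phase, Fourier-transform it, and square the magnitude. This is a minimal illustration, not Hubble's actual optics; the grid size and aperture radius are arbitrary choices.

```python
import numpy as np

# Sketch of diffraction-limited imaging: the PSF of a telescope with a
# circular aperture is the squared magnitude of the Fourier transform of
# the aperture function (assumed here to have unit amplitude, zero phase).
N = 512                                           # grid size (illustrative)
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
aperture = (np.hypot(x, y) < 32).astype(float)    # circular opening, radius 32 px

field = np.fft.fftshift(np.fft.fft2(aperture))    # field at the focal plane
psf = np.abs(field) ** 2                          # intensity = |FT|^2
psf /= psf.max()                                  # normalize the peak to 1

# The result is the Airy pattern: a bright central disc surrounded by
# fainter rings, with the peak at the center of the grid.
print(psf[N//2, N//2])
```

Plotting `psf` (for instance with `matplotlib`) shows the familiar central disc and rings of a diffraction-limited point image.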
Now let’s move on to why PSFs are not always perfect.
Getting The Best PSF
We saw how an image forms in a telescope. But that is the ideal case. Imperfections in the mirrors and lenses that make up the telescope degrade the quality of the image. These imperfections are called aberrations. We have all seen light dispersing through a prism. This phenomenon acts as an aberration in telescopes with lenses and is called chromatic aberration. It causes blue light to converge at a different point from where red light converges. The effect is shown in the image below.
We can get rid of this problem by using mirrors instead of lenses. But we still have plenty of aberrations, as shown in the image below. They are caused by the geometry of the mirrors used. Unlike the spherical mirrors we have seen in science classes, good telescopes use aspherical shapes. The shape may be parabolic, hyperbolic, ellipsoidal, or a more complex polynomial.
A designer, or an optical engineer, tries their best to get the PSF (Point Spread Function), the image of a point object, as close as possible to the theoretical PSF, which is limited by diffraction, as we saw in the previous article. But why should we improve the PSF? Oh, it’s very crucial in astrometry, which is the measurement of the accurate positions of stars!
If we get an aberrated image like the ones shown above, say a comatic or astigmatic blur, then what is the actual position of the star? I must remind you that a single pixel in your image can span thousands of light-years at the distance where the star or galaxy is located. The famous Hubble Space Telescope (HST) was launched into orbit with an uncorrected spherical aberration, which caused blurry images. It took a later servicing mission to correct it.
Why Are The Hubble Images Black And White?
The reason for black-and-white images from the Hubble telescope is that the HST’s sensor is not like a commercial camera sensor. Let us understand the sensor first. The sensor is like a bucket collecting water.
The stars are the pipe feeding it with water. If the stars give only a little light, then we have to wait longer for the detector to fill, just as a small pipe takes longer to fill a bucket. So the Hubble telescope spends minutes taking one image. This duration is known as the exposure time.
The process that makes digital images possible is called photo-ionization: the detector material releases electrons when light falls on it. This principle is the same for all sensors, commercial or scientific. Then what is the difference? Efficiency! Science detectors are more efficient at this process, since they must measure the light from very faint sources accurately.
This is the image of the famous Orion Nebula. It’s a spectacular image. In contrast, the following is a raw image of the Orion Nebula from the WFC camera onboard the HST. It is not as interesting as the color image, and if you observe very closely, you may find some noise as well.
The reason we don’t have color images is that commercial detectors have 3 subpixels in one pixel, one each for capturing the Red, Green, and Blue channels separately. When displayed together, we see a color image. So why don’t science detectors have this? They have one large pixel instead of three subpixels. Commercial sensors try to collect all the light possible, but for science, that is not needed.
We wish to observe only in specific colors, to learn about material properties precisely. Science detectors are also carefully calibrated: the number of electrons released when light of a particular wavelength falls on them is accurately known. This lets us determine the amount of light by measuring the electrons.
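The electron-counting idea can be sketched with made-up calibration numbers. The quantum efficiency (electrons per photon), gain (electrons per digital count), and exposure time below are illustrative assumptions, not actual HST values:

```python
# A sketch of how a calibrated detector lets us recover the amount of
# light. QE, gain, and exposure time are assumed, illustrative numbers.
qe = 0.9            # assumed: 0.9 electrons released per incident photon
gain = 2.0          # assumed: 2 electrons per digital count (ADU)
exposure_s = 300.0  # assumed 5-minute exposure

counts = 4500.0                     # digital counts measured in one pixel
electrons = counts * gain           # counts -> electrons
photons = electrons / qe            # electrons -> incident photons
photon_rate = photons / exposure_s  # photons per second hitting the pixel

print(photons)      # 10000.0 incident photons
print(photon_rate)  # about 33.3 photons per second
```

Because every step of this chain is calibrated, the final photon rate is an accurate physical measurement rather than just a pretty pixel value.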
The color in which we observe is determined by state-of-the-art color filters placed before the detector. The colors are chosen according to the science goals of the telescope. All space telescopes carry many filters. The smaller the pixel, the larger the noise, which is why space telescopes have one big pixel instead of three.
But astrophotography is not easy either. An astrophotographer spends a full night taking hundreds of images, each with an exposure time of several minutes, and then hours editing and adjusting the colors. The images may not have digital counts accurately proportional to the source, but they look amazing and play a huge role in bringing all of you to read articles like this.
The color pictures we usually see on websites are produced with similar editing and added colors. The colors can be added artificially, or by combining images taken through different filters.
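The filter-combination approach can be sketched in a few lines: each monochrome exposure is assigned to one channel of an RGB image. The tiny arrays below are made-up stand-ins for real filter frames:

```python
import numpy as np

# Sketch of building a color composite from monochrome filter images,
# the way Hubble color pictures are assembled. The arrays stand in for
# exposures through three different filters (values are made up).
red_filter   = np.array([[0.9, 0.1], [0.2, 0.8]])  # e.g. a red-wavelength filter
green_filter = np.array([[0.5, 0.3], [0.1, 0.9]])
blue_filter  = np.array([[0.2, 0.7], [0.6, 0.1]])

# Stack the three monochrome frames into the R, G, B channels of one image.
color = np.dstack([red_filter, green_filter, blue_filter])
print(color.shape)  # (2, 2, 3): height x width x 3 channels
```

Note that the assignment of filter to channel is a presentation choice; a narrowband filter may be mapped to a color the eye would never actually see, which is one reason published composites are called "representative color."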
Wait a minute! What is the point of taking these black-and-white images? Why do we need to measure the amount of light from stars accurately? Well, the applications are immense. It helps us determine the temperature of a star, which helps us classify it. Precise knowledge of the amount of light from a star helps us figure out whether it has a planetary system, by measuring the dimming of the star as a planet passes in front of it.
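To see why the photometry must be so precise, consider the transit dimming itself: the star dims by the ratio of the planet's disc area to the star's. The radii below are illustrative, roughly Jupiter and the Sun:

```python
# Sketch of why precise photometry matters for planet hunting: during a
# transit, the fractional dimming equals the ratio of the disc areas.
# Radii are illustrative (roughly Jupiter and the Sun, in km).
r_planet = 69_911.0   # planet radius
r_star = 696_340.0    # star radius

depth = (r_planet / r_star) ** 2  # fractional dimming during transit
print(f"{depth:.4%}")             # about 1% for a Jupiter-sized planet
```

An Earth-sized planet around the same star would dim it by less than 0.01%, which is why detecting such worlds demands detectors calibrated to count electrons accurately.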
Similarly, looking for evidence of dark matter involves monitoring the gravitational lensing of light around massive galaxies. We need to know the brightness of the sources in a galaxy to accurately measure the additional light from background objects gravitationally lensed around it. These are general examples of the uses of accurately measured stellar intensities.
Space telescopes are a marvel of engineering and technological skill; it takes a huge amount of expertise to build one. The designer produces the best PSF achievable within the limits. The mirrors are polished to extraordinary smoothness. The alignment of the telescope’s mirrors is sensitive to movements of even a few microns. The electronics are complex: we have to use the best detectors for taking images, and we need the best materials to protect the telescope from the drastic space weather.
We need a good strategy to keep the telescope’s orbit useful, and any problem with the telescope has to be solved remotely. The Hubble Space Telescope is one example, along with GALEX, Kepler, and others.
But compared with ground-based telescopes, telescopes in orbit around the Earth are lucky to have no atmosphere in the way while taking images. A ground-based telescope suffers a great deal from the turbulent nature of the atmosphere. Let’s look into that in the next article.