In my last post on this topic, we looked at several challenges light faces in reaching us. Aside from possible interstellar clouds obscuring our signal, there's also significant distortion and dimming from our own atmosphere. But the fun doesn't stop there!
In this post we'll explore the final step of astronomical observations: detection, and the problems associated with it.
The first device used to observe starlight was undoubtedly the human eye. In some ways this tool is far superior to any other in use today, yet in many others it falls badly short.
Around 150 BC, Hipparchus, a Greek astronomer, developed a system for classifying the brightness of stars using the unaided eye. The system places stars in categories called magnitudes. The brightest stars he gave a magnitude of 1; the faintest he could observe, a magnitude of 6.
Surprisingly, the human eye works well at determining which category a star belongs in. The problem with this system is that the human eye doesn't respond linearly. That means that if star A is twice as bright as star B, the eye doesn't actually perceive A as twice as bright as B. The reason for this is that the eye's response is roughly logarithmic. Accordingly, the magnitude scale is logarithmic as well, making the system rather awkward to work with.
The magnitude system is even more frustrating when you stop to consider that it runs backwards! Brighter stars get smaller numbers. Thus, when you graph things using the magnitude system, you generally have to flip the Y axis over. Over time, the magnitude system was revised so that there was a strict mathematical definition of magnitude. However, it otherwise remains largely the same.
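That strict mathematical definition (Pogson's, which the revisions settled on) says a difference of 5 magnitudes corresponds to exactly a factor of 100 in brightness. A quick sketch:

```python
def flux_ratio(m_faint, m_bright):
    """Brightness ratio between two stars, given their magnitudes.
    Smaller magnitude = brighter star, per the backwards scale."""
    return 10 ** (0.4 * (m_faint - m_bright))

# A magnitude-1 star vs. a magnitude-6 star: 5 magnitudes apart,
# so the brighter one outshines the fainter by a factor of 100.
ratio = flux_ratio(6, 1)  # 100.0
```

So each single magnitude step is a factor of about 2.512 — which is exactly the sort of arithmetic that makes the system a headache to work with.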
This standard of naked-eye observing held for many centuries. During the Middle Ages, demand for astronomical predictions increased, and astrologers (the forerunners of astronomers) began developing new instruments to more accurately measure the positions of stars and planets.
One important figure among them was Tycho Brahe, who was an extremely interesting character. Apparently he was a rather convivial fellow who enjoyed drinking and dueling; in fact, he lost part of his nose in a duel and had it replaced with a brass one.
Legend has long held that Brahe met his death thanks to his other indulgence. During a dinner with royalty in Prague in 1601, he supposedly drank too much, yet due to the rules of etiquette he was not allowed to leave the table until the host did. After too much wine, his bladder eventually burst. However, more recent investigations have suggested that he most likely died of mercury poisoning.
Yet although his lifestyle was unconventional, he made excellent contributions to astronomy. His observatory housed large sextants with which he could measure the positions of celestial objects with far greater precision than anyone else at the time. Among his discoveries was a "nova", or "new star", in Cassiopeia. This destroyed the long-held belief that the heavens were static and unchanging.
However, Tycho did not release his data during his lifetime. Upon his death, his assistant Johannes Kepler used it to lend strong support to the heliocentric model.
The next major revolution came with the development of the telescope. Although its invention is generally attributed to Galileo, this is incorrect: the telescope already existed, but Galileo made several improvements and was the first to make recorded astronomical observations with it. Among these was the realization that the Moon is covered in small craters, and that there are many stars too faint to see with the unaided eye.
For many years, observations were recorded by hand.
This continued until the invention of photographic film in the mid-1800s, which allowed for major revolutions in astronomy. The main advantage film has over the human eye is the ability to take long exposures and thus bring out detail that would otherwise be lost.
New nebulae far too faint to see even with powerful telescopes were suddenly discovered, along with millions of new stars. Photographic film is also very useful because it allows for very large fields of view.
However, photographic film wasn't without problems. One of the largest was a property known as "quantum efficiency". Don't be intimidated by the big phrase: it just means the percentage of incoming light that is actually turned into an image. With standard photographic film, less than 5% of the light that falls on it actually goes toward making the image.
For everyday use this isn't a problem, because millions of photons are streaming into your camera. But when every photon is precious, capturing only a few percent is a raw deal. Eventually, techniques were developed that pushed the quantum efficiency closer to 10%, but they were difficult and extremely expensive.
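To make the trade-off concrete, here's a quick sketch. The photon rate and target count are made-up illustrative numbers, not measurements of any real source:

```python
def exposure_needed(target_detected, photon_rate, qe):
    """Seconds of exposure needed to detect a target number of photons,
    for a source delivering photon_rate photons/s onto a detector that
    records a fraction qe of them."""
    return target_detected / (photon_rate * qe)

# Detecting 10,000 photons from a faint source sending 100 photons/s:
t_standard = exposure_needed(10_000, 100, 0.05)  # standard film: ~2000 s
t_improved = exposure_needed(10_000, 100, 0.10)  # improved film: ~1000 s
```

Doubling the quantum efficiency halves the time you spend on each target, which is why those expensive techniques were worth the trouble.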
Because of this poor light sensitivity, exposure times have to be relatively long. And since the Earth is constantly turning, the telescope has to track the night sky perfectly until the exposure is finished.
Another problem with film is that its quantum efficiency differs at different wavelengths. Thus film was great for making pretty pictures, but it made extracting numerical data extremely difficult.
Like the human eye, photographic film is also non-linear. Over a certain range of brightness it performs quite well, but past a certain point, no matter how much you increase the light falling on a given part of the film, the image will not come out any brighter. Again, this non-linearity makes extracting data difficult.
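As an illustration only — this is a toy saturating curve, not a model of any real emulsion — the flattening looks something like this:

```python
import math

def film_response(light, saturation=1000.0):
    """Toy non-linear response: recorded brightness grows with incoming
    light but levels off near a saturation value, so past a certain
    point adding more light barely changes the image."""
    return saturation * (1 - math.exp(-light / saturation))

dim = film_response(100)       # dim light: nearly proportional (~95)
bright = film_response(5000)   # near saturation (~993)
brighter = film_response(10000)  # double the light, almost no change
```

Recovering the true brightness from a curve like this requires knowing the curve precisely, which is exactly what made numbers from film so hard to trust.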
By the 1970s, a new type of device had been developed: the Charge-Coupled Device (CCD). These are the sensors used in most digital cameras today. One of the biggest advantages of CCDs is their extremely high quantum efficiency. Even a cheap CCD will often be above 60%, and far more expensive astronomical-grade CCDs have quantum efficiencies closer to 90%! Higher quantum efficiency means shorter exposure times, which means easier tracking, as well as the ability to do more science in one night!
Another major advantage is that CCDs are very linear: doubling the brightness of an object doubles the value the computer reads out.
So what are the disadvantages?
One of the main ones is that CCDs are very expensive, which stems from the fact that they're very hard to produce. That difficulty also keeps them small, so only images of small sections of sky can be taken. To solve this problem, several CCDs can be glued together side by side. But if one CCD is expensive, an array is even worse.
The next is that CCDs take a long time to "read out": getting the image from the CCD camera onto a computer takes quite a while compared to film, which is just a quick *snap*, next frame. If you've ever copied a full-resolution image from a digital camera to your computer, you'll know this can take a few seconds. Astronomical CCDs take even longer, because we're worried about preserving every bit of data, and their images are much larger than those from a consumer digital camera.
Lastly, CCDs are very "noisy". To explain this problem, we'll have to first take a look at how CCDs work.
A CCD, in essence, is just a very large grid of boxes made of (generally) silicon. When a photon hits an atom in one of the boxes, it knocks off an electron, leaving that box with a free electron floating in it. The more photons strike a box, the more free electrons it accumulates.
Once the exposure is done, the CCD counts how many electrons are in each box. That count is then displayed on a computer as the brightness of the corresponding pixel.
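A minimal sketch of that picture — a made-up toy, with quantum efficiency folded in as the chance that any given photon actually frees an electron:

```python
import random

def expose(photon_counts, qe=0.9):
    """Simulate one exposure: each pixel ('box') receives some number
    of photons, and each photon frees an electron with probability qe.
    Returns the electron count accumulated in each box."""
    return [sum(1 for _ in range(photons) if random.random() < qe)
            for photons in photon_counts]

# Three boxes receiving increasing amounts of light:
electrons = expose([100, 500, 1000])  # roughly [90, 450, 900]
```

More light in, more electrons out — and, unlike film, the relationship stays proportional all the way up, which is where the CCD's prized linearity comes from.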
One of the problems with this is that, to count the electrons in each box, each box's contents must be shifted over to the counter. If we're not careful, electrons can spill out along the way, which is why astronomical CCDs shift the boxes slowly, whereas digital cameras run them over as fast as they can.
However, a photon hitting the silicon isn't the only way to free an electron. On a microscopic level, atoms are always bouncing around, and the hotter something is, the more they bounce. Occasionally, a collision between two atoms knocks off an electron. The result is an extra electron that looks like it came from a photon when in reality it had nothing to do with light. This sort of noise is called "dark current", since the CCD reads as if it's receiving light even when the shutter is closed.
Another source of noise is what's called "bias": if a CCD were cleared and then immediately read out, without any exposure time at all, it would still register some signal, simply because electronic circuits aren't perfect.
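Putting these noise sources together in a toy readout model (every number here is made up purely for illustration):

```python
import random

def read_pixel(signal_electrons, exposure_s,
               dark_rate=0.1, bias=100, read_noise=5.0):
    """Toy pixel readout: the true photo-electron signal plus dark
    current (electrons/s, worse when the chip is warm), a constant
    bias offset, and random scatter from imperfect electronics."""
    dark = dark_rate * exposure_s
    return signal_electrons + dark + bias + random.gauss(0, read_noise)

# A 60-second exposure that collected 1000 photo-electrons reads out
# as roughly 1000 + 6 (dark) + 100 (bias), plus a little scatter.
counts = read_pixel(1000, 60)
```

Note that the dark-current term grows with exposure time while the bias term doesn't, which is part of why the two are treated as separate problems.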
The third source of error in CCDs is imperfections in the equipment itself. When making CCDs, it's impossible to make every box exactly the same size, given that each is only a few microns across. Naturally, if one box is larger than the one next to it, it will collect more photons than its diminutive neighbor. But when displayed on your computer, every pixel of the monitor is the same size. Thus the pixel corresponding to the smaller box appears darker than it would if the boxes were perfect, while the bigger box's pixel appears brighter.
Additional equipment problems arise from dust in the optics. Dust casts a shadow onto the CCD, and since the dust is far out of focus, it appears as a darkened doughnut shape on the image.
So by this point, we have gas and dust in space blocking the light, our atmosphere absorbing and blurring it, and our CCDs reading non-existent light as well as unevenly recording what light they finally receive...
How do we ever get anything useful!?
I'll explore how astronomers correct for many of these problems in my next post on this topic.
How's that for a cliffhanger?