Wednesday, July 05, 2006

Astronomical Data - 2c. Image Calibration

So far in this series on Astronomical Data, we've explored the origins of light (hip-hoppin' electrons) and its strange physicality (a wavy particle thing).

In section two, we discussed some of the troubles light runs into on its way to observers. Natural sources, like gas and dust clouds and our atmosphere, are the first challenge.

Next up, there are all sorts of difficulties in the CCD chip itself: imperfections in the size of each box, heat-induced noise, and general electronic noise.

So the question after all this was: with all this distortion, how can we ever get any useful data?

The answer is that we have to correct for as many of these as possible. Thus, we'll now explore what methods go into this.

Because it's more straightforward, I'm going to start with how we correct for problems with CCDs before going into the effects caused by nature.

If you recall there were three main problems with a CCD that need to be fixed:

1. Certain photon collecting boxes were more sensitive than others (because they're bigger) while others were less sensitive (because they may have dust or other obscuring material over them).

2. General motion of atoms can bump off electrons giving false signals.

3. General electronic noise.

To explain how this cleaning process works, let's start by looking at a raw image.



Looking at this, you’ll probably notice quite a few things. One is that the background sky isn’t actually black. This is a result primarily of the general electronic noise and/or the bumped-off-electron noise. Another thing that should stand out is that there are some dark doughnut-looking things. The highly technical, jargon-filled, and oh-so-scientific term for these is “doughnuts”. As mentioned earlier, these are a result of out-of-focus dust particles somewhere in the system.

What may not be so obvious is that certain regions are slightly brighter than others. You might also not notice (depending on your monitor settings and how zoomed in you are) that adjacent pixels in the sky, where everything should be even, certainly aren’t.

All of these in some way or another are (probably) due to the three items I’ve mentioned before (I’ll discuss a few other sources of noise that aren’t always present towards the end).

So since there are three main causes of noise, it makes sense that there will need to be three steps in the cleaning process.

Since those dust doughnuts really stick out, we’ll start with those. To correct for this, astronomers take what’s known as a “flat frame”. If you remember back on day 9 of my internship, we went to the observatory and I posted an image of a white square inside the telescope dome.

This white square is evenly illuminated (well, as best as possible) by an out of focus slide projector across the room. We then take an image of this flat field using the telescope.

The result is something like this:



Now those dust doughnuts really stand out. This will allow astronomers to fix all those dust doughnuts as well as any individual pixels that are slightly more or less sensitive than others. The trick is “how?”

The first thing that must be done is to figure out what the average value is supposed to be. So a computer program takes each pixel and finds the average. Keep in mind that a fairly low-end astronomical CCD can be 2048 × 2048 pixels (i.e., about 4.2 megapixels). If it weren’t a computer doing it, that’d be quite the task.

The next step is to compare the value of each pixel on this flat frame to the average. If it ends up higher, that means that individual pixel is, for one reason or another, too sensitive. If it’s too low, that means the pixel was under-sensitive (either because its box on the CCD was smaller, or there was dust blocking it, etc…).

So now we can play Goldilocks with each pixel and see whether it’s too sensitive, not sensitive enough, or juuuuust right. But what’s better, is that for those that are off, we also now know by how much!

Taking this information, we then apply it to the actual image taken of whatever object we happen to be looking at. Pixels that the flat field told us were under-sensitive get brightened in proportion to how far below the average they were; pixels that are too sensitive get dimmed similarly. (In practice, this is done by dividing the science image by the flat frame after scaling the flat to its own average.)
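If you'd like to see the flat-field idea as code, here's a minimal sketch in Python with NumPy. The little 4×4 arrays and their values are entirely made up for illustration:

```python
import numpy as np

# A toy 4x4 "flat frame": an evenly lit target, but some pixels read
# high (too sensitive) and some read low (dust, smaller boxes).
flat = np.array([
    [100.0, 102.0,  98.0, 100.0],
    [100.0,  50.0, 100.0, 101.0],   # the 50 is a dust-darkened pixel
    [ 99.0, 100.0, 103.0, 100.0],
    [100.0, 100.0, 100.0,  97.0],
])

# Step 1: find the average value across the whole flat frame.
average = flat.mean()

# Step 2: each pixel's sensitivity relative to that average.
sensitivity = flat / average

# Step 3: correct a raw science image by dividing out the sensitivity.
raw_image = np.full((4, 4), 200.0)   # pretend every pixel saw the same light
corrected = raw_image / sensitivity  # the dust-darkened pixel gets brightened
```

The division does both jobs at once: under-sensitive pixels get boosted by exactly how far below average the flat says they were, and over-sensitive ones get knocked down the same way.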

One problem down!

Problem #2 was that, as atoms move around, they’ll bump into one another, knocking off an electron, which gives a false signal.

The easiest way to stop this from happening is to stop it before it starts and just keep those atoms from moving! To do this astronomers cool the CCD to extremely low temperatures using liquid nitrogen (which is also good for blowing up watermelon).

In doing this, the number of electrons that get bumped off drops to something like one per hour. Statistically, not even worth mentioning.

However, for smaller, non-professional grade telescopes this isn’t an option for multiple reasons. First off, liquid nitrogen isn’t something you can just pick up at the grocery store. But even if you could get a hold of some, the equipment necessary costs tens of thousands of dollars. Lastly, the equipment is also very heavy, hence it can only be used on telescopes large enough to lug it around.

So how do astronomers using smaller telescopes deal with this problem?

The idea is very similar to what’s done with a flat field image. This process is known as a “dark frame”. A dark frame is essentially an image taken without the shutter open. They’re taken for the same amount of time that the actual image is taken for. As with anything else, several are taken and then averaged so we get a better idea just how much thermal noise there is. Dark frames aren’t nearly as fun looking as the flats.

As with the flat frame, the extra readings, as determined by the dark frame, are subtracted out.
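The dark-frame procedure can be sketched the same way. Again, the frame size, the 10-count thermal level, and the number of darks are invented numbers, not anything standard:

```python
import numpy as np

rng = np.random.default_rng(0)

# Several "dark frames": shutter closed, same exposure time as the
# science image, so they record only the thermal (false) signal.
darks = [np.full((4, 4), 10.0) + rng.normal(0, 1, (4, 4)) for _ in range(5)]

# Average the darks to beat down the randomness and get a cleaner
# estimate of the thermal signal in each pixel.
master_dark = np.mean(darks, axis=0)

# Subtract the averaged dark from the raw science image.
raw_image = np.full((4, 4), 200.0)
dark_subtracted = raw_image - master_dark
```

This is exactly why several darks get averaged: one noisy dark frame subtracted from one noisy image just adds noise, while the average of many converges on the true thermal signal.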

So that only leaves one source of noise: The random electrical noise.

While I didn’t state it earlier, this source of noise is actually controlled by the manufacturer. The reason for this is rather technical, but some small amount does need to be present to be able to analyze the uncertainty in measurements. However, for the main image processing it does need to be subtracted out, like the thermal noise.

Therefore, another image is taken, called a “bias frame”. This time, the shutter is left closed, and the exposure time is zero. This means that there’s no thermal noise involved, and any electrons present are just ones seeping in because it’s an electronic device.

So by taking another image, we can figure out how much noise there is and subtract that amount from each pixel. Again, the image isn’t very exciting, and generally looks like this:
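In code, the bias correction looks almost identical to the dark correction, just with zero-second exposures. The 100-count offset and the noise level here are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Several "bias frames": shutter closed, zero-second exposure, so the
# only signal is the electronic offset plus a little read noise.
biases = [np.full((4, 4), 100.0) + rng.normal(0, 2, (4, 4)) for _ in range(10)]

# Average them into a cleaner estimate of the electronic offset.
master_bias = np.mean(biases, axis=0)

# Subtract the offset from each pixel of the raw image.
raw_image = np.full((4, 4), 300.0)
bias_subtracted = raw_image - master_bias
```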



So now with these three sources of noise, we’re almost ready to start actually analyzing the image!

But wait! There’s more!

Aside from electronic noise, there’s one more source of outside error that I haven’t mentioned. I left it for now because it’s actually one that’s corrected in the computer as well. To present it, we’ll jump back to the earlier post regarding where light comes from.

You’ll recall that a photon pops out when an electron falls into a lower orbital from a higher one. This is where light in stars comes from. There’s lots of atoms with jumping electrons there.

You might find this a bit hard to swallow, but there’s another place that has atoms: Our atmosphere. Shocking isn’t it?

While these atoms aren’t generally heated to as high a temperature as a star, there is enough heat to cause electrons to get bumped up and then fall back down. This is known as fluorescence. The more atmosphere you’re looking through, the more light you’ll get from it. This is the reason that distant hills look washed out.

So even at night, there’s small amounts of light generated by our atmosphere that we need to take into account. Thus, we will generally measure how bright the empty areas are, and subtract that from everything.
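That sky-subtraction step can be sketched too. A median is one common way to measure the "empty" level, since it ignores the few bright star pixels; the toy image here (a 12-count sky with one fake star) is my own invention:

```python
import numpy as np

# Toy image: a flat sky level of 12 counts, plus one "star".
image = np.full((8, 8), 12.0)
image[4, 4] += 500.0   # the star

# Estimate the sky from the empty areas. The median works well here
# because the handful of star pixels barely affects it.
sky_level = np.median(image)

# Subtract that level from everything.
sky_subtracted = image - sky_level
```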

Let’s review what we’ve fixed so far today:
1) Under or over sensitive pixels using “flat frames”
2) Heat induced thermal noise by cooling with liquid nitrogen or subtraction of “dark frames”
3) Electronic noise with “bias frames”
4) Sky noise via average subtraction

With these four corrections made, the only thing left in the image should be light coming from the star! No dimming or brightening involved.
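For the curious, here's a rough Python/NumPy sketch of how the four corrections chain together. The ordering shown is one common convention, not the only one, and real reduction software worries about extra details (exposure-time scaling, overscan, and so on) that I'm skipping:

```python
import numpy as np

def calibrate(raw, master_bias, master_dark, master_flat):
    """Chain the four corrections: bias, dark, flat, then sky."""
    # Remove the electronic offset and the thermal signal.
    reduced = raw - master_bias - master_dark
    # Normalize the (bias-subtracted) flat so dividing by it rescales
    # each pixel's sensitivity without changing overall brightness.
    flat = master_flat - master_bias
    flat = flat / flat.mean()
    reduced = reduced / flat
    # Finally, subtract the sky background.
    return reduced - np.median(reduced)

# Toy inputs: 100 counts of bias, 10 of dark, 50 of actual light,
# and a perfectly uniform flat. A real frame would have a star in it.
raw = np.full((4, 4), 160.0)
out = calibrate(raw,
                master_bias=np.full((4, 4), 100.0),
                master_dark=np.full((4, 4), 10.0),
                master_flat=np.full((4, 4), 300.0))
```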

However, there’s a few more problems that may need to be taken into account that I’ll go over now.

The first is dead pixels. Sometimes the buckets collecting the photons are just broken and don’t work. In the end, there’s nothing that can really be done about this. What’s generally done is to take an average of all the pixels immediately adjacent to them and assume that would be the approximate value. Not a perfect solution, but the best that we can do.
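That neighbor-averaging patch is simple enough to sketch directly. The 5×5 toy image and the choice of a 3×3 neighborhood are just for illustration:

```python
import numpy as np

def patch_dead_pixel(image, row, col):
    """Replace a dead pixel with the average of its neighbors."""
    img = image.copy()
    # Clip the 3x3 neighborhood to the edges of the chip.
    r0, r1 = max(row - 1, 0), min(row + 2, img.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, img.shape[1])
    patch = img[r0:r1, c0:c1]
    # Average everything in the neighborhood except the dead pixel itself.
    total = patch.sum() - img[row, col]
    count = patch.size - 1
    img[row, col] = total / count
    return img

image = np.full((5, 5), 40.0)
image[2, 2] = 0.0               # a dead pixel reading nothing
fixed = patch_dead_pixel(image, 2, 2)
```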

Another rare, but rather annoying occurrence is that the CCD can be struck by what’s known as a cosmic ray. These ultra-high energy particles stream right through your body all the time. They’re extremely small, so chances are, they’ll go right through. However, sometimes they do hit.

If they do, it completely destroys the atom it strikes, sending a shower of particles everywhere. The scattered particles strike more atoms and make an even bigger mess.

The end result is thousands of electrons suddenly falling into the collecting boxes in the CCD. This means that the readings for those boxes will be extremely high. In fact, so many electrons are generally splattered into the boxes, that those boxes overflow. The technical term for overflowing is “saturating”.

As with before, there’s nothing that can be done about this. Hopefully such strikes will land in an area of the CCD not important to what’s being studied. But if one does land there and makes a big enough mess, that entire exposure is a wash.
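Even though the damage can't be undone, the strikes can at least be found automatically, since they sit absurdly far above the sky. A crude sketch of that flagging (the 16-bit saturation value and the 8-sigma threshold are assumptions, not standards):

```python
import numpy as np

SATURATION = 65535.0   # full-scale value for a typical 16-bit CCD (an assumption)

def flag_bad_pixels(image, sky, sigma, threshold=8.0):
    """Mark saturated pixels and anything wildly above the sky level."""
    cosmic = (image - sky) > threshold * sigma
    saturated = image >= SATURATION
    return cosmic | saturated

image = np.full((6, 6), 100.0)
image[3, 3] = SATURATION        # a saturated cosmic-ray strike
mask = flag_bad_pixels(image, sky=100.0, sigma=5.0)
```

Flagged pixels can then be ignored (or patched like dead pixels) when the image is analyzed.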

So with all that, I hope you have a bit more of an understanding, and possibly appreciation, for all the work that goes into taking an image, and getting it ready to be analyzed. It’s not an easy task. But with a bit of work, that initial image will finally look something like this:



No dark doughnuts. No grey sky.

In my next mini-essay on this topic, I’ll do some explanation of how those beautiful images are created using CCD cameras, given that CCDs are only black and white.

Then it will be on to part three in which I begin discussing what sorts of things we can learn from all this light we’ve been collecting.

4 comments:

Mek said...

Great post. I'm interested in amateur astrophotography so I found it very enlightening. I finally know what those terms mean!

Bruce Princeton said...

cool.
Have you written any essay on right ways of storing liquid nitrogen also?

Nomæd said...

The images in the post are hosted on a defunct website. Any chance you can re-upload them to some service such as PhotoBucket, Picasa or Flikr, so we can enjoy the visual presentations of the text? :)

Thanks.

Jon Voisey said...

If I can find the original images I will.