Last time, I walked through the basic idea of photometry, leaving off with the ever-so-epic cliffhanger that things are always harder than they seem at first. Nature may break down into a series of simple principles, but combining those principles always serves to make life hard for us science types. We try to minimize some of those confounding tricks nature plays by making simplifying approximations, but we can't always get rid of them. And this is no less true in photometry.
So let's now take a look at what happens when we start putting a bit more reality into the discussion.
I'd said earlier that stars behave like blackbodies and showed a bunch of nice, pretty curves. That would be a great way to do things if only there weren't that one little bit of the star that mucks the whole thing up: the atmosphere. Since it can absorb photons out of the nice, simple blackbody spectrum (forming the absorption spectrum), we can't just ignore those lines and toss our photometric filters wherever we'd like. If we did, we might end up tossing one right over the calcium K line, which would cut out a ton of the light we'd receive and make whatever filter we slapped over that wavelength give fainter readings than it should. No good!
Instead, the positions of the filters in a photometric system are carefully chosen to avoid such potential pitfalls. We have spectra for thousands of stars, and we know where lines will typically be. Thus, we can select regions of the spectrum where there aren't absorption (or emission) lines and you're left with primarily a continuous blackbody spectrum.
Well, hopefully. Depending on the photometric system you choose, its standardized bandpasses can be wide, intermediate, or narrow band. The Johnson/Cousins system is a wide band system. In fact, it's so wide that the filters actually have a bit of overlap. In such a case, lines are unavoidable, but since you're also picking up continuum on either side, they're (hopefully) swamped by the signal.
For smaller bandpass systems, this is less of a problem... so long as those lines stay where we can keep track of them. But they don't always do that. If the object we're looking at is moving towards or away from us, the entire spectrum can get shifted one way or the other. That's great if you're trying to do radial velocity measurements, but if it shifts a spectral line into one of your filters, it could be trouble! If that's the case, you'd probably have to use some other filter system to avoid the lines.
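Just to make the numbers concrete, here's a quick back-of-the-envelope check of how a radial velocity can slide a line into a narrow bandpass. The filter numbers and the velocity are made up purely for illustration (and the velocity is deliberately extreme to exaggerate the effect), not taken from any real system:

```python
# Toy check: does a Doppler-shifted line land inside a narrow filter's bandpass?
# The filter center/width and the velocities are illustrative, not from any real system.

C_KM_S = 299_792.458  # speed of light in km/s

def shifted_wavelength(rest_nm, radial_velocity_km_s):
    """Non-relativistic Doppler shift: positive velocity = receding = redshift."""
    return rest_nm * (1.0 + radial_velocity_km_s / C_KM_S)

def falls_in_band(wavelength_nm, center_nm, width_nm):
    """True if the wavelength lies inside a simple top-hat bandpass."""
    return abs(wavelength_nm - center_nm) <= width_nm / 2.0

# Ca II K rests at ~393.4 nm; imagine a hypothetical narrow filter centered at 391.0 nm.
ca_k_rest = 393.4
filter_center, filter_width = 391.0, 3.0

for v in (0.0, -1500.0):  # at rest, and blueshifted by an exaggerated 1500 km/s
    obs = shifted_wavelength(ca_k_rest, v)
    print(f"v = {v:+7.1f} km/s -> line at {obs:6.2f} nm, "
          f"in band: {falls_in_band(obs, filter_center, filter_width)}")
```

At rest the line sits safely outside the toy filter, but give it a big enough blueshift and it slides right in, contaminating the measurement.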
But this all assumes that you can avoid the lines. In hot stars, it's not too hard. In such really hot stars, most elements are highly ionized, so there are few bound electrons left in orbitals to do the absorbing. As such, you don't tend to have very many absorption lines. However, the cooler the star, the more electrons fall into orbitals to do the absorbing and the more lines you get. If you start getting to really cool stars, it's not just atoms you have to worry about, but molecules, which can absorb even more because they can store energy in vibrational and rotational states too. Thus, in the spectrum of a cool star, lines are everywhere! No chance of avoiding them there. Thus, errors are much larger in cool stars than in hot ones.
And line problems don't end there! If you do have a star with absorption lines, remember that those lines are taking light out of the spectrum. Since that light has energy, and energy must be conserved, that energy is going to manifest itself somewhere else. Since you can't get more energy out than was absorbed in the lines, the bluer light that's taken out (blue light is shorter wavelength and higher energy) ends up re-emitted as a larger number of longer wavelength (and lower energy) photons at wavelengths that aren't absorbed. Thus, when you have something being absorbed, it can pop back out at longer wavelengths, enhancing the signal in that part of the spectrum. Typically, this isn't a big problem for a few lines, but if you have a whole bunch of closely spaced lines (like you do in cooler stars), a line blanketing effect kicks in and it can cause some problems.
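To put a rough number on that energy bookkeeping, here's nothing more than the E = hc/λ relation at work (the wavelengths are arbitrary examples, not tied to any particular star): a single blue photon carries enough energy to come back out as a couple of redder ones.

```python
# Back-of-the-envelope energy bookkeeping using E = hc/lambda.
# The 400 nm and 800 nm wavelengths are arbitrary illustrative choices.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9)

blue = photon_energy_joules(400.0)   # a photon absorbed at 400 nm
red = photon_energy_joules(800.0)    # light re-emitted at 800 nm

print(f"400 nm photon: {blue:.2e} J")
print(f"800 nm photon: {red:.2e} J")
print(f"One 400 nm photon carries the energy of {blue / red:.1f} photons at 800 nm")
```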
However, there can be times when you actually want to stick your filter right into an absorption feature. An example of this would be the atmospheric activity value that I keep seeing in the research I've been working on. The idea behind it is that stars like the Sun have whopping great lines due to absorption from calcium (the H & K lines in the visible part of the spectrum). But for active stars, there's actually a tiny emission peak at the center of this great whopping dip. Thus, if you can measure that emission peak in relation to the depth of the absorption line, you can get a handle on the atmospheric activity. So you can toss a nice intermediate band filter on the H or K line, and a narrow band filter on the emission bit, and again, without having to go through all the time and trouble of getting complete spectra, you've got the information you need.
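If you're curious what "measure the emission peak relative to the line" might look like in practice, here's a toy version I cooked up purely for illustration. It's a simplified index of my own construction on a fake spectrum, loosely in the spirit of things like the Mount Wilson S-index, and not the actual definition used in any real survey:

```python
import numpy as np

# Toy activity-style index on a fake spectrum: compare the flux in a narrow band
# at the line core (where the emission peak sits) to the flux in a wider band
# spanning the whole absorption line. All numbers here are made up for illustration.

def band_flux(wavelength_nm, flux, center_nm, width_nm):
    """Mean flux inside a simple top-hat band."""
    mask = np.abs(wavelength_nm - center_nm) <= width_nm / 2.0
    return flux[mask].mean()

# Cartoon spectrum: a broad Ca K absorption dip near 393.4 nm with a small
# emission bump at its center, the way an active star would show it.
wl = np.linspace(390.0, 397.0, 2000)
continuum = np.ones_like(wl)
absorption = 0.8 * np.exp(-0.5 * ((wl - 393.4) / 0.5) ** 2)
emission = 0.15 * np.exp(-0.5 * ((wl - 393.4) / 0.05) ** 2)
flux = continuum - absorption + emission

core = band_flux(wl, flux, 393.4, 0.1)   # narrow band on the emission core
line = band_flux(wl, flux, 393.4, 1.5)   # intermediate band over the whole dip
print(f"toy activity index (core / line): {core / line:.2f}")
```

The more active the star, the taller that central emission bump, and the larger a ratio like this comes out.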
So it's not always bad, but there are still other challenges.
The next major one is that light, as it passes through our atmosphere and optics, ends up getting smeared out. Instead of stars being perfect, infinitely small points that only fill a single pixel on our cameras, the signal gets spread out. If we look at the brightness as a function of distance from the center of a star on our CCD, we'll get something that looks like the image to the right. In the center, the image is the brightest, but some of the light is smeared off in every direction, making it dimmer and dimmer as you move away from that central point. However, since that trail that's dropping off is still made of your precious photons, you can't just ignore them! You have to worry about that too.
This isn't really all that hard though. To see why, let's look at that same star plotted slightly differently. Instead of being a 2-D plot, this one shows the same star's intensity profile plotted in 3-D. The grid represents the grid of CCD pixels, and the height above the plane is how bright the star looked on that pixel. We can see it's the same sort of thing that happened in the 2-D image: it's brightest at the peak and trails off. But we can still deal with that, because at some point it's dropped off enough that you don't really lose much by just chopping it off and counting up what you have. Essentially, the star looks like a big hill, and if you chop the hill off at the bottom and count up all the dirt in it, you can still do just as well as if all that dirt were in a thin, narrow column. Crisis solved!
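In code, that chop-off-the-hill approach is about as simple as it sounds. Here's a bare-bones sketch on a fake image (illustrative only; real pipelines worry about sub-pixel apertures, detector gain, careful sky fitting, and so on): add up the counts inside a circle around the star and subtract the sky background estimated from an annulus around it.

```python
import numpy as np

# Minimal aperture-photometry sketch on a synthetic image.
# All radii and count levels are made up for illustration.

def aperture_sum(image, x0, y0, r_aperture, r_sky_in, r_sky_out):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)

    in_aperture = r <= r_aperture
    in_sky = (r >= r_sky_in) & (r <= r_sky_out)

    sky_per_pixel = np.median(image[in_sky])  # robust estimate of the sky level
    star_counts = image[in_aperture].sum() - sky_per_pixel * in_aperture.sum()
    return star_counts

# Fake image: flat sky of 100 counts plus one Gaussian "hill" of a star.
yy, xx = np.indices((64, 64))
star = 5000.0 * np.exp(-0.5 * ((xx - 32) ** 2 + (yy - 32) ** 2) / 2.0 ** 2)
image = 100.0 + star

print(f"recovered counts: {aperture_sum(image, 32, 32, 8, 12, 20):.0f}")
print(f"true counts in star: {star.sum():.0f}")
```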
At least, until another star comes along. The method I just described (aperture photometry) works great for fields in which the stars are relatively isolated. However, if you have two stars close enough together that the hill of one blends into the hill of the other, then you can't just chop things off at that certain radius, because you'll be getting dirt (light) from the other hill. And you can't ignore the parts where they overlap, because that's your signal! For random sections of the sky this isn't typically a problem, but in high-density regions like the plane of the Milky Way and star clusters, it becomes a huge one.
Time for another trick. And this one's really sneaky. The idea is, since we're looking at a moderately small part of the sky, any atmospheric perturbations that are inflicted upon our field will be more or less the same. Since the light is going through the same optics, that should be the same too. Thus, the amount of distortion should be the same for all stars. What that means in more useful terms is that the shape of each hill should be the same. They should all be described by the same (Gaussian) profile. The only thing that's different is how tall or short the hill is. But the rate that the hill falls off is identical for every star.
So if we can figure out what that shape is, we can make a model hill that we just slide up and down for brightness. To find the shape (known as the point spread function) of the hill, you first need to find some isolated stars whose hills aren't being polluted by other nearby stars. The more stars like this you can build your model off of, the better the model, and the better the data. This method is called "Profile Fit" or "Point Spread Function" (PSF) Photometry. It's not too bad since computer algorithms will try to pick out those isolated stars. Unfortunately, they're not that great and you have to go through each one manually to confirm it's really isolated (and not on the edge of the CCD or anything). When I was doing this for my San Diego internship, the computer would find about 200-250 candidate stars. For each image. For each filter. It took two solid weeks of work to get good modeling stars. It's laborious (which is why the task is relegated to undergrads and data monkeys), but it's doable.
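Here's a stripped-down sketch of the idea (not the actual pipeline from my internship, just an illustration on fake data with a made-up Gaussian PSF): measure the profile shape from a bright, isolated star, then fit the blended stars with only their heights and positions left free while the shape stays fixed.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy PSF-fitting sketch: every star is assumed to share one Gaussian profile shape.
# All brightnesses, positions, and widths below are made up for illustration.

np.random.seed(0)

def gaussian_psf(coords, amplitude, x0, y0, sigma):
    x, y = coords
    return amplitude * np.exp(-0.5 * ((x - x0) ** 2 + (y - y0) ** 2) / sigma ** 2)

def make_star(shape, amplitude, x0, y0, sigma):
    yy, xx = np.indices(shape)
    return gaussian_psf((xx, yy), amplitude, x0, y0, sigma)

# Step 1: measure the PSF width from a bright, isolated "model" star.
isolated = make_star((41, 41), 8000.0, 20.0, 20.0, 1.7) + np.random.normal(0, 5, (41, 41))
yy, xx = np.indices(isolated.shape)
popt, _ = curve_fit(gaussian_psf, (xx.ravel(), yy.ravel()), isolated.ravel(),
                    p0=(isolated.max(), 20, 20, 2.0))
psf_sigma = popt[3]

# Step 2: fit a blended pair of stars, holding the shape fixed and letting
# only each star's height and position vary.
def two_stars(coords, a1, x1, y1, a2, x2, y2):
    return (gaussian_psf(coords, a1, x1, y1, psf_sigma)
            + gaussian_psf(coords, a2, x2, y2, psf_sigma))

blend = (make_star((41, 41), 6000.0, 17.0, 20.0, 1.7)
         + make_star((41, 41), 3000.0, 23.0, 21.0, 1.7)
         + np.random.normal(0, 5, (41, 41)))
popt2, _ = curve_fit(two_stars, (xx.ravel(), yy.ravel()), blend.ravel(),
                     p0=(5000, 16, 20, 2000, 24, 21))

print(f"fitted PSF sigma: {psf_sigma:.2f} pixels")
print(f"fitted amplitudes: {popt2[0]:.0f} and {popt2[3]:.0f}")
```

Because the shape is pinned down by the isolated star, the fit can disentangle the two overlapping hills and hand back each star's brightness separately, which is exactly what the aperture method couldn't do.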
So those are some of the problems that astronomers face doing photometry and how they can sometimes be overcome. This is pretty much all there is to understanding how photometry works and why we do what we do. About the only thing I haven't gotten into very much is a more detailed explanation of just what else we can get out of various filter systems. There's more than just the temperature (for example, the DDO photometric system can give an indication of iron abundance), but that discussion requires delving into each photometric system independently and I'll save it for another post.