Basic CCD Imaging
One of the most important advances in astronomy in recent times has been the ability of the amateur, for a relatively modest investment, to capture images of astronomical objects of a quality that historically only large observatories could produce. You can't completely get around the limited aperture of our smaller instruments of course, but the highly light-sensitive cameras now available to the "general public", together with relatively sophisticated mounts and guide systems, make it possible to do hitherto incredible things from your own back yard, and even from fairly light-polluted sites. But of course it's never quite as easy as it sounds, and unless you have very expensive equipment and are very quick to learn, our opinion is that there's a pretty steep learning curve before you will be able to produce high quality images. These pages are aimed not at the specialist imager, but rather at the beginner - the person who has a telescope and a bit of cash, and who wants to start out on the road of taking and processing their own images. Please remember as you read this that there are numerous alternative CCD cameras, software programs and telescopes available, and only a few are mentioned here, based on our own experience. But the basics remain the same, and hopefully the information set out here will help beginners begin, and will encourage them not to throw their expensive cameras and telescopes into the nearest lake in frustration!
CCD stands for Charge Coupled Device, and put simply the working part of a CCD camera is a "chip" made up of an array of pixels (photo sensors) on a silicon substrate. While the CCD camera shutter is open, photons of light fall on the photo sites on the chip, where they are collected and converted to an electrical charge. When the shutter closes at the end of the exposure the accumulated charge on each pixel is automatically downloaded to a computer and the whole array is stored as a digital image. The charge generated by each pixel depends on the total amount of light (number of photons) which has fallen on it for the duration of the exposure, and so the signal is directly proportional to the intensity of the light and the length of the exposure. In other words the response of a CCD camera to light is linear, unlike that of film emulsions, and that is one of the key benefits of a CCD camera: the longer you expose, the more light you collect and the brighter the image, in a linear relationship over time. (Film suffers from "reciprocity failure" - its sensitivity falls away during long exposures.) CCD cameras are also far more sensitive than film, and so it is possible to image very faint objects quite quickly. A lot of people actually use their camera in "focus mode" to locate, center and focus an object, allowing the camera to take a continuous series of very short exposures whilst moving the telescope around the sky to locate and center the object. The variation of signal strength with light intensity across the chip creates a black and white image of whichever object you are photographing, lighter areas showing up brighter (more charge) than the darker areas. It is also possible to produce color images, by taking red, green and blue filtered images of the same object, and then combining them using special image processing software.
This combination of red, green and blue images is much like the way a TV set produces color using three different electron guns, combining the beams at each point on the screen to make a color image. Luckily these days there is sophisticated software available quite inexpensively to operate your CCD camera, guide the telescope as it tracks the object across the sky, take the pictures and process the images.
The quantum efficiency of a CCD chip - a measure of how much of the light falling on it is converted into a signal - varies with the camera, but for my camera it is approximately 50%, which means, as we have already said, that images can be captured in a much shorter time than on film emulsions. Once an exposure has been made the pixels are discharged (downloaded) and the signal passed to an on-board amplifier in the camera, from where it is converted from an analogue signal to a digital one using an analogue-to-digital converter. It is then sent to the computer and saved as an electronic file, but more about this step later. First, how do we go about setting up to take a CCD image, what equipment do we need, how do we connect it together, how do we take the images and save them, and more importantly how do we process those images so that they look really good?
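If you like to see things in numbers, here is a little sketch of the journey from photons to digital counts. The quantum efficiency, gain and full-well figures below are illustrative assumptions, not the specifications of any real camera:

```python
# A minimal sketch of how photons become digital counts (ADU).
# The QE, gain and full-well values are made-up illustrative numbers,
# not the specs of any particular camera.

def photons_to_adu(photons, qe=0.5, gain_e_per_adu=2.3, full_well_e=100_000):
    """Convert photons hitting one pixel into digital counts (ADU)."""
    electrons = min(photons * qe, full_well_e)  # ~50% of photons are detected; the well can fill up
    return electrons / gain_e_per_adu           # the A/D converter scales electrons into counts

print(round(photons_to_adu(10_000)))  # 10,000 photons -> about 2174 ADU with these assumptions
```

The point to take away is the straight-line relationship: twice the photons (or twice the exposure) gives twice the counts, right up until a pixel's well fills.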
What Equipment Do We Need?
In order to be able to take good quality images of astronomical objects the following items must be available:
A Telescope and the right connecting adapters so you can mount the CCD camera to the telescope
Either an Equatorial Mount, properly polar aligned, or an Alt-Az Mount with a Wedge or a tracking platform (*explanation further down)
A CCD Camera
Some form of Finder Device to locate and centre objects (this can be manual or using a "Go-To" system)
A Computer - preferably a laptop, with sufficient memory capacity to handle and process the files
Possibly a Focal Reducer for the telescope, to shorten the effective focal length so that larger objects fit on the camera's chip
Quite a lot of Patience
Any kind of telescope will do, but before you can take images you will need to be sure that the camera can be properly connected to the telescope. There are a variety of different cameras on the market, and although there are many different types of connector, some attempt has been made to standardize. But beware that telescopes from some manufacturers often require connections and fittings specific to that manufacturer, so you could need a lot of non-interchangeable stuff if you have more than one telescope. There are many ways to connect your camera to your telescope, usually with a nose piece on the camera which slides into a sleeve on the telescope, or sometimes a clamped locking ring. Just be careful to buy the necessary fittings for your telescope/camera setup. Don't forget to also fit a security cable to the camera in case it slides out of the telescope and falls to the ground. Not a good thing to do! Whichever way you finally decide to connect your camera to the telescope, do please be sure to do it first in the daylight! It's really tough to fiddle around in the dark if you haven't done it before, and we can guarantee you will drop an essential screw in the grass, if not the camera itself! It's also a good idea to practice balancing the whole assembly in the daylight before you try to do it in the dark. Knowing how many counterbalance weights you need and roughly where they should be positioned can be a great help before you have to do it by touch and feel alone. Finally, there is no worse feeling than to arrive at the observing field, possibly after several hours' drive, only to discover you are short one essential connector, rendering your imaging session useless.
To be able to take astrophotographs the telescope must be equatorially mounted and accurately polar aligned. An important note is that if you have a telescope like one of the many excellent catadioptric Meade® Go-To scopes, then you must buy yourself an equatorial wedge before you can attempt astrophotography. Many people have asked us why this is, because the Meade telescopes centre on an object and track it very precisely. The answer is that the mounts they use are called "Alt-Az" mounts (altitude/azimuth), which means they move up and down and left and right. Objects do not move in this way across the sky; they move in a wide arc, rotating about the celestial pole, near the star Polaris. The best illustration we can give uses Jupiter and its moons. When Jupiter rises in the east, if you observe it through a telescope you will see that the moons and cloud belts are inclined to the horizon at an angle, rather like this:
As the planet (or any other object) rises in the sky, it will ultimately reach its highest point for the night - called "transit", or culmination - and at that point you would see the following:
So in this example you can see that the object appears to have tilted over, and if you were taking a picture using a camera fixed to a telescope which was only capable of moving "up and down" and "left and right" (altitude and azimuth), then the object would appear to rotate (twist) within the field of view, which would of course blur the image. At the high magnifications which we use to image distant objects, this effect becomes noticeable quite quickly, and any exposure longer than perhaps 30 seconds would make the blurring obvious. You need a wedge!!
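For the curious, this field rotation on an alt-az mount can be estimated with the standard approximation: rotation rate (arcseconds per second) = 15.04 × cos(latitude) × cos(azimuth) ÷ cos(altitude). The site latitude and object position below are purely illustrative:

```python
import math

# Rough sketch of the field-rotation rate on an alt-az mount (no wedge),
# using the standard approximation:
#   rate ("/sec) = 15.04 * cos(latitude) * cos(azimuth) / cos(altitude)
# The latitude, azimuth and altitude below are illustrative values only.

def field_rotation_arcsec_per_sec(lat_deg, az_deg, alt_deg):
    lat, az, alt = (math.radians(x) for x in (lat_deg, az_deg, alt_deg))
    return 15.04 * math.cos(lat) * math.cos(az) / math.cos(alt)

# An object crossing the meridian (azimuth 180) at 50 degrees altitude, seen from 40N:
rate = field_rotation_arcsec_per_sec(40.0, 180.0, 50.0)
print(f"{abs(rate):.1f} arcseconds of field rotation per second")
```

At rates like this, stars away from the centre of the frame smear into arcs within a few tens of seconds, which is exactly why the 30-second limit mentioned above bites so quickly.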
There are a lot of CCD cameras around, and we will make no attempt here to judge one type against another. Our first camera was an SBIG (Santa Barbara Instruments Group) ST-7E, because we were living in America and because they have an excellent reputation for reliability, performance and after-sales service. In fact we were so pleased with the performance of the camera that some years later we bought another SBIG camera, their larger ST-8E. One of the main variations in cameras is in the number of pixels in the "array" and also the size of the chip. Our original ST-7E camera had a chip measuring 6.9mm by 4.6mm and a pixel array of 765 x 510, making a total of 390,150 pixels, whereas the ST-8E had a chip four times larger, with four times the number of pixels. Each pixel in both cameras has an effective size of 9 microns, and ultimately the size of the pixels (together with the focal length of the telescope) determines the sharpness or "resolution" of the images it will take. All of this also to some extent defines which objects will fit on the chip. The larger the chip, the larger the object you can fit on it, for a given telescope. There are many different cameras available these days, with much larger arrays, and we are currently using a QSI (Quantum Scientific Instruments) 683 camera with a seven position filter wheel.
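The relationship between pixel size, focal length and resolution can be put into numbers with the standard image-scale formula: arcseconds per pixel = 206.265 × pixel size (microns) ÷ focal length (mm). The 2000 mm focal length in this sketch is an illustrative assumption, not a recommendation:

```python
# Image scale - how much sky each pixel sees:
#   arcsec per pixel = 206.265 * pixel size (microns) / focal length (mm)
# The 2000 mm focal length here is just an example figure.

def image_scale_arcsec(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(f"{image_scale_arcsec(9, 2000):.2f} arcsec/pixel")  # 9-micron pixels at 2000 mm -> ~0.93

# The same arithmetic gives the field of view from the chip size,
# e.g. the ST-7E's 6.9 mm chip width (6900 microns):
fov_arcmin = 206.265 * 6900 / 2000 / 60
print(f"about {fov_arcmin:.1f} arcminutes across")
```

This is why the same camera frames objects so differently on different telescopes: halve the focal length and each pixel covers twice the sky, and the field of view doubles.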
Some cameras have a built-in guide chip, which is a small additional CCD chip, mounted alongside the main chip and used exclusively to guide the telescope to keep it centred on the object you are imaging. How it works is that the guide chip takes a series of images of a selected guide star while your main chip is exposing the main image. If the guide star moves away from its position on the guide chip, the camera feeds commands to the telescope drive system to make corrections to bring it back to where it was. In theory that all sounds fine, but in practice it can be very difficult to do. It can sometimes be tough to find a guide star which happens to fall on the guide chip and which is bright enough to guide by. It can also be very difficult to find a guide star bright enough to be detectable through filters if you are imaging in colour. Alternative methods do exist for guiding. Some people use a second telescope bolted to the side of the main telescope and attach a separate guide camera to it. This method works well but the guide scope has to be rigidly fixed to the main scope because flexure between them will destroy the accuracy of your guiding. The other problem with this sort of set up is that it restricts the ability to operate inside a dome as the slit is sometimes not wide enough for both scopes to be able to see out in certain configurations.
The method we currently use with the QSI 683 camera is off-axis guiding. This is where a small amount of light is siphoned off by a prism installed in the camera and directed out sideways to the guide camera. This avoids the problem of imaging through filters because the prism is located before the filters. One additional comment about imaging is that some cameras are not fitted with an anti-blooming device, which is a mechanism to stop individual pixels from over-filling when they become saturated with electrons. It prevents brighter stars in an image from "bleeding" into neighbouring pixels and creating an amorphous "blob" where the star should be. If you have a non-ABG camera then you have to take more care when imaging, because you have to make a more careful determination of which exposure length to use. Too short and you will have too faint a signal and your signal-to-noise ratio will be poor. Too long and you will "burn out" the brighter stars. Overall, unless you plan to do astrometry, it is easier to use an anti-blooming camera.
What Exposure Time Should We Use?
This depends on the brightness and type of object you are imaging. For brighter objects take a 1 minute exposure in white light and measure the intensity of the brightest part using your imaging software. Aim for around 40,000 ADU maximum. If it is brighter than 65,000 ADU (the saturation point for most cameras) then you have over-exposed, so reduce the exposure time. If it is a lot less then increase the exposure time. As a guide, for brighter galaxies on a 12.5 inch telescope we would expose either 5 minute or 10 minute images.
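Because the camera's response is linear, scaling up from a test exposure is simple arithmetic. Here is a sketch of that reasoning, using the 40,000 ADU target and 65,000 ADU saturation figures mentioned above:

```python
# Sketch of scaling a test exposure using the camera's linear response.
# Target and saturation figures match the ones discussed in the text.

def suggest_exposure(test_seconds, peak_adu, target_adu=40_000, saturation_adu=65_000):
    if peak_adu >= saturation_adu:
        # Saturated pixels are no longer linear, so scaling is unreliable:
        # just halve the exposure and measure again.
        return test_seconds / 2
    return test_seconds * target_adu / peak_adu

print(suggest_exposure(60, 8_000))  # a 1-minute test peaking at 8,000 ADU -> try 300 s
```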
A Computer and Software
As has been said above, make sure your computer is adequate for the task. Ideally you will need a laptop which you can set up alongside the telescope and use to control the scope and take the images. Some people who are lucky enough to possess an observatory have their desktop computers set up in a next-door room or in the observatory itself, but you do have to be careful about the lengths of your cables and wires. Signal loss can be quite significant over long runs. We use Maxim DL software to run the camera, and also to collect and initially process the images. Final processing is done in Adobe Photoshop. Some people have additional or different image processing software - it is a personal thing. For finding things in the sky we use the Temma Go-To system on our Takahashi mount, which in turn is driven by TheSky 6 software. There is nothing more frustrating than wasting good imaging time trying to find your object. Use a good Go-To system. You won't regret it.
One thing you will definitely need is a way of darkening your computer screen when you are imaging. Yes, we know that you can select "night mode" in all the programmes and the screen will look really dark in the daytime, but under a dark sky you will find that your screen is still way too bright, and you will be hounded off the observing field by others unless you do something. We have found that multiple sheets of red perspex sandwiched between two clear perspex sheets are the best solution.
Taking The Pictures
Turn on the camera, cool the chip down to the set temperature (we always use -10C) and make sure your scope is accurately polar aligned and the Go-To system is on and ready. This is perhaps the place to explain in more detail why we cool the camera. A CCD imaging chip is simply a complicated little piece of electronic circuitry. It therefore generates electronic "noise", which can be reduced significantly by cooling the chip. Don't cool it too far though. If you try to drive the temperature too low your cooler will have to work hard, and the cooler itself can generate electronic noise. In any case the noise can be electronically removed later on, so we operate at -10C. Electronic noise is visible as "sparklies" all over the image. When we first saw them we were all excited because we thought they were stars, but no such luck - they're noise. The advantage of a CCD image is that you can take a "blank" exposure - called a Dark Frame - and subtract it from the main image to produce what is called a Dark Subtracted Image. You can also take something called a flat frame and divide it into the raw image, which compensates for the fact that when looking through a telescope there is always some darkening towards the edges of the frame - a phenomenon known as vignetting. Here is an example of what we mean:
Above are a raw image straight from the camera, a dark frame and a flat field. The dark is subtracted from the raw image, and the result is divided by the flat, to produce the final calibrated image (below).
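In code, the calibration step looks something like the sketch below; real imaging software does this for you, and the tiny 2x2 arrays here just stand in for full frames. Note that the flat is divided into the image, because vignetting dims the light multiplicatively:

```python
import numpy as np

# Calibration sketch: subtract the dark, then divide by a normalised flat.
# The tiny 2x2 arrays are stand-ins for real full-size frames.

def calibrate(raw, dark, flat):
    flat_norm = flat / flat.mean()   # normalise the flat so its average is 1.0
    return (raw - dark) / flat_norm  # dark-subtract, then flat-field by division

raw  = np.array([[1100., 1050.], [1020., 1000.]])  # raw image, including 100 counts of dark signal
dark = np.full((2, 2), 100.)                       # the dark frame
flat = np.array([[1.0, 0.9], [0.9, 0.8]])          # fainter corners = vignetting
print(calibrate(raw, dark, flat))                  # the vignetted corners are lifted back up
```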
So what we do next is to take a series of images, just like the image of Messier 33 which we have used above to demonstrate dark subtraction. But first do make sure the image is properly focused. There are many ways to do this and you can also buy auto-focusers which do the job for you. But do make sure your focus is spot on as it will disappoint you the next day if it is not. Remember as well that focus can change during the night as temperature drops and your telescope changes length ever so slightly.
Now is a good time to explain just what "binning" is, because you might well be wanting to bin some of your images (and that doesn't mean throw them away - not yet, anyway). Imagine your camera chip as an array of pixels all working independently, so that each pixel sends its own signal to the amplifier when you download the image. That is called working in un-binned mode. But your camera is capable of connecting together groups of pixels, usually either in groups of four or groups of nine. The net effect of this is to create clusters of pixels which work together as one pixel. The first case (clusters of four pixels working together) is called 2x2 binning, and the second case (clusters of nine pixels working together) is called 3x3 binning. It's hard to list all the circumstances under which you would choose one or other form of binning, but in our case we only ever really use un-binned and 2x2 modes. Binning increases the speed and sensitivity of the camera, but it also reduces the resolution. You are of course looking for as much resolution as you can get, but the human eye perceives fine detail mainly in brightness rather than in colour, so binning the colours to increase camera sensitivity for the colour shots doesn't noticeably affect the sharpness of the final image and gives you a much better signal for your colour component. Our recommendation is therefore to take unbinned luminance (white light) images and 2x2 bin the colours. Decide on the exposure (covered above), set up a guide star, start the autoguider and away you go. Try to take enough 1, 5 or 10 minute images to give you a total exposure time of maybe an hour of white light and 30 minutes each for the colours.
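Software binning is easy to sketch: each 2x2 block of pixels is summed into a single larger pixel. (Hardware binning does the same thing on the chip itself before readout, which is why it also helps with read noise, but the geometry is identical.)

```python
import numpy as np

# 2x2 software binning: sum each 2x2 block of pixels into one bigger pixel.
# Four times the signal per output pixel, at half the resolution.

def bin2x2(img):
    h, w = img.shape
    img = img[:h // 2 * 2, :w // 2 * 2]            # trim any odd row/column first
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)   # a toy 4x4 "image"
print(bin2x2(frame))                               # becomes 2x2, each pixel the sum of a block
```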
As described above dark subtract and flat field all of your raw images and save them as new files. It is important that the darks and flats which you use for this are median combined masters of several exposures. Take maybe ten darks and ten flats for a certain camera temperature and telescope and then let the image processing software lead you through the median combining process, at the end of which you will have a "Master Dark" and a "Master Flat" which should look somewhat similar to the one we showed previously.
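The reason for median combining rather than averaging is outlier rejection: a cosmic-ray hit or satellite trail that appears on only one frame simply disappears in the median, whereas an average would smear it into the master. A tiny sketch:

```python
import numpy as np

# Why we median-combine calibration frames: the median rejects outliers
# (cosmic-ray hits, satellite trails) that an average would let through.

frames = [np.full((2, 2), 100.) for _ in range(9)]      # nine clean dark frames
frames.append(np.array([[100., 100.], [100., 5000.]]))  # a tenth with a cosmic-ray hit

master_dark = np.median(np.stack(frames), axis=0)
print(master_dark)  # every pixel is 100 - the 5000-count hit has been rejected
```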
Now we have come to the point where we want to stack our individual calibrated images to create a master, stacked image which is roughly equivalent to one long continuous exposure. Again, follow the instructions in your image processing software to firstly combine the luminance images to produce a stacked final luminance. Save it as a separate file. Then do the same with the Blue, Green and Red images and save each as a stacked file, so now you should have a set of four stacked masters. Don't forget to double the size of the colour images at this point, because they have been taken using 2x2 binning and so are half the size of the white light image. Finally use your image processing software to combine your LRGB components to make a final colour version.
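The final assembly steps can be sketched as follows; mean-stacking and simple pixel replication stand in here for the more sophisticated alignment and resampling your imaging software will actually do:

```python
import numpy as np

# Final assembly sketch: mean-stack the calibrated frames, then enlarge a
# 2x2-binned colour master back to the unbinned luminance size by pixel
# replication (real software uses better resampling).

def stack(frames):
    return np.mean(np.stack(frames), axis=0)

def double_size(img):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

lum = stack([np.full((4, 4), v) for v in (10., 12., 14.)])  # three toy luminance frames
red_binned = np.array([[1., 2.], [3., 4.]])                 # a 2x2-binned colour master
red_full = double_size(red_binned)                          # now 4x4, matching the luminance
print(lum.shape, red_full.shape)
```

With all four masters at the same size, the LRGB combine is then a straightforward job for the image processing software.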
Using the same image series we used to demonstrate dark subtraction, here is a complete set of stacked masters for M33, the large spiral galaxy in Triangulum. You will see that the sizes of the stars in each image are approximately the same. This is important. What you might find, especially if your focus has changed slightly between filtered image series, producing slightly larger stars in one image compared to another, is that when you combine these images to make a coloured image the stars will have little coloured rings around them, in the colour of whichever image had the larger stars! Focus is important. Here are the stacked masters in White Light, Red, Green and Blue, followed by the finished image. Good luck.