CCD USER OBSERVING TIPS

Bruce L. Gary (GBL)
Hereford, AZ

This web page is intended for astro-CCD users who would like ideas for improving their observing procedures.  I'm a believer in the idea that the first stage of learning consists of floundering!
Floundering creates a "readiness" for things not tried.  But there's a time when floundering becomes wasteful, or worse, produces procedures that are flawed without the observer's knowledge.  There's probably an optimal mix between floundering and learning from others.

Any reader of this web page should be warned that my procedures are still evolving, and they may have flaws that I'm unaware of.  I occasionally seek help from others, more experienced in what I'm trying to do, and this may be what you're doing by viewing this web page.  But remember, don't let someone else's "wisdom" get in the way of creating your own!

I. Planning for the Night

Every night affords possibilities, even if it's cloudy!

Cloudiness Forecast

Uppermost on my mind is what kind of observing conditions are likely.  The main questions to answer concern cloudiness, atmospheric seeing and sky darkness.  Since I usually start wondering about this at mid-day, the forecasts available by then are quite reliable.

For a cloudiness forecast (of North America) I go to the web site CloudinessForecast.  Actually, I go to the page that's specific to my location, and if you don't find a location near you on their web page, just e-mail Attila Danko and ask him to add your site coordinates to their list.  My assessment of their forecast accuracy (the forecasts are generated by Allan Rahill, Canadian Meteorological Centre) shows them to be pretty good.  Forecast accuracy for the past 2 months is 80% correct for their 6-hour forecast (11 PM conditions queried at 4 PM, Pacific Standard Time), and 76% for their 18-hour forecast.  This web site also gives sky darkness forecasts (based on the atmosphere's vertically-integrated water vapor content and the moon's location for your site).

For West Coast observers, there's a unique problem during April to July - called "marine layer stratus."  It's a low altitude stratus cloud that forms right after you've done pointing, focusing and flat fields, and you're finally ready to observe that fast-fading supernova or GRB afterglow.

Seeing Forecast

The same Canadians have an "atmospheric seeing" forecast web site, but it's under development and should be used only as a crude guide to seeing conditions.  Now that this web site is available I use it instead of checking maps of atmospheric pressure.  Generally, the best seeing occurs when a high pressure system is overhead.  The worst seeing is when local winds are high, which occurs when the site is between a low and a high pressure system (when the horizontal gradient of pressure is greatest).  My site is near mountains, so my seeing is often dominated by "local effects."  For example, downslope winds usually reach my site sometime before midnight, and the non-laminar flow at the interface between this surface-based air mass moving downhill and the stably-stratified overlying air mass can cause serious seeing degradation.  I'll go into more detail about "seeing" in a later section of this web page.  For the purposes of planning a night's observing session it is sufficient to say that if my objectives require good resolution (sharp images) or faint stars (close to my typical limiting magnitude), then the seeing forecast has to be "good" to include those observing objectives.  If the "seeing" is forecast to be average or poor, but the sky is likely to be dark, then suitable observing objects include bright stars requiring photometry and extended nebulae.

Target Planning

Keep it simple!  Don't try to do too much on any one night; rather, do a good job on maybe just one object.

The target chosen should be compatible with observing conditions.  Sometimes I've planned on going deep with my 10-inch (now replaced by a 14-inch), but low altitude clouds came in so I reconfigured cameras and drove to a nearby mountain site to make "pretty pictures."  Some of my best memories have come from these unplanned trips; like the time I made a rare (low latitude) 35 mm camera shot of aurora that wasn't visible from an overcast Santa Barbara (I now live in Arizona).

When the sky is really dark, think about those wide angle pretty pictures where a dark sky is essential.  Asteroid hunting can be deferred to another day.

If there's a full moon, and the seeing is good, consider planets - or, heaven forbid, the moon!  Mars and Jupiter are always changing, and Saturn changes too, in its slow way.  The moons of Uranus and Neptune are fun full-moon projects.

All nighters aren't necessary if what you really want rises before dawn.  Plan to be rested when you wake up at 2 AM to catch that supernova that's just re-appearing from behind the sun.

If your telescope is on a German equatorial mount, you have my sympathies.  Mine won't track past the meridian more than ~1 degree.  When the meridian limit is reached, you have to do a "meridian flip."  The meridian flip involves many things.  Since you're moving 180 degrees in both declination and local hour angle, if there are any cables that could "catch" on the mounting structure it is necessary to be present at the telescope to perform this flip operation (my "warm room" is in my house, so I have to walk out to my backyard sliding roof observatory to perform the meridian flip moves).  The Paramount ME, costing $10,000, is a German equatorial mount with excellent tracking performance, and it permits cable routing through the mount in such a way that cables are unlikely to catch on anything and stretch (or worse, loosen your CCD and cause it to fall on the floor).  After performing the meridian flip, many other CCD control and display settings have to be changed.  If you're using an SBIG AO-7 tilt-mirror stabilizer, you have to reverse the sign of the X-axis telescope drive.  If you're using MaxPoint (which does the same as TPoint, except that it can be used with MaxIm DL), you have to switch to a pointing file that was created for the western half of the sky after crossing the meridian with a German equatorial.  Bottom line:  if possible, never plan on observing through the meridian if you have a German equatorial mount.  But if you're using a fork mount, try always to observe through the meridian, when objects are highest in the sky.

II. Observing Tips

Observing Log

An observing log is absolutely necessary!  I use a ball point pen with quadrille paper (for graphing).  For protection from dew on the paper (a real nuisance) I use a clip board with a metal top that folds open.  Ball point pen entries are for "observing time" notes, and pencil markings on the observing log are for "later time" clarifications.  It's very important to guard the sanctity of what was noted at the time of observing from what you think happened the next day.

At the end of the night's observing I like to write freestyle my "impressions" of the data.  A note like "Good data tonight" captures something essential that can be useful in reconstructing how much credence to place in the data if it becomes important months later.

Flat Fields

Flat field corrections overcome three problems: 1) vignetting, 2) dust doughnuts, and 3) patterns of pixel responsiveness to incident light levels.  My SBIG ST-8E arrived with serious dust doughnut problems; it had dozens of 15-micron diameter rings of low response with higher than normal response in their centers, one with a contrast of 7.5%.  After a cleaning almost all of them disappeared, leaving a paltry 0.9% dust doughnut off to one side.  Dust must accumulate somehow, so flat fields are needed every night that photometry is done.

Every flat frame must have a dark frame of equal exposure subtracted.  A dark frame of equal exposure contains the bias and thermal characteristics that are shared by the flat frame's "light" exposure, so subtracting it automatically removes those unwanted effects and renders the flat frame useful for flattening other exposures for vignetting, etc.  With MaxIm DL I specify that a dark frame be taken for each light frame, and change the exposure time to force a new dark frame exposure to go along with each light frame.
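To make this concrete, here is a minimal Python sketch (using numpy and astropy, not my actual MaxIm DL workflow) of the dark-subtract-and-combine step for flats.  The file names are placeholders.

    import numpy as np
    from astropy.io import fits

    # Placeholder file names; in practice these come from the night's flat sequence.
    flat_files = ["flat_B_1.fits", "flat_B_2.fits", "flat_B_3.fits"]
    dark_files = ["flatdark_1.fits", "flatdark_2.fits", "flatdark_3.fits"]  # same exposure time as the flats

    # Median-combine the equal-exposure darks to suppress cosmic ray hits.
    master_flat_dark = np.median([fits.getdata(f).astype(float) for f in dark_files], axis=0)

    # Subtract the dark from each flat "light" frame, then median-combine the results.
    flats = [fits.getdata(f).astype(float) - master_flat_dark for f in flat_files]
    master_flat = np.median(flats, axis=0)

    # Normalize so that dividing a science image by the flat preserves its count level.
    master_flat /= np.median(master_flat)
    fits.writeto("master_flat_B.fits", master_flat, overwrite=True)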

Pretty pictures don't necessarily need flat field correcting.  Only the pretty picture "deep" objects need good quality flats; the brighter objects can use lower quality flats.  But for photometry it's absolutely necessary to use good quality flats.

The blue sky near sunset produces good flat fields.  I'm careful not to observe low in the sky, since my CCD FOV is large enough that I don't want elevation angle gradients.  Another virtue of the dusk sky is that it is blue, and artificial light sources are notoriously deficient in blue light.  In other words, if you use a home-made artificial light source, you're going to have problems with long exposures for the blue filter's flat frame, and long exposures mean you're going to have cosmic ray artifacts, which means you're going to have to take at least 3 light and 3 dark exposures (for median combine) in order to get one artifact-free blue flat frame!  Whew!  That's a lot of work.

I'm experimenting with halogen lamps, which use tungsten filaments running at a hotter temperature - producing a "whiter" source spectrum (but still more red than blue).  I'll have more to say on this later.

Artificial light sources for flat field creation are an attractive option, however.  They can be used anytime during an observing session.  Thus, you can start observing after midnight and go to bed before sunrise, and still get flat fields from these contraptions.  You also have the option of rotating your CCD in the middle of the night and re-doing the flat field.  SBIG recommends using two white boards.  The big board is viewed by the telescope, while a smaller board illuminates the big board.  This arrangement allows the small board to be illuminated by a light source that has non-uniform directionality (like a flashlight).  Note, the big white board must not be oriented so that the light incident upon it is specularly reflected into the telescope.  That would make the white board appear non-uniformly bright, which would invalidate the flat field.  I find that the big board, when properly illuminated, gives flat fields that agree with the blue sky flat fields (differences much less than 1% across the entire FOV).  I don't use a second small board; rather, I've constructed a light source with a diffusing cover, and this light is isotropic enough that the big board is illuminated uniformly.  I'm using a 75 watt halogen lamp in a goose neck table lamp on the ground for the light source, and the big board is a white foam board (20x29 inches).

For pretty pictures of bright objects, flat frames with 1 or 2% errors are unimportant.  I've constructed something that fits inside the dew shield and uses a night light, and this is adequate for poor quality flats.

The vignetting component of non-uniform response is of course sensitive to the orientation of the CCD.  Rotating it 90 degrees will produce an entirely different response, requiring a new flat field.  In theory, if the CCD were attached to the telescope the same each night, and if new dust doughnuts never appeared, it should be possible to use last night's flat field to correct tonight's observations.  This shortcut appeals to me, so in order to investigate it I have glued a bubble level to the CCD and another to the telescope near the CCD mounting location, and when I attach the CCD I do so carefully so that both have the same level reading.  This should make the vignetting component of the response field identical each night.  (This assumes that focal length differences for cold and warm nights changes vignetting by only small amounts.)

Cloudy nights are ideal for testing flat field strategies.  It's easier to work carefully when no stars are waiting!

The best time of the evening for exposing good quality flats is just after sunset.  Short exposures are desirable in order to keep stars from appearing on the flats.  For my SBIG CCD the shortest exposure that can be made is 0.1 seconds.  I try to start my flats as soon as a 0.1-second exposure produces an image with a maximum count of under 40,000 (out of a maximum that can be registered of 65,535).  Anything greater than ~40 kct (kct = kilo-counts, or 1000 counts) will be in the saturation zone.  Ironically, the blue filter will be the first one that can be used.  The very first time after sunset that an unsaturated blue flat can be made will depend on the f-ratio of the telescope system (since the sky is an extended source).  My Celestron is f/1.9 in prime focus mode (using a Celestron Fastar lens, or a Starizona HyperStar lens) and f/11 at Cassegrain (without a focal reducer).  For spread-out objects, like the sky, the prime focus is 33 times faster (f-ratio squared), and the sky has to be 33 times dimmer before flat fields can begin when I'm working in the prime focus mode.  Whereas I can begin flat fields 15 minutes before sunset for the Cassegrain f/11 configuration, I have to wait until just after sunset for the prime focus configuration.
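To make the f-ratio arithmetic explicit, here is a trivial sketch of that "speed" factor for an extended source like the sky (the numbers are the ones quoted above):

    # For an extended source (the sky), counts per pixel per second scale as 1/f_ratio^2.
    f_cass, f_prime = 11.0, 1.9
    speed_factor = (f_cass / f_prime) ** 2
    print(f"Prime focus is {speed_factor:.1f}x faster on the sky than Cassegrain")  # ~33x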

Flat frames should also be exposed long enough to produce sufficient signal that using them to correct (flatten) an image does not introduce significant noise.  My goal is to end up with flats that have a maximum count, after dark frame subtraction, of between 20 kct and 40 kct.

The blue filter will be the first one that can be used for flat frames.  For prime focus, I can start blue flats ~2 minutes after sunset.  Five minutes later the sky has darkened so much that the blue filter exposure times have to be increased (to keep the counts above 20 kct).  The second filter that can be used for flat frames is infrared; those can begin 2 or 3 minutes later than the blue ones.  Then comes red, then visible, and later comes clear.  The following graph shows one evening's maximum sky counts versus time after sunset (see the figure caption for the configuration).

Figure 1.  Maximum flat frame counts (after dark frame subtraction) for viewing the sky ~10 degrees from zenith (at an azimuth opposite the sunset azimuth) using the CCD at Cassegrain focus with an effective f/5.8 and BVRI filters. The Y-axis is the count rate [counts per second] near the center of the image, where sky brightness is a maximum. A second-order fit to log(CountRate) is shown for each filter. The sun was changing elevation at a rate of 0.21 degrees per minute (latitude 31 degrees, September 28). This chart should be valid for any aperture telescope provided it is configured for f/5.8 and a CCD chip is used that has a similar sensitivity to that used in the SBIG ST-8E (KAF1602E).

To illustrate how to use this graph, consider the desire to take clear filter exposures of the clear (blue) sky such that 0.1-second exposure times yield maximum count values just below saturation, which we'll assume is 40,000 counts.  The highest CLR point in the graph is at 2 minutes before sunset and has a count rate of 520,000 counts per second.  A 0.1-second exposure would produce 52,000 counts, which is slightly too high.  We must therefore ask "at what time does the CLR filter produce 400,000 counts per second?"  The answer is 1 minute after sunset, so this is when flat fields can be started using the CLR filter when the shortest exposure time for the CCD is 0.1 seconds.

Flat fields can be made as late as 20 minutes after sunset, but in order to achieve a high count level, such as above 20,000 counts, it is necessary to use exposures longer than 0.1 seconds.  For example, at 20 minutes the CLR filter has a count rate of 13,000 counts per second, so a 1.5-second exposure is needed to reach a maximum count of 20,000.  For long flat field exposures there is a greater risk of stars appearing in the image, which could ruin the quality of the flat field.  Visual inspection of all flat fields is therefore necessary, even for the short exposure ones.  In desperation (after it has darkened too much for re-taking flat fields), a flat field image with a star present can be edited using "pixel edit" (a feature of MaxIm DL) to achieve a smooth appearance as a replacement for the star region.
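The same bookkeeping can be put into a few lines.  The sketch below assumes you have read a maximum count rate off the graph (the example values are the ones used in the text) and want an exposure that reaches the 20 kct target without exceeding the 40 kct saturation zone.

    def flat_exposure(count_rate, target_counts=20_000, saturation=40_000, min_exposure=0.1):
        """Suggest a flat-field exposure time for a given sky count rate [counts/s]."""
        t = max(target_counts / count_rate, min_exposure)   # long enough to reach the target
        peak = count_rate * t
        ok = peak <= saturation                              # False means the sky is still too bright
        return t, peak, ok

    # Examples from the text (CLR filter):
    print(flat_exposure(520_000))  # 2 min before sunset: 0.1 s gives 52,000 counts -> saturated
    print(flat_exposure(400_000))  # ~1 min after sunset: 0.1 s gives 40,000 counts -> just usable
    print(flat_exposure(13_000))   # 20 min after sunset: ~1.5 s needed to reach 20,000 counts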

Flat fields for the system used in producing Fig. 1 can be taken well before sunset using any filter other than CLR.  The infrared filter appears to be the best one for starting early, as its count rate changes very slowly with time before sunset.

Since the sky is a "distributed source," Fig. 1 should be valid for any telescope aperture that is working at f/5.8, when using a CCD with a similar sensitivity to the SBIG ST-8E (which uses the Kodak KAF1602E chip).  Things change for the worse when working at "fast" f-ratios.  For example, when I'm configured at prime focus (Fastar, HyperStar, etc.) with an f/11 Celestron, a very fast f/1.86 is achieved.  More sky light photons are intercepted by each pixel for the same sky brightness, other things being equal.  The following version of Fig. 1 applies for my prime focus configuration.

Figure 2. Maximum flat frame counts (after dark frame subtraction) for viewing the sky ~10 degrees from zenith (at an azimuth opposite the sunset azimuth) using the CCD at prime focus with an effective f/1.86 and BVRI filters. The Y-axis is the counts obtained when using my CCD's shortest exposure time [counts per 0.1 second] near the center of the image.  A first-order fit to log(CountRate) is shown for each filter. The sun was changing elevation at a rate of 0.21 degrees per minute (latitude 31 degrees, September 28). This chart should be valid for any aperture telescope provided it is configured for f/1.86 and a CCD chip is used that has a similar sensitivity to that used in the SBIG ST-8E (KAF1602E).

This graph shows that the prime focus configuration is "less forgiving" than the Cassegrain configuration when making flat frames.  The window for making flat frames with short (0.1-second) exposures that have maximum counts between 20 and 40 kct is as short as 3 minutes (for the clear filter) and as long as 6 minutes (for the infrared filter).  If the optimum windows are missed, you'll have to use longer exposures and risk having stars appear in the flat frame images.  If you notice stars, either move the telescope pointing, or expose the flat field's "light" frame while the telescope is moving.  Long exposures increase the likelihood of cosmic ray defects (bright spots if the "light" frame was affected, dark spots if the "dark" frame was affected).  When long exposure flats are used, it is important to inspect them for these defects and reject the flat frame or perform a pixel edit "touch up" before combining it with other flat frames to produce a flat frame for use during the night.

Obtaining good quality flat frames is more complicated than casual CCD users realize.  I'm still learning, and quite often my flats are clearly not good enough.  This section of the web page is likely to expand in the future, as I learn more about this difficult "art."

Atmospheric Seeing

"Seeing" is short for "astronomical seeing."  The best quantitative way to describe seeing is to specify the "full-width at half-maximum" parameter for a Gaussian fit to a star image's "point-spread-function" (PSF).  Throught my web pages I use FWHM for short exposures (typically 4-seconds) to describe "seeing."

The best sites are able to produce short exposure images with FWHM = 0.3 "arc.  Typical values are 0.7 "arc.  Amateurs, living at less desirable sites, should be happy when FWHM = 2.0 "arc, and can expect typical values to be in the range 3.0 to 4.0 "arc.
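For readers who want to measure FWHM themselves rather than read it from their camera-control software, here is a minimal sketch that fits a 1-D Gaussian plus a constant sky level to a cut through a star image.  The star profile here is simulated, and scipy is assumed to be available; the width constant S = 0.60056 * FWHM is derived in the Gaussian Matters section below.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, x0, fwhm, sky):
        s = fwhm / (2.0 * np.sqrt(np.log(2.0)))       # S = 0.60056 * FWHM
        return amp * np.exp(-((x - x0) / s) ** 2) + sky

    # 'profile' is a 1-D cut through a star image, in counts; here it is simulated.
    x = np.arange(40, dtype=float)
    profile = gaussian(x, amp=5000.0, x0=19.3, fwhm=6.0, sky=300.0) + np.random.normal(0, 20, x.size)

    popt, _ = curve_fit(gaussian, x, profile,
                        p0=[profile.max(), x[np.argmax(profile)], 5.0, np.median(profile)])
    fwhm_pixels = abs(popt[2])
    print(f"FWHM = {fwhm_pixels:.2f} pixels")         # multiply by the plate scale to get arcseconds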

Figure 3.  FWHM ("seeing") as a function of elevation angle for my site during the summer season.

Figure 4.  Same FWHM data plotted versus air mass.  The black-dashed trace is a 2nd-order fit and the green-dashed trace is a cube-root fit.
 

Figure 5.  Ratio of FWHM to the air mass model versus time after sunset (for my site).

Focusing

My Meade LX200 10-inch Schmidt-Cassegrain has a focus setting that seems to depend upon just two properties: temperature and the sine of the elevation angle.  Using a circular dial that is glued to the focusing knob I've recorded focus determinations under various conditions and performed a least squares solution using many possible independent variables, and only these two parameters matter.  Temperature is read out with a digital sensor at the end of a long cord, with the sensor attached to the telescope and the read-out display on an outdoor observing desk.  I have a chart at my desk showing the LS solution for best focus versus temperature and elevation angle, which I refer to before starting every focus check.  Having the starting guess close to the final setting makes the focus check procedure go faster.  Once I get an "offset" for that night (caused possibly by slightly different "seating" of the CCD attachment each night), I'm able to count on the "chart reading plus offset" to make adjustments without having to perform focus checks for the rest of the night.  Good focus performance is monitored using the images as they're made.  Since I use MaxIm DL/CCD to control the CCD, filter wheel and telescope, I'm able to work with old images as new ones are being exposed.  I usually check FWHM for a few isolated stars on each image to be sure my focus setting is remaining true to prediction.
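A least squares solution of this kind takes only a few lines.  The sketch below fits the two-parameter model just described; the focus, temperature and elevation readings are placeholders, not my actual chart values.

    import numpy as np

    # Logged focus determinations: dial reading, temperature [deg C], elevation angle [deg].
    focus = np.array([512.0, 508.0, 520.0, 515.0, 505.0])          # placeholder dial readings
    temp = np.array([18.0, 15.0, 24.0, 21.0, 12.0])
    elev = np.array([35.0, 60.0, 25.0, 45.0, 70.0])

    # Model: focus = a + b*temperature + c*sin(elevation)
    A = np.column_stack([np.ones_like(temp), temp, np.sin(np.radians(elev))])
    coeffs, *_ = np.linalg.lstsq(A, focus, rcond=None)
    a, b, c = coeffs

    def predicted_focus(temperature, elevation_deg):
        return a + b * temperature + c * np.sin(np.radians(elevation_deg))

    print(predicted_focus(20.0, 50.0))   # starting guess for tonight; add the night's offset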

I used to think I needed a Crayford focuser, but the procedure just described works so well that the small penalty of added vignetting from a Crayford has changed my mind.

Dark Frames

Whenever a baseline is subtracted from a signal measurement, the result of the subtraction has an uncertainty that is "square-root of two" larger than the uncertainty of either the signal measurement or the baseline (assuming each has the same integration time).  Therefore, it is customary in situations where base levels have to be subtracted to spend equal time on both measurement types (which is the most efficient way of using observing time).  My first thought when I began CCD work a couple of years ago was to always take pairs of "dark" and "light" images, do the subtraction, and repeat.  But then I was swayed by the thought that if the image moves slightly between frames, due to small polar axis mis-alignments, it was unnecessary to make new dark exposures for each light exposure, so I reduced the ratio to maybe one dark for every 4 lights.  But this caused noise "patterns" to appear in the average of the set, since one dark frame's pattern was imposed at locations that moved (opposite to the star field's motion) in a uniform way.  This convinced me that it's bad to use just one dark frame with a set of light frames, so I next began a pattern of 3 or more dark frames per set of 5 or more light frames.  The darks were at the beginning, middle and end of the sequence, in order to assure that if any residual CCD temperature changes were present the average dark frame would at least be compatible with the average light frame (since dark frame artifacts are so very temperature dependent).  I know that others have a library of dark frames categorized by CCD temperature, but I haven't been brave enough to try that yet.
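The square-root-of-two penalty is easy to verify numerically; here is a quick simulated check:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 15.0                                   # per-frame noise, in counts
    light = rng.normal(1000.0, sigma, 1_000_000)   # simulated "light" pixel values
    dark = rng.normal(1000.0, sigma, 1_000_000)    # simulated "dark" pixel values

    print(np.std(light - dark) / sigma)            # ~1.41, i.e. sqrt(2)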

One other issue enters in here:  the longer an exposure, the greater the chances of artifacts appearing in the exposed image.  For my CCD, an SBIG ST-8E, having a layout of 1020 x 1530 pixels, each pixel 9 microns in size, if minute or longer exposures are used I can count on at least one artifact appearing in the exposure.  The artifact problem can be "solved" by taking 3 or more exposures for each of the dark and light images, and later performing a "median combine" of the darks to get a clean dark, using this clean dark with the lights, and doing a median combine of the resultant lights.  This last step assures that artifacts in the lights are removed (as well as the artifacts in the darks).  It sounds complicated, but it isn't when it becomes habit.
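Here is a minimal sketch of that combine sequence with numpy and astropy.  The file names are placeholders, and it omits the image registration (alignment) step that drifting star fields usually require before the final combine.

    import numpy as np
    from astropy.io import fits

    # Placeholder file names for one observing set: 3 darks bracketing 5 lights.
    dark_files = ["dark_1.fits", "dark_2.fits", "dark_3.fits"]
    light_files = [f"target_{i}.fits" for i in range(1, 6)]

    # Median-combine the darks so that cosmic ray hits in any one dark are rejected.
    clean_dark = np.median([fits.getdata(f).astype(float) for f in dark_files], axis=0)

    # Dark-subtract each light frame.
    lights = [fits.getdata(f).astype(float) - clean_dark for f in light_files]

    # Median-combine the lights; this also rejects artifacts in any single light frame.
    # (The frames must be aligned first if the field drifted between exposures.)
    clean_light = np.median(lights, axis=0)
    fits.writeto("target_median.fits", clean_light, overwrite=True)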

There is some loss of information when using "median combine" versus averaging, but not much.  I like to create median combined sets, after which it's safe to use averaging of the clean sets.  This seems like a good compromise.

Overall, I probably devote 30% of an evening's observing to dark frames.  I'm still exploring ideas for reducing that, and am open to suggestions.

Polar Alignment

Finding faint objects quickly
Exposure duration
Drifting during image sequences
 
 

Exposure Times and Guiding Options

Many shorts vs a few longs
Cosmic ray artifacts plague longs
Automated sequences

III.  Post-Observation Analysis

Professional astronomers have a saying that "For every hour at the telescope there will be 100 hours of back-home analysis."  That should be close to true for amateur astronomers too, if you really want to get the most fun out of the hobby.
 

Gaussian Matters

The payoffs for using small apertures to measure a star's brightness require an understanding of Gaussian properties.  This section could have been placed ahead of the Focusing section in order to motivate observers to perform careful focusing.

The Gaussian function is ever-present in matters "stochastic."  Stochastic just refers to a phenomenon in which random events dictate outcomes that are studied after many such random events.  For example, when you drop pennies on the floor you end up with a distribution of "pennies per square inch" that is a 2-dimensional Gaussian.  The equation for a 1-dimensional Gaussian is Z(x) = exp(-(x/S)^2), which can be read "the exponential of the negative of the square of x divided by S."  This is true when the average value for x is zero.  The constant S, for sigma, is a measure of how broad the distribution is.  This function goes from a high of Z = 1.00 at x = 0, down to Z = 1/2 at x = 0.8326 * S, and then decreases further to 1/e at x = S (and x = -S).  A 2-dimensional Gaussian is simply Z(x,y) = exp(-(x/Sx)^2) * exp(-(y/Sy)^2), where Sx and Sy are the width constants in the x- and y-dimensions, and the function is centered on x,y = 0,0.  For a perfect telescope system the "point-spread-function" for photons intercepted by the CCD chip is a 2-dimensional Gaussian.
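The factor 0.60056 that appears below (relating S to FWHM) follows from requiring Z = 1/2 at x = FWHM/2, which gives S = FWHM / (2*sqrt(ln 2)).  A quick numerical check:

    import numpy as np

    # Solve exp(-(x/S)^2) = 1/2 at x = FWHM/2  =>  S = FWHM / (2*sqrt(ln 2))
    factor = 1.0 / (2.0 * np.sqrt(np.log(2.0)))
    print(factor)                                  # 0.60056...

    S = factor * 20.0                              # FWHM = 20 pixels, as in the example below
    print(np.exp(-(10.0 / S) ** 2))                # 0.5 at x = FWHM/2 = 10 pixels
    print(np.exp(-(S / S) ** 2))                   # 1/e = 0.3679 at x = S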

When measuring the brightness of a star, a set of three concentric rings is used.

The innermost ring covers what I call the "signal circle," which should be large enough to enclose most of the star's emission.  The outer annulus is the "reference annulus," and the average counts within this annulus is used as a background level to subtract from the signal circle pixels.  The sum of signal circle pixel counts, with background subtracted, is "intensity."  A star's intensity is proportional to its brightness.  The middle annulus is a gap that allows the reference annulus to be unaffected by the star's Gaussian edges, while allowing the signal circle to be small.  There's a reason for keeping the signal circle small, but not too small, as discussed below.

I'm going to summarize terminology used on this web page (which is not necessarily the same terminology used by others):

    "signal circle" area with a "signal circle radius [pixels]"
    "gap annulus" area with a "gap width [pixels]"
    "reference annulus" area with a "reference annulus width [pixels]"

When reducing data it is useful to have abbreviations for the parameters used in the analysis.  The following illustrates how I record the three parameter values describing the three photometry areas.

    "RGA427" means that the signal circle has a radius of 4 pixels, the gap annulus has a width of 3 pixels, and the reference annulus has a width of 7 pixels.

Now we're ready to learn more about Gaussian distributions.  Whereas mathematicians like to use the "sigma" parameter, in the above equations, to describe the width of a Gaussian, observers almost always prefer to use FWHM, or full-width at half-maximum.

In this Gaussian FWHM = 20 pixels.  Whereas the mathematician would write the equation for the Gaussian as Z(x) = exp(-(x/12.0112)^2), an observationalist would write Z(x) = exp(-(x/(0.60056*FWHM))^2).

The question we want answered is:  How large should the signal circle be in order to maximize SNR (signal-to-noise-ratio)?

Before answering this we must first calculate how many counts can be expected versus the size of the signal circle.  The following graph was calculated in a spreadsheet for a 2-dimensional Gaussian.

The "intensity" trace is the level on the Gaussian at the signal circle edge.  The "volume" trace is the total number of counts (minus background level) integrated across the two dimensional Gaussian within the signal circle area.  The "SNR" trace requires a little more background.

Each pixel has readout noise, due to the thermal jiggling of particles that make up the physical pixel elements of the CCD chip.  The colder the chip, the lower the read-out noise.  Even for a very cold chip, there's a noise component due to the sky background having a finite brightness.  No sky is absolutely dark, even in space.  Both components of noise are approximately the same for all pixels in a typical observing situation.  Since noise is random, the noise from the sum of N pixel elements is proportional to the square-root of N.  The number of pixel elements in the signal circle increases as the signal circle radius squared, so the noise from the sum of counts within a signal circle increases linearly with signal circle radius.  For example, in going from a signal circle radius of 0.4 times FWHM to 0.8 times FWHM, the noise from the total counts doubles but the total number of counts increases by a factor of about 2.3 (0.82 / 0.35, readings from the above graph).  So there's an SNR payoff in using a signal circle radius of 0.8 * FWHM versus 0.4 * FWHM.  This concept was used to produce the dashed blue trace in the above figure, which shows that the best SNR is achieved when the signal circle radius is 0.8 * FWHM.  Bigger signal circle areas produce bigger counts, but they also produce a greater percentage uncertainty on the total count value.
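Here is a small sketch of the kind of spreadsheet calculation behind that figure: the enclosed ("volume") fraction of a 2-dimensional Gaussian, and a simple SNR proxy equal to enclosed counts divided by radius (appropriate when sky and readout noise dominate).  In this idealized model the SNR curve is broad and peaks a little below 0.8 * FWHM; the exact optimum depends on the noise terms assumed.

    import numpy as np

    fwhm = 1.0
    S = fwhm / (2.0 * np.sqrt(np.log(2.0)))             # width constant, = 0.60056 * FWHM

    radii = np.linspace(0.1, 2.0, 200) * fwhm
    enclosed = 1.0 - np.exp(-(radii / S) ** 2)          # fraction of the Gaussian volume inside r
    snr_proxy = enclosed / radii                        # noise grows linearly with r (sqrt of pixel count)

    for r in (0.4, 0.8):
        print(f"r = {r}*FWHM: enclosed fraction = {1 - np.exp(-(r * fwhm / S) ** 2):.2f}")
    print(f"SNR proxy peaks at r = {radii[np.argmax(snr_proxy)]:.2f} * FWHM")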

Other factors may influence the best choice of signal circle radius.  When an interfering star is close to the object of interest, it may be necessary to choose a smaller signal circle to avoid including the interfering star.  Interfering stars can also be a problem for the reference annulus.  This is illustrated in the next figure.

A supernova that is near a galactic arm may be measured with greater precision by avoiding inclusion of the arm emission in the signal circle.  There's another subtle effect when the object is bright, not worth describing here.

Reference annulus areas are always best when they're big.  Big reference areas produce a more precise average background level for subtracting from the signal circle counts.  However, as noted above, big reference annuli are likely to include interfering stars, which will bias the determination of the average background level (unless fancy histograms are used).  Also, a reference annulus must be suitable for the several reference (comparison) stars as well as the object of interest.  Common sense should dictate the choice of a reference annulus width as well as the middle gap width.

Returning to the signal circle:  since the best SNR is achieved with a radius of 0.8 * FWHM, a smaller FWHM permits a smaller signal circle, which collects less sky background and readout noise, so a small FWHM leads to better SNR.  The equation for SNR (not shown here) can be simplified by stating that SNR is proportional to the reciprocal of FWHM (all other things being equal).  Thus, it is important to observe under good "atmospheric seeing" conditions, and to perform careful focus adjustments.  For example, on a good seeing night, with FWHM = 2.0 "arc (that's good for my site), compared with a poor seeing night, with FWHM = 4.0 "arc, the ratio of SNRs for the two nights (for equal exposure times, etc.) is 2.0.  On the poor seeing night the same SNR can be achieved, but it will take 4 times as long (since SNR is proportional to the square-root of total exposure time, theoretically).  A good figure of merit for each night's observing is the square of the reciprocal of the FWHM that's achievable.  Don't waste a night of good seeing!

Reduction Log

Data analysis is also called "data reduction."  This is an odd term since you're producing lots of additional data to keep track of.  In theory, after data reduction you can throw away the raw data, as well as the reduction notes and intermediate images, and thus reduce your archive.  However, it is good practice to never throw away anything!

The sooner raw data can be analyzed, the better.  Anomalies that happen while observing may be crucial to understanding an image under analysis, and the anomalous circumstance is not always noted in the observing log.  Since memories fade, it is important to try to analyze observations the very next day.

Select Only the Best for Averaging

FWHM criteria

____________________________________________________________________

This site opened:  April 24, 2002 Last Update:  September 28, 2003