Bruce L. Gary (GBL)
Hereford, AZ, USA
2004 June 24

    Observing Strategy
    Iterative Analysis Procedure


If the earth didn't have an atmosphere that absorbs starlight then all-sky photometry would be trivial. The observer would merely take a CCD image of a region with well-calibrated stars, then another image with the same settings of the region of interest. Corrections would still be required for the observing system's unique spectral response, but these "transformation corrections" would be straightforward.

If the region of interest has well-calibrated stars then the atmosphere has only second-order effects, and fairly good quality brightness measurements can be made using only an image of the region of interest followed by the same straightforward transformation corrections. This is called "differential photometry," and it accounts for almost all measurements made by amateur astronomers who submit CCD observations to the American Association of Variable Star Observers (AAVSO).

Occasionally there are situations where an accurate brightness is needed in a region with no well-calibrated stars. Asteroids are constantly moving through parts of the sky that lack nearby well-calibrated stars, so anyone wanting to determine the spectrum of an asteroid is "on his own" for establishing accurate magnitudes for stars near the asteroid. Novae and supernovae are the most common examples of stars that appear where there are no well-calibrated stars. When a bright one is discovered, the AAVSO does a good job of quickly preparing a chart with a magnitude "sequence," based on quickly arranged observations by a professional astronomer (Arne Henden) who uses "all-sky photometry" to establish the sequence. Not every nova or supernova can be supported this way.

Since a growing number of amateurs with CCD skills are conducting projects that involve accurate brightness measurements, there is a growing need to explain how to perform all-sky photometry. I have encountered enough situations where I needed to create my own chart sequences that I was motivated to learn all-sky photometry techniques. In the process I have developed what I believe is a simpler method for all-sky photometry.

When I started writing this web page I thought I would try to discourage amateurs from trying to do all-sky photometry. I had many concerns about the feasibility of an amateur astronomer taking on a task that requires sophisticated observing strategies and data analysis. I was especially concerned about the amateur being blindly misled in his analysis by spatial inhomogeneities and temporal variations of atmospheric properties, which I had spent a career studying as an atmospheric scientist before my retirement. I was prepared to advise amateurs to restrict themselves to "differential photometry," which is based on a single image of the region of interest. However, while I was learning to do all-sky photometry for my own projects I developed a procedure which I now believe makes it feasible for other amateurs to make their own chart sequences using a version of "all-sky photometry" that I am calling "Iterative All-Sky Photometry." My purpose for this web page has thus changed from discouraging amateur astronomers from doing all-sky photometry to encouraging them to try it, using a technique that I think is simpler, more intuitive, less prone to errors and more accurate than traditional techniques.

The observer who is impatient may click the link "Observing Strategy" or "Analysis Procedure" given above. For those wanting some grounding in fundamentals I begin with a section that reviews atmospheric absorption, as well as difficulties related to temporal variations and spatial inhomogeneities of atmospheric properties.

I also recommend Arne Henden's tutorial web page "How to Create Good Sequences" for a professional all-sky photometrist's perspective. It's good, but too brief. Let's hope his forthcoming book has a lot more detail on this subject.



Fundamentals

First, imagine that you're out in space, holding a magic "unit surface area" so that it intercepts photons coming from a star. The surface can be a 1 meter square, for example, and its magic property is that it can count photons coming through it for any specified interval of wavelengths, for any specified time interval. Let's give it one more magical property: it can count only those photons coming from a specific star that you designate. We might imagine that this last property is achieved by some kind of screen far out in front, having at least the same 1 meter square aperture and located so that the magic surface sees only light from the direction of the star.

This device measures something commonly thought of as brightness, but which astronomers call "flux." Photons of the same wavelength have the same energy, so merely counting photons is equivalent to measuring the energy passing through the magic unit surface. Energy per unit time is power, as in "watts." This magic device measures watts of energy, per unit area, per nanometer of wavelength interval. This is called flux, or a version of flux referred to as Sλ. Let's just call it "S".


If we point our magic unit surface area so that we measure S from one star, then S from another, we'll get two S values:  S1 and S2. We can arbitrarily define something called "magnitude difference" according to the following:  M2 - M1 = 2.5 * log10 (S1 / S2 ).

Let's now arbitrarily assign one star to have a magnitude value of zero. Then all stars brighter will have negative magnitudes, and all stars dimmer will have positive magnitudes. If the flux from that universal reference star is S0, then any other star's magnitude will be given by:

    Mi =  2.5 * log10 (S0 / Si )                Eqn 1

We've now devised a system for describing how many photons pass through a unit surface area, per unit time, per wavelength interval, at a specified interval of wavelengths. And we've arbitrarily devised a dimensionless parameter, called "magnitude," for the convenient statement of that number. Magnitude defined this way is convenient because we don't have to give a long value, such as 1.37e+16 [photons per second, per square meter, between 400 and 500 nanometers]; rather, we can simply say M = 12.62.
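As a quick sanity check on Eqn 1, here is a minimal Python sketch; the flux values are arbitrary illustrations, not measurements:

```python
import math

def magnitude(flux, flux_ref):
    """Eqn 1: magnitude relative to the zero-magnitude reference flux S0."""
    return 2.5 * math.log10(flux_ref / flux)

# A star delivering 1% of the reference flux is exactly 5 magnitudes fainter:
print(magnitude(0.01, 1.0))   # 5.0
```

Note that the scale runs "backwards": smaller fluxes give larger (more positive) magnitudes.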

But wait, we're not done. The wavelength interval is a crucial part of the measurement, since the measurement will vary greatly as we change wavelength intervals. Let's just call the measurement for the 400 to 500 nanometer (nm) wavelength region a "blue" magnitude. And we can define "green" magnitudes, "red" magnitudes, etc., by specifying the wavelength region to be 500 to 600 nm, 600 to 700 nm, etc.

Atmospheric Transmission

Now let's take our magical instrument from outer space down through the atmosphere to the surface of the earth. When we look up, we will count fewer photons. Some of the photons are being absorbed by molecules in the atmosphere, and others are being scattered. The scattering is from two kinds of things: molecules and particles. The molecules scatter in a Rayleigh manner, affecting blue photons most, whereas the particles (also called aerosols) scatter in a way that depends upon the ratio of the wavelength to the circumference of the particle (Mie scattering when this ratio is small, Rayleigh scattering when the ratio is large). At specific wavelengths, especially within the "red" band, water vapor molecules absorb photons (resonant absorption). Ozone molecules also have preferred wavelengths for absorbing photons. For these bands the loss of photons will depend upon the number of water vapor molecules, or ozone molecules, along the line of sight through the atmosphere.

To first order, the loss of photons making a straight line path to the magic unit surface area, located at the surface of the earth, will depend on the following factors:

    Blue       Rayleigh scattering by molecules, non-resonant absorption by molecules, scattering by aerosols
    Green      Non-resonant absorption by molecules, scattering by aerosols
    Red        Non-resonant absorption by molecules, scattering by aerosols, resonant absorption by water vapor molecules

Let's talk more about the aerosols. They may consist of dust particles, salt crystals that are swollen by varying amounts of absorbed water (these are important at coastal sites), sulphate particles (SO4 molecules stuck together with water), volcanic ash in the stratosphere, urban smog, water droplets within clouds, and ice crystals in cirrus clouds. All of these aerosols are capable of presenting angular structure that can pass through a line of sight quickly, such as between the measurement of one star and the next. Even during clear conditions, when the eye cannot see changes in water vapor, the total number of water vapor molecules along a given line of sight can change by a factor of two in less than an hour (personal observation).

The zenith view will usually have the smallest losses. During a typical observing period the entire sky will undergo a uniform rate of change of all of the above factors contributing to loss of photons. Tracking an object from near zenith to a low elevation will cause photon losses to change due to both the increasing amount of air that the photons have to traverse, and due to the changing conditions of the entire air mass in the observer's vicinity. We'll have to come back to this pesky subject later.

If the atmosphere in the observer's vicinity does not change during an observing session, then the photon losses will be proportional to the number of molecules and aerosols along the path traversed by the starlight. The losses are exponential, however: each layer of the atmosphere absorbs or scatters a certain percentage of the photons incident at the top of that layer. For example, if the zenith flux is 90% of what it is above the atmosphere, the 30 degree elevation angle flux (where twice as many air molecules and aerosols will be encountered) will be 90% of 90%, or 81% of the outer space flux. Simple geometry says that:

    S(m) = S(m=0) * EXP(-m * tau)                Eqn 2

where S(m) is the flux measured for an air mass value "m", S(m=0) is the flux above the atmosphere, "EXP" means take the exponential of what's in the parentheses, and "tau" is the optical depth for a zenith path. Tau, the zenith extinction, is sometimes assigned units of "Nepers per air mass." (Apologies to astronomers accustomed to seeing air mass represented by the symbol "x"; I'm going to use "m".)
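Eqn 2 can be checked numerically. In this sketch the tau value is an illustrative assumption chosen to reproduce the 90%-per-air-mass example above:

```python
import math

def flux_at_airmass(s0, tau, m):
    """Eqn 2: flux after traversing m air masses with zenith optical depth tau."""
    return s0 * math.exp(-m * tau)

s0 = 1.0                                   # flux above the atmosphere (arbitrary units)
t1 = flux_at_airmass(s0, 0.105, 1.0)       # ~0.90 at the zenith (m = 1)
t2 = flux_at_airmass(s0, 0.105, 2.0)       # ~0.81 at 30 degrees elevation (m = 2)

# Extinction expressed in magnitudes per air mass:
ext_mag_per_airmass = 2.5 * math.log10(t1 / t2)   # ~0.11 mag per air mass
```

The one-air-mass flux ratio, converted to magnitudes, recovers the familiar "~0.11 magnitude per air mass" figure for a 90%-transmission sky.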

The following figure shows atmospheric transparency for a clear atmosphere with a moderately low water vapor burden (2 cm, for example).

Atmospheric transmission spectrum

Figure 1.  Atmospheric transmission versus wavelength for typical conditions (water vapor burden of 2 cm, few aerosols), for three elevation angles (based on measurements with interference filters by the author in 1990, at JPL, Pasadena, CA). Three absorption features are evident:  a narrow feature at 763 nm, caused by oxygen molecules, and regions at 930 and 1135 nm caused by water vapor molecules. Four thick black horizontal lines show zenith transparency based on measurements made (by the author) with a CCD/filter wheel/telescope for typical clear sky conditions on another date and at another location (2002.04.29, Santa Barbara, CA, SBIG ST-8E, Schuler filters B, V, R and I).

Since zenith extinction changes with atmospheric conditions, differences of several percent can be expected on different days. In Fig. 1 the transparency in the blue filter region (380 to 480 nm) differs by ~10% between the two measurement sets. Changes of this order can occur during the course of a night. This may be an unwelcome thought, but it is a fact that careful photometry must reckon with (discussed below).

Hardware Spectral Response

The concept of "spectral response" is important throughout all that is dealt with here, so let's deal with it now. Consider a single observation (or integration) of a field of interest using a single filter. The term "spectral response" refers to the probability that photons of light having different energies (wavelengths) will successfully pass through the atmosphere (without being scattered or absorbed), pass through the telescope and filter, and then be registered by the CCD at some pixel location. This probability, called spectral response, varies with photon wavelength, ranging from zero at all short wavelengths, to maybe 20% (as described below) near the center of the filter's response function, and back to zero for all longer wavelengths. The spectral response will be a smooth function, having steep slopes on both the short-wavelength cut-on and long-wavelength cut-off sides of the response function. The entire journey of a photon through the atmosphere, the telescope, and the filter, and its interaction with the CCD chip, where it hopefully will dislodge an electron that will later be collected by the CCD electronics when the integration has finished, can be summarized by "probability versus wavelength" functions, described next.

Assuming the observer is using a reflector telescope, or a Schmidt-Cassegrain with small losses in the front glass corrector, the photon that makes it to ground level has a lossless path through the telescope to the filter. For observers using a refractor telescope, there may be losses in the objective lens due to reflections and absorptions. For a good objective, though, these losses will be small.  The remainder of this section deals with what happens to ground-level photons that reach the filter.

Filter Pass Bands

There are two commonly used UBVRI filter response "standards," going by the names Cousins/Bessell and Johnson. Most amateurs use filters adhering to the Johnson response shape. The two systems are essentially the same for U, B and V, and differ slightly for the R and I filters. Observations made with one filter type can be converted to the other using the CCD transformation equations, so it would be wrong to say that one is better than the other. The choice of one system over the other is less important than the proper use of either one (as Optec forcefully states on their web page). Even filters from different manufacturers differ slightly from each other. The following figure shows a typical filter response for BVRI filters made by Schuler.

Schuler filter spectral response

Figure x. Spectral response of a set of photometric quality filters.

Considering those 500 nm photons coming in from a 30 degree elevation, for which only 67% make it to ground level, they may have another 70% probability of passing through the V-filter, for example. In other words, only 47% of photons at the top of the atmosphere and coming in at a 30 degree elevation angle make it to the surface of the CCD chip.

CCD Chip Quantum Efficiency

Photons that make it through the atmosphere and filter still must reach the CCD chip if they are to register in the observer's image. There's the matter of cover plates, protecting the chip and preventing water vapor condensation, which is a minor obstacle on a photon's journey. The real challenge for each photon is to deposit its energy within a pixel of the CCD chip and dislodge an electron, setting it free to roam where it can be collected and later produce a voltage associated with the totality of electrons collected at that pixel location. The fraction of photons incident upon the CCD that produce electrons in a collection "well" is the CCD's quantum efficiency. The quantum efficiency versus wavelength for a commonly used CCD chip is shown in the next figure.

CCD response

Figure 3.  Fraction of photons incident upon chip that free electrons for later collection (KAF 1602E chip, used in the popular SBIG ST-8E CCD imager).

Considering again 500 nm photons, of those that reach a typical CCD chip, such as the one used in SBIG's ST-8E, only 40% dislodge an electron for later collection and measurement.  For the V-filter, therefore, only 19% of those photons at the top of the atmosphere, coming in at 30 degrees elevation angle, actually get "counted" during an integration under typical clear weather conditions.  This number is the product of three transmission functions given in the above three figures.  Each filter has associated with it a total transmission probability, and it depends upon not only the filter characteristics, but also upon the atmosphere and the CCD properties.  For the system used in this example, the following figure shows the spectral response for photons arriving at a 30 degree elevation angle, under typical weather conditions, going through Schuler filters and being detected by the KAF 1602E CCD chip.
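The chain of losses simply multiplies. Using the round numbers quoted above for 500 nm photons arriving at 30 degrees elevation:

```python
# Transmission fractions at 500 nm, taken from the text's worked example:
atmosphere = 0.67   # clear-sky path at 30 degrees elevation
v_filter   = 0.70   # V-filter transmission near 500 nm
ccd_qe     = 0.40   # KAF 1602E quantum efficiency near 500 nm

# Overall probability that a top-of-atmosphere photon is counted:
system_response = atmosphere * v_filter * ccd_qe
print(round(system_response, 2))   # 0.19
```

Any change in one factor (a hazier sky, a different filter, a different chip) propagates directly into the product, which is why each observer's system response is unique.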

Spectral Response Due to All Sources of Photon Loss

The following figure shows the fraction of photons starting at the top of the atmosphere that can be expected to contribute to a star's image for typical atmospheric conditions, using typical filters and a commonly used CCD.

BVRI final spectral response

Figure x.  Response of entire "atmosphere/filter/CCD system" for typical water vapor burden, few aerosols, 30 degree elevation angle, Schuler filters and SBIG ST-8E CCD (KAF 1602E chip).

The reader may now understand how it happens that different observers can have different system spectral responses for their specific systems and atmospheric conditions. Two observers may be making measurements at the same time from different locations, using different filters and CCD imagers, and unless care is taken to convert their measurements to a "standard system" their reported magnitudes will differ. The magnitude differences will depend upon the "color" of the star under observation, as described in the next section.

Different Observers Have Different Pass Bands

To illustrate the fact that different observers can have different pass bands when they're both making B-filter measurements, let's consider two observers working side-by-side but using different filters and CCDs. For example, before I purchased a SBIG ST-8E with Schuler filters, I used a Meade 416XTE CCD with their RGB filter set. The Meade B filter was intended for RGB color image creation, not for photometry. Since the filters weren't designed for photometry (as Meade acknowledges) they will require large corrections during the process of converting observations made with them to a standard system. For the purpose of this discussion, illustrating the concepts of filter differences, the Meade 416 filters provide a suitable example of the need to be careful. The next figure shows the "atmosphere/B-filter/CCD" spectral responses for the two systems under consideration.

B filter responses for different systems

Figure x. Spectral response of different systems. The solid trace consists of a Schuler Bu filter, intended for photometry, and a SBIG ST-8E CCD, whereas the dotted trace is for a Meade B-filter and 416XTE CCD.  The response for both systems corresponds to observing at an elevation angle of 30 degrees in a typical, clean atmosphere (2 cm precipitable water vapor). Both response traces are normalized to one at their peak response wavelength.

The Meade system has a spectral response that is shifted to longer wavelengths compared to the Schuler/SBIG ST-8E system. This shift may not seem like much, but consider how important it can be when observing stars with a spectral output that usually is falling off at shorter wavelengths throughout the wavelength region of these filter pass bands. The next figure shows a typical star's brightness versus wavelength.

B filter responses w/ star spectrum

Figure x.  Spectrum of a typical star, Deneb, having a surface temperature of 4800 K, in relation to the two system's B-filter spectral responses.

When a typical star (such as Deneb, shown in the figure) is observed by both systems, the Meade system is observing "higher up" on the stellar brightness curve, producing a greater spectrum-integrated convolved response than for the Schuler/8E system.  (The "spectrum-integrated convolved response" is the area under the curve of the product of the stellar source function with the filter response function.) For example, the ratio of spectrum-integrated convolved responses in this example is 1.137, corresponding to a magnitude difference of 0.14. In other words, the Meade system will measure a blue magnitude for Deneb that is too bright by 0.14 magnitudes, and whatever correction algorithm is used should end up adding approximately 0.14 magnitudes to the Meade system's B-magnitude. Redder stars will require greater corrections, and bluer stars will require smaller corrections.
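The 0.14 magnitude offset follows directly from the ratio of convolved responses: 2.5 * log10(1.137) ≈ 0.14. The convolution itself can be sketched numerically; the linear "star spectrum" and Gaussian filter shapes below are toy assumptions, not the real response curves, but they reproduce the qualitative effect:

```python
import numpy as np

wl = np.arange(350.0, 551.0, 1.0)                    # wavelength grid, nm
star = wl / wl.max()                                 # toy spectrum rising toward the red
filt_a = np.exp(-(wl - 430) ** 2 / (2 * 40 ** 2))    # B filter (toy Gaussian)
filt_b = np.exp(-(wl - 450) ** 2 / (2 * 40 ** 2))    # same shape shifted 20 nm redward

# "Spectrum-integrated convolved response": area under star spectrum x filter response
resp_a = (star * filt_a).sum()
resp_b = (star * filt_b).sum()

# The red-shifted system samples "higher up" on the rising spectrum, so it
# reads brighter; the offset in magnitudes:
delta_mag = 2.5 * np.log10(resp_b / resp_a)
```

With a spectrum that rises toward the red, the red-shifted pass band always yields a positive delta_mag, i.e. a too-bright measurement that the transformation correction must remove.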

Corrections of this amount are important, which illustrates the need for going to the trouble of performing CCD transformation equation corrections. Observers using filters intended for photometry will presumably require smaller corrections than the 0.14 magnitudes of the example cited here. Since it is reasonable to try to achieve 0.03 magnitude accuracy, corrections for filter and CCD differences are an important part of the calibration process.

To the extent that the atmosphere can change the spectral response of an observer's atmosphere/filter/CCD for any of the BVRI configurations, it may be necessary to somehow incorporate "atmospheric extinction effects" into the data analysis procedure in order to assure that magnitude estimates are high quality. For example, Rayleigh scattering grows as the inverse 4th power of wavelength, so high air mass observations will shift the short wavelength cut-on of the blue filter more than the same filter's long-wavelength cut-off. In effect, high air mass observations are being made with a blue filter that is shifted to the red. The effect of this will be greater for red stars than blue stars. A simple method is described in a later section of this web page for performing a first order correction for atmospheric extinction effects.
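The inverse 4th power scaling makes the size of this effect easy to estimate. In this sketch the 550 nm zenith optical depth is an assumed illustrative value:

```python
# Rayleigh optical depth scales as wavelength^-4. Compare the blue filter's
# short-wavelength cut-on (~400 nm) with its long-wavelength cut-off (~480 nm):
tau_550 = 0.10                        # assumed zenith Rayleigh optical depth at 550 nm
tau_400 = tau_550 * (550 / 400) ** 4  # ~0.36
tau_480 = tau_550 * (550 / 480) ** 4  # ~0.17

# The 400 nm edge suffers roughly twice the optical depth of the 480 nm edge,
# so at high air mass the effective pass band shifts toward the red.
```

Multiplying each tau by the air mass shows why the shift grows as a star sets: the short-wavelength edge is extinguished disproportionately fast.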

Extinction Plot Pitfalls

The next pair of figures shows what happens when losses (absorption plus scattering) vary linearly with time. For simplicity, I have neglected spatial inhomogeneities, which will also be present when there are temporal variations. First, consider a plot of the log of intensity versus air mass when the atmosphere does not change.

  Extinction plot constant atmos

  Figure 2. Extinction plot for constant atmospheric losses. It is assumed that Log(I) outside the atmosphere is 0.00 and that each air mass has a loss of 0.1 Nepers (optical depth = 0.1).

An "extinction plot" uses the log of measured intensity plotted versus air mass. If the Y-axis were simply "intensity" then perfect data plots would be curved; using Log(I) produces straight-line plots, which simplifies analysis. In the above plot the modeled data have a slope, defined as dLog(I)/dm, of -0.045. The slope is a dimensionless quantity, and it should be the same for all stars (having the same spectrum) regardless of their brightness. A slope of -0.045 corresponds to a transmission of 90% per air mass (i.e., 10 raised to the power -0.045 = 0.90). Another way of describing this situation is to state that extinction amounts to 0.11 magnitude per air mass (i.e., -2.5 * Log(0.90)). Thus, for a view where m=1 the observed star intensity is 90% of its outside-the-atmosphere value, at m=2 it is 81% (90% of 90%), etc. This is a typical extinction value using a V-filter under good sky conditions.
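The slope arithmetic can be verified with a small least-squares fit to noiseless simulated data:

```python
# Simulated extinction-plot data: Log10(I) versus air mass for a constant
# atmosphere with 90% transmission per air mass (true slope = -0.045),
# with Log(I) = 0 outside the atmosphere.
airmass = [1.0, 1.5, 2.0, 2.5, 3.0]
log_i = [-0.045 * m for m in airmass]

# Least-squares slope dLog(I)/dm recovers the extinction:
n = len(airmass)
mx = sum(airmass) / n
my = sum(log_i) / n
slope = sum((x - mx) * (y - my) for x, y in zip(airmass, log_i)) / \
        sum((x - mx) ** 2 for x in airmass)

transmission = 10 ** slope    # ~0.90 per air mass
ext_mag = -2.5 * slope        # ~0.11 magnitude per air mass
```

With real data the fitted slope carries measurement noise, and, as the next figure shows, it can also be biased by temporal change in the atmosphere itself.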

Now consider the same plot but for a linear temporal change in atmospheric loss, as might occur when an air mass transition is in progress.

 Extinction plot for varying atmos

 Figure 3. Extinction plot for temporally varying atmospheric losses. The dashed lines are fitted to rising and setting portions of the measured intensity.  

I've seen this double-branched extinction plot many times during my studies of atmospheric extinction. One branch corresponds to the rising portion of data and the other corresponds to the setting portion. If intensity measurements are made during only one of the branches then a fitted slope would imply an incorrect extinction per air mass value.

Several properties of the atmosphere can contribute to temporal changes in atmospheric transmission. Sub-visible cirrus (just below the tropopause) that is present in one air mass but not its surroundings is probably the most common source. Volcanic ash can be present above the tropopause (in the lower stratosphere). Volcanoes also eject SO2 gas, which combines with water vapor to form sulfate aerosols, and these also will be distributed in a non-uniform manner. Water vapor in the lower troposphere is almost always undergoing change at a given site. Vapor burdens (the vertical integral from the surface to the top of the atmosphere) can vary from 1 cm to 6 cm (precipitable water vapor), and I have documented factor-of-two changes during a half-hour interval at a site in Pasadena, CA. Smog is found near urban sites and the associated aerosols can contribute to large changes in atmospheric transmission. Clearly, some sites will be more prone to these atmospheric transmission changes than others. Coastal and urban sites will be worse, generally, but even desert sites will experience some of the above sources of atmospheric change.

This serves to illustrate that careless observing strategies can produce extinction plots with misleading slopes, and it is these slopes that are used to derive atmospheric extinction (Nepers of loss per air mass, or optical depth per air mass).

I can think of a couple of ways to deal with an atmosphere that is varying with time. First, restrict all observations to a small air mass range while alternating between the ROI and standard star fields. Second, alternate observations of standard star fields that are rising and setting. If this second approach is used the two star fields will exhibit discrepant slopes for the same air mass region, and this allows a determination of the temporal trend as well as an extinction value for the approximate midpoint of the observations.
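The second approach can be sketched as a joint least-squares fit. The air mass tracks and the "true" extinction and trend values below are invented for the demonstration; the point is that a rising field and a setting field together break the degeneracy between air mass and time:

```python
import numpy as np

# Model: log10(I) = c_star - k*m - r*t, where k is the extinction slope
# (per air mass) and r is a linear temporal trend affecting the whole sky.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])         # hours since start
m_rise = np.array([2.0, 1.7, 1.5, 1.35, 1.25])  # rising field: air mass falls
m_set  = np.array([1.25, 1.35, 1.5, 1.7, 2.0])  # setting field: air mass rises

k_true, r_true = 0.045, 0.010                   # assumed "truth" for the demo
y_rise = -k_true * m_rise - r_true * t          # simulated noiseless log-intensities
y_set  = 0.3 - k_true * m_set - r_true * t      # second field, different brightness

# Unknowns: [c_rise, c_set, k, r]
A_rise = np.column_stack([np.ones(5), np.zeros(5), -m_rise, -t])
A_set  = np.column_stack([np.zeros(5), np.ones(5), -m_set, -t])
A = np.vstack([A_rise, A_set])
y = np.concatenate([y_rise, y_set])

sol, *_ = np.linalg.lstsq(A, y, rcond=None)
c_rise, c_set, k_fit, r_fit = sol               # recovers k ~ 0.045, r ~ 0.010
```

Fitting either branch alone would fold the temporal trend into the extinction slope; fitting both simultaneously separates the two, which is exactly the point of alternating rising and setting fields.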

I don't want to describe more details of how to process extinction plots when trends exist, for my purpose here is to simply sensitize the all-sky photometrist to a problem that is easily overlooked.

This Fundamentals section has described the reasons all-sky photometry is a challenging task, and I hope it has sensitized the prospective all-sky observer to the importance of doing it carefully. The next section will give specific observing strategies that I have found useful in performing all-sky photometry.


Observing Strategy

Planning! Before every night's observing session I formulate a plan. This is especially important for all-sky photometry. Every observer will have a preferred procedure, but each procedure should address the issues raised in the previous Fundamentals section. In this section I'll present my preferred way of all-sky observing.

Assume there's just one "region of interest" (ROI) where a photometric sequence is to be determined. I use TheSky 6.0 to make a list of the ROI's elevation versus time. The preferred time to observe is when the ROI is at its highest elevation, but this is not always possible. For us poor people who mistakenly bought a German equatorial mount telescope, there's a big penalty for crossing the meridian. A dozen things have to be changed after a "meridian flip," and the data on each side of the meridian are not guaranteed to be compatible. I NEVER do all-sky photometry with my Celestron CGE-1400 using data from both sides of the meridian. In fact, I always observe on just one side of the meridian the entire night, photometry or not, and thus avoid the nuisance of the whole "meridian flip" routine. (I'll never buy another GEM, never!)

After deciding on the range of time (and elevation) for the ROI observations, groups of well-calibrated standard stars can be chosen. The guideline is to observe standard stars throughout the same range of elevations as the ROI. I try to include at least one group as close to zenith as possible and another as low in elevation as possible. In addition, I try to include groups of standard stars at the beginning and end of the night's observing session that are at the same elevation, preferably near the ROI's low elevation range. This choice will be useful in assessing the existence of temporal trends.

With TheSky displaying in the "horizon" mode it is easy to see which stars are at elevations of interest. I often use the Landolt list of well-calibrated stars for my primary calibration. The Landolt list contains 1259 stars in groups that are mostly along the celestial equator, as shown in the following figure.

Landolt map

Figure x.  Locations of Landolt stars.  Declination [degrees] is the Y-axis and RA [hours] is the X-axis. (Note that RA shows a negative sign, due to a limitation of my spreadsheet program.)

TheSky is able to display this list of stars (with whatever symbols you like) by specifying the Landolt text file's location in the Data/SkyDatabaseManager menu. You can download this Landolt text file from LandoltTextFile

Since none of the Landolt stars go through my zenith I also depend on Arne Henden's set of sequences (constructed mostly for the AAVSO, I suspect). His data can be found at Arne1 and Arne2.

A third source for standard stars is the large database maintained by the AAVSO. You first have to pick a star that's in the AAVSO chart database and then download a chart for it. Not all charts are good quality, and only a few have more than just V-magnitudes, so this data source is inconvenient.

Example Observation Session

To illustrate the procedure I use for all-sky photometry I will use an observing session conducted June 19, 2004 (UT). A Celestron CGE-1400 telescope was used with a SBIG ST-8XE CCD at prime focus, using a Starizona HyperStar adapter lens. Custom Scientific B, V and R photometric filters were used. All images were made with the CCD cooled to -10 C. Focus adjustments were made several times during the 3-hour observing session. In my prime focus configuration the Custom Scientific filters are not parfocal, which required that I refocus for each filter change, using a table of previously-determined filter focus offsets. Sky conditions were "photometric," and based on atmospheric seeing of the past few nights and the recent reliability of ClearSkyClock seeing forecasts for my site, I suspect that the seeing afforded FWHM ~2.3 "arc (for a Cassegrain configuration). The prime focus configuration produces a "plate scale" of 2.8 "arc/pixel, and due to slight distortions caused by an imperfect prime focus adapter lens I was able to achieve a FWHM of no better than 7.5 "arc. I at least covered a large field of view, 72 x 48 'arc, using a "fast" system, f/1.86.

The purpose for the June 19, 2004 observing session was to establish a photometric sequence for a cataclysmic variable that had been recently discovered as a suspected nova. The object has a designation of 1835+25 (plus a temporary designation of VAR HER 04). This outbursting binary star is located in the constellation Hercules (about a minute of arc from the border with Lyra), at 18:39:26.2, +26:04:10. The magnitude for the June 19 observations was ~12.7.

The following figure is a screen shot of TheSky ver. 6.0 showing how two Landolt star groups and one Arne Henden sequence (for an AAVSO blazar) were selected to calibrate a star field near the new nova, labeled "Nova 2004".

Figure x. Resized and compressed screen shot of TheSky in horizon display mode showing the region of interest ("Nova 2004", within middle-left oval), two Landolt star groups (middle-right ovals), Area 111 (lower-left oval, containing Landolt stars) and an AAVSO chart with Arne Henden calibrated stars (top-left oval). Other Landolt star groups are shown by the blue symbols. The meridian is shown by a red dashed trace from the lower-right horizon to the top-middle border. Elevation isopleths are shown by thin red traces parallel to the horizon. (The reduced size, compression and conversion to JPEG greatly degrade readability, made necessary to keep the file size reasonable.)

Other "planetarium" display programs can probably also be used to select standard stars (but TheSky is a great deal and has all the features any CCD photometrist would want).

After selecting the groups of standard stars to be used as a primary magnitude standard an observing schedule should be decided upon. Recall the two strategies described briefly in the previous section for dealing with slowly varying extinction: 1) restrict all observations to a narrow range of elevations and alternate observations of standard star regions with the ROI, and 2) alternate observations of standard star fields that are rising and setting and determine the trend as well as an average extinction value that can be used with the ROI observations (made later, earlier or during the standard star observations).

Since my telescope is on a German equatorial mount the second option is not feasible. Therefore, I always employ the first strategy for dealing with the possibility of extinction trends during an observing session. For the observing session being used as my example I decided to start out observing the "Landolt A" group (right-most oval) and then the ROI (lower-left oval). This timing places the two targets at about the same elevation, and closely spaced in time, which means that this pairing will be unaffected by later assumptions about temporal extinction trends and extinction values. This is a conservative strategy, and it assures that something useful could be done with just the first two targets (in case equipment problems or other show-stoppers prevented later targets from being observed). It may not have been the best strategy, but for this observing session I then observed Markarian 501 (upper-left oval), the ROI, Landolt A, and continued through that cycle until one of the targets reached the meridian, then an observation of Area 111 at a low elevation (since it rose late in the night's observing).

Notice that the first Landolt A standard star observations and the Area 111 standard star observations were to be made at about the same elevation, at the beginning and end of the night's observing session. This makes them an ideal pair for assessing any trend in extinction value.

My observing plans included a list of things to do before dark. For example, review the previous observing session log notes to recall if anything anomalous occurred, review which filters are in the filter wheel (I have a SBIG "pretty picture" set and a photometric set), review the CCD settings that were last used (hardware present, flip values, image scale for proper centering), review the adequacy of the last pointing map, and determine if any telescope reconfiguration and rebalancing will be needed. For this specific observing session I had to reconfigure from Cassegrain to prime focus using the Starizona HyperStar field flattening lens, and this had to be done before dark. This also required a rebalance and an approximate refocus using nearby mountains (so that the flat fields would be at the correct focus setting). I also had to replace some "pretty picture" filters with photometric filters. My plan had to include new flat fields shortly after sunset, for each filter, and a completely new pointing calibration (using MaxPoint). Every reconfiguration and rebalance requires a new pointing calibration. The plan called for setting the CCD cooler to a value that could be established quickly (such as +5 degC), prior to the flat field images of the zenith sky after sunset. The plan also included a sequence of targets, as well as the brightest star to be included in the analysis (needed for establishing an exposure time that does not lead to "saturation" of the brightest star). Users of MaxIm DL 4.0 will recognize some of the tasks listed above. That's the hardware control and image analysis program that I use.

All pre-target items on the plan were implemented before sunset. After the flat fields were complete I cooled the CCD as far as possible. I determined that the night's CCD cooler setting would be -10 degC for the entire night, since that's as cold as I could get without exceeding a cooler (TEC) duty cycle of 90%. For all-sky photometry it's absolutely essential to use the same CCD cooler setting the entire night, and to periodically check that the cooler duty cycle is not approaching 100% (due perhaps to a warming that could be caused by sundowner winds for example).

I observed each target field with the B, V and R filters. Each filter had to be focused differently since my photometric filters are not "parfocal" when at prime focus (the pretty picture filters are parfocal). An exposure time had to be determined for use on all targets. This probably isn't necessary, but that's my current preference. A long exposure would cause saturation for the brightest stars, rendering them unusable for photometry. I decided upon an exposure time of 10 seconds after quickly studying the first V-filter exposure of a star field with known magnitudes.

While observing I keep an "observing log" with a ball-point pen.  After observations have terminated I use only a pencil to add comments based on recollections. This ink/pencil distinction can help in reconstructing what really happened months later if a re-analysis or review of the observations is made. For each exposure sequence I note such things as UT start time, object, filter, exposure time, guided (by AO-7 image stabilizer) or unguided, focuser sensor temperature (taped to the telescope tube), outside air temperature, wind speed and direction, and sometimes my impressions of data quality as I calibrate and review images coming in "on the fly."  When a focus sequence is performed I log the focus setting and 6 FWHM readings for a sequence of focus setting values centered on the expected best focus position. As soon as this set is completed I plot FWHM for the best 3 of each set of 6 FWHM readings, and do a hand-fit trace to establish the best focus setting. I note the best FWHM value for the focus sequence, and use it as a reference to monitor drift away from best focus during later imaging, in order to determine when a new focus sequence is needed.

On this occasion, June 19, all standard star fields and the ROI were observed successfully. Depending upon how tired I am I try to begin data analysis shortly after observations are completed. Invariably, data analysis is best done as soon after observations as possible. The memory of odd occurrences and other impressions fades with time, and during analysis all these extra memories are potentially helpful.


Differential photometry with MaxIm DL can be done manually or with a photometry tool. The photometry tool is preferable because it rejects outliers in the sky reference annulus. However, for all-sky photometry the needed measurement is "intensity", and this is not recorded by the photometry tool. (If you're not using MaxIm DL and your image analysis program doesn't display intensity as you move the 3-circle pattern over the image, then buy MaxIm DL or a program that does display intensity.) Therefore, manual readings are required for each star to be included in the analysis.

It is absolutely necessary to use the same signal aperture size for ALL manual readings of intensity for an entire all-sky observing session. Small changes in the sky reference annulus are OK, as are small changes in the gap annulus - but give careful thought to the signal aperture size before starting to measure and record intensities. My suggestion is to use the fuzziest image that you're convinced you need to use and select a signal aperture radius of ~1.5 times the FWHM of a star that has a signal-to-noise ratio (SNR) within the range 50 to 300. Be certain that the FWHM is for a star that is not saturated (i.e., that has a maximum counts value less than 40,000).
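My "intensity" readings come from MaxIm DL's 3-circle pattern, but the quantity itself is simple to define. Here is a minimal Python sketch of the measurement (the radii and the synthetic image are illustrative assumptions, not MaxIm DL's implementation, which also rejects outliers in the annulus):

```python
import numpy as np

def intensity(image, x, y, r_signal=4.0, r_gap=6.0, r_sky=9.0):
    """Sum of 'extra' counts inside the signal aperture, where 'extra'
    means counts above the mean level of the sky reference annulus.
    Pixels in the gap (r_signal < r <= r_gap) are excluded from both."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x, yy - y)
    signal = image[r <= r_signal]
    sky = image[(r > r_gap) & (r <= r_sky)]
    return signal.sum() - signal.size * sky.mean()

# Illustration: a flat sky of 100 counts plus a 500-count "star" at (25, 25)
img = np.full((50, 50), 100.0)
img[25, 25] += 500.0
print(intensity(img, 25, 25))   # → 500.0
```

With real data the sky annulus is noisy, which is why its placement matters for faint stars (see below).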

Raw images should be calibrated using the appropriate dark frames. Flat frame calibration should also be applied to all images. For each field and filter and observing sequence, a median combine should be performed...

Star Intensity Readings

When making readings of intensity there is the matter of how to position the aperture circles. For bright stars (SNR > 50, for example) I use the position that yields the greatest intensity. For fainter stars I manually position the aperture circles so that the brightest pixels are at the center of the signal aperture circle. Often a greater intensity reading occurs one or more pixels away from this setting, but that is usually because the sky reference annulus includes a slightly different set of pixels, and random noise can change the sky reference level depending on which pixels are included in the sky reference annulus. This effect is unimportant for bright stars, but it can dominate intensity results for faint stars.

Intensity readings are made for all Landolt (and other) standard stars, as well as a selection of stars in the ROI. These should be noted in a reduction log (using pencil). Intensity is defined to be the sum of all "extra" counts within a signal circle, where "extra" means differences with respect to the average count value within a sky reference annulus. The sky reference annulus is separated from the signal circle (also called an aperture circle) by an annulus-shaped gap, which allows the signal circle to be kept as small as possible while preventing the sky reference annulus from "contamination" by starlight from the star in the signal circle.

Extinction Determination

After all stars have their intensity measured for all images, the next step is to determine an atmospheric extinction for each filter. Consider each filter in turn and determine the air mass range for each field. It is probably best to choose the field with the greatest air mass range for determining extinction for each filter (note: it's OK to use the ROI with unknown star magnitudes for this purpose). When a star field has been chosen for determining extinction for a given filter, the measured intensities should be entered into a spreadsheet program (such as Excel).

Extinction plot for R-filter

Figure x. Extinction plot for one field of stars (the ROI) for R-filter measurements. Each intensity measurement was converted to "2.5 * LOG(Ir)" before it was plotted. The slope for each fitted straight line is -0.19 "2.5 * LOG(Ir)" units per air mass, which was the best fit.

The graph shows that one slope [having units of LOG(I) per air mass], with offsets for each star, fits all the plotted extinction data. A best fit slope value can be determined by a variety of techniques. I find it useful to enter trial values for the extinction, K [mag/m] (magnitude change per air mass), into a cell in the spreadsheet and note the average RMS residual between a least squares fit model and all measurements. Each time an extinction value is entered the spreadsheet calculates a best fit intercept for each star (K*mav plus the sum of the star's 2.5*LOG(I) values divided by N, where N is the number of air mass observations and mav is the average air mass; details left to the student). This "trial and error" procedure is a manual least squares solution for K, and it should be performed for each filter. Afterwards, there will be a K value for each filter, such as 0.25, 0.20 and 0.15 [mag/m], corresponding to the magnitude change per air mass for the B, V and R filters.
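For those who would rather script the trial-and-error fit than run it in a spreadsheet, here is a minimal Python sketch. The data layout (a dictionary of per-star observation lists) is a hypothetical example, and a simple grid search over K stands in for the manual trial-and-error:

```python
import numpy as np

def fit_extinction(obs, k_grid=np.arange(0.05, 0.50, 0.001)):
    """obs maps star name -> list of (airmass, intensity) pairs.
    For each trial K, each star's best-fit intercept is the mean of
    2.5*log10(I) + K*m over its observations; return the K that
    minimizes the overall RMS residual."""
    best_k, best_rms = None, np.inf
    for k in k_grid:
        resid = []
        for points in obs.values():
            m = np.array([p[0] for p in points])
            y = 2.5 * np.log10([p[1] for p in points])
            c = (y + k * m).mean()          # per-star intercept for this trial K
            resid.extend(y - (c - k * m))   # deviations from the fitted line
        rms = np.sqrt(np.mean(np.square(resid)))
        if rms < best_rms:
            best_k, best_rms = k, rms
    return best_k

# Synthetic check: two stars whose 2.5*log10(I) declines at 0.19 mag per air mass
def make(c, airmasses):
    return [(m, 10 ** ((c - 0.19 * m) / 2.5)) for m in airmasses]

obs = {"star A": make(10.0, [1.1, 1.5, 2.0]),
       "star B": make(12.0, [1.2, 1.7, 2.2])}
print(fit_extinction(obs))   # recovers ~0.19
```

The shared slope with per-star offsets is exactly the model in the extinction plot above; only the search mechanics differ from the spreadsheet version.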

Extinction Trends Evaluation

So far this procedure has not allowed for a determination of extinction trend. Since the June 19 observing date was planned to have negligible effect from extinction trends it is not necessary to evaluate it. However, for the purposes of this web page let us do this using the first and last standard star fields that were observed at about the same air mass. Since B-filter measurements experience the greatest level of extinction the B-filter data can be expected to show a greater effect than the other filter data. However, for this particular data set there are no pairs of observations separated by a large time span that are at the same air mass. The V-filter data comes the closest to having similar air mass measurements with a large time span separating them, so that's the data I'll analyze for an extinction trend.

Extinction trend?

Figure x. Deviations from extinction model that has no extinction trend for V-filter observations of Area 111 (m=1.55) and Landolt A (m=1.40) taken 1.8 hours apart.

The two sets of V-filter measurements appear to fit the same model for converting intensity, B-V color, extinction and air mass to V-magnitude. The two data groups deviate from this single extinction model by 0.012 +/- 0.030 magnitude. Taking into consideration their average airmass of 1.47, and the 1.8-hour time span separating them, the two groups of data imply an extinction trend of 0.004 +/- 0.011 [magnitude per air mass per hour]. This is statistically insignificant, and the trend, even if it were real, would be 3.0 +/- 7.6 [%/hour] decrease in the average 0.15 [magnitude/air mass] extinction value. Even if this extinction trend were real it would produce V-magnitude errors of +0.008 +/- 0.020 at one end of a 3-hour observing interval and -0.008 +/- 0.020 at the other end of the observing interval (for an average air mass of 1.33). Uncertainties at this level are small compared with the other uncertainty components, so it is legitimate to assume that extinction was not changing during this particular observing session.
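The trend arithmetic in the paragraph above can be verified in a few lines (values taken directly from this session):

```python
deviation = 0.012   # [mag] offset between the two V-filter groups
airmass   = 1.47    # average air mass of the two groups
hours     = 1.8     # time span separating them

trend = deviation / airmass / hours      # [mag / air mass / hour]
percent_per_hour = 100 * trend / 0.15    # relative to the 0.15 mag/air-mass extinction

# trend ≈ 0.0045, within rounding of the 0.004 quoted above;
# percent_per_hour ≈ 3.0, matching the 3.0 [%/hour] figure
```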

Converting Intensity to Star Magnitude Using Standard Stars

It is intuitively reasonable to think that a star's measured "intensity" (as defined above) contains information about its brightness. Consider the simplest possible plot of this, using well-calibrated standard stars, in which intensity is plotted versus true magnitude.

Measured intensity vs V-mag

Figure x. Measured intensity of standard stars in V-filter images.

Indeed, the intuitive idea that measured intensity is related to true V-magnitude is borne out by this graph. The graph suggests that we should plot the LOG (to base 10) of measured intensity versus true V-magnitude. In fact, if we plot 2.5 times LOG(1/Intensity) we should have a parameter that is closely related to magnitude, subject to the same offset for all stars.

LOG(I) vs V-mag

Figure x. Plot of 2.5 * LOG(I) + 21.96 (arbitrary offset) for the measured intensities (I) of standard stars in V-filter images.

Sure enough, by playing with arbitrary offsets it was possible to find one, +21.96, for which "2.5 * LOG(I) plus offset" correlates fairly well with the true V-magnitudes of standard stars. This equation doesn't take into account the different air masses of the various images from which intensity measurements were made, nor does it take into account the different colors of the standard stars. The colors should matter since the observing system has a unique spectral response (caused by the corrector plate, prime focus adapter lens, filters and CCD response, as well as the atmosphere). Let's plot the deviations of the above measurements from the fitted line and see if these differences correlate with air mass and star color.

Error vs V-mag

Figure x. Error of simple equation "magnitude" versus true V-magnitude for standard stars.

The next question we naturally think of asking is "Are these errors correlated with air mass and star color?" A multiple regression analysis should show whether there is a significant correlation with either independent variable. The answer is "yes, the errors are correlated with both air mass and color." The coefficients are +0.17 +/- 0.08 (air mass coefficient) and +0.078 (B-V coefficient). Both correlation coefficients are statistically significant. When these correlations are taken into account the simple equation becomes:

    V-mag = 21.96 + 2.5 * LOG(1/I) + 0.17 * (m - 1.42) + 0.078 * ((B-V) - 0.90)                Eqn 3

and the residual errors are shown in the next graph.

Error corr'd for m & color

Figure x. Error of simple equation "magnitude" versus true V-magnitude for standard stars, corrected for air mass and color correlations.

In this plot the faint stars exhibit a larger scatter than the bright ones, which is to be expected. Allowing for that, the scatter is greatly reduced compared with the previous graph.

I've decided to employ one more parameter to improve the fit between equation-based V-magnitude and true V-magnitude: an empirically derived multiplier for the "2.5 * LOG(1/I)" term, which should be close to 1.000; in this case the best multiplier has a value of 1.003. I suspect that this term is needed to account for the fact that bright stars undergo a slight amount of saturation, as their maximum count value was close to 40,000 counts.  The final equation-based V-magnitude for this data set is shown in the next graph.


Figure x. Equation-based V-magnitude versus true V-magnitude for 35 Landolt and Arne Henden stars.

  Predicted V-mag = Cv1 +Cv2 * (2.5 * LOG(50000/INTv)) +Cv3 * (m-1.33) +Cv4 * ((B-V) - 0.90), where                                Eqn 4
            Cv1 = 10.187 [mag]               related to CCD temperature, exposure time, signal aperture size, telescope aperture, filter width and transmission,
            Cv2 = 1.003                           empirical multiplication factor (related to non-linearity of CCD A/D converter),
            Cv3 = -0.20 [mag/air mass]     extinction for V-filter for the specific atmospheric conditions of the observing site and date,
            Cv4 = -0.07                            related to observing system's color response (i.e., the old CCD transformation equation coefficients),
            INTv = intensity using V-filter (integrated excess counts within signal aperture that exceed expected counts level based on average counts within sky reference annulus), and
            m = air mass,
            with a residual RMS of 0.030 magnitude (for the 31 standard stars brighter than magnitude 13.5)
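Eqn 4 is easy to express as a function. A minimal Python sketch follows; the coefficient values are the ones fitted above for this particular night, so they are not portable to other sessions or telescopes:

```python
import math

def v_mag(intensity, airmass, b_minus_v,
          c1=10.187, c2=1.003, c3=-0.20, c4=-0.07):
    """Eqn 4: predicted V magnitude from measured V-filter intensity,
    air mass and B-V color, using this night's fitted coefficients."""
    return (c1 + c2 * 2.5 * math.log10(50000 / intensity)
            + c3 * (airmass - 1.33) + c4 * (b_minus_v - 0.90))

# Example: intensity 5830 at air mass 1.23, assuming the standard-star
# mean color B-V = 0.90, gives ~12.55
print(round(v_mag(5830, 1.23, 0.90), 2))
```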

A similar analysis performed for the B-filter data yields the following.

Eqn B-mag vs true

Figure x. Equation-based B-magnitude versus true B-magnitude for 34 Landolt and Arne Henden stars.

Predicted B-magnitude can be expressed using the following equation, involving 3 different constants:

            Predicted B-mag = Cb1 +2.5 * LOG(50000/Ib) +Cb2 * (m-1.33) +Cb3 * ((B-V) - 0.90), where                                 Eqn 5
            Cb1 = 9.74 [mag]                   related to CCD temperature, exposure time, signal aperture size, telescope aperture, filter width & transmission,
            Cb2 = -0.25 [mag/air mass]    extinction for B-filter for the specific atmospheric conditions of the observing site and date,
            Cb3 = +0.16                          related to observing system's color response (i.e., the old CCD transformation equation coefficients), and
            Ib = intensity using B-filter (integrated excess counts within signal aperture that exceed expected counts level based on average counts within sky reference annulus),
            with a residual RMS of 0.056 magnitude (for all 34 stars in this analysis)

Eqn R-mag

Figure x. Equation-based R-magnitude versus true R-magnitude for 18 Landolt and Arne Henden stars.

The R-magnitude prediction equation is:

            Predicted R-mag = Cr1 + Cr2 * [2.5 * LOG(50000/INT)] +Cr3 * (m-1.30) +Cr4 * ((V-R) - 0.53), where                                Eqn 6
            Cr1 = 10.428 [mag]                related to CCD temperature, exposure time, signal aperture size, telescope aperture, filter width and transmission,
            Cr2 = 1.01                              empirical multiplication factor,
            Cr3 = -0.19 [mag/air mass]     extinction for R-filter for the specific atmospheric conditions of the observing site and date,
            Cr4 = -0.19                            related to observing system's color response (i.e., the old CCD transformation equation coefficients),
            INT = intensity using R-filter (integrated excess counts within signal aperture that exceed expected counts level based on average counts within sky reference annulus), and
            m = air mass,
            with a residual RMS of 0.037 magnitude (for all 18 stars in this analysis)

Iterative Procedure for Unknown Region

The previous section shows that if you know a star's magnitude you can derive it from an image - big deal! This is leading somewhere, though. The equations, above, state that once you've derived coefficients that work well for standard stars, allowing you to determine magnitude from measured intensity, air mass and color index, it is possible to apply them to any unknown star provided you have values for the same 3 independent variables. But for an unknown star, even though you can measure its intensity at a known air mass, it doesn't have a known color index, either B-V or V-R. What to do?


Consider the following example (and it's an extreme example since this star's color is unusual).

Star 1835+25 is a suspected nova (later determined to be a cataclysmic variable, and very blue). During the June 19 observing session it was measured to have the following:

    05:29.6 UT    m = 1.23    Iv = 5830    SNR = 218
    06:33.5 UT    m = 1.08    Ib = 3308    SNR = 141

Let's convert these measured properties to a V-magnitude (and a B-magnitude).

We'll start by assuming B-V = 0.90, a typical value for the standard stars used in the analysis of this night's observations. This allows us to calculate approximate V- and B-magnitudes. Using Eqns 4 and 5:

    1835+25's V-mag = 10.187 +1.003 * 2.5 * LOG(50000/5830) -0.20 * (1.23 - 1.33) -0.07 * (0.90 - 0.90)
    1835+25's V-mag = 12.55

    1835+25's B-mag = 09.74 +1.000 * 2.5 * LOG(50000/3308) -0.25 * (1.08 - 1.33) +0.16 * (0.90 - 0.90)
    1835+25's B-mag = 12.75, and B-V = +0.20

From this first iteration we determine an approximate value of B-V = +0.20. This differs significantly from the standard star value of +0.90, but that's OK because 1835+25 could be an unusual star. Let's adopt this B-V value for the second iteration.

    1835+25's V-mag = 10.187 +1.003 * 2.5 * LOG(50000/5830) -0.20 * (1.23 - 1.33) -0.07 * (0.20 - 0.90)
    1835+25's V-mag = 12.60

    1835+25's B-mag = 09.74 +1.000 * 2.5 * LOG(50000/3308) -0.25 * (1.08 - 1.33) +0.16 * (0.20 - 0.90)
    1835+25's B-mag = 12.64, and B-V = +0.04

The new B-V value departs even more from the standard star typical value, but at least it changed by only 0.16 magnitude units. Let's adopt this B-V value for the next iteration.

    1835+25's V-mag = 10.187 +1.003 * 2.5 * LOG(50000/5830) -0.20 * (1.23 - 1.33) -0.07 * (0.04 - 0.90)
    1835+25's V-mag = 12.61

    1835+25's B-mag = 09.74 +1.000 * 2.5 * LOG(50000/3308) -0.25 * (1.08 - 1.33) +0.16 * (0.04 - 0.90)
    1835+25's B-mag = 12.61, and B-V = 0.00

The changes in V- and B-magnitude are getting smaller with each iteration, and so are the changes in B-V; convergence is occurring. Just one more iteration is needed (my spreadsheet shows that no further changes occur after it).

    1835+25's V-mag = 10.187 +1.003 * 2.5 * LOG(50000/5830) -0.20 * (1.23 - 1.33) -0.07 * (0.00 - 0.90)
    1835+25's V-mag = 12.61

    1835+25's B-mag = 09.74 +1.000 * 2.5 * LOG(50000/3308) -0.25 * (1.08 - 1.33) +0.16 * (0.00 - 0.90)
    1835+25's B-mag = 12.60, and B-V = -0.01

The following figure is a screen capture of an Excel spreadsheet showing how the above iteration was performed (effortlessly).

Iteration example

Figure x. Screen shot of an Excel spreadsheet showing how cells containing V, B and B-V were automatically calculated once values were entered for m and INT. Rows 1 to 4 contain V-filter coefficients for converting measured V-filter image star intensity, airmass and color to V-magnitude. Rows 8 to 10 contain measured intensity for 3 V-filter images having different air mass values. Rows 12 to 21 correspond to rows 1 to 10 for three B-filter images. The sequence of solutions for V and B-magnitudes is shown in columns 5, 7, 9, etc. Convergence is apparent. SNR is used to calculate stochastic uncertainty SEs, shown in column 15 for individual images. Column 16 contains average magnitudes using the converged solutions and shows the stochastic uncertainty of the averaged magnitudes based on column 15 information.

It is apparent that convergence is achieved after 3 iterations for V and 4 iterations for B. Rows 8 and 19 in the above figure are for one V-filter and one B-filter image. The figure shows how additional rows can be used to process additional images.
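The same iteration can be scripted outside a spreadsheet. Here is a minimal Python sketch using Eqns 4 and 5 with this night's coefficients (the function names and the convergence tolerance are my own choices, not part of the spreadsheet):

```python
import math

def v_mag(iv, m, bv):   # Eqn 4, this night's coefficients
    return (10.187 + 1.003 * 2.5 * math.log10(50000 / iv)
            - 0.20 * (m - 1.33) - 0.07 * (bv - 0.90))

def b_mag(ib, m, bv):   # Eqn 5, this night's coefficients
    return (9.74 + 2.5 * math.log10(50000 / ib)
            - 0.25 * (m - 1.33) + 0.16 * (bv - 0.90))

def solve_color(iv, mv, ib, mb, bv=0.90, tol=1e-4, max_iter=20):
    """Iterate B-V, starting from the standard-star mean of 0.90,
    until the derived color stops changing."""
    for _ in range(max_iter):
        v = v_mag(iv, mv, bv)
        b = b_mag(ib, mb, bv)
        if abs((b - v) - bv) < tol:
            break
        bv = b - v
    return v, b, b - v

v, b, bv = solve_color(iv=5830, mv=1.23, ib=3308, mb=1.08)
```

For the nova's measurements this converges to V ≈ 12.61, B ≈ 12.61 and B-V ≈ 0.00, matching the hand iteration to within rounding.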

The iterative procedure just described was performed on a larger data set for the 1835+25 star field, and for other stars in this field, to produce the following photometric sequence map and table.

Image with V-mags noted

Figure x. V-magnitudes for the nova's star field. North is up and east to the left. FOV is 24.8 x 25.1 'arc (EW & NS). FWHM resolution is ~8.5 "arc.  The nova is shown by a rectangle, and at the time of the images used for this averaged image (June 19.2 UT) the nova had a V-magnitude of 12.75.

The following table shows V, B, R and B-V values for most of the stars in the previous figure (for June 19.2). The error entries are the orthogonal sum of measured stochastic SE uncertainty and estimated calibration SE uncertainty.

  Star         V mag            B mag            R mag            B-V
  1835+25      12.75 +/- 0.04   12.65 +/- 0.04   12.57 +/- 0.04   -0.10 +/- 0.07
  Star 1330    13.30 +/- 0.04   13.89 +/- 0.04   12.88 +/- 0.04   +0.57 +/- 0.07
  Star 1498    14.98 +/- 0.05   15.78 +/- 0.07   14.59 +/- 0.06   +0.80 +/- 0.10
  Star 1561    15.61 +/- 0.05   16.19 +/- 0.10   14.90 +/- 0.08   +0.58 +/- 0.14
  Star 1653    16.53 +/- 0.15   17.20 +/- 0.25   15.86 +/- 0.10   +0.68 +/- 0.29
  Star 1277    12.77 +/- 0.04   13.17 +/- 0.04   12.46 +/- 0.04   +0.40 +/- 0.06
  Star 1017    10.17 +/- 0.04   10.70 +/- 0.04    9.76 +/- 0.04   +0.53 +/- 0.06
  Star 0894     8.94 +/- 0.04    9.46 +/- 0.04    8.52 +/- 0.04   +0.52 +/- 0.06
  Star 1204    12.04 +/- 0.04   12.42 +/- 0.04   11.75 +/- 0.04   +0.38 +/- 0.06

Notice how "blue" the putative nova is, with a B-V color index of -0.10 +/- 0.07. This contrasts with the "red" star north of 1835+25 (not listed in the table), which has a B-V color index of +1.62 +/- 0.10 (apparent in the following color image).

RVB (color) image of 1835+25

Figure 1.  This is the mysterious blue star 1835+25 (VAR HER 04). North is up and east is left. The image is an RVB color image with a FOV = 10.6 x 13.9 'arc, which is a 4% areal crop of a prime focus image taken with a 14-inch Celestron telescope (prime focus, f/1.86), and the FWHM resolution is ~8.5 "arc. Notice the very red star north of 1835+25 (having B-V = +1.62, versus -0.10 for 1835+25).

Closing Comment

As an aside, I want to say that the standard procedure suggested for use by AAVSO members is to convert apparent magnitude to true magnitude using "CCD Transformation Equations" with a set of CCD Transformation Equation Coefficients (TEC) determined once or twice per year. This procedure has the limitation that a TEC set established under one atmospheric extinction condition and one range of air mass values (and one telescope configuration) cannot be expected to perform well under different atmospheric extinction conditions and different air mass situations (and different telescope configurations). Professional astronomers employ a rigorous but more complicated set of transformation equations that explicitly take into account extinction. The procedure I am using overcomes the limitations of the simpler transformation equations without the complication of the rigorous transformation equations.

If you're still unconvinced about the merits of the iterative all-sky photometric procedure described on this web page, then you are welcome to continue using CCD transformation equations, which I derive and explain at TEC. Even though I have derived the CCD transformation equations from essentially first principles, so presumably I understand them, I find them cumbersome and prone to error during their implementation. I think the reason they are prone to error is that they are not "intuitive" - so if you make a mistake somewhere in the procedure you're unlikely to realize it. But, "to each his own."



This site opened:  June 20, 2004 Last Update:  June 25, 2004