Bruce L. Gary (GBL); Hereford, AZ, USA


Observationalists have a tradition of identifying three measurement uncertainty categories: 1) stochastic (random with a Gaussian distribution), 2) calibration uncertainty (also called systematic errors), and 3) accuracy (the orthogonal sum of the first two).  The term "precision" is (usually) equivalent to stochastic uncertainty.

This web page uses the following symbols for these uncertainty categories:

    SEs = stochastic uncertainty, (precision, from random noise having a Gaussian distribution)
    SEc = calibration uncertainty (systematic errors)
    SEa = accuracy (orthogonal sum of SEs and SEc)

SEs comes from stochastic "noise" (a random process, with a Gaussian distribution).  SEs is an inherent limitation of an observation, and no amount of calibration will change it.  Although SEs is a more straightforward uncertainty to estimate than SEc, it is nevertheless subject to a subtle effect when using aperture photometry (described below) which renders it an estimated property even though its estimate is an objective calculation.

SEc comes from a variety of sources.  For example, reference stars that have poorly-determined magnitudes produce a systematic error.  Neglecting the use of CCD transformation equation corrections, or using poorly-determined CCD transformation equation coefficients, also produces systematic errors.  Other examples are the use of an imperfect flat field, neglecting to detect and edit cosmic ray defects, allowing stars to register in the non-linear portion of the CCD, etc.  A longer description of these errors is given below.

SEa is the orthogonal sum of SEs and SEc, i.e., SEa = SQRT(SEs^2 + SEc^2).  When two observers compare their data this is the uncertainty that should be used.
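The orthogonal sum is a one-line calculation; here is a minimal sketch in Python (the function name orthogonal_sum is mine, chosen for illustration):

```python
import math

def orthogonal_sum(se_s, se_c):
    """Accuracy as the orthogonal (quadrature) sum of the stochastic
    and calibration uncertainties: SEa = SQRT(SEs^2 + SEc^2)."""
    return math.sqrt(se_s ** 2 + se_c ** 2)
```

For example, SEs = 0.03 mag and SEc = 0.04 mag combine to SEa = 0.05 mag.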

Note that SE is an abbreviation for "standard error."  I have a quibble with use of the term "error" when the errors are unknown.  I lament the lost opportunity for establishing a tradition of using a better term, such as "standard uncertainty."  Throughout these web pages my use of SE really refers to "standard uncertainty."

For some tasks only SEs is needed.  For an observer who uses the same hardware, observing strategy and analysis procedures, most systematic errors will be shared by all observations.  In this case it is possible to search for small variations in a target object, or a decay rate, at a level which is much smaller than SEc (and SEa).  However, when more than one observer is comparing data to achieve the same objective (variation or decay rate determination), either SEa is required or an offset analysis must be performed.  The offset analysis, allowing one observer's data set to be shifted in order to allow an intercomparison of data sets, is an empirical process, and it requires that sufficient data from both observers be available for approximately the same epochs.  When these conditions are met, the SEs for each observer is relevant, and SEa can be disregarded (since systematic errors are empirically removed for the specific purpose of searching for variations or a decay rate).  I therefore recommend that the AAVSO provide for the submission of both uncertainty types, SEs and SEa.


SEs is easier to estimate than SEc.  I don't know of a really good way to estimate SEs from annulus photometry (MaxIm DL's photometry procedure doesn't do this).  Nevertheless, something is better than nothing, I believe, as long as we know what was done and ask all observers to do it the same way.  The simplest procedure I know of is to ask each observer to use SNR to estimate SEs.  For example, when reducing an image the user will usually try a set of aperture parameters, defined here by the terminology: signal circle's aperture radius, gap, and reference annulus width.  I'll refer to a set of 3 specific values as an AGA choice.  The user must choose these 3 parameters with intelligence.  Namely, the user should choose a signal-circle aperture radius as small as possible to maximize the amount of signal in relation to noise uncertainty within that circle.  However, even though the signal circle should contain most of the "object" star's point spread distribution (PSD), the middle annulus region, or gap, should be set to a value that includes some of the PSD (discerned from the brightest reference star).  The gap annulus should also include any interfering stars that are close to an object or reference star.  If the star is bright, it is currently my practice to adjust the center pixel location for this photometry annulus pattern in such a way as to maximize the object's intensity (and SNR).  When the object is faint, I believe this is unwise, and the user should manually adjust the pixel location of the aperture pattern so that the signal circle is centered on the faint star.  For a fading supernova, when the star's location is known from earlier images, the aperture pattern can be placed at the pixel location that corresponds to where the SN should be located.

The precaution associated with faint stars is subtle.  It's motivated by the fact that when an object is faint, any small changes in which pixels are included in the signal circle and reference annulus will produce stochastic fluctuations in the apparent intensity of the "object."  The user who is guided by intensity exclusively will therefore usually overestimate a faint object's brightness.  Slight changes to AGA values will change SNR as well as the object's intensity (and ultimately magnitude).  This is caused by different pixel involvement in the signal circle and reference annulus, and each pixel has a different stochastic noise value.  Each aperture radius and reference annulus width choice that is compatible with the above guidelines provides an equally good estimate of SNR.  And since each of these several AGA choices includes different stochastic contributions, they all contain "information" - albeit slightly different information from each other.  Thus, it is not exactly correct to use the SNR from just one AGA choice to calculate SEs.  I don't know of a way to combine the various SNR values for AGA choices to calculate an ensemble SEs.  Therefore, I suggest adopting the SNR corresponding to the greatest intensity (for bright objects), or centered on the visual center (for faint objects).  If we assume that this one "best" SNR contains all the information available, knowing that this is a conservative assumption assures us that our SEs will be slightly larger than the data merits.

So, when a best SNR is available, SEs is to be calculated in the following way: SEs = -2.5 * log10 (1 - 1/SNR).  A derivation of this equation can be found at SNR.  Notice that I chose (1 - 1/SNR) instead of (1 + 1/SNR).  Both are legitimate, but the one with a "minus" sign yields the larger of the two SEs estimates.  By allowing an observer to submit just one SEs we're implicitly assuming that SEs is symmetric (same in + and - directions), so we should choose the larger of the two SEs values to submit.  Again, it's prudent to be conservative since the real world almost always throws in more errors than we can model.
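Both forms of the SNR-to-SEs conversion can be sketched as follows (function names are mine, for illustration only):

```python
import math

def se_s_minus(snr):
    """Stochastic SE in magnitudes from SNR, using the (1 - 1/SNR)
    form, which yields the larger (more conservative) of the two
    asymmetric estimates."""
    return -2.5 * math.log10(1.0 - 1.0 / snr)

def se_s_plus(snr):
    """The smaller estimate, from the (1 + 1/SNR) form."""
    return 2.5 * math.log10(1.0 + 1.0 / snr)
```

At SNR = 100 the "minus" form gives roughly 0.011 mag, slightly larger than the "plus" form's value, which is why it is the one to submit.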

I actually use a different procedure to estimate SEs.  I use several images, and several AGA choices.  This leads to several object magnitudes.  For example, if 4 images and 3 AGA choices are used, there will be 12 (i.e., 3x4) object magnitudes in the CSV-files produced by MaxIm DL.  I import the CSV-files to a spreadsheet and calculate the population SE for these 12 magnitude estimates, then calculate the SEs for the average object magnitude using the "reciprocal root-(N-1)" relation.  Provided each image contains the same "information" this yields a much more realistic SEs than the SNR method.  I'm not recommending this for the AAVSO to impose on CCD observers because it entails more effort than many observers are willing to make.
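The spreadsheet arithmetic described above can be sketched in a few lines of Python.  The magnitude values below are hypothetical, standing in for the 12 entries (4 images x 3 AGA choices) imported from the CSV-files:

```python
import math
import statistics

# Hypothetical object magnitudes from 4 images x 3 AGA choices
mags = [14.512, 14.498, 14.505, 14.520, 14.493, 14.508,
        14.501, 14.515, 14.497, 14.510, 14.503, 14.506]

pop_se = statistics.stdev(mags)            # population SE of the 12 estimates
se_s = pop_se / math.sqrt(len(mags) - 1)   # SE of the average, "reciprocal root-(N-1)"
```

The SE of the average magnitude is smaller than the scatter of the individual estimates by the root-(N-1) factor, which is the point of averaging many image/AGA combinations.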

This entire SEs procedure is somewhat subjective, which explains my use of the terminology "estimating SEs" even though it is calculated for a specific AGA choice by an objective equation.


Calibration uncertainties can usually only be estimated, making their estimation a much more subjective process than estimating SEs.  In a typical observation there are several components to calibration uncertainty, SEc.  They should all be orthogonally added together to arrive at a final SEc.  This will be illustrated below.

Some components of SEc can be objectively assessed.  For example, the component associated with poorly-determined reference stars is straightforward, provided several reference stars are used and assuming the reference stars in the catalogue have an average error of zero.  If USNO A2.0 R-mag's are used, for example, it is common to encounter discrepancies between reference stars of ~0.3 mag RMS [better estimate welcome].  So, if several reference stars are included in the MaxIm DL photometry analysis, the spreadsheet into which the CSV-files are imported will show that the scatter plot of "measured" versus "predicted" R-mag exhibits a population SE of ~0.3 mag.  A group of 5 reference stars then has an SE uncertainty of 0.15 mag (0.3/SQRT(5-1)), assuming that the catalogue is unbiased.  This is just one component of calibration uncertainty.
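The reference-star-group arithmetic above is just the catalogue RMS divided by root-(N-1); a minimal sketch:

```python
import math

catalog_rms = 0.3   # typical RMS scatter of individual USNO A2.0 R magnitudes
n_ref = 5           # number of reference stars in the photometry solution

# SE of the reference-star group, assuming an unbiased catalogue
group_se = catalog_rms / math.sqrt(n_ref - 1)   # 0.15 mag for these values
```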

Flat field errors contribute to SEc.  Careless creation of a flat field is probably a common problem, and an entire web page on proper flat field creation is needed.  For example, the telescope shouldn't be pointed near the horizon for flat field exposures, since sky brightness varies strongly with elevation angle.  This problem is especially important for CCDs affording a large FOV.  In my experience, if flat field exposures of the sky are not taken ~20 degrees east of zenith after sunset, errors of 1 or 2% are possible when comparing a star in one part of the image with a star at another location.  If a white poster board is used, then uniform illumination is crucial.  The orientation of the board should direct any "specular" reflection away from the telescope, leaving only the "diffuse" component for creating the flat field.  A two-board system usually overcomes this problem.  An observer who is familiar with his system's behavior, and familiar with errors that may occur from his observing procedure, will be able to estimate this component of SEc.

Some filters used for photometry are not meant for photometry (rather, they're meant for "pretty pictures").  Data taken with these filters may require large corrections to convert them to the standard photometric scale (especially blue filters).  Even a system employing photometric filters should be calibrated once or twice yearly using M67 (or an alternate open cluster with known BVRI magnitudes).  Each user is responsible for "knowing" his system's CCD TE coefficient values and their "behavior" (changes with season, use, etc).  Whenever a new CCD is used, or a new filter, a new CCD TE evaluation must be performed.  Again, only the individual observer is in a position to know the size of this component of SEc.  The algorithm described by the AAVSO for correcting BVRI observed magnitudes to true magnitudes assumes that the same coefficients can be used for all atmospheric conditions.  This is OK as a first-order approximation.  Very accurate corrections for differential photometry, however, require an algorithm that allows for the fact that extinction effects are proportional to air mass.  I am not faulting the AAVSO for not requiring that this second-order correction be performed, but I mention it as just another example of a systematic error that will be present at some level in every observer's reported observations.

A faint star in the "reference annulus" may degrade the accuracy of an object's magnitude measurement.  Trial and error with different annulus choices can be used to estimate the size of this error.  The algorithm used by the observer's photometry program may be proprietary, so it is unwise to second-guess the impact of an interfering star even if its intensity is known.  MaxIm DL uses a "median-mean" algorithm (during its photometry analysis) that is supposed to be highly effective at removing the effects of cosmic ray hits.  Aperture photometry algorithms may vary for other programs, and no algorithm can be said to be perfect for all situations.  This is another example of a systematic error component.

When a cosmic ray hit occurs near the center of a bright reference star, it is easily overlooked during visual inspection before a median combine.  If this occurs, it will cause the object star to appear fainter than if the cosmic ray hit were not present.

Median combining only works when the average level of the "noise floor" is the same for all images to be median combined.  By "same" is meant that the differences in noise floor level between images are small compared to the stochastic pixel noise (which, presumably, is the same for all images).  If the noise floors differ by more than the pixel noise, then a median combine will not perform as expected; the median combined image could end up being exactly the middle image, with no influence from the other images.  Also, cosmic ray hits may not be eliminated, especially the low intensity hits.  In MaxIm DL there's a "Normalize" option that must be checked when doing a median combine in order to avoid these problems.  The shortcomings of median combining are worse early in the night, when the sky background is still present and changing.
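The idea behind normalization before a median combine can be sketched with plain lists of pixel values.  This captures the spirit of a "Normalize" option only; MaxIm DL's actual algorithm is proprietary and may differ in detail:

```python
from statistics import median

def normalized_median_combine(images):
    """Median-combine images (given as equal-length lists of pixel
    values) after subtracting each image's own median background, so
    that differing noise-floor levels don't defeat the median."""
    leveled = []
    for img in images:
        floor = median(img)
        leveled.append([p - floor for p in img])
    # per-pixel median across the leveled images
    return [median(pix) for pix in zip(*leveled)]
```

A cosmic ray spike present in only one image is rejected once the differing floors are equalized, which is exactly what fails when the floors differ by more than the pixel noise and no normalization is done.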

The sources of systematic errors are perhaps endless!  No amount of care will eliminate them.  Nevertheless, it is the observer's responsibility to "know thy equipment" and through experience ("looking for trouble," etc) gain a sense for the approximate levels for the important error sources. Usually, the observer is in the best position to know these things.  However, others may actually know when another observer is "cutting corners."  When an image is uploaded to the AAVSO, certain shortcomings can be seen immediately by an experienced observer (such as poor flat frames and the presence of cosmic ray hits).

Finally, SEc = SQRT(SEc1^2 + SEc2^2 + SEc3^2 + ... + SEcN^2) for N identified components of SEc.  If an observer frequently uses the same hardware, observing strategy and analysis procedure, then it is possible to learn through experience how well the end product performs.  If a known object is observed repeatedly, under different observing conditions, then the population SE of all measurements affords a means for estimating a typical total SEa, from which SEc may be determined (by orthogonally subtracting a known SEs).  The next section illustrates with real data how this can be done.
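Both operations, combining N components and orthogonally subtracting a known SEs from a measured SEa, can be sketched as follows (function names are mine, for illustration):

```python
import math

def combine_components(components):
    """Total calibration uncertainty from N identified components:
    SEc = SQRT(SEc1^2 + SEc2^2 + ... + SEcN^2)."""
    return math.sqrt(sum(c ** 2 for c in components))

def infer_se_c(se_a, se_s):
    """Infer SEc by orthogonally subtracting a known SEs from a
    total SEa estimated from repeated measurements."""
    return math.sqrt(se_a ** 2 - se_s ** 2)
```

For example, components of 0.02, 0.03 and 0.06 mag combine to SEc = 0.07 mag; and a measured SEa of 0.05 mag with a known SEs of 0.03 mag implies SEc = 0.04 mag.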


This section illustrates the concept that each calibration uncertainty may depend upon star brightness.  I will demonstrate this using a real data set from a blazar monitoring project that I'm conducting.  The uncertainty plot that I present is specific to my hardware, observing strategy and analysis procedure (described at WCom).

Each observing situation will be different.  In fact, each project should be different since we should want to optimize our hardware configuration, observing strategy and analysis procedure to afford the best match of performance to the goals of the project.  Things that can change include the hardware configuration, exposure times, binning setting, the number of light and dark frames per observing cycle, etc.  The analysis procedure will certainly change in response to the goals.

The blazar monitoring project that serves for the present illustration is intended to detect hourly changes of blazar brightness, long term trends (known to exist), and slight night-to-night departures from the long term trends.  Since these goals can be achieved without reliance upon other observers, it is not necessary to produce magnitudes with good absolute accuracy.  Therefore, I am not performing CCD transformation equation corrections (since they will be the same for all my observations, with minor caveats).

After processing two weeks of data (with only 2 nights unobserved), the following graph for total "accuracy" was determined (note: "accuracy" in this case has a special meaning, specific to my project's goals).

Figure 1.  "Stochastic uncertainty" (green and red) and the several components of "calibration uncertainty" (dashed blue lines) are orthogonally added to produce "total uncertainty," also called "accuracy."  The 4 components of "calibration uncertainty" are given letter symbols and are explained in the text.  The locations of the various calibration plots are approximately correct for a specific observing system, observing strategy and analysis procedure, also explained in the text, and will differ for each observer and each observing strategy and analysis procedure.

In this figure Line "A" represents linearity errors.  Stars that are close to overexposure will have a smaller intensity in relation to the other stars. For a non-anti-blooming CCD (better for photometry) it's important to keep all stars to be used below DN= 40,000 counts (SBIG's recommendation for the ST-8 CCD).  The linear region is different for different CCDs, and can vary from 20,000 to 65,000 counts (for SBIG CCDs).
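A linearity check is simple enough to automate; here is a minimal sketch (the function name and the parameterized limit are mine):

```python
def below_linearity_limit(peak_dn, limit_dn=40000):
    """True if a star's brightest pixel is below the CCD's linear-range
    limit.  40,000 DN is SBIG's recommendation for the ST-8; the limit
    varies from roughly 20,000 to 65,000 DN for other SBIG CCDs, so it
    is left as a parameter."""
    return peak_dn < limit_dn
```

Any star (object or reference) failing this check should be excluded, or the exposure shortened, to keep Line "A" errors out of the measurement.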

Line "B" is an estimate of the effect of cosmic ray hits that appear on a bright star and are not detected by visual inspection.  It is easy to detect cosmic ray defects that appear on or near faint stars, but when they appear on bright ones they are usually unnoticed.  The effect is to increase the apparent brightness of the star.  If the bright star is a reference star, the object will of course appear fainter than true.  Long exposures (mine are 4 minutes) have more cosmic ray hits.

Line "C" is an approximate flat field uncertainty.  2.5 milli-magnitudes is 0.23%, and this is difficult to achieve unless the reference star and object are close together.  If they're not close together, and the project's goal is to detect changes (throughout the night, or from night to night), it is important to position the object and reference stars at the same pixel locations for all observations.  This should minimize the effect of an imperfect flat field.  (If the project is to determine the absolute magnitude of an object, so that it can be compared with other observers, then Line "C" would be much higher.  Its placement would depend on how many reference stars were used to set the magnitude scale, and the estimated accuracy of each star's magnitude.)

Line "D" is caused by not knowing where to position the aperture pattern on a faint star.  If it's allowed to "snap" to the maximum intensity location, then a "too bright" bias will occur.  If it's done manually, then it is inevitable that the exact location for the star will not be selected.

Other calibration uncertainties are surely present in this data, and it is for each user to estimate which ones merit inclusion, and how their value depends on star brightness (or SNR, as used here).

The green and red traces are "stochastic uncertainties."  The green line is for the positive SE and the red line is for the negative SE.  Since the AAVSO does not allow double-signed SE entries, the larger of the two (in absolute magnitude) should be used.  As explained above, this SNR method of estimating the stochastic SE yields a value that is slightly too large.

Finally, the total uncertainty, or accuracy, is the orthogonal sum of all SE components.  In this graph the components are orthogonally added at each SNR value, and the resultant accuracy trace is used as a guide in such things as selecting reference stars.  The accuracy trace also states how much variation can be expected without invoking variability on the star's part to explain the observed variations of apparent magnitude.

I found this chart to be helpful in understanding whether or not the blazar being monitored was variable on a day-to-day timescale (it probably varies ~0.010 magnitude from day to day, in addition to its large sustained fading or brightening of 0.4 and 1.1 magnitude per week).  It also explained why some reference stars had a greater day-to-day variation than others.  A check star at the best location on this SNR plot actually shows a 0.0075 magnitude RMS scatter on a night-to-night basis, in agreement with Fig. 1.  A wealth of detail on the hardware, observing strategy, analysis procedure and "looking for trouble" investigations can be found at WCom.


Gary Walker has suggested that the AAVSO CCD observers undertake an intense observation of a specific region of non-variable stars as a means for allowing each observer to assess his measurement accuracy.  Arne Henden has suggested that a Landolt region be used for this project, since these stars are well-studied and are known to be non-variable.  An additional argument for choosing a Landolt region is that many of those stars have well-measured BVRI magnitudes.  Each observer could re-determine his system's CCD transformation equation coefficients as part of the effort to understand differences between observers.

Each observer would be able to estimate SEa by comparing measured magnitude with known magnitude.  From knowledge of SEs, SEc could be inferred.  Discrepancy patterns, such as with SNR, could be studied.  The previous section illustrates what would be involved. Any observer could go through an intense observing project of a Landolt region (M67 in the early Spring, or PG 1633+099 in the Summer) and derive SEc without the need of other observer results.  Nevertheless, the motivation to do this could be boosted if the AAVSO encouraged it for a specific interval of dates.


This site opened:  April 29, 2003 Last Update:  June 20, 2003