

Evaluation of Methods to Locate Emission Lines From Calibration Lamps in 2D Spectroscopic Data

A. Modigliani
European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching, Germany, Email: amodigli@eso.org

M. R. Rosa1
Space Telescope European Coordinating Facility, c/o ESO, Karl-Schwarzschild-Str. 2, D-85748 Garching, Germany, Email: mrosa@stecf.org

Abstract:

Results are reported from an investigation of object centering methods in spectroscopic calibration software. The emphasis is on procedures that produce highly repeatable, accurate results for tilted, broadened monochromatic slit images from wavelength calibration lamps in the presence of substantial noise.


1. Background

Instrument physical models are used increasingly in instrument design, in operations, and specifically by science support, in order to understand and predict the behaviour of a wide range of instrument modes without actually operating the instrument or repeating elaborate measurements over and over again (see the review paper in this volume by Ballester & Rosa). A specific case is the generic 2D-spectrograph model (Ballester & Rosa 1997), which has been implemented in various forms in the ESO VLT Exposure Time Calculators and is also a central part of the calibration pipelines for VLT spectrographs such as UVES and FLAMES. A variant of the very same generic model is currently being implemented by the ST-ECF as the backbone of the wavelength calibration of all data obtained with three different detectors, in a multitude of modes, of the Hubble Space Telescope (HST) spectrograph STIS.

Experience with these models shows that their predictive power is in principle so high that one can attribute ``outliers'' to geometric effects (camera distortion, detector gridding), to inaccurate entries in laboratory wavelength catalogues, or to an insufficient description of the instrument proper, on a scale below 0.1 of a spectral resolution element or pixel over fields many thousands of pixels across. At this point it is indispensable to be able to measure reference features in experimental data, such as the wavelength calibration lines, with a comparable intrinsic accuracy and repeatability.

It is obvious that the ultimate limitation for obtaining optimal calibration for different modes of real instruments is the uncertainty associated with obtaining a ``true observed center'' for the calibration lines in the presence of noise. While ground-based, visible-wavelength calibration data can usually be exposed to more than sufficient signal, space-based observations are limited in exposure time by engineering constraints. HST STIS FUV MAMA wavelength calibration data in echelle mode, with their typically low signal-to-noise ratios in the far UV, are a particularly demanding case. An example of such data is shown in Figure 1: a 70 by 50 pixel window around two ``bright'' calibration lines (slit images), with a maximum photon count of 40 in the brightest pixel. The fainter line has a total of 875 counts spread over an area of 357 pixels.

Figure 1: Typical photon count distribution for two calibration lines.
[Plot file: P10-10_f1.ps]

2. Starting Conditions and Objectives

As a reference case we used a 3000 sec exposure in the STIS E140H FUV MAMA mode, which contains some 700 identifiable lines from a Pt-Cr/Ne lamp in about 35 spectral orders between 1300 Å and 1500 Å, spread across a frame of 2000 by 2000 pixels. It is noteworthy that routine instrument-verification data bracketing a typical science observation are exposed for only 60 seconds, i.e. a mere 2% of our study case.

In an iterative procedure, the parameters of the STIS model were carefully optimized for this mode to yield predictions of the line centers. The distribution of the residuals ``measured - predicted'' for these 700 line centers should yield a point-symmetric cloud which, under ideal conditions, is well centered on the origin and assumes a rather narrow distribution, with a sigma corresponding to a fraction of a resolution element (line width) - in the present case about 0.2 pix.
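The consistency check itself is simple; a minimal sketch in Python of how the residual cloud can be characterised (the file names and array layout are hypothetical and not part of the STIS-CE software):

import numpy as np

# Hypothetical two-column tables (x, y in pixels) of measured and
# model-predicted centers for the ~700 identified lines.
measured = np.loadtxt("line_centers_measured.dat")
predicted = np.loadtxt("line_centers_predicted.dat")

dx = measured[:, 0] - predicted[:, 0]
dy = measured[:, 1] - predicted[:, 1]

# Under ideal conditions the cloud is centred on the origin with a
# scatter of a fraction of a line width (about 0.2 pix in this case).
print("mean  dx, dy:", dx.mean(), dy.mean())
print("sigma dx, dy:", dx.std(ddof=1), dy.std(ddof=1))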

Figure 2: Residuals between measured and predicted positions $\Delta$x, $\Delta$y. Left: Initial software, 2D Gaussian. Right: Improvements obtained using a complex gaussoid line profile and correlation techniques.
[Plot files: P10-10_f2a.ps, P10-10_f2b.ps]

However, as can be seen in Figure 2 (left), the cloud is not really concentrated and has a characteristic size of 1 pixel. Naturally the question arises whether this reflects a ``weakness'' of the physical model, or rather the ``inability'' of the measurement procedure to ascertain the true observed line centers on the images.

As part of a collaboration between the ESO/DMD/DFS and ST-ECF/IPMG groups, we have evaluated object centering methods and verified that the ``measurements'' are obtained with the least amount of error and that systematics can be excluded. Improvements in this area would also benefit other data reduction steps based on line centering methods.

The initial starting point was the Feb 2003 version of the STIS-ANNEAL bundle from the STIS Calibration Enhancement project of the ST-ECF. Currently the baseline package utilizes the Levenberg-Marquardt (LM) method to fit a two-dimensional Gaussian with six adjustable parameters (widths, amplitudes, angle between the two directions, background pedestal) to the data values in a selectable 2D window around the estimated center of the feature (the tilted and distorted monochromatic image of the slit).
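For orientation, a minimal sketch of such a fit in Python/SciPy is given below; the parameterisation is illustrative only and is not the exact one used in STIS-ANNEAL (which may, e.g., treat the center coordinates and amplitudes differently):

import numpy as np
from scipy.optimize import least_squares

def rotated_gaussian(params, x, y):
    # Rotated 2D Gaussian on a constant background pedestal.
    # params: amplitude, x0, y0, sigma_x, sigma_y, angle, background
    amp, x0, y0, sx, sy, theta, bkg = params
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2)) + bkg

def residuals(params, x, y, data):
    return (rotated_gaussian(params, x, y) - data).ravel()

def fit_line_center(data, p0):
    # data: 2D window around the estimated line center; p0: starting values.
    y, x = np.indices(data.shape)
    result = least_squares(residuals, p0, args=(x, y, data), method="lm")
    return result.x[1], result.x[2]   # fitted center (x0, y0)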

2.1 Improvement of Line Shape Models

It is clear that the lines (i.e., slit images) in Figure 1 are NOT well described by two Gaussians. Therefore, finding a better model for the line shape offered an obvious route to improving the residuals between the STIS model and the actual measurements of line centroids. As shown in Figure 3, a bent, rotated 2D gaussoid looks more realistic. Unfortunately, the use of very complex analytical shapes in unattended fitting is very sensitive to the choice of starting values.

Thus we decided to limit the complexity of the line model by keeping some of the less important parameters fixed to appropriate values.
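As an illustration, a constrained gaussoid of this kind could look as follows (a sketch only: the rotation angle and the exponents are frozen to externally supplied values, and the exact functional form used in STIS-CE may differ, e.g. in how the bending is described):

import numpy as np

def gaussoid(params, x, y, theta=0.0, exponents=(2.0, 2.0)):
    # Generalised Gaussian-like ('gaussoid') slit-image model with
    # separate exponents along the two rotated axes.  The angle theta
    # and the exponents are held fixed rather than fitted.
    amp, x0, y0, sx, sy, bkg = params
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    px, py = exponents
    return amp * np.exp(-(np.abs(xr / sx) ** px + np.abs(yr / sy) ** py)) + bkg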

Figure 3: A snapshot from the IDL line-shape model analysis tool. Displayed from left to right: actual data in logarithmic scale and as a contour plot, raw and smoothed data, and the line shape model in 3D and as contours.
[Plot file: P10-10_f3.ps]

These modifications yield a slight improvement in the line centroiding.

2.2 Modifications of the LM algorithm

On the Web we located several different example implementations of the LM method (for the 1D case) and searched for reported limitations of the method in the presence of noise. Apparently the result of a fit depends critically on the metric chosen to evaluate the goodness of the fit.

We analyzed several possibilities. Replacing the sum of squares in the chi-square criterion with the sum of absolute deviations decreases the sensitivity to outliers, but also to the line wings, thereby improving the fit of the line centers. However, STIS data sets often contain very extended lines with typically only ten photon counts at the centers. This makes it very likely that the line centers contain holes with very few or even no counts (Poisson noise).
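In a schematic notation (data values $d_i$ and model values $f_i(\mathbf{p})$ over the fit window; the notation is ours, not taken from the package), the two criteria compared here are
\begin{displaymath}
\chi^2(\mathbf{p}) = \sum_i \left[ d_i - f_i(\mathbf{p}) \right]^2
\qquad \mbox{versus} \qquad
S_{\rm abs}(\mathbf{p}) = \sum_i \left\vert d_i - f_i(\mathbf{p}) \right\vert .
\end{displaymath}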

Taking the noise into account in the chi-square weighting factor emphasizes those regions of the data window which by chance have less apparent shot noise. In the low-signal domain of the present data this results in considerable ``miscentering'' even for relatively ``bright'' lines.
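This weighting corresponds to the standard form
\begin{displaymath}
\chi^2_w(\mathbf{p}) = \sum_i \frac{\left[ d_i - f_i(\mathbf{p}) \right]^2}{\sigma_i^2},
\qquad \sigma_i^2 \approx d_i \ \mbox{for Poisson counts},
\end{displaymath}
in which pixels that happen to contain few counts receive the largest weights.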

Finally, we improved the fit considerably by adopting a chi-square variant which is proportional to the inverse of the product between the measured data and the fitting function. This enforces a maximisation of the correlation between the data and the shape of the fit function. This modification is now being implemented as the default in the STIS-CE wavelength calibration baseline package. Figure 2 (right panel) shows the diagram of the residuals, which should be compared with the starting conditions in Figure 2 (left panel).
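In the same notation, one plausible reading of the adopted criterion (the exact expression used in the code may differ in detail) is
\begin{displaymath}
M(\mathbf{p}) \propto \frac{1}{\sum_i d_i \, f_i(\mathbf{p})} ,
\end{displaymath}
so that minimising $M$ maximises the cross-correlation $\sum_i d_i f_i(\mathbf{p})$ between the data and the model shape.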

3. Summary

In an attempt to understand the unexpectedly large residuals obtained when comparing STIS FUV MAMA E140H echelle wavelength calibration images with the STIS-CE echelle model predictions, we identified as the main source of error the inability of the fitting routines to provide accurate measurements of the noisy, extended calibration lines.

Searching for remedies, we tested a large number of modifications to the fitting algorithm and the fit model functions. We identified two successful modifications to the standard fitting package, which is based on a Levenberg-Marquardt algorithm. We recommend adapting the 2D line model from a simple bi-dimensional Gaussian to a gaussoid with different exponents in the X and Y directions, which resembles the observed line profiles much better. We also recommend modifying the standard ``chi-square'' formula such that the correlation between the shapes of the line model and the line data is maximised.

References

Ballester, P., & Rosa, M. R. 1997, A&AS, 126, 563



Footnotes

1. Affiliated with the Space Telescopes Division of the European Space Agency, ESTEC, Noordwijk, the Netherlands
