As with most (if not all!) sub-mm cameras, SHARC-II data are not trivial to analyze. Each 10-minute file is roughly 30 MB in size, and many targets (particularly faint point sources) require 3 hours of integration time (that's about 0.5 GB of data). In addition, estimating the sky background is difficult and requires care on the part of the user. The purpose of this page is to provide a ROUGH outline of how to go from raw data to calibrated map. The rest of the documentation provides more detailed notes.
Data reduction takes place in three steps: sorting out your observing logs, running the reduction software, and calibrating the resulting maps.
At this stage, you want to use your logs to identify which scan numbers correspond to which sources and calibrators. If you don't have them handy, you can generate logs using one of the SHARC-II utilities. This program takes a range of scan numbers as input, then reads the headers to produce a useful log.
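As a sketch, the invocation looks something like the following (the utility name and argument syntax here are hypothetical; see the utilities page for the actual program):

> sharc2log 14200 14300   # hypothetical name: writes a log for scans 14200-14300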
The logs are useful because they keep track of the optical depth (τ) and the pointing offsets (FAZO, FZAO). If significant pointing changes were made during the night, you will have to account for them in the data reduction (by supplying shifts). The optical depth readings are useful too, but we generally use polynomial fits to the optical depth measurements from the whole night.
Why are the polynomial fits useful? Keep in mind that the scaling factor between τ(225 GHz) and τ(350 µm) is about 25, so any uncertainty in the optical depth is amplified by that large factor. Measurement error on the 225 GHz tipper is not insignificant, and hence a polynomial fit to the optical depth (τ) readings over the course of a night gives a better estimate of the atmospheric opacity. This is also standard procedure with SCUBA at the JCMT.
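Once you have a fitted tau for the time of your scans, you can hand it to CRUSH directly; for example (the numerical value here is purely illustrative):

> crush sharc2 […] -tau.225GHz=0.061 -tau=225GHz <scans> …

The same options appear again in the calibration examples below.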
Finally, you want to run “sharcgap” on your files to see if any suffer from timing gaps. These occurred when buffers were overwritten with data, losing a small chunk (on the order of a few seconds) of data. Some files with gaps showed no ill effects, while others suffered from a desynchronization of the antenna and science data. This resulted in signals being associated with the wrong place on the sky, rendering the data difficult (if not impossible) to analyze. Gaps used to show up in data taken before 2004, but haven't since. Still, it is worth checking. CRUSH also checks for gaps in the files.
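A minimal check might look like the following (the exact argument syntax is an assumption on my part; consult the sharcgap usage message):

> sharcgap sharc2-014282.fits   # reports any timing gaps found in the scan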
For almost all applications, you will be using CRUSH, the primary SHARC-II data reduction package. There is also a package called SHARCSOLVE, but its use is reserved almost exclusively for observations taken in CHOPPING mode (CRUSH also supports such data). CRUSH is public, but you should contact one of us if you want to use SHARCSOLVE for chopping observations.
In both cases, the software takes a list of scan numbers (as well as configuration options), and produces a FITS file with 4 images in it. The images are: signal, noise, weight, and signal/noise ratio.
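A representative reduction might look like this (the scan range is hypothetical; the -faint option is one of the reduction modes discussed under calibration below):

> crush sharc2 -faint 14282-14290

This produces a single FITS file containing the signal, noise, weight, and signal-to-noise images.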
This step is probably the most important, and it is the source of many of the questions that the SHARC-II group receives. In general, you want to reduce your calibration scans, and then apply an appropriate scaling factor to correct the science maps.
How you do this depends on whether you use PSF photometry or aperture photometry; whichever method you choose, it must be applied identically to your science and calibration frames. For simplicity, I will assume you use a fixed aperture for the rest of this discussion. Take your calibration frame and measure the flux within your chosen aperture; call this the “INSTRUMENTAL FLUX”. Now look up the true 350 µm flux of the calibrator. Many calibrators have fluxes that are constant in time (such as Arp 220), but objects in the solar system do not. We have provided a recipe for calculating the true 350 µm flux of such objects on the calibration part of the web page.
You can use the instrumental flux and the true flux to derive a scaling factor. This can then be applied to the science frame using software from our utilities page (a simple program that reads a map and multiplies it by the scaling factor).
Then you can apply the aperture to the science frame to derive calibrated fluxes of your targets.
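To make the arithmetic concrete (with purely illustrative numbers): if your aperture yields an instrumental flux of 8.0 Jy on the calibrator, and the calibrator's true 350 µm flux is 9.2 Jy, then the scaling factor is 9.2 / 8.0 ≈ 1.15, which is exactly what the -scale=1.15 example below applies.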
For point-like sources, you can instead work with peak (PSF) fluxes (use the show tool to see the effective image resolution). Peak fluxes offer higher precision photometry for point sources, especially on beam-smoothed maps. No matter what reduction options you use (-faint, -extended, -deep, apply spatial filtering or not), the peak flux will be the same (within noise). This has two practical consequences: you can reduce the calibrator with one set of options (e.g., -bright) and the science target with other options (e.g., with -deep), and you can still compare these without worries if your source is compact or point-like.

The different reduction modes (-faint, -deep, default) filter structures on different scales. You should never use -deep for reducing extended structures, and you will not need to worry when calibrating apertures up to about 1/2 FoV in size. For larger apertures, the process gets a bit complicated and I defer any recommendations to a future document on the matter…

Some example invocations: the atmospheric opacity can be specified either via the 225 GHz tau or directly as a 350 µm tau,

> crush sharc2 […] -tau.225GHz=0.053 -tau=225GHz <scans> …
> crush sharc2 […] -tau.350um=1.116 -tau=350um <scans> …

and calibration scaling factors can be applied on a per-scan basis:

> crush […] -scale=1.15 <scans> -scale=0.83 <scans> …
Use sftp to transfer data to your own computer. On a good night, SHARC-II will generate 2–3 GB of raw data.
The data are stored at kilauea:/halfT/sharcii/data2_YYYYmmm, where YYYYmmm is the observing date, for example 2014mar. The most recent data are linked to kilauea:/halfT/sharcii/current. The data file names are sharc2-NNNNN.fits, where NNNNN is the exposure number. For example, sharc2-014282.fits is exposure 14282.
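A session to fetch that scan might look like this (the account name here is hypothetical):

> sftp observer@kilauea
sftp> get /halfT/sharcii/current/sharc2-014282.fits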
Image (FITS) files produced by CRUSH are stored at kilauea:~sharc/src/data. The filenames are composed of the source name and the exposure numbers.
For security, two local copies of the data are kept at all times, and a third copy at Caltech is generated within a few days. If for some reason the data seem to be missing, please contact the CSO staff to remedy the problem.