SA13 master catalogue

This notebook presents the merge of the various pristine catalogues to produce the HELP master catalogue on SA13.

In [1]:
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
This notebook was run with herschelhelp_internal version: 
04829ed (Thu Nov 2 16:57:19 2017 +0000)
In [2]:
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'

import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))

import os
import time

from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from pymoc import MOC

from herschelhelp_internal.masterlist import merge_catalogues, nb_merge_dist_plot, specz_merge
from herschelhelp_internal.utils import coords_to_hpidx, ebv, gen_help_id, inMoc
In [3]:
TMP_DIR = os.environ.get('TMP_DIR', "./data_tmp")
OUT_DIR = os.environ.get('OUT_DIR', "./data")
SUFFIX = os.environ.get('SUFFIX', time.strftime("_%Y%m%d"))

try:
    os.makedirs(OUT_DIR)
except FileExistsError:
    pass

I - Reading the prepared pristine catalogues

In [4]:
uhs = Table.read("{}/UHS.fits".format(TMP_DIR))
legacy = Table.read("{}/LegacySurvey.fits".format(TMP_DIR))

II - Merging tables

We start from the near-infrared UHS catalogue and then add the optical Legacy Survey one.

At every step, we look at the distribution of the distances separating the sources from one catalogue to the other (within a maximum radius) to determine the best cross-matching radius.

UHS

In [5]:
master_catalogue = uhs
master_catalogue['uhs_ra'].name = 'ra'
master_catalogue['uhs_dec'].name = 'dec'

Add Legacy Survey

In [6]:
nb_merge_dist_plot(
    SkyCoord(master_catalogue['ra'], master_catalogue['dec']),
    SkyCoord(legacy['legacy_ra'], legacy['legacy_dec'])
)
In [7]:
# Given the graph above, we use 0.8 arc-second radius
master_catalogue = merge_catalogues(master_catalogue, legacy, "legacy_ra", "legacy_dec", radius=0.8*u.arcsec)

Cleaning

When we merge the catalogues, astropy masks the non-existent values (e.g. when a row comes from only one catalogue and has no counterpart in the other, the columns from the latter are masked for that row). We indicate to use NaN for masked values in float columns, False for flag columns, and -1 for ID columns.

In [8]:
for col in master_catalogue.colnames:
    if "m_" in col or "merr_" in col or "f_" in col or "ferr_" in col or "stellarity" in col:
        master_catalogue[col].fill_value = np.nan
    elif "flag" in col:
        master_catalogue[col].fill_value = 0
    elif "id" in col:
        master_catalogue[col].fill_value = -1
        
master_catalogue = master_catalogue.filled()
In [9]:
master_catalogue[:10].show_in_notebook()
Out[9]:
<Table length=10>
(First ten rows of the merged catalogue. Columns include uhs_id, ra, dec, uhs_stellarity, the WFCAM J total and aperture magnitudes and fluxes with their errors and flags, the cleaning/Gaia/merge flags, legacy_id, the BASS g/r/z total and aperture fluxes and magnitudes, and legacy_stellarity. Rows without a Legacy Survey counterpart show the NaN, False and -1 fill values set above.)

III - Merging flags and stellarity

Each pristine catalogue contains a flag indicating if the source was associated with another nearby source that was removed during the cleaning process. We merge these flags into a single one.

In [10]:
flag_cleaned_columns = [column for column in master_catalogue.colnames
                        if 'flag_cleaned' in column]

flag_column = np.zeros(len(master_catalogue), dtype=bool)
for column in flag_cleaned_columns:
    flag_column |= master_catalogue[column]
    
master_catalogue.add_column(Column(data=flag_column, name="flag_cleaned"))
master_catalogue.remove_columns(flag_cleaned_columns)

Each pristine catalogue contains a flag indicating the probability of a source being a Gaia object (0: not a Gaia object, 1: possibly, 2: probably, 3: definitely). We merge these flags taking the highest value.

In [11]:
flag_gaia_columns = [column for column in master_catalogue.colnames
                     if 'flag_gaia' in column]

master_catalogue.add_column(Column(
    data=np.max([master_catalogue[column] for column in flag_gaia_columns], axis=0),
    name="flag_gaia"
))
master_catalogue.remove_columns(flag_gaia_columns)

Each pristine catalogue may contain one or several stellarity columns indicating the probability (0 to 1) of each source being a star. We merge these columns taking the highest value. We keep track of the origin of the stellarity.

In [12]:
stellarity_columns = [column for column in master_catalogue.colnames
                      if 'stellarity' in column]

print(", ".join(stellarity_columns))
uhs_stellarity, legacy_stellarity
In [13]:
# We create a masked array with all the stellarities and get the maximum value, as well as its
# origin.  Some sources may not have an associated stellarity.
stellarity_array = np.array([master_catalogue[column] for column in stellarity_columns])
stellarity_array = np.ma.masked_array(stellarity_array, np.isnan(stellarity_array))

max_stellarity = np.max(stellarity_array, axis=0)
max_stellarity.fill_value = np.nan

no_stellarity_mask = max_stellarity.mask

master_catalogue.add_column(Column(data=max_stellarity.filled(), name="stellarity"))

stellarity_origin = np.full(len(master_catalogue), "NO_INFORMATION", dtype="S20")
stellarity_origin[~no_stellarity_mask] = np.array(stellarity_columns)[np.argmax(stellarity_array, axis=0)[~no_stellarity_mask]]

master_catalogue.add_column(Column(data=stellarity_origin, name="stellarity_origin"))

master_catalogue.remove_columns(stellarity_columns)

IV - Adding E(B-V) column

In [14]:
master_catalogue.add_column(
    ebv(master_catalogue['ra'], master_catalogue['dec'])
)

V - Adding HELP unique identifiers and field columns

In [15]:
master_catalogue.add_column(Column(gen_help_id(master_catalogue['ra'], master_catalogue['dec']),
                                   name="help_id"))
master_catalogue.add_column(Column(np.full(len(master_catalogue), "SA13", dtype='<U18'),
                                   name="field"))
In [16]:
# Check that the HELP Ids are unique
if len(master_catalogue) != len(np.unique(master_catalogue['help_id'])):
    print("The HELP IDs are not unique!!!")
else:
    print("OK!")
OK!

VI - Cross-matching with spec-z catalogue

There is currently no spec-z catalogue available.

In [17]:
#specz =  Table.read("../../dmu23/dmu23_SA13/data/SA13-specz-v2.1.fits")
In [18]:
#nb_merge_dist_plot(
#    SkyCoord(master_catalogue['ra'], master_catalogue['dec']),
#    SkyCoord(specz['ra'] * u.deg, specz['dec'] * u.deg)
#)
In [19]:
#master_catalogue = specz_merge(master_catalogue, specz, radius=1. * u.arcsec)

VII - Choosing between multiple values for the same filter

There are no duplicate filters.

VIII.a Wavelength domain coverage

We add a binary flag_optnir_obs indicating that a source was observed in a given wavelength domain:

  • 1 for observation in optical;
  • 2 for observation in near-infrared;
  • 4 for observation in mid-infrared (IRAC).

It's an integer binary flag, so a source observed both in optical and near-infrared but not in mid-infrared would have this flag set to 1 + 2 = 3.

Note 1: The observation flag is based on the creation of multi-order coverage maps (MOCs) from the catalogues; this may not be accurate, especially on the edges of the coverage.

Note 2: Being in the observation coverage does not mean having fluxes in that wavelength domain. For sources observed in one domain but having no flux in it, one must take into consideration the different depths of the catalogues we are using.
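The bitmask can be decoded with simple bitwise tests; a minimal sketch (the flag value here is made up for illustration):

```python
# Decode the integer flag_optnir_obs bitmask defined above:
# bit 1 = optical, bit 2 = near-infrared, bit 4 = mid-infrared.
flag = 3  # e.g. a source observed in optical and near-infrared only

in_optical = bool(flag & 1)
in_nir = bool(flag & 2)
in_mir = bool(flag & 4)

print(in_optical, in_nir, in_mir)  # True True False
```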

In [20]:
uhs_moc = MOC(filename="../../dmu0/dmu0_UHS/data/UHS-DR1_SA13_MOC.fits")
legacy_moc = MOC(filename="../../dmu0/dmu0_LegacySurvey/data/LegacySurvey-dr4_SA13_MOC.fits")
In [21]:
was_observed_optical = inMoc(
    master_catalogue['ra'], master_catalogue['dec'],
    legacy_moc) 

was_observed_nir = inMoc(
    master_catalogue['ra'], master_catalogue['dec'],
    uhs_moc
)

was_observed_mir = np.zeros(len(master_catalogue), dtype=bool)
In [22]:
master_catalogue.add_column(
    Column(
        1 * was_observed_optical + 2 * was_observed_nir + 4 * was_observed_mir,
        name="flag_optnir_obs")
)

VIII.b Wavelength domain detection

We add a binary flag_optnir_det indicating that a source was detected in a given wavelength domain:

  • 1 for detection in optical;
  • 2 for detection in near-infrared;
  • 4 for detection in mid-infrared (IRAC).

It's an integer binary flag, so a source detected both in optical and near-infrared but not in mid-infrared would have this flag set to 1 + 2 = 3.

Note 1: We use the total flux columns to know if the source has flux; in some catalogues, we may have an aperture flux and no total flux.

To get rid of artefacts (chip edges, star flares, etc.) we consider that a source is detected in one wavelength domain when it has a flux value in at least two bands. That means that good sources will be excluded from this flag when they are on the coverage of only one band.
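The "at least two bands" rule amounts to counting non-NaN total fluxes per source; a toy sketch with made-up flux arrays (NaN marks a missing measurement):

```python
import numpy as np

# Two toy flux columns for three sources.
f_band1 = np.array([1.0, np.nan, 2.0])
f_band2 = np.array([0.5, np.nan, np.nan])

# Count how many bands have a measured flux for each source.
nb_flux = 1 * ~np.isnan(f_band1) + 1 * ~np.isnan(f_band2)

# Only sources with fluxes in at least two bands count as detected.
has_flux = nb_flux >= 2

print(nb_flux)   # [2 0 1]
print(has_flux)  # [ True False False]
```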

In [23]:
# The Legacy Survey provides total fluxes in the BASS g, r and z bands;
# we count how many of them are measured for each source.
nb_optical_flux = (
    1 * ~np.isnan(master_catalogue['f_bass_g']) +
    1 * ~np.isnan(master_catalogue['f_bass_r']) +
    1 * ~np.isnan(master_catalogue['f_bass_z'])
)

nb_nir_flux = (
    1 * ~np.isnan(master_catalogue['f_wfcam_j']) 
)

nb_mir_flux = np.zeros(len(master_catalogue), dtype=int)
In [24]:
has_optical_flux = nb_optical_flux >= 2
has_nir_flux = nb_nir_flux >= 2
has_mir_flux = nb_mir_flux >= 2

master_catalogue.add_column(
    Column(
        1 * has_optical_flux + 2 * has_nir_flux + 4 * has_mir_flux,
        name="flag_optnir_det")
)

IX - Cross-identification table

We are producing a table associating each HELP identifier with the identifiers of the sources in the pristine catalogues. This can be used to easily get additional information from them.

There is no SDSS on SA13.

In [25]:
id_names = []
for col in master_catalogue.colnames:
    if '_id' in col:
        id_names += [col]
    if '_intid' in col:
        id_names += [col]
        
print(id_names)
['uhs_id', 'legacy_id', 'help_id']
In [26]:
master_catalogue[id_names].write(
    "{}/master_list_cross_ident_sa13{}.fits".format(OUT_DIR, SUFFIX), overwrite=True)
id_names.remove('help_id')
master_catalogue.remove_columns(id_names)

X - Adding HEALPix index

We are adding a column with a HEALPix index at order 13 associated with each source.
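For scale, the pixelisation properties of order 13 follow directly from the HEALPix definitions (`coords_to_hpidx` itself computes the actual pixel index of each source; this is only a back-of-the-envelope illustration):

```python
import math

# HEALPix order 13: nside doubles with each order, and the sphere is
# tiled by 12 * nside**2 equal-area pixels.
order = 13
nside = 2 ** order        # 8192
npix = 12 * nside ** 2    # total number of pixels on the sphere

# Approximate pixel size: each pixel covers 4*pi/npix steradians.
pix_area_deg2 = (4 * math.pi / npix) * (180 / math.pi) ** 2
pix_size_arcsec = math.sqrt(pix_area_deg2) * 3600

print(nside, npix, round(pix_size_arcsec, 1))  # 8192 805306368 25.8
```

So at order 13 each pixel is roughly 26 arcsec across, comfortably finer than the merging radius used above.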

In [27]:
master_catalogue.add_column(Column(
    data=coords_to_hpidx(master_catalogue['ra'], master_catalogue['dec'], order=13),
    name="hp_idx"
))

XI - Saving the catalogue

In [28]:
columns = ["help_id", "field", "ra", "dec", "hp_idx"]

bands = [column[5:] for column in master_catalogue.colnames if 'f_ap' in column]
for band in bands:
    columns += ["f_ap_{}".format(band), "ferr_ap_{}".format(band),
                "m_ap_{}".format(band), "merr_ap_{}".format(band),
                "f_{}".format(band), "ferr_{}".format(band),
                "m_{}".format(band), "merr_{}".format(band),
                "flag_{}".format(band)]    
    
columns += ["stellarity", "stellarity_origin", "flag_cleaned", "flag_merged", "flag_gaia", "flag_optnir_obs", 
            "flag_optnir_det", "ebv"]
In [29]:
# We check for columns in the master catalogue that we will not save to disk.
print("Missing columns: {}".format(set(master_catalogue.colnames) - set(columns)))
Missing columns: set()
In [30]:
master_catalogue[columns].write("{}/master_catalogue_sa13{}.fits".format(OUT_DIR, SUFFIX), overwrite=True)