GOC – Week 5

The code is so close to working now; installing ATLAS led us to a missing API package within the code. This package is responsible for finding the catalogues the code needs when SWarp takes the SCAMP calibrations. We believe the problem arises because the CCD readings are referenced to the centre of the four CCDs as a whole, as opposed to the top centre of each individual CCD.

The rest of the week has been spent analysing previously reduced images from Hubble. Emma has implemented the code used to plot the isochrones and has been testing the effects that different values of distance, metallicity and age have. The isochrone models come from the Dartmouth library, accessed through Python's 'isochrones' package. The code requires initial values for distance, age and metallicity in order to produce a plot; from these we can adjust the values until the isochrone fits our data. Using our results from the literature search for initial values of approximately 10e9 years and 10e3 pc, we ran the code and produced the plot shown below.
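As a minimal sketch of the fitting step: isochrone models are tabulated in absolute magnitudes, so overlaying one on our data means shifting it by the distance modulus for a trial distance. The function below is a hypothetical illustration, not the actual isochrones-package code:

```python
import math

def apply_distance_modulus(abs_mag, distance_pc):
    """Shift an absolute magnitude M to an apparent magnitude m
    at a given distance: m = M + 5*log10(d / 10 pc)."""
    return abs_mag + 5.0 * math.log10(distance_pc / 10.0)

# e.g. a model point with M = 4.0 seen at a trial distance of 9600 pc
m = apply_distance_modulus(4.0, 9600.0)
```

Adjusting the trial distance (and the model's age and metallicity) until the shifted curve lies on top of the data is essentially what the fit amounts to.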

The isochrone was quite far off from our data, so we continued to search for values, testing different apertures, and managed to get a close fit.

However, we came across a problem. Changing the value of metallicity should shift our isochrone horizontally, but it seems to have no effect. Emma believes this could be due to the way the code calls the variable and hopes it can be resolved next week. To see how close our results were, we took only the inner region of M3 so we could accurately see how well the turn-off point aligns. Doing this showed that they nearly match. A shift in metallicity would produce a near-perfect fit at an age of 11.4e9 years.

From this we can learn how the main sequence of a cluster changes with age: the turn-off point corresponds to where stars evolve off the main sequence, and hence indicates the age.

Hamish has been working on plotting HR diagrams. In order to do this, Anaconda needed to be downloaded for Python. First we needed to run SExtractor on M71 in the violet filter; this filter was picked since it is further from the infrared, giving a greater shift with metallicity. Using Topcat we took the catalogues created by SExtractor and searched for a reference catalogue that matched up well; Hamish managed to find one with 210 matches. From here we can take the difference between the magnitude at zero point and the magnitude in the catalogue. Plotting a histogram and writing Python code to find the median gave us a zero point of 28.89. Running the code with the new zero point and errors created a table of data with errors, which can be used to produce HR diagrams.
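The median step can be sketched as follows (a hypothetical helper, not Hamish's actual script): subtract the ZP = 0 SExtractor magnitudes from the matched catalogue magnitudes, then take the median of the differences, which is robust to outlier matches:

```python
from statistics import median

def zero_point(catalogue_mags, instrumental_mags):
    """Estimate the photometric zero point as the median of
    (true catalogue magnitude - SExtractor magnitude with ZP = 0)."""
    diffs = [c - i for c, i in zip(catalogue_mags, instrumental_mags)]
    return median(diffs)
```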

There is a notable amount of noise present within the HR diagrams, which we believe to be the result of background stars that are not part of our cluster but are still picked up by SExtractor. Hamish hopes to be able to filter out this noise.

Our result of the week is the isochrone plot of the outer region of M3, using 1 arcsecond apertures, at a distance of 9600 pc and an age of 11.4e9 years. It demonstrates how well an isochrone can fit over reduced data, allowing different properties of a cluster to be determined, even though the metallicity does not yet align properly due to a small issue with the code.

Thomas Harrison

GOC – Week 4

We are so close now to getting our catalogues from the reduced INT data of M16 and M67. We are able to reduce the data with bias subtraction and flat fielding, and then fix the headers for use with SExtractor. We can run SExtractor on our images to create a catalogue of stars, however our images in the different filters are still not aligned. We need to use SCAMP in order to fix this, but we are still having issues with this programme. It turns out the software was not installed properly on the computers in the astrolab, and so we had spent a week trying to run something that wasn't there! So, Harry started the install on Monday, beginning with ATLAS, which can take a long time! However, we are still not quite there, and without this we cannot perform the astrometric calibrations needed to match the stars in the two filters for our colour magnitude diagrams.
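The reduction arithmetic itself is straightforward; here is a simplified sketch of the bias-subtraction and flat-fielding step (hypothetical names, not our actual reduce.py):

```python
import numpy as np

def reduce_frame(raw, master_bias, master_flat):
    """Bias-subtract a science frame, then divide by the
    bias-subtracted flat normalised to its median."""
    flat = (master_flat - master_bias).astype(float)
    flat /= np.median(flat)
    return (raw - master_bias) / flat
```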

While we are waiting for this, we decided to use already-reduced Hubble data to get some catalogues and some results! Since the INT data is going to be difficult to use for the globular clusters M3 and M71, we decided to start looking at these with Hubble data instead. I started with M3, looking at images of this cluster in the infrared and visible filters. I am using two images of the cluster in each filter, shown above: one that looks at the cluster as a whole and one that zooms in on part of the central region. This gives a better representation of the stars in the cluster, as in the image of the whole cluster it is hard to resolve the central regions; when we run SExtractor on it, it only picks up the outer regions of the cluster and cannot find sources in the core.

I ran SExtractor on the two images in each filter, with the zero point of the images set to zero. The zero point needs to be determined in order to have a true catalogue to use for colour magnitude diagrams. The catalogue with ZP = 0 can be matched to a catalogue of known magnitudes using Topcat. We can then find the difference between the SExtractor-measured magnitudes in our images and the true catalogue magnitudes, and plot a histogram of this magnitude difference. From this histogram, we can find the zero point of each of our images by taking the median of the magnitude difference.

Initially we took the zero point to be the peak of the histogram and assumed that the two images of the cluster in the same filter have the same zero point, as the peaks of the histograms appear very close. We didn't realise the significance that a small change in this value would have on our catalogue, and upon reflection realise that we should find the zero point from the median of the magnitude difference, as this is less affected by outliers.
The catalogue I am using to match with measured magnitudes is ‘Stromgren photometry of M3 (NGC5272)’ (Massari+, 2016).

I have included the histogram for M3 in the visible filter. The diff_mag is calculated by comparing SExtractor magnitudes from the image with ZP = 0 with the V magnitudes from the Massari catalogue. The red shows the values for sources in the inner regions of M3 and the blue for sources in the outer regions. The peaks appear to line up, so we took the peak value of 24.5 and set this as the zero point for our images of M3 in the visible filter. We now realise that taking the median for the two different regions will give a better value for the zero point, which will probably differ between the two regions imaged. Initially we used the value of 24.5 calculated for both images in the visible and re-ran SExtractor with this set as the ZP to get a true catalogue for our image. The same was done with the infrared-filtered images, using the corresponding zero point found for these. The two catalogues in the two filters, for both regions of M3, could then be matched in Topcat. The grey points correspond to sources found in both filters.

The large square corresponds to the image of the cluster as a whole, and the smaller square in the centre corresponds to our second image, which probes the core of the cluster on scales where SExtractor can resolve the different sources. We then plot our final matched catalogue to get a colour magnitude diagram for this cluster. It is now apparent why the small difference between the zero points of the two regions matters: it can create a big difference in our colour magnitude diagrams. The blue points show the catalogue for the outer regions of the cluster, using the image of the whole cluster, and the red points show the catalogue for the inner regions, using the second image that probes the cluster's core. We can see a shift between the two main sequences, which is probably due to assuming that the two images have the same zero point, hence why we now want to use the median of our histogram (discussed earlier) to get more accurate measurements. The main sequence is very thin: stars don't vary in V−I magnitude by more than 0.5 magnitudes, so if we have not measured the zero point to this level of accuracy we cannot be sure whether the shift of the main sequence between the two regions is due to the zero-point measurement (most likely) or whether it is actually telling us about different chemical compositions in the two regions of M3.

Another improvement for determining the zero point of these images is, before matching the two catalogues, to pick the stars we want to use so as to make our measurements more accurate. We want to pick stars that are not in overdense regions of the image; otherwise, when the stars are matched in Topcat with a tolerance of 1 arcsecond, we cannot be certain that the matched star is the same as the one we are looking at in our image. Also, we want to look at stars of average magnitude: we don't want the brightest stars, as they can be saturated, or the faintest, as these are not reliable. This should give us a better estimate of the zero point of our images so that we can produce better catalogues. We will implement and test these methods next week with M3 again, and also look at Hubble data for M71, then hopefully the M16 and M67 INT data towards the end of the week.
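A rough sketch of that selection (a hypothetical helper; the magnitude cuts and minimum separation are placeholders to be tuned):

```python
import numpy as np

def select_calibrators(mags, x, y, bright_cut, faint_cut, min_sep):
    """Keep stars of intermediate magnitude whose nearest neighbour
    is at least min_sep (pixel units) away, avoiding saturated,
    faint and crowded sources when measuring the zero point."""
    mags = np.asarray(mags, float)
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = (mags > bright_cut) & (mags < faint_cut)
    for i in np.where(keep)[0]:
        d2 = (x - x[i]) ** 2 + (y - y[i]) ** 2
        d2[i] = np.inf  # ignore the star's distance to itself
        if d2.min() < min_sep ** 2:
            keep[i] = False  # too close to a neighbour -> drop it
    return keep
```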

One other aim for next week is to look at how we will analyse these colour magnitude diagrams. We will look into fitting isochrones to our data, either by eye or using models, time permitting. From this we will hopefully be able to determine an estimate of each cluster's age and distance, as well as other parameters. We can convert our apparent magnitudes into absolute magnitudes using distances to the clusters measured by the Gaia satellite. This shifts the main sequence vertically in our colour magnitude diagrams but produces no change horizontally, as long as all the stars belong to the cluster and thus lie at the same distance. Any stars that move horizontally can be discounted from the catalogue of cluster stars. We can use the isochrone-fitting models, which work in absolute magnitudes, to try to estimate the distance to the cluster from the main-sequence shift between the isochrones and our measured apparent magnitudes. We then plan to compare this estimate to the values measured by Gaia for each cluster. We are looking into how we can learn about the metallicity of the clusters from their colour magnitude diagrams, and know that it is best to look over a large wavelength range in filters to see these metal lines. We know there are more metal lines in bluer filters, so we could construct our colour magnitude diagrams using B−I magnitude instead of V−I to hopefully measure the metallicity of these clusters.
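The apparent-to-absolute conversion can be sketched with Gaia parallaxes (a hypothetical helper; Gaia reports parallaxes in milliarcseconds, so d[pc] = 1000/parallax):

```python
import math

def absolute_magnitude(apparent_mag, parallax_mas):
    """Convert apparent to absolute magnitude using a Gaia parallax:
    d = 1000 / parallax [pc], then M = m - 5*log10(d) + 5."""
    distance_pc = 1000.0 / parallax_mas
    return apparent_mag - 5.0 * math.log10(distance_pc) + 5.0
```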

Our result of the week is the colour magnitude diagram of M3. The blue regions in this image show the inner parts of the cluster sample, and the green the outer parts of the cluster. We see that the two parts don't align correctly due to our assumptions about the zero point, which we will amend next week. However, although the plot is not strictly accurate, it does produce a nice structure resembling a peacock, and so we would like to rename the colour magnitude diagram a peacock diagram! See for yourself the likeness between globular clusters and these beautiful birds.

Emma Dodd

Week 3 – GOC

This week has been very focused on getting the code finished and error-free so that we can move on to the next stage of our project. Emma and Harry have been working on the code, with Emma managing to clear up some of the noise on the H-R diagram that she had produced the week before with the data given to us (shown below).

On the left is the image from last week, and on the right is the image without as much noise. This noise is thought to be stars not in the cluster but still picked up by SExtractor. Currently, Emma is working on the code that gives the errors for each H-R diagram, as the errors provided by SExtractor aren't as accurate as we'd hoped. These errors will allow us to find the uncertainties on our calculated age, distance and metallicity once we have gone through the analysis.

Harry has also been working very hard on the code this week, carrying on from his successes the week before. SCAMP has been giving him some issues, to the point where our Monday time in the Astrolabs was mainly Harry fighting a Mac. SCAMP is software which calibrates our 4 CCD images so they can be combined back into one image post-reduction. This was considered fairly easy until it was implemented into the code, and it just hasn't worked. However, given some advice from David and the weekend to work on it, Harry should have it finished by Monday.

Coding has not been the only science done this week, with the rest of us completing the manual photometry and moving on to the next part of the project: the literature search! We split the areas of interest into 6 topics, specifically looking into metallicity, how to account for interstellar reddening from dust, and how to calculate the ages. Tom was successful in finding an article online which includes code to account for reddening, then plots an isochrone from which one can find the age and distance. This can be done by shifting our data onto the isochrone and then calculating our information from the amount shifted; however, this needs to be investigated further.

Finally, Tom, Alex, Matthew and I have begun writing up parts of our theory to start putting into the report, with a plan to construct the bare bones of it on Overleaf on Monday so we can begin to conclude our group project.


GOC: week 2 – data reduction & science is near

With the successful reduction of images in week 1, we thought this group project might be a breeze! As it turned out, we had hit a mountain of coding errors and problems. Emma and I had adapted a script to put our reduced images through SExtractor. This software uses catalogues of stars to determine sources in our images and convert their counts to magnitudes. However, the table it produced gave results of zero… oh dear! It turned out that the Isaac Newton Telescope, where our images come from, assigns incorrect RA and Dec co-ordinates to our images.

A quick aside on FITS files: each image is contained in a format known as a FITS file. It stores the images from each CCD (the 'camera' of a telescope) of the Wide Field Camera of the INT. Each CCD has a 'header' which includes information such as the properties of the CCD pixels. There is also the main header, which includes the very important RA and Dec co-ordinates for where the telescope was pointing when it took the images. SExtractor can then read these values from the header, and therefore knows which part of the sky the images cover and where the stars in the image are.

So without these corrected RA/Dec values, our reduced images weren't much use, as the main objective of this project is to use them to calculate magnitudes and thus plot HR diagrams of clusters so we can determine their properties. As such, we had to implement new code, provided to us by Dr Sobral, into our reduce.py script to fix these headers. Doing this required more code to split the 4 CCD images into separate FITS files, etc. Some of this code was also deprecated, so it had to be rewritten using functions that were supported. When all this was finally done and the errors sorted, it produced reduced images for each of the 4 CCDs! Science had been done! Giddy with the prospect of finally fixing the code, we opened the new images to find:
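In outline, the splitting-and-header-fixing step looks something like this with astropy (a hypothetical sketch, not Dr Sobral's code; the keyword names are placeholders):

```python
from astropy.io import fits

def split_and_fix(hdulist, ra_deg, dec_deg):
    """Split a 4-CCD multi-extension FITS into single-image HDUs,
    writing the corrected pointing (decimal degrees) into each
    header so SExtractor knows where the field is on the sky."""
    parts = []
    for i, ext in enumerate(hdulist[1:], start=1):
        hdu = fits.PrimaryHDU(data=ext.data)
        hdu.header['RA'] = ra_deg     # corrected right ascension
        hdu.header['DEC'] = dec_deg   # corrected declination
        hdu.header['CCDNO'] = i       # which of the 4 CCDs this was
        parts.append(hdu)
    return parts
```

Each returned HDU can then be written to its own file with `hdu.writeto(...)`.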

Unfortunately, as you can see, it managed to completely ruin the images. Closer inspection showed that the flats seemed to be superimposed onto the original images. This seemed perplexing, as this new version of our reduction script used the same code to actually reduce the image. With our patience wearing dangerously thin, we desperately searched for a solution. By reading the actual count values of the images at each stage of the reduction, we found that the cause was a correction applied when the images are split into 4. This correction accounts for how the images were produced at the INT, but should be applied to the unreduced images, the flats and the biases… which I had foolishly not done. By removing this correction for the time being, we re-ran the script:

I'll believe that when I see it; it had already lied to us once this week. We opened up the images to find:

Science had been done this time! However, this is not the end of our coding nightmare. The code will now have to be rewritten again to implement this correction. Then we will also have to add code that combines the 4 images into one using software known as SCAMP. Only then can we send these on to stage 2: SExtractor.

Luckily, Emma had been grappling with SExtractor during the week and had managed to come up with a script that processes our reduced images and produces tables of data to plot on an HR diagram, including a colour magnitude diagram! She has also created what are known as bad pixel maps. These are FITS files that define parts of an image that are too noisy and should not be analysed by SExtractor. This noise can occur in the gaps between CCDs that have been joined together to make one image.
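A minimal sketch of how such a map might be built (hypothetical, not Emma's script): threshold pixels against a robust median/MAD estimate of the noise, with 1 marking pixels to ignore:

```python
import numpy as np

def bad_pixel_map(frame, n_sigma=5.0):
    """Flag pixels deviating from the frame's median by more than
    n_sigma robust standard deviations (MAD-based); 1 = bad, 0 = OK."""
    med = np.median(frame)
    sigma = 1.4826 * np.median(np.abs(frame - med))  # MAD -> Gaussian sigma
    return (np.abs(frame - med) > n_sigma * sigma).astype(np.uint8)
```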

Armed with this, Emma used the third step in our image-processing code: a script which takes the tables made by SExtractor to produce HR diagrams, such as our Result of the Week, made using reduced images given to us and shown below:

Coding was not all that occurred this week. The rest of the group has been tackling manual photometry in earnest. Essentially this is doing by hand what the SExtractor script does, so that we understand the principles. Hamish and Tom spent the week sifting through the teething problems. These include finding appropriate catalogues with the Topcat software, which are then loaded into GAIA, the software that displays the image of the cluster in question. The zero point must be found first. Then, by setting apertures around stars, the counts can be read off from the software. Care must be taken with how large the aperture is set: too small and most of the flux of the star is missed; too large and the noise of the CCD and stray light from other stars will affect the measured counts.
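The aperture measurement itself reduces to summing counts inside a circle and applying the zero point; a toy sketch (hypothetical helper, with no background subtraction):

```python
import numpy as np

def aperture_magnitude(image, x0, y0, radius, zero_point):
    """Sum the counts inside a circular aperture centred on (x0, y0)
    and convert to a magnitude: m = ZP - 2.5*log10(counts)."""
    yy, xx = np.indices(image.shape)
    in_aperture = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    counts = image[in_aperture].sum()
    return zero_point - 2.5 * np.log10(counts)
```

Too small an aperture misses flux (counts too low, so the star looks fainter); too large an aperture lets in noise and neighbouring stars.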

We prepare to enter the third week of the lab stage hopeful that science will soon be done!