The first week was mainly about getting to grips with what we plan to do with our project. Our theory lead, James, along with Paula and Joshua, hit the internet to search through the vastness of astrophysics papers in order to gain a deeper understanding of what metal-poor stars are, why they form and what that implies about the early Universe. There were many papers available (some even referenced Sobral’s work!) and a lot of reading to do, but our theorists gathered the knowledge we would need to embark on our journey.
During our first lab session, Joshua and Paula got to work on the introduction for the final report (to lessen the workload later), whilst Joe, James and I looked at how WARP cut the catalogue down to their potential candidates. This allowed us to get to grips with the software we will need, whilst familiarising ourselves with the contents of the COSMOS catalogue. We decided to use the WARP conditions and worked in TOPCAT to make the cuts. The first few cuts went without a hitch and we matched the source numbers that WARP had obtained. However, we ran into some issues when working through the error conditions; we ended up with 628 sources, whereas WARP had 338. Although this wouldn’t affect any results we would reach, the difference seemed surprisingly large. After consulting David, we concluded that there may have been a mistake on either party’s end. We reminded ourselves that the point wasn’t to reproduce WARP’s results exactly, but to familiarise ourselves with the data and software. We produced an on-sky plot of the 628 sources we arrived at.
Figure 1: On-sky positions of the sources. White gaps are due to missing data and omission of very bright sources.
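For a flavour of what cuts like these look like outside TOPCAT, here is a minimal Python sketch using astropy; the filename, column names and thresholds below are placeholders for illustration, not the actual WARP conditions.

```python
from astropy.table import Table

# Load the catalogue (a FITS table exported from TOPCAT; hypothetical filename).
cat = Table.read("cosmos_catalogue.fits")

# Chain boolean masks, one per selection condition. The column names and
# threshold values here are placeholders, not the real WARP criteria.
detected      = cat["NB392_mag"] < 25.5               # detected in the narrow band
well_measured = cat["NB392_err"] < 0.2                # small narrow-band error
blue_excess   = (cat["u_mag"] - cat["g_mag"]) < 1.0   # example colour condition

selected = cat[detected & well_measured & blue_excess]
print(f"{len(selected)} sources survive the cuts, out of {len(cat)}")
```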
Joe, our code lead, then got to work on an algorithm that would produce stellar spectra for 81 candidates (identified by LAMPSS and WARP, of which they concluded 23 were metal-poor stars), compare them to various model stellar spectra and return the best-fitting model for each candidate. There are 85 models, organised by metallicity and temperature. When looking at the low-resolution spectra, we found that many appeared to be potentially galactic in nature (the spectra trended up, rather than down), meaning many of the 81 candidates may not be stars.
Our lab session concluded with the algorithm partially complete. Joe went away and worked on it, but we ran into an issue when comparing the spectra to the models. We talked to David and he suggested we take the logarithm of the flux axes on both the models and the stars to make the comparisons easier. More algorithm work was needed!
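For the curious, here is a minimal sketch of what a comparison like this can look like in Python; the way the models are stored and the simple least-squares scoring in log-flux space are illustrative assumptions, not Joe’s actual algorithm.

```python
import numpy as np

def best_model(cand_wl, cand_flux, models):
    """Return the label of the model spectrum that best matches a candidate.

    `models` is assumed to be a list of (label, wavelength, flux) tuples,
    e.g. the 85 grid spectra organised by metallicity and temperature.
    """
    log_cand = np.log10(cand_flux)
    best_label, best_score = None, np.inf
    for label, wl, flux in models:
        # Interpolate the model onto the candidate's wavelength grid.
        log_model = np.log10(np.interp(cand_wl, wl, flux))
        # Allow a free vertical offset in log space (an overall flux scaling),
        # then score with a simple sum of squared residuals.
        offset = np.mean(log_cand - log_model)
        score = np.sum((log_cand - (log_model + offset)) ** 2)
        if score < best_score:
            best_label, best_score = label, score
    return best_label, best_score
```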
Welcome to the ATMs Astrophysics Group. We’re not the money kind, and certainly not the richest, but instead we ask: ‘Are They Metal-poor Stars?’
During this 10-week project we are going to study Population III stars, a hypothetical population of incredibly massive, luminous and hot stars with no metal content. We will bring you along with us throughout the process, and each week we will post our most relevant discoveries.
Last week we mentioned how, by using different filter combinations, regions could be defined and cuts could be applied to the catalogue so that we could restrict all light sources to the ones that are most metal-poor. Before this, we provided insight into how theoretical spectra of stars with different metallicities look on colour-colour diagrams, and how they seem to follow a trend. With that being said, I would like to start the week 4 blog post with a picture that demonstrates exactly what these regions look like, and how they relate to and can be obtained from the theoretical spectral predictions.
In the first plot, linear fits through the theoretical magnitudes and colours of spectra with different metallicities are shown, one trend for each metallicity ranging from 0 to -5. From these, regions are selected around the lines so as to select light sources that probably have metallicities similar to the theoretical ones used. Straight lines and linear fits were used in this process, as they appeared to correlate well with the data. The second image shows these metallicity regions applied to our catalogue, hence labelling each light source with its most probable metallicity for this choice of filter combination.
After doing this for the colours G-I and B-V, we pythoned our way into a script that modifies the original catalogue and labels each source with its metallicity. This makes a cut on the entire catalogue and selects the light sources in these regions, allowing us to continue our work only with potential metal-poor sources (stars and galaxies, for now). The next step was to further reduce the candidates by applying the J-K criterion explained in last week’s post, refining it so it would not exclude good candidates. Combining this restriction with the previous one, we narrowed down our candidates even more, and were now able to categorise them into galaxies, stars and metal-poor stars.
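As an illustration of the labelling step, here is a minimal sketch of how each source can be assigned the metallicity of the nearest fitted line; the slopes, intercepts and band half-width below are placeholders, not our fitted values.

```python
import numpy as np

# One (slope, intercept) pair per metallicity, from straight-line fits to the
# theoretical colours. The numbers here are placeholders, not our actual fits.
fits = {0.0: (1.2, -0.1), -1.0: (1.1, 0.0), -2.0: (1.0, 0.1),
        -3.0: (0.9, 0.2), -4.0: (0.8, 0.3), -5.0: (0.7, 0.4)}

def label_metallicity(x_colour, y_colour, half_width=0.15):
    """Assign each source the metallicity of the nearest fitted line,
    provided it lies within +/- half_width of that line (NaN otherwise)."""
    x_colour, y_colour = np.asarray(x_colour), np.asarray(y_colour)
    # Vertical distance of every source from every metallicity line.
    dist = np.array([np.abs(y_colour - (m * x_colour + c))
                     for m, c in fits.values()])
    labels = np.full(x_colour.shape, np.nan)
    nearest = dist.argmin(axis=0)
    in_band = dist.min(axis=0) < half_width
    labels[in_band] = np.array(list(fits.keys()))[nearest[in_band]]
    return labels
```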
All the restrictions were applied to the catalogue, selecting the metal-poor light sources and allowing the distinction between stars and galaxies. We also kept the theoretical spectra plotted on top, to check that the selection still agrees with the predictions it was derived from. Most stars are indeed located in the region we expected them to be.
After this, we also tried to rule out the white dwarfs that may be hiding among the stars. Doing this took a few days, as it required theoretical spectra to be found and calibrated, and magnitudes to be calculated and plotted over the catalogue, so as to determine which cut to perform on the data. Basing our choice of colours and filters on a paper from The Pristine Survey, we found only two potential white dwarfs and removed them from the data. This showed that the amount of work dedicated to some ideas (like this one, about three days from Alice’s side) may not have the expected payoff, but it is still a good step to take and is worth it after all.
Using the colours U-G and G-R, theoretical spectra of white dwarfs (blue squares) are plotted on top of our catalogue (red points). Based on the roughly linear trend they follow, we decided to exclude the sources in green; only two of these turned out to be potential metal-poor stars and were removed from our candidates.
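For anyone curious what an exclusion like this looks like in code, here is a minimal sketch; the input file, slope, intercept and band half-width are illustrative placeholders rather than the values we actually used.

```python
import numpy as np

# Hypothetical three-column file of U, G and R magnitudes for the candidates.
u_mag, g_mag, r_mag = np.loadtxt("candidates_ugr.txt", unpack=True)

# Straight line through the theoretical white-dwarf colours (placeholder values).
wd_slope, wd_intercept, wd_half_width = 0.6, -0.3, 0.2

u_g, g_r = u_mag - g_mag, g_mag - r_mag
near_wd_locus = np.abs(g_r - (wd_slope * u_g + wd_intercept)) < wd_half_width

kept = ~near_wd_locus        # drop anything sitting on the white-dwarf trend
print(f"Removed {near_wd_locus.sum()} potential white dwarfs")
```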
As these processes were performed, we never forgot to think about the physics behind them. While struggling to understand, for example, why M and K stars are not likely to be pristine metal-poor stars, we had a few Physics Short Lectures with David Sobral, including one about supermassive black holes! After these, we understand why each cut is performed and why some star types are excluded. We get that the more massive stars (e.g. O and B) have such short lifetimes that they could only have formed from a more recent, and therefore more metal-enriched, interstellar medium, so we would expect them to be metal-rich (not to mention the metallic products of nuclear fusion that might have been dredged up from their cores to their surfaces) – credits to Tom. After recalling the Jeans mass and star formation processes, we now know that M and K star types aren’t likely to be metal-poor either. Also, the J-K galaxy cut is physically a very smart thing to do: galaxies are redshifted, and therefore most of their spectra show high fluxes in the J and K filter range, towards the red end of the spectrum. Stars, on the other hand, are not redshifted enough to push their spectra into significant detections in these filters, and hence J and K clearly separate galaxies from stars (as shown in the figures in the last blog post). To help with visualising and understanding this, we created theoretical plots of stellar spectra in the J and K filters, after dealing with quite a handful of errors and corrections. These helped us predict that stars should be located below J-K magnitudes of 0.0, as previously seen when the entire catalogue was plotted in this colour. (Again, Tom really deserves that we set him up for a future canonisation for his miracles in this project.)
Plotting K against J-K magnitudes for stellar spectra from the ESO database, their distribution can be studied and the trends extrapolated for future analysis of real data. This helped us predict that stars should be located below J-K magnitudes of 0.0.
Reaching the end of the week, the criteria we used were approved and some numbers started to emerge. Dividing the metal-poor star candidates according to their brightness in the NB392 filter (which we now know has been named MURPHY, as in Murphy’s Law, after quite a funny story), we obtained the following numbers of candidates:
The next step is to use Hubble Space Telescope images to visualise each candidate individually and rule out sources that are not stars, such as galaxies that were not excluded by our cuts (as we did not take errors in the measurements into account). To conclude, I leave you with the beautiful images that we’ve had access to so far, which personally make all the hard work worth it at the end of the day.
We decided to make the following plot to help visualise how the filters affect the spectral types and thus how colour-colour diagrams really work, i.e. why some combinations of filters are better at showing metallicity than others. This was made by predicting the black-body curve for each spectral type using Planck’s law.
The black-body curve of each spectral type and the filters used, to better visualise how the filters affect each type of star.
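For anyone wanting to reproduce a plot like this, here is a minimal sketch using Planck’s law; the effective temperatures assigned to each spectral type are rough, indicative values, and the filter bandpasses are left out for simplicity.

```python
import numpy as np
import matplotlib.pyplot as plt

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23      # SI physical constants

def planck_lambda(wavelength_m, T):
    """Spectral radiance B_lambda(T) from Planck's law, in W sr^-1 m^-3."""
    return (2 * h * c**2 / wavelength_m**5
            / (np.exp(h * c / (wavelength_m * k_B * T)) - 1))

# Rough effective temperatures for each spectral type (approximate values only).
temps = {"O": 40000, "B": 20000, "A": 9000, "F": 7000,
         "G": 5800, "K": 4500, "M": 3200}

wl = np.linspace(100e-9, 3000e-9, 500)          # 100 nm to 3 microns
for sp_type, T in temps.items():
    plt.plot(wl * 1e9, planck_lambda(wl, T), label=f"{sp_type} ({T} K)")

plt.xlabel("Wavelength (nm)")
plt.ylabel("Spectral radiance (W sr$^{-1}$ m$^{-3}$)")
plt.yscale("log")        # the O-star curve would otherwise dwarf the M-star one
plt.legend()
plt.show()
```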
Next we downloaded more spectra from POLLUX to gain a larger range of data. The result was the following plot, which is a relief because it looks somewhat similar to the one in the proposal we were given (Figure 1, McGee_metalpoor_INT_2014A). The stars with a metallicity of 0.0 don’t really follow a trend, and we’ve excluded the points for M stars as they are huge outliers. M-type stars are so different because they are cool enough for molecules to form in their atmospheres, producing strong absorption bands that can make their spectra look more red or blue.
Our synthetic stars from the POLLUX database with set metallicities.
After this we continued making cuts on our whole catalogue to narrow down where we will find metal-poor stars. We decided to start using the B-V colour instead of G-I, as we found it showed our metallicities more easily. We created these cuts by plotting our synthetic spectra on top of our catalogue and creating a region around each metallicity. The catalogue was also cut down by keeping only sources with magnitudes brighter than 24, a good limit for selecting bright stars from the sources in our data. We made the figure below to demonstrate how we made our cuts. It first shows all of the stars in the catalogue, then just the ‘bright’ stars (magnitude < 24), and then shows our metallicity cuts.
A gif showing our entire catalogue of stars, then just the ones with a magnitude lower than 24, and then our criteria created from the synthetic spectra applied to show stars with low metallicity. All stars in red, those with magnitude < 24 in blue, and then the metallicity cuts.
Another task this week was to calculate the search volume of the data from the COSMOS survey, so that we can obtain a source density and compare it with the stellar density of the halo. Of course, our data will contain distant galaxies that can appear as stars, and these need to be filtered out. The most effective way to do this is to use the J and K filters, which fall within the infrared range and so easily sort galaxies from stars. The problem, however, is that by applying this constraint we could be eliminating some dim stars that are still within our own galaxy. Next week we’re going to look for a solution to this problem, but in the meantime the plot below shows the result of applying this constraint. In Vega magnitudes (as opposed to the AB magnitudes we have been working in), if the J-K value is less than 1 the source is considered a star, and if it is more than 1 it is considered a galaxy.
Our catalogue data plotted in J-K to separate which sources are likely to be galaxies (red) and which are stars (blue).
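As a rough illustration of the search-volume calculation, here is a minimal sketch that treats the surveyed region as a cone; the survey area (~2 square degrees) and the maximum halo distance used below are assumptions for illustration, not our final numbers.

```python
import numpy as np

area_deg2 = 2.0      # assumed effective COSMOS survey area
d_max_pc = 1.0e5     # assumed maximum halo distance probed (~100 kpc)

# Convert the survey area to a solid angle in steradians.
omega_sr = area_deg2 * (np.pi / 180.0) ** 2

# Integrating dV = r^2 dr dOmega out to d_max gives V = (Omega / 3) * d_max^3.
volume_pc3 = omega_sr / 3.0 * d_max_pc ** 3
print(f"Search volume ~ {volume_pc3:.2e} pc^3")

# Dividing the number of stellar candidates by this volume gives a source
# density that can be compared with the stellar density of the halo.
```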
Hopefully next week will bring a lot more progress, and we can refine our criteria for metal-poor stars by making more cuts to the data, as well as find a better way to distinguish between stars and galaxies in our data.
The previous two blog posts describe our work for the first two weeks of the project, from which it’s easy to see that so far we haven’t worked with the real data taken from the COSMOS survey with the Isaac Newton Telescope. At first we were very confused by this, not really understanding the point or usefulness of all the simulations and scripts we were running. Also, we knew from the beginning that the project would be split in two, theoretical and observational, but we never really knew how the two aspects were related. Two weeks later, we’ve started to realise the importance of all the work we’ve done so far, and that is the idea I’ll try to pass on with this post.
Remember our main goal for this internship? Finding metal-poor stars from a wide-field survey using a narrow-band filter, i.e. pointing a telescope at one of the biggest fields in the sky, the COSMOS field, and using a filter with a very specific and narrow wavelength range, covering the Ca H&K absorption lines, to detect metal-poor stars in the halo of our galaxy. The result of this survey is a catalogue full of numbers that, for a group of novices like us, mean almost nothing and say nothing about metal content. From a more scientific perspective, we do know these numbers represent magnitudes in several bands, taken from measurements of each light source’s spectrum. Therefore, the most logical next step is to identify the characteristics of a metal-poor star’s spectrum, what distinguishes it from a regular star’s, and use these to predict what the magnitudes of such a star would look like. In this way, we can compare the data with the predicted magnitudes in order to identify which light sources could potentially be metal-poor stars.
Here is where our work comes in: we took advantage of different spectra provided online and used our almost non-existent Python skills to recreate the steps that lead to the calculation of magnitudes in different filters and the estimation of colours. From online databases such as POLLUX, which uses MARCS atmosphere models and Turbospectrum (Plez, 2008; Alvarez & Plez, 1998), we had the chance to produce synthetic spectra of stars with whichever characteristics we pleased. Logically, we opted to choose different metallicities for all spectral types (the metallicities used were 0.0, -1.0, -2.0, -3.0, -4.0 and -5.0). After Python started to work on our side (or us on its), the steps I mentioned were performed, as explained in the previous blog posts, leaving the following plot as the final outcome.
Absolute magnitudes (at 10 pc from the Earth) of stars of several spectral types and different metallicities, plotted in a colour-colour graph: (g-i) on the x axis and (NB392-g)-(g-i) on the y axis. Graphs like this allow us to understand how magnitudes vary for different stars, especially the most metal-poor ones, and to know what colours to expect the real data to have.
Plots like this are called colour-colour or colour-magnitude diagrams, and by using certain filter colour combinations, one can differentiate several characteristics of stellar spectra. Here we had another important goal: figuring out which colour combinations make the characteristic we want to study, metallicity, stand out the most. After a few experiments with filter combinations in TOPCAT, we concluded that, at least for now, the G-I and U-B colours were the most suitable for the task. As can be seen from the graph below, each metallicity and spectral type was labelled, and a pattern in the metallicity scattering is visible: it falls almost perfectly on straight lines for low metallicities and is still well fitted, though with a larger standard deviation, for higher metallicities (i.e. the 0.0 category).
Performing a fit on the magnitudes for each metallicity allows us to distinguish the regions where each metallicity should be located in theory. Here we used TOPCAT to plot the colour (g-i) against (NB392-g)-(g-i).
Current situation: we have an idea of how to predict magnitudes and colours for different spectra and metallicities. Furthermore, after plotting the above graph for light sources at half a halo distance (200,000 pc) and obtaining an exact replica, we realised that the scatter is independent of source distance (when interstellar extinction is not considered); this makes sense because a colour is the difference of two magnitudes, so the distance modulus added to each cancels out. Here is where the link between the observational and theoretical work comes in, because now we can take these regions where metal-poor magnitudes are located and place them on top of the real data, providing a cut on the entire catalogue and restricting all light sources to the ones that are most likely to be metal-poor stars. That was our job for the 3rd week, and for a first try we obtained the following cut.
All the light sources from the catalogue are scattered in red, the metallicity fits are shown as straight lines, and the main region where these lie was intersected with the real data and plotted in blue. Here it’s visible how we were able to restrict our entire catalogue to a much smaller selection.
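As a quick numerical check of the distance-independence mentioned above, here is a minimal sketch using the standard distance modulus; the absolute magnitudes are placeholders, not values from our synthetic spectra.

```python
import numpy as np

M_g, M_i = 5.2, 4.6                        # placeholder absolute magnitudes
for d_pc in (10.0, 2.0e5):                 # 10 pc and half a halo distance
    mu = 5 * np.log10(d_pc / 10.0)         # distance modulus, same in every band
    g, i = M_g + mu, M_i + mu              # apparent magnitudes at this distance
    print(f"g - i at {d_pc:>9,.0f} pc: {g - i:.2f}")

# Both lines print 0.60: the distance modulus cancels in any colour, so the
# colour-colour diagram looks the same at any distance (ignoring extinction).
```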
The next steps are to analyse more theoretical spectra in more detail, in order to perform stronger cuts, restrict the data even further and hence make it easier to locate metal-poor stars. Just to give an idea, we started with around 123,505 light sources, and so far, with the cut shown above, we have 23,880 potential stars (we know that most of them are likely to be galaxies). If we keep studying the characteristics of metal-poor stars, we hope we’ll be able to reach a more manageable number and analyse each potential star individually.
On a personal note, progress has been slow. Each day it looks like we are one step closer, or two, but with some struggle. We’ve got the hang of most of the programs we need to handle, but sometimes we hit a wall and it looks like we’re lost. On the following day, though, we get past the problem and keep going. Again, progress is slow, but there is progress, and from seeing other people in the department we realise that maybe this is how it’s supposed to be: this is how progress in science is made, and that is actually okay. Also, it is pretty mesmerising seeing theoretical astrophysics come to life in our own hands, which makes all the struggles and difficulties worth it.
Having struggled with Python for pretty much the entire first week of the project, this second week has gone (for the most part) a lot more smoothly – our understanding of Python continues to grow, reaching a level that would make even a herpetologist jealous. That said, we are, by no means, experts in the language, but it’s safe to say we’re feeling a lot more confident than we did a week ago.
Picking up from where we left off in the last blog post, we first endeavoured to calculate the AB magnitudes of a sun-like star (i.e. a star whose spectral class is G2v) once each of the bandpass filters has been applied. AB magnitudes are a measure of a star’s apparent brightness, defined in terms of its flux density, where lower magnitudes correspond to brighter stars, because Astronomy is just weird sometimes. In order to calculate these magnitudes, we first had to take the convolved fluxes and find the average flux density through each filter, and then convert the units to get them in terms of ‘per frequency’ rather than ‘per wavelength’. These average flux densities are shown in the graph below, along with the convolved G2v spectra (shown at the bottom) for a better visualisation of what’s going on.
A graph showing the average flux densities (in so-called F_ν units, being in terms of ‘per frequency’) of a sun-like star, at the central wavelengths of each of the filters. The coloured areas at the bottom show the spectra of the star after being convolved through each of the filters (not to scale).
From these average flux densities, the AB magnitudes of the star through each filter were calculated as if the star were at a distance of 10 parsecs (generally the standard in Astronomy) and plotted against wavelength, giving the following graph:
A graph showing the AB magnitudes of a sun-like star when viewed through each of the filters at a distance of 10pc, again with the convolved spectra shown (not to scale) at the bottom.
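For reference, here is a minimal sketch of the two conversions involved: from ‘per wavelength’ to ‘per frequency’ flux density, and from flux density to an AB magnitude. The example flux value is illustrative, not taken from our G2v spectrum.

```python
import numpy as np

C_ANGSTROM_PER_S = 2.998e18      # speed of light in Angstroms per second

def f_nu_from_f_lambda(f_lambda, wavelength_angstrom):
    """Convert flux density from erg s^-1 cm^-2 A^-1 ('per wavelength')
    to erg s^-1 cm^-2 Hz^-1 ('per frequency'): f_nu = f_lambda * lambda^2 / c."""
    return f_lambda * wavelength_angstrom ** 2 / C_ANGSTROM_PER_S

def ab_magnitude(f_nu):
    """AB magnitude for a flux density f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -2.5 * np.log10(f_nu) - 48.60

# Illustrative example: a mean f_lambda of 3.6e-9 erg/s/cm^2/A through a
# filter centred on 5500 A.
m_ab = ab_magnitude(f_nu_from_f_lambda(3.6e-9, 5500.0))
print(f"AB magnitude through this filter: {m_ab:.2f}")
```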
Now that we had a means of calculating the AB magnitudes of a star for each filter, we went on to see how these magnitudes vary with distance from the star. Since the narrow-band filter (NB392) generally imposes the biggest limit on what we can observe (on account of its allowed wavelengths covering such a small range), we decided to focus on it, which resulted in this graph:
A graph showing how the AB magnitude of a sun-like star varies with distance, when the star is being viewed through the NB392 filter. Vertical lines mark the distances to various astronomical milestones, while the horizontal red line marks the highest magnitude that we can generally observe. The x-axis is logarithmic to allow for the huge range of distances being considered.
For the sake of the graph, we selected distances of: 1 astronomical unit (AU), which is the distance from the Earth to the Sun; 10 parsecs; 100 parsecs; 100,000 parsecs; and 200,000 parsecs. The limiting magnitude is approximately 25.5; above this, stars generally become too dim to detect.
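The curve in the graph above essentially follows the standard distance-modulus relation, m = M + 5·log10(d / 10 pc). Here is a minimal sketch evaluating it at the distances listed, with a placeholder NB392 absolute magnitude rather than our computed value.

```python
import numpy as np

def apparent_magnitude(M_abs, distance_pc):
    """Apparent magnitude of a star with absolute magnitude M_abs at distance_pc."""
    return M_abs + 5 * np.log10(distance_pc / 10.0)

M_NB392 = 5.0                 # placeholder absolute magnitude for a sun-like star
AU_IN_PC = 4.848e-6           # 1 astronomical unit expressed in parsecs

for label, d in [("1 AU", AU_IN_PC), ("10 pc", 10.0), ("100 pc", 1e2),
                 ("100,000 pc", 1e5), ("200,000 pc", 2e5)]:
    print(f"{label:>12}: m = {apparent_magnitude(M_NB392, d):7.2f}")

# Anything fainter than the limiting magnitude (~25.5) is too dim to detect.
```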
We then wanted to see how this relationship between AB magnitude and distance changes for each spectral type (still focusing on the NB392 filter). While we were at it, we decided to provide a small link between our project and the MUSE-VLT project by calculating the magnitudes that each spectral type would have if it were as far away as the galaxy VR7, the most distant galaxy in their data at a whopping 64,457.8 MILLION parsecs away. After doing the necessary calculations, interpolating our plots, and subsequently spending far too long on colour coordination, the resulting graph was this:
A graph showing how the magnitudes of stars of different spectral types vary with distance, when viewed through the NB392 filter.
From this graph, we can see that even the most luminous stars (best represented by the O5v line) in the VR7 galaxy would have magnitudes (when convolved with the NB392 filter) as high as ~50. This is too dim to observe even with the James Webb Space Telescope, which is expected to detect sources with AB magnitudes up to ~33.
So, we have a method of calculating the magnitude of a star when viewed from any distance through different bandpass filters, starting from the very beginning with the raw data from the star’s original spectrum. Now, if you’re anything like us then you might initially be wondering what the point of that is. The point is that some of these magnitudes can be used to determine the observational criteria which confirm whether or not a star is metal-poor, and so being able to calculate them from a star’s original absorption spectrum is an extremely useful tool.
The hunt for metal-poor stars in our home galaxy is yet to begin, but that’s okay. I’m no hunter, but I imagine most will generally feel a lot more confident if they know they have all the right tools at the ready. So while it sometimes feels like we don’t have much to show for our work so far (except a plethora of pretty plots), these last two weeks have certainly not been in vain, and we’re all looking forward to seeing what the next week will bring.
The first week for the team looking for metal-poor stars in the halo of the Milky Way has mainly been about getting to grips with the coding language Python, as the minimal coding experience we have is in another programming language. Safe to say this week has been a very steep learning curve, but we are improving our coding skills significantly.
The first step of this project was to take a more general approach and look at the spectra of a sample of stars from our own galaxy, but not from the halo. By doing this we could get to grips with analysing spectra and the coding involved. Below is the first plot we made, showing the radiative flux of several different stars from the original data (from the European Southern Observatory). (Stars are classified by spectral type: O, B, A, F, G, K, M, with O-type stars being the hottest and brightest.)
The spectra from the raw data of different types of star. Wavelength is measured in angstroms (Å, 10⁻¹⁰ m).
We then created the same plot with the typical (if slightly annoying) radiative flux units of erg/s/cm^2/Å (1 erg is equal to 10⁻⁷ joules), and used a logarithmic scale to observe the peaks of the spectra more easily (below).
The raw spectra converted to correct units and scaled logarithmically.
The next step was to download both broad- and narrow-band filter profiles, which allow us to ignore the data we don’t want to see and focus on the spectra in the visible wavelengths. A total of 8 filters were used (taken from the COSMOS project), including the narrow-band filter NB392, which will be incredibly useful later in the project when identifying metal-poor stars, as it focuses on the wavelengths surrounding the calcium lines of a spectrum. We had to normalise and interpolate the filters before we could combine them with the spectrum of a chosen star (type G2v, a Sun-like star), which effectively maps the response of each filter onto the spectrum of the star.
The spectrum of a Sun-like star (G2v) with the spectra of each filter convolved onto it.
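For a flavour of what this ‘convolution’ step looks like in code, here is a minimal sketch; the file names are hypothetical and the normalisation choice (peak transmission set to 1) is just one convention.

```python
import numpy as np

# Hypothetical two-column text files: wavelength (Angstrom) vs flux / transmission.
star_wl, star_flux = np.loadtxt("G2v_spectrum.txt", unpack=True)
filt_wl, filt_trans = np.loadtxt("NB392_profile.txt", unpack=True)

# Normalise the filter to a peak transmission of 1, then interpolate it onto
# the star's wavelength grid (zero transmission outside the profile).
filt_trans = filt_trans / filt_trans.max()
trans_on_star_grid = np.interp(star_wl, filt_wl, filt_trans, left=0.0, right=0.0)

# "Convolving" here simply means weighting the stellar spectrum by the filter
# transmission at each wavelength.
convolved_flux = star_flux * trans_on_star_grid

# Transmission-weighted mean flux density through the filter.
mean_f_lambda = np.trapz(convolved_flux, star_wl) / np.trapz(trans_on_star_grid, star_wl)
print(f"Mean flux density through NB392: {mean_f_lambda:.3e} (per Angstrom)")
```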
The filters are shaded because we then integrated (found the area under the curve of) each filter-weighted spectrum with respect to wavelength, in order to start calculating the magnitude of the star. From there we could find the average flux density and the central wavelength of each filter, and convert the flux into units of ‘per frequency’ rather than ‘per wavelength’, as it had been throughout. We did this with the star artificially placed at 10 pc, which is the distance at which absolute magnitude is defined. The flux is the energy the star radiates, and so we can use it to calculate the magnitude of the star in each filter. We used a Sun-like star so that we could test our method and code, as we know the magnitude of the Sun.
Our method worked, and next week we’ll be moving on to determining the distances of the stars from their apparent magnitudes (absolute magnitudes are defined at 10 pc, whereas apparent magnitudes are as seen from Earth). We can use this to determine how far into the halo of the Milky Way the metal-poor stars we will be looking for lie.
The first week of this internship has definitely been more about struggling our way through the basics of Python programming than about the actual stars but, while sometimes exhausting and frustrating, it’s necessary in order to be able to analyse stellar spectra and find some metal-poor stars.