SAVS: Week 3

Emily successfully plotting a polar projection image using Python code

Python.

All my homies hate Python.

(Not actually – Emily and Michelle seem to have it working, so our group is saved… for now at least…)

On a more serious note, we’ve spent this week practising our Python coding so that we could create images from raw data that we have sourced. Emily and Jude have focused on this, and are compiling the information so that the rest of the team can make observations and comparisons. They also attended our Physics department’s Space and Planetary Physics discussion about Gas Giants, in the hopes that we might learn something to help us with our own gas giant. Not only did they introduce our project to some researchers but they also gained some contacts who may be able to help us with the coding and overall direction of our project.
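For anyone curious what the code behind a plot like the one in the photo above might look like, here is a minimal matplotlib sketch using mock data (the Gaussian "auroral oval" below is entirely made up, and this is not our actual pipeline):

```python
import numpy as np
import matplotlib.pyplot as plt

# Mock auroral intensity on a (local time, colatitude) grid
lt = np.linspace(0, 2 * np.pi, 360)      # local time angle; 0 = noon (12 LT)
colat = np.linspace(0, 30, 120)          # degrees away from the pole
LT, COLAT = np.meshgrid(lt, colat)
intensity = np.exp(-((COLAT - 15) / 3) ** 2) * (1 + 0.3 * np.sin(LT))

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_theta_zero_location("S")          # put 12 LT at the bottom, as in APIS images
mesh = ax.pcolormesh(LT, COLAT, intensity, shading="auto", cmap="inferno")
fig.colorbar(mesh, ax=ax, label="Intensity (arb. units)")
ax.set_title("Mock polar projection of an auroral oval")
plt.show()
```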

Meanwhile, Michelle & Diyura have started making a skeleton for our report paper so that we know what we want to include within it and in what sections. They were also able to sort and organize some journal papers that we’ve found will be useful for our project.

We’ve decided that we can potentially examine how the intensity of the aurora changes with local time, for a full rotation in each season (Summer and Winter), likely within each hemisphere. We can also compare the morphological changes over time, and any detectable variations in the magnetic field during that time period.

Leah has been able to determine the optimal time periods to focus on, based on the investigations we are aiming to carry out on the aurora. She recorded which images were clearly displaying the whole aurora, rather than a fraction.

She noticed that there is an overlap of data from Cassini and Hubble Space Telescope (HST) over Northern Summers only. The lack of Southern Summer data means that we may need to do some approximations using earlier HST Southern projections and later Cassini data on the Southern Hemisphere. The position of HST prevents it from viewing any of the Winter seasonal events on either hemisphere, so we can only use Cassini’s data for this period. With all of this in mind, we have concluded that we will focus on Cassini’s data for 2004, 2013-14, 2017 and Hubble’s data for 2008, 2013, 2016, 2017.

FUN FACT: The Cassini orbiter spacecraft carried 12 different science instruments. One of them was the Ultraviolet Imaging Spectrograph (UVIS), an optical remote sensing device acting like our eyes and ears. This allowed us to get information from remote objects without actually being in direct contact with them. It had two spectrographic channels, which observed light over wavelengths from 56 to 118 nm (extreme ultraviolet) and 110 to 190 nm (far ultraviolet). UVIS created images using these measurements.

The mission continues next week but until then “Goodbye, for now, until you read again!”

FROGDAB Week 2 – The Big Bois – 07/02/2022

Hello again friends!

April here, back again to tell you all about what the Frog Dabbers have been up to this week. We’ve got some good stuff to report, so I hope you’re all seat-belted in. First off, we’re going to look in on Fenn, who’s found some interesting methods of calculating the masses of black holes:

When the speed separates idk – Velocity Dispersion

So Fenn, tell me all about your method.

Whilst I was doing some research into methods of calculating black hole masses, I came across this very interesting method relating the velocity dispersion to the mass of a black hole.

The velocity dispersion? What’s that??

It’s the dispersion (the spread of values) of the velocities around the mean velocity for an astronomical object! In our case, it’s for the galaxy surrounding the black hole.

Interesting stuff! So how can we use that to calculate the masses of black holes?

So by using [Equation 1 or 2] we can use the values of sigma (the velocity dispersion) to calculate the mass!

Thanks Fenn! So the equations we can use are as follows:

Equation 1 – Relation between the mass of a black hole and the velocity dispersion of the galaxy around it, taken from https://iopscience.iop.org/article/10.1088/0004-637X/698/1/198/pdf
Equation 2 – Secondary Relation between the mass of a black hole and the velocity dispersion of the galaxy surrounding it, taken from https://arxiv.org/pdf/1112.1078.pdf

This is a super convenient equation because, as it happens, we have a whole column in our data set full of sigma values for the black holes. All we have to do is plug in the values and we get the masses of all the black holes with available velocity dispersion data!
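To make that concrete, here is a minimal sketch in code. The first link above is Gültekin et al. (2009), whose full-sample fit is $\log_{10}(M/M_\odot) = 8.12 + 4.24\,\log_{10}(\sigma/200\,\mathrm{km\,s^{-1}})$; the column of sigmas here is a made-up stand-in for our real data set:

```python
import numpy as np

def mass_from_sigma(sigma_kms, alpha=8.12, beta=4.24):
    """M-sigma relation: log10(M/Msun) = alpha + beta * log10(sigma / 200 km/s).

    Defaults are the Gultekin et al. (2009) coefficients (Equation 1);
    swap in the second paper's values for Equation 2.
    """
    log_mass = alpha + beta * np.log10(np.asarray(sigma_kms) / 200.0)
    return 10.0 ** log_mass  # black hole mass in solar masses

# Hypothetical column of velocity dispersions in km/s:
sigmas = np.array([120.0, 200.0, 310.0])
print(mass_from_sigma(sigmas))  # roughly [1.5e7, 1.3e8, 8.5e8] Msun
```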

Meanwhile – Luminosity Masses

Finding the masses of black holes from their luminosity in specific bands, however, is not as easy. Our first step is to actually find the different flux (and thereby luminosity) values of the spectra from the black holes. Once again, by pure coincidence, someone has already done half of the work for us:

This is the ESO Archive website (https://archive.eso.org/scienceportal/home?data_collection=LEGA-C):

It’s a very convenient tool for linking the Lega-C data we already have to actual spots in the sky. All we do is input the coordinates of a data point from our data set into the search bar at the top. ESO then finds available spectral data from near that point (usually within hundredths or even thousandths of an arcsecond), which means that we can obtain the flux values for all the data points we have information on.
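For the later coding steps, the same nearest-neighbour matching can be sketched with astropy (file and column names here are hypothetical placeholders, not the real LEGA-C files):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

# Match each of our data points to the nearest catalogued spectrum on the sky.
points = Table.read("combined_dataset.fits")        # hypothetical file
spectra = Table.read("legac_spectra_index.fits")    # hypothetical file

cat_points = SkyCoord(ra=points["RA"] * u.deg, dec=points["DEC"] * u.deg)
cat_spectra = SkyCoord(ra=spectra["RA"] * u.deg, dec=spectra["DEC"] * u.deg)

idx, sep2d, _ = cat_points.match_to_catalog_sky(cat_spectra)
points["SPEC_FILE"] = spectra["FILENAME"][idx]      # nearest spectrum per point
close = sep2d < 0.1 * u.arcsec                      # essentially coincident matches
print(f"{close.sum()} of {len(points)} points have a matching spectrum")
```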

Thanks to the excellent work of Adrien and Jacob we were able to attach the names of the spectrum data files to the coordinates of the data points. This is very useful in coding terms as it means we can easily iterate through all the data points and find the corresponding spectral data. Speaking of which:

That spectrum looks pretty cool!

Jonathan Head – 2022
(This would end up being the last time Jonathan found spectra cool)

Figure 3 shows what one of the spectra looks like: it’s a big ol’ mess of different wavelengths having different fluxes (and this is one of the spectra with a high signal-to-noise ratio; imagine what a low one looks like). It doesn’t mean much to us in its current state, as the redshift hasn’t been taken into account. As astronomical objects move closer or further away, their emitted wavelengths get either compressed or stretched respectively. This leads to the spectra we collect from our telescopes being at the wrong values compared to what’s actually being emitted. We can fix this though, through equation 3:
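In symbols (reconstructed from the description just below), equation 3 is the standard redshift correction:

$$\lambda_{\mathrm{emitted}} = \frac{\lambda_{\mathrm{observed}}}{1+z}$$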

So if we divide all of the observed wavelengths by z+1 and then zoom in on whichever peak we want to look at (in this case it’s Magnesium), we get this:

But what do we do with this information? You’ll have to tune in next week to find out!

Auf Wiedersehen ma dudes 🙂

FROGDAB Week 1 – Astronomical – 31/01/2022

When we first began our task, I do not think we understood the astronomical task in front of us. I mean, we understood the astronomy behind it, but once we opened the data files in TOPCAT, hundreds of thousands of data points stared back at us, astronomical in size. Only a supercomputer would be able to analyze all of this data in the time we had, and my laptop, with its difficulty opening the file explorer, wasn’t going to cut it. We needed a solution; conveniently, we are geniuses B).

“Space is really big”

April Dalby

“It sure is”

Atlas Patrick

There are many different ways of calculating the values of different attributes of black holes. But rather than list them all here just for you to forget, I’ll show you which equation we were using at the time and how we went about calculating the variables.

Overcomplicated – Luminosity Calculations

First up is the following equation, which calculates the mass of a black hole from its luminosity and the FWHM of a peak in its spectrum:

Broad Spectral Lines in AGNs and supermassive black hole mass measurements – Luka Č. Popović
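For context, a sketch of the general approach (not necessarily Popović’s exact calibration): single-epoch masses of this kind rest on the virial relation

$$M_{\mathrm{BH}} \approx f\,\frac{R_{\mathrm{BLR}}\,\mathrm{FWHM}^2}{G}, \qquad R_{\mathrm{BLR}} \propto L^{1/2}$$

where $f$ is a geometry factor and the broad-line-region radius $R_{\mathrm{BLR}}$ is estimated from the luminosity, which is how both the luminosity and the FWHM enter the mass.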

Okay seems easy enough so we just find those values from our datasets and calculate us some masses, easy peasy.

Hang on a second. But wh- th- there’s how many?!?!

THERE’S A HUNDRED AND NINETY FIVE THOUSAND DATA POINTS?!?!?

Our first job was clear: reduce this data set to the usable data points, clearing a much easier path towards our overall hypothesis whilst also preventing a few fires starting in the Astro Lab at the same time. There are a few ways we found to do this:

  1. Match up data from both the Lega-C and COSMOS data sets, so that we only keep points we have all the data for. This can be done by matching up the Right Ascension (RA) and Declination (DEC) coordinates from the different datasets; TOPCAT can do this with its match function and save the resulting file. This reduced our data from over 195,000 points to just over 3,700 data points. Whilst this is significantly less data, it isn’t all very reliable, so…
  2. Further cut down the data via their signal-to-noise ratio (SNR). The useful data coming from an astronomical object is the signal, whilst the not-so-useful data is the noise; the ratio of the two gives an idea of how reliable the data being analyzed is. Typically we want an SNR value greater than 1, as this means there is more signal than noise. By running a quick Python script to remove any data points with an SNR value less than 1, we can cut the data down to about 2,400 points. Again this is great, but the data could still be more useful to us…
  3. Remove any values which have no flux in the F band. For our equations we need the spectra in the X-ray range, which corresponds to Chandra’s F-band filter. The data is much more useful to us if it has a significant flux in the F band, hence we can use this to remove any data points with little to no flux. This finally cuts our data down to just 143 points, which is a much more manageable and also more reliable data set (a sketch of cuts 2 and 3 in code is below).
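As promised, a rough sketch of cuts 2 and 3 in code (cut 1 happens inside TOPCAT; the file and column names below are hypothetical placeholders):

```python
from astropy.table import Table

# Load the TOPCAT-matched Lega-C x COSMOS table (hypothetical file name)
data = Table.read("legac_cosmos_matched.fits")

# Cut 2: keep only points with more signal than noise
data = data[data["SNR"] > 1]

# Cut 3: keep only points with significant flux in Chandra's F band
data = data[data["FLUX_F"] > 0]

print(f"{len(data)} usable data points remain")
data.write("reduced_dataset.fits", overwrite=True)
```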

And so those were our tasks for the week; Python scripts take a while to write (and April and Jonathan had to learn an entirely new Python module). Conveniently, there are six people in our group, so this isn’t all we got up to…

Get outta here – Accretion Rates

Another important quality of black holes is their ability to disperse material into their environments. Whilst their strong gravity leads to most of the material being pulled in, it’s also possible for black holes to perform a gravity assist (aka gravity slingshot) and propel material in other directions, away from the black hole.

Gravity assists are very convenient things in astrophysics; they are used practically all the time in astronautics to give objects more speed and direction. Famously, one was used at Jupiter to send Voyager 1 onwards towards Saturn.

Gif of Voyager 1’s path as it slingshots past Jupiter (blue) towards Saturn (green)

The same can happen to matter around black holes: it accelerates via the black hole’s gravity and is flung back out into the surrounding environment. This is something we want to investigate as well: whether the amount of matter being flung back out has an effect on the star formation rate within the galaxy. Conveniently, there are equations we can use to calculate the accretion rates of black holes, such as this one:

Equation showing the link between Accretion rate and the bolometric luminosity, taken from https://arxiv.org/pdf/1909.11672.pdf

The bolometric luminosity is related to the luminosity in the X-ray band, which is a value we can find in our combined data sets, and as such we can calculate the accretion rates:

You’re probably thinking, “Wow guys, those correlate so well”, and you’d be right, but that’s because one is calculated from the other, with the deviations from linearity coming from the changes in redshift.
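To give a flavour of the numbers, here is a minimal sketch of this kind of accretion-rate calculation. The bolometric correction (L_bol ≈ 20 L_X) and radiative efficiency (η ≈ 0.1) are fiducial values I have assumed, not necessarily the linked paper’s exact calibration:

```python
import numpy as np

C = 2.998e10    # speed of light, cm/s
MSUN = 1.989e33 # solar mass, g
YEAR = 3.156e7  # seconds per year

def accretion_rate(l_xray, k_bol=20.0, eta=0.1):
    """Accretion rate in Msun/yr from an X-ray luminosity in erg/s.

    L_bol = k_bol * L_X, then Mdot = L_bol / (eta * c^2).
    """
    l_bol = k_bol * np.asarray(l_xray)
    mdot_g_per_s = l_bol / (eta * C**2)
    return mdot_g_per_s * YEAR / MSUN

print(accretion_rate(1e43))  # ~0.035 Msun/yr for L_X = 1e43 erg/s
```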

You can make the graph rainbow

David Sobral – 2022

So there we go, that’s what the Frog Dabbers got up to this week! Join us next week, where we’ll write some code to calculate the masses of black holes from both the H-Beta and Magnesium lines. Fenn will also drop in to tell us all about their method of calculating masses.

Farewell ’till then

April

FROG DAB: the Formation Rate Of stars within Galaxies Due to the Aftermath of Blackholes

Hello there, we’re the Frog Dabbers, and this is our blog!!

My name is April, and over the next few weeks I’ll be taking you on an adventure in physics, showing you the incredible work my team and I have done investigating one of the most interesting phenomena of the known universe … Black Holes.

“This is where the fun begins”

Anakin Skywalker
Star Wars: Episode III
Revenge of the Sith

But who are we? And why should you listen to a word we say? Well, this introductory post will hopefully inform and convince you of the answers to those questions. We are the Frog Dabbers, a group of genius physicists with only one goal in mind: to answer all the questions currently stumping physicists around the world. Or at least some of them, idk, depends how much time we have left.

From Left to Right – Fenn, Adrien, Jonathan, April, Atlas, and Jacob

Look there we are, the coolest kids in school if ever there were some, but let’s put a name to those faces:

  • Jonathan Head – Learner of the Theory, Calculator of the Maths
  • Adrien Vitart – Checker of the Errors, Aider to the Analyzer
  • Atlas Patrick – Thinker of the Knowledge, Aider to the Code
  • April Dalby – Scriber of the Code, Blogger of the Blogs
  • Jacob Hughes – Reader of the Data, Knower of the Sky
  • Fenn Leppard – Writer of Reports, Coder of the Code

Our task at hand is a lengthy one: we need to analyze the many different attributes of the black holes which lie at the centers of galaxies and answer some important questions. Do black holes have a positive, negative or no effect on star formation in galaxies? What aspect of black holes, if any, causes these effects? Just how many errors in April’s work is Adrien going to find? Only time will tell…

Conveniently, we have many tools at hand. There are hundreds of thousands of data points for black holes in both the Lega-C and Z-Cosmos data sets, and thanks to our theory experts we already have some methods laid out for us. Those black holes don’t stand a chance against the Frog Dabbers, and now our journey begins.

“These guys are the smartest people I have worked with!”

David Sobral in 10 weeks

Do stay tuned, dear reader, as my weekly updates on our research blow the limits of human knowledge sky high.

Fare thee well

Week 1 – Astronomical – https://xgalweb.wordpress.com/2022/03/01/frogdab-week-1-astronomical-31-01-2022/

Week 2 – The Big Bois – https://xgalweb.wordpress.com/2022/03/01/frogdab-week-2-the-big-bois-07-02-2022/

SIMPS: Search Into Metal-Poor Stars

Week 1

The first week was mainly getting to grips with what we plan to do with our project. Our theory lead, James, along with Paula and Joshua, hit the internet to search through the vastness of astrophysics papers in order to gain deeper understanding of what metal-poor stars are, why they form and what that implies about the early Universe. There were many papers available (some even referenced Sobral’s work!) and a lot of reading to do, but our theorists gathered the knowledge we would need to embark on our journey.

During our first lab session, Joshua and Paula got to work on our introduction for the final report (to lessen the workload later), whilst Joe, James and I looked at how WARP cut the catalogue down to their potential candidates. This allowed us to get to grips with the software we will need, whilst familiarising ourselves with the contents of the COSMOS catalogue. We decided to use the WARP conditions and worked in TOPCAT to make the cuts. The first few cuts went without a hitch and we matched the source numbers that WARP had gotten. However, we ran into some issues when working through the error conditions: we were ending up with 628 sources, whereas WARP had 338. Although this wouldn’t affect any results we would reach, it was interesting, as the difference seemed so large. After consultation with David, we concluded that there may have been a mistake on either party’s end. We reminded ourselves that the point wasn’t to get the same results as WARP, but to familiarise ourselves with the data and software. We produced an on-sky graph of the 628 sources we arrived at.

Figure 1: On-sky positions of the sources. White gaps are due to missing data and omission of very bright sources.

Joe, our code lead, then got to work coding an algorithm that would produce stellar spectra for 81 candidates (identified by LAMPSS and WARP, who concluded 23 of them were metal-poor stars), compare them to various model stellar spectra, and give the best model for each candidate. There are 85 models, organised by metallicity and temperature. When looking at the low-resolution spectra, we found that many appeared to be potentially galactic in nature (the spectra trended up, rather than down), meaning many of the 81 candidates may not be stars.

Our lab concluded with the algorithm partially complete. Joe went away and worked on it, but we ran into an issue with comparing the spectra to the models. We talked to David and he suggested we log the flux axes on both the model and the stars to make comparisons. More algorithm work was needed!
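To illustrate the idea (a sketch under assumptions, not Joe’s actual algorithm): compare the candidate to each model in log10(flux), allow a free normalisation offset, and keep the model with the smallest squared residual:

```python
import numpy as np

def best_model(candidate_flux, model_fluxes):
    """Index of the model spectrum that best matches the candidate.

    Comparison is done in log10(flux), as David suggested, so spectral
    shape matters more than overall brightness. All spectra are assumed
    to be resampled onto the same wavelength grid already.
    """
    log_cand = np.log10(candidate_flux)
    scores = []
    for model in model_fluxes:
        log_model = np.log10(model)
        offset = np.median(log_cand - log_model)  # free normalisation
        scores.append(np.sum((log_cand - log_model - offset) ** 2))
    return int(np.argmin(scores))
```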

SAVS: Week 2

Our presentation went well! We’ve received some useful feedback, and had a private meeting with our Supervisor. It has been suggested that we utilise Auroral Planetary Imaging and Spectroscopy (APIS), so we have registered for access to this data so that Emily and Jude could practice/better understand everything relating to APIS in time for our Week 3 project plans.

FUN FACT: The Cassini-Huygens mission was the first to orbit Saturn. It spent 13 of its 20 years in space exploring Saturn and its environment. On 15th September 2017, 294 orbits later, Cassini plunged into Saturn's atmosphere while transmitting new and unique data to scientists the entire time. Its "Grand Finale" consisted of 22 elliptical orbits between the rings of Saturn. Over 4,000 science papers have been written so far using the data Cassini collected, and ours will be next.

We have decided to use Cassini’s Ultraviolet Imaging Spectrograph (UVIS) data on Saturn and data collected by Hubble’s Space Telescope Imaging Spectrograph (STIS) to aid our investigation, since the seasons change very slowly on Saturn. Both Cassini and Hubble have been able to take images of Saturn’s auroras, so we will have data available to us that has been collected over a broad time period. Since Hubble first saw Saturn’s aurora in 1994, we should have enough data for a decent comparison of auroral differences across at least 3 seasons on Saturn.

Specifically, we’re looking for morphological differences such as size (by modelling our data as circles) and visual appearance (brightness, etc.). We also intend to do this quantitatively. We want to see what causes these differences and determine whether it may be influenced by temperature or some other local source.

Although both UVIS and STIS capture ultraviolet (UV) and infrared (IR) imaging, the UV data is more abundant, so this is primarily what we will be considering. The recent launch of the James Webb Space Telescope (JWST) could offer more data on the IR imaging of Saturn’s auroras; unfortunately, it has not gathered such information yet.

We are also considering acquiring direct data ourselves. If this cannot be collected from our University’s lab, then we will use a simulation to collect data of the aurora based on Saturn’s axis tilt, the solar wind speed, the position of Saturn’s moons, the intensity of weather conditions and any other relevant factors which may be discovered as we continue our research.

Polar projection images are commonly used in physics research papers. These include ‘local time’ (LT), based on the planet’s position relative to the Sun (this allows for consistent comparisons, since features fixed relative to the Sun will always appear at the same location). The direction of the Sun (12 LT) is toward the bottom of the image, while dusk (18 LT) is to the right, as seen in our diagram below. Here, we can see a polar projection image (right) which has been created using original data gathered by HST (left).

Comparison of original data from HST and a Northern polar projection image derived from it (Credit: APIS/LESIA/ESA/NASA-HST)

Within our research so far, Leah has also found a very useful image comparing HST images and solar wind conditions over a specific time period at Saturn. We will be using this concept to model our own images, using data from Cassini and Hubble, to compare how the morphology of the auroras changes with the dynamics taking place, such as changes in magnetic field strength, temperature, solar wind levels and brightness/luminosity. We may also include infrared imaging to allow for a potential investigation of temperature differences.

It may even be possible to examine different aurora regions such as the:

  1. Main Emission Ring
  2. Emission Poleward
  3. Emission Equatorward
  4. Enceladus Footprint (less likely since it is so difficult to detect)

We’ll make these decisions as our research progresses, and keep you updated on the process.

The team is logging off for now, so look out for our post next week!

We thank the APIS service at LESIA/Paris Astronomical Data Centre (Observatoire de Paris, CNRS) for providing value-added data derived from UV observations of the ESA/NASA Hubble Space Telescope.

SAVS: Week 1

Just in case you didn’t know what an aurora is: it is the result of the emission of photons due to interactions in a planet’s upper atmosphere. Variations in the plasma environment release trapped electrons, which then stream along the magnetic field lines into the upper atmosphere. There they collide with atoms and molecules, exciting them to higher energies. The atoms and molecules release this extra energy by radiating light at particular characteristic colours and wavelengths. On Earth, we frequently see a green colour, a result of the green oxygen line. On Saturn, emissions come from molecular and atomic hydrogen. From our research, we know that the photons released at Saturn are in the UV spectrum. This means they are best observed using special filters, not our naked eyes.

FUN FACT: Saturn has an axial tilt of 27°, meaning that one hemisphere will be tilted toward the sun (experiencing summer) and the other away from it (experiencing winter).
This means that Saturn has seasons. One summer on Saturn lasts more than seven Earth years!
This is why we'll be looking at the auroras' seasonal changes.

Now that you know a little more about auroras and the planet we’ll be investigating, let’s get you up to speed.

SAVS research includes sources from NASA ADS and Google Scholar

This week, we’ve focused on finding information. Each member of our team has been scouring the World Wide Web for everything that could possibly help us along our journey. We’ve looked at many sources, including Google Scholar and NASA ADS (Astrophysics Data System), so that we could devise a more structured method for research, analysis and report writing.

So far we’ve found many articles, including some based on the changing seasons on Saturn and a comparison between planets and their aurorae. We’ve also found that imaging is typically done in the UV spectrum, so this could aid us in structuring our project.

We were also able to gain access to our assigned lab room so that we can meet and continue working easily. Next week, we will be presenting our project idea to the rest of the PHYS369 group (and our module supervisors — Sarah Badman and David Sobral). We’ll be able to get some feedback on our direction, as well as answer any general questions our audience might have.

Wish us luck!

—-== Just Examining AstroNomical Systems

“JEANS? The legendary super-team? Of course I’ve heard of them. Who hasn’t?”

— Everybody, 2027

This day will forever be known as the landmark of the first ever blog post by the legendary super-team JEANS. Bards will sing of our legacy, and our names will be met with glorious fanfare, a hardly worthy testimony to the immense scientific contribution we have made for the betterment of mankind for the centuries that are so blessed to follow. My name is Fal, and welcome to our blog.

–= Members of the Criterion

I have been christened with the unflattering title of Errors and Communications Lead, and I serve a wide range of supporting roles within the team, including public outreach, coding and data processing. My direct superior is Iestyn, the esteemed Project Lead, and my colleagues are Gethin, Nathan, Anton and Nick.

Gethin is responsible for acquisitions and the technological aspect of our affairs, and hence operates as our Coding Lead.

Nathan is the Administrator, and oversees the co-ordination of meetings, and keeps stringent records of our activities.

Anton is the “Theory Lead”

And finally, Nick is the logician in our ranks, taking charge of the processes involving data handling, earning him the title of Data Lead.

–= Initiation

Galaxies are large-scale, gravitationally bound systems of stars, dust and gas that form a significant part of the universe. They are subject to a multitude of properties that characterise them, as well as provide indications of their evolutionary paths and constituents. The ties between the colour of a galaxy and its morphology have been studied since the 1930s; building upon the works of Morgan & Mayall and E. P. Hubble, astronomers such as I. Strateva have built a model describing the dependence of colour on the dominant stellar populations. It now falls upon us to answer a further question, concerning the variation in star formation rate across cosmic time.

In light of our discussions commencing in Week 11, we decided to aim towards the route of connecting morphology with star formation rate, and to observe this relation over a cosmic timescale. This, I would like to emphasize, is definitely not because almost all of us had just finished a 363 lab report on the same topic in the preceding days. It wasn’t. We all decided to peruse the blogs of former years (as you, dear reader, are likely doing too), as well as read one paper on this idea in advance of Week 12’s team meeting.

This meeting was incredibly fruitful. We emerged with an overall scope for the project, as well as discrete tasks that could be done within labs: the conclusions we wanted to draw, the hypotheses we wanted to pose. The summary of the project was decided as follows:

Hypothesis: Star formation rate increases with redshift, independent of morphology, metallicity, and density.

Study: Split into three overarching questions:

  • Morphology binaries (Spiral/elliptical, Featured/Featureless, Barred Spiral/Natural Spiral) plotted on axes of SFR and Redshift
  • Metallicity and Density extremes (Extremely low/Extremely high) plotted on axes of SFR and Redshift
  • Tidally formed barred spirals and Isolation formed barred spirals plotted on axes of SFR and Redshift

Conclusions: We expect to disprove the hypothesis by showing clear distinctions between all trend dichotomies. We aim to explain increases of SFR qualitatively by considering the processes in, for example, metal-poor/rich stars, and to extend this idea over all categories.

With this initial plan in mind, I organised a presentation, and we showed off our ideas on the Wednesday of Week 12. Our feedback told us to reduce our scope in some areas, but the principles remained unchanged. And thus, after two more meetings in which we discussed papers in detail and allocated discrete tasks for all members, we tackled the first lab on the Tuesday of Week 13.

–= Execution

“The greatest contributions to astrophysics and fashion since sliced bread.”

— Darren Jean, The Gethin Hughes Observatory, Stanford University 2033

From this point, we split into two sub-teams. The Low-RS team, comprising Gethin and Nathan, tackled low-redshift data sourced from the SDSS. Moreover, they took charge of processing Galaxy Zoo data in order to match morphology with the data points from other catalogues. Team High-RS was formed of Nick and Anton, and had the task of gleaning results in the high-redshift limit of z = 0.6 to 1. They used data from COSMOS and LEGA-C. Iestyn made sure everyone knew what to do. My role was to determine the necessary uncertainties for both teams, produce a program to filter unwanted LEGA-C rows, and then scout out the expected data results for the first task to determine if it was worth pursuing.

In the Galaxy Zoo data, we came across a problem: how to determine which weighting fraction to use. Galaxy Zoo is a project wherein the public is invited to contribute their own time to help categorise galaxies in a database. In this respect, there is plenty of scope for errors, such as different voting policies between individuals. Therefore, weightings are used in order to minimise bias and get the best representation of the correct categorisation. It fell upon us to determine the correct weightings to use, but we had no leads.

Gethin then had the idea of doing the experiment ourselves and comparing with Galaxy Zoo results, to determine the weighting that matches our views with the public opinion. In this way, by collecting data with N = 50 from all JEANS members, we can essentially choose from the public sample the weighting that our team (on average) deems to be correct. A random sample of galaxies was selected from the database, and each member sat down to classify them over the course of the lab. Nathan did the entire experiment wrong and ruined everything. Sorry, I meant: Nathan decided to represent the very likely case that a fraction of the public would misinterpret the instructions, and acted accordingly. In any case, the data we collected was wonderfully interpreted by Gethin, who then produced the accurate weighting fraction to be used in our processing.
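In code, the selection might look something like this sketch (the data structures are hypothetical stand-ins for the real Galaxy Zoo columns, not Gethin’s actual analysis):

```python
import numpy as np

def pick_weighting(team_fractions, public_fractions):
    """Choose the Galaxy Zoo weighting scheme that best matches the team.

    team_fractions: array (n_galaxies,) of the team's mean vote fraction.
    public_fractions: dict mapping scheme name -> array (n_galaxies,) of
    the public's weighted vote fraction under that scheme.
    """
    scores = {name: np.mean((np.asarray(frac) - team_fractions) ** 2)
              for name, frac in public_fractions.items()}
    return min(scores, key=scores.get)  # scheme with smallest disagreement
```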

Unfortunately, Galaxy Zoo only accounted for low redshift. Therefore, in order to progress with the plan, Team High-RS needed to use Galaxy Zoo CANDELS. Nick matched the LEGA-C and COSMOS data, but found that only 70 data points were present in both GZC and LEGA-C. We chose to use LEGA-C over COSMOS anyway because it had more documentation. 70 data points, we decided, were enough for our purposes.

C: Our first graph! We just wanted to see what trends are to be expected.

The final hours of this first lab were spent producing our first graph, at least to a preliminary level. Nick and I independently worked on graphs of SFR in featured and featureless galaxies over galactic time. Nick made the distinction between the morphologies at the cost of only having 70 data points, whereas I kept in all the data points, including those past the desired redshift range, in order to scout our future results. Together, we found that there is indeed a difference in trend between featured and featureless galaxies, and also that there is a clear SFR exponent in the high-redshift region.

Well, that’s all for this update. Thanks for sticking with us thus far and I hope I haven’t bored any of you to death. Next week’s blog post will feature an actual banner and group photo! And some astrophysics.

See you lovely fellas in the next one!

JEANS Blog – 05/FEB/22

–= Update the First

“I wish for my very good looking daughter to marry one of those lovely gentlemen from that super-team JEANS”

— An incredibly wealthy family, ASAP please

Reporting for Just Examining AstroNomical Systems, Fal here!

This week, we’re proud to report that we made great progress. A general overview goes as follows: the first result is drawing to a close, meaning that next week our loyal readers will have a final graph to behold/worship. This graph concerns the variation of morphology and mass with SFR over cosmic time. Our software framework is complete for this task, meaning that future data is expected to be simply plug & play with minimal changes. We have finally gleaned an adequate method for determining uncertainties in SFR, at the blood cost of ~8 months off poor Gethin’s lifespan. And finally, the initial pages of our report have come to sustain the tender touch of the JEANS pen; the structure is complete, and the introduction, data and methods sections are completed to a perfunctory level. We deem ourselves to be ahead of the planned schedule, with optimistic outlooks for the coming weeks.

It’s been a productive, at times hectic, week. The biggest problem for us thus far has been the SFR errors, which after so long we’ve finally come to a resolution about. Dr. Sobral helped us find a paper that outlined a calculation for SFR, which we’re going to take as gospel. (We will lose members in an accident if we have to go through SFR uncertainties any more.) We also found a processed dataset from SDSS made by the “Portsmouth Group”, who are quite usefully anonymous in citations and references. But this data set does include errors in SFR in the bands we need; therefore, for the sake of progress and our own sweet skin, we shall move on.

Nick’s code for long-range redshift has been leveraged for the Low-RS team’s use, for the sake of consistency in format. I worked with him to determine parameter limits so we could shave off the less useful data points; quite representative of this notion was how we filtered out all data points with a magnitude above 22 mag in any band that we are working with (sketched in code below). This is because fainter galaxies have greater flux uncertainties, and we don’t like that.
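That filter is a one-liner in pandas; a sketch with hypothetical band columns and file name:

```python
import pandas as pd

# Keep only rows with magnitude at most 22 in every band we use
bands = ["u_mag", "g_mag", "r_mag", "i_mag", "z_mag"]  # hypothetical names
df = pd.read_csv("legac_matched.csv")                  # hypothetical file

df = df[(df[bands] <= 22).all(axis=1)]
print(f"{len(df)} rows survive the 22 mag cut")
```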

I was responsible for starting the report in preparation for the first wave of results. To this end, I perused the works of previous years to make sure what we’re doing doesn’t stand out like a sore thumb. Then, I designed the structure of our report accordingly and spent the lab writing precursory paragraphs.

C: The fruits of labour of Team High-RS. It is apparent to see that this graph has never known love.
C: The fruits of labour of Team Low-RS. Note how love has been meticulously applied to achieve this beautiful graph.

Some issues came up in the plotting for both teams – firstly, since we were advancing science at an unprecedented rate, God himself felt threatened and decided to crash Nick’s Python. Moreover, team High-RS had three graphs with roughly 40 data points between them, quite a contrast with Team Low-RS’ corresponding single graph with 3,000 data points. We’re working on this, and will fix it before it’s time to start the second and third projects.

We’ve also begun scouting out the next tasks – Iestyn identified the next route as determining the density and overdensity relations with SFR over cosmic time. I’m on errors for the overdensity, and, on a completely unrelated note, I need counselling.

The expectation for next week is that our first and second projects are done, the report has quality writing and citations for the introduction and data sections, and Anton becomes the CEO of Cisco Telecommunications. Please look forward to our next update!

JEANS Update 1 – 11/FEB/22

–= Update the Second

Reporting for Just Examining AstroNomical Systems, Fal here!

Blog post is late? What happened to the routine Saturday updates that I’d hammered into the rituals of all JEANS apostles? Well, there’s a good reason for that: we finished the project just now, and I wanted to stretch this update to include the ending.

As predicted in the first weeks, we’d already done all the heavy lifting, and the rest of our project has been, for the most part, plug and play. The data hashing, reduction pipeline and analysis were completed this past week-and-a-half for the remaining two projects. Indeed, quite the testimony to our incredible efficiency, efficacy and variegation.

So, before I present the results, let’s review what’s happened in the macroscopic context of the project.

–= The Ascent

“Why have all the other teams taken their group photos in front of the telescope and we got ours at Greggs”

– someone in my team

Initially, we’d identified three conjectures to investigate. These were:

  • Morphology binaries (Spiral/elliptical, Featured/Featureless, Barred Spiral/Natural Spiral)
  • Metallicity and Density extremes (Extremely low/Extremely high)
  • Tidally formed barred spirals and Isolation formed barred spirals

These did not all survive the test of time. Along the way, the march of progress discriminated between their viability, their insightfulness and the amount of available documentation (screw you, COSMOS), and as a result we had to adapt as we went. Our final setup was:

  • Morphology binaries (Spiral/Barred/Elliptical, Featured/Featureless)
  • Overdensity extremes
  • Mass bins

So, let’s review our process.

For those that have forgotten (this was definitely mentioned above already, you forgot, don’t scroll up), our data was split into two samples. Sample A was a cross-pollination of SDSS and Galaxy Zoo, and Sample B was made up of CANDELS and LEGA-C.

C: Flowchart of the Criterion’s data processing protocol. Not to scale.

The way we combined the data sets was by creating a script to match by co-ordinates. Then, we worked our magic and filtered them with our amazing deity-level spells to remove any rows with high magnitudes or lots of zeros. After that, we just plotted it.

Aaaaaaaaand boom! This is where we get to this week’s progress. Four of us have had nothing to do but work on the lab report, which is incredibly boring to write about and presumably even more so to read. Gethin and Nick, the cheap labour of the group (sorry, the people most suited to the coding processes), have adapted their code to incorporate new columns and produced the graphs. Anton, the noble and almighty Theory Lead (lol), did the analysis of the graphs, and the rest of us, hungry for jobs, scrabbled to transcribe it into the body of the report.

That pretty much sums up the actual work we did. Now, as for the results…

–= The Zenith

“A David, some PCs and wireless earphones. Paraphernalia of formidable agency that carved their names in the annals of history. They held the knowledge to unlock the very heart of the universe and throw open the gates to Heaven, but still they remained in the service of mankind.”

— Lord Jean, Chronicler of the JEANS legacy, 2094

With over 200 years of combined research experience, the six members of the Criterion are proud to present our findings. Gaze and be awed.

Firstly, looking at the first task, we present the relations between the redshift and SFR in various mass bins. Check out our cool graphs!

C: Final result for first task – Low Redshift data
C: Final result for first task – High Redshift data

Remember a few weeks back, when I talked about a “scouting plot” that I made just to test out my cool filtering script? Well, surprisingly, the little bugger made it into the big leagues and now features in the final report! We even gave him a proper makeover, some polishing, and his own name: Task 0!

Well, little Task 0 is actually quite important, and serves as a kind of control case. For example, looking at our Task 1 results, we see that the Task 0 trend is observed in all of them. A closer analysis shows that over all mass bins, barred spirals are the best star-formers. There is evidence at both high and low redshifts that spiral-type or featured galaxies are, on average, better star-formers than ellipticals/smooth galaxies.

Though, it’s interesting to observe that elliptical galaxies at low redshift have far steeper gradients than the spiral types. This implies that, as we head into the higher-redshift regime, ellipticals should far outperform spirals! However, when we look at our high-redshift results, we find that this is certainly not the case – ellipticals, corresponding to “smooth” galaxies, are still outperformed by their featured counterparts.

We think that this conflict can be explained by elliptical galaxies at younger stages being far more active, then dying down into quiescent stages over time. Spirals, on the other hand, stay active for longer but are less active in their early stages. This draws an interesting parallel with the temperature-lifetime relation for stars: the hottest stars have the shortest lifetimes, and the cooler ones burn at lower luminosities for much, much longer.

For the second task, here are our results for stellar mass against SSFR at low redshift;

C: Final result for second task at redshifts 0 – 0.2

Here, we plot the mass against the specific star formation rate (SSFR). SSFR is simply defined as the SFR normalised over mass. Again, this was 100% mentioned earlier, and you forgot, don’t scroll up. What this means is that there are diminishing returns on SFR as we look at more massive galaxies. Having normalised, it also becomes apparent that there is very little distinction between the three categories, with barred spirals slightly outperforming the others over all masses. We think that a likely explanation for this is the bar of the galaxy: as it rotates, the space immediately ahead of the bar’s rotation gets compressed, and is therefore more likely to meet the Jeans Criterion. In this sense, the bar is a galaxy’s own manual star-forming mechanism, which is super cool. This may also help to explain the results in task one!

Finally, our findings for the third task, Overdensity’s relation to SFR over high-redshift bins.

C: Final results for third task at redshifts 0.6 – 1

Brave little Task 0 makes yet another appearance here, confirming there’s absolutely no relationship between overdensity and redshift in this graph. The relative skew of the lines is inconsequential, and the shifts in Y are the doing of Task 0. What we do find, though, is that lower overdensities correspond to greater star formation. This can be explained in that lower overdensities correspond to less concentration of mass within a galaxy’s bounds, and therefore a more uniform distribution. This means that there’s a lot more potential area for star formation, where space may be the limiting factor. If we consider a similar galaxy with very high overdensity at the same redshift, then we should find that there are clusters with huge amounts of star formation, while the rest of the galaxy finds it hard to ever satisfy the Jeans criterion and is therefore more or less quiescent. The net effect is that more overdensity = less star formation.

–= Epilogue

Well, that’s it from me. The next weeks are just doing writeups of all our work to this date, so I won’t bother to talk about any of it. Thanks for sticking with us. It’s been quite the ride, and I hope you’ve enjoyed reading this blog at least somewhat.

—-== The JEANS team

JEANS Update 2 – 22/FEB/22

HELP: Weeks 8-9 (The end is in sight)

Over the last couple of weeks, we have mainly focused on wrapping up loose ends: filling out data that we’d already got equations for, fixing some error parts, and some other little bits here and there. But the end is in sight! Report writing is now well underway, and we’re hoping to get it done with some days to spare for reading over and checking everything. A slight spanner has been thrown in the works: we are actually not expected to have a theory section; instead, it should be integrated into the other sections of the report. After having thought about it for a little while, Harry and I discussed it and concluded that the best approach was to continue what we were doing and then, once everything was finished, copy-paste bits and bobs from the theory into the other sections. This allows us all to continue working on our separate sections without getting in each other’s way. It does mean that some parts may need rewording at the end, but this is again part of the reason we hope to finish with enough time before the deadline.

We also created a presentation, with some plots and results and a summary of our project, to present to our peers at the end of week 18; this took up a lot of our time. It was a good opportunity to prepare for the PLACE mini-conference, which takes place later this year.

Monte Carlo method 

Other than finishing off loose ends, another aspect that we (particularly Harry) have been focused on is using the Monte Carlo method to get our final number of planets that fit within our parameters, and its error. This method fits Gaussians over all parameters within two standard deviations; after running it 100 times, an average is found, which gives a realistic estimate, with errors, of the number of planets that fit into our parameter limitations.
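A sketch of the idea in code (this glosses over the two-standard-deviation truncation and the real parameter structure, both of which are Harry’s department):

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo_count(values, errors, passes_cuts, n_runs=100):
    """Mean and spread of the habitable-planet count under parameter errors.

    values, errors: dicts of parameter arrays (hypothetical structure).
    passes_cuts: function mapping a dict of perturbed parameters to a
    boolean array of planets satisfying our habitability limits.
    """
    counts = []
    for _ in range(n_runs):
        # Each run draws every parameter from a Gaussian centred on its
        # catalogue value, with the catalogue error as standard deviation.
        perturbed = {k: rng.normal(values[k], errors[k]) for k in values}
        counts.append(int(np.sum(passes_cuts(perturbed))))
    return np.mean(counts), np.std(counts)
```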

The results found for different limits of our parameters can be seen here: 


We have spent the last few weeks writing the scientific report that explains the background and methodology of our investigation in more detail and further analyses and discusses the data above. If you are interested in reading it, it will be published in this Summer’s edition of NLUAstro, with the other PHYS 369 Group Projects from this year, so please keep an eye out for it.

HELP: Weeks 6-7 (Applying physical laws)

This post is a short summary of the last few weeks’ work.

Habitable Zone Research 

Owen did some research into calculating the habitable zone for exoplanets. He found a pair of very simple equations corresponding to the inner and outer edges of the habitable zone:

The inner edge of the HZ (r_i) is the distance where runaway greenhouse conditions vaporize the whole water reservoir and, as a second effect, induce the photodissociation of water vapor and the loss of hydrogen to space. The outer edge of the HZ (r_o) is the distance from the star where a maximum greenhouse effect fails to keep the surface of the planet above the freezing point, or the distance from the star where CO2 starts condensing.

The article Owen found these equations in advocates using the stellar flux instead of the equilibrium temperature, which cancels any dependence on albedo. This makes determining values for the stellar fluxes easier, as albedo differs only slightly between spectral types of star. The article came to the conclusion of using 0.53 and 1.1 times the solar flux at Earth for the outer and inner stellar fluxes, respectively, by using the bolometric correction. These values were clarified in [Kasting et al., 1993, cited below; Whitmire et al., 1996].
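For reference, a sketch of the form such flux-based equations usually take (whether this matches Owen’s article exactly is my assumption):

$$r_{i} = \sqrt{\frac{L/L_\odot}{S_{\mathrm{eff},i}}}\ \mathrm{AU}, \qquad r_{o} = \sqrt{\frac{L/L_\odot}{S_{\mathrm{eff},o}}}\ \mathrm{AU}$$

with the dimensionless effective fluxes $S_{\mathrm{eff},i} = 1.1$ and $S_{\mathrm{eff},o} = 0.53$ quoted above.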

We wanted to create our own formula, with different assumptions, to compare with the equations Owen had found. Amaia did some research into this and found an equation that we could manipulate:

We rearranged this equation to make the orbital radius D the subject and then subbed in temperature limits to find the inner and outer limits of the habitable zone radius. For our temperature limits we used: 

  • 647K – critical temperature of water 
  • 273K – freezing point of water 

When creating our own formula for the habitable zone, we made a few assumptions along the way. We started off by assuming the planets are black bodies, meaning the albedo a is 0 and the emissivity ε is 1. We set the ratio of the area of the planet that absorbs power, A_abs, to the area of the planet that radiates power, A_rad, to ½, which assumes a slow-rotating planet and makes sense as only about half of the planet will be facing the star at a time. We assumed a circular orbit, as we use a single orbital radius D. We also assumed no greenhouse effect and an even temperature around the planet, which is not the case but is taken as an average. A sketch of the resulting rearrangement is below.
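Under those assumptions, the energy balance and its rearrangement go roughly as follows (a sketch; setting $a = 0$ and $\epsilon = 1$ recovers our black-body case):

$$\frac{L(1-a)}{4\pi D^2}\,A_{\mathrm{abs}} = \epsilon \sigma T^4 A_{\mathrm{rad}} \quad\Rightarrow\quad D = \sqrt{\frac{L(1-a)}{8\pi \epsilon \sigma T^4}}$$

where the factor of 8 (rather than the fast-rotator 16) comes from our choice of $A_{\mathrm{abs}}/A_{\mathrm{rad}} = 1/2$. Substituting $T = 647\,\mathrm{K}$ gives the inner limit of the habitable zone and $T = 273\,\mathrm{K}$ the outer limit.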

Determining Parameters to Define our ‘Habitable Planet’ 

Amaia and Harry selected which parameters we would use as our definition of a habitable planet and have determined value ranges for each parameter.  

  • Max Gravity: (3-4)g 
  • Minimum mass: 0.3 Earth Masses  
  • Stellar classification: F, G, K
  • Temperatures associated with these spectral types: F(6000K-7600K), G(5000K-6000K), K(3500K-5000K) 
  • Habitable Zone: We will use the equation Owen found and the one we create ourselves 
  • Planet Density: >2000kg/m^3 (anything less is probably a gaseous planet) 

Kepler’s Law 

Harry found that Kepler’s law does hold for our dataset (with very circular orbits of single-star systems). Nevertheless, we couldn’t just decide to dismiss all eccentric orbits as this would eliminate a huge amount of our data.  

Amaia suggested using the circular, single-star orbits to prove that Kepler’s law fits the dataset, and then using that specific relationship on the whole dataset to fill in the values. We spoke to David about this issue, and he suggested that orbital periods longer than the time we’ve been observing exoplanets for can skew the data; this might be contributing to the issue too. Periods longer than the time we’ve been studying exoplanets will have large uncertainties and therefore give large errors in the derived distances. David also pointed out that Kepler’s Law should use the average distance, not just “a”. The distance measured as “a” for some planets may simply be the detected distance, not an actual calculation of “a”. This is more to do with the way the database is written. If there is an accurate period given, it should always give you the average distance “a”. So, we just used Kepler’s law to fill in the gaps and propagated the errors.

Applying Kepler’s law to our dataset proved to be difficult, and so David’s feedback was to not bother checking specifics with weird periods and eccentricities. We just assumed Kepler’s law works for all of them, because they are all within the same order of magnitude, so this is a reasonable assumption for astrophysics. Plus, the number of stars in the system doesn’t have much of an effect. In the end we concluded that it’s hard to prove that Kepler’s law holds for the data set we have, but we can assume it does for every star because it’s a geometric law. This of course also applies to the semi-major axis. A sketch of the gap-filling calculation is below.
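As a rough sketch (assuming the catalogue gives orbital periods in years and stellar masses in solar masses; the function and variable names are mine, and error propagation is omitted):

```python
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30  # solar mass, kg
AU = 1.496e11    # astronomical unit, m
YEAR = 3.156e7   # seconds per year

def semi_major_axis(period_yr, star_mass_msun):
    """Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2).

    Fills the gap when the catalogue has a period but no distance
    (the planet's mass is neglected against the star's).
    """
    T = np.asarray(period_yr) * YEAR
    M = np.asarray(star_mass_msun) * MSUN
    a = (G * M * T**2 / (4 * np.pi**2)) ** (1.0 / 3.0)
    return a / AU  # semi-major axis in AU

print(semi_major_axis(1.0, 1.0))  # ~1.0 AU for an Earth analogue
```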

Calculating errors for our data 

Amaia has calculated all the errors with a lengthy piece of code that contains a lot of separate functions. Amaia found the difference between the actual radius and the calculated radius (and likewise for mass) for the planets we have both values for. Using bins instead of plotting all of them, she found the average deviation for the values in each bin. Too many small bins led to empty bins, so she widened them enough that it worked. She added a linear relationship on either end of the mass-radius ranges, rather than extrapolating outside the range to fit higher and lower values, because extrapolating didn’t return sensible values. Extrapolating for the other values was, however, successful.

Multiple Star Systems 

In our dataset, we realised that a portion of these exoplanets orbit systems of more than one star. Owen did some research into multiple star systems to see whether this would affect our results.

When a planet orbits a multiple star system, it can orbit the stars in several ways. For example, in a binary star system, the planet can either orbit one star (S-type orbit) or orbit both stars (P-type orbit). This qualitative information is not available in our dataset. So before proceeding into further research, we knew that this research wouldn’t change our results, but it would be good to include in the discussion section of our report.

Multiple-star systems can perturb a planet’s orbit, precluding any chance for life as we know it to survive. But even for planets in stable orbits, these stars can produce habitable zones that change dramatically as the stars move around each other. Habitable planets can dip out of the HZ for a small amount of time, and the resilience of the planet’s habitability strongly depends on its climate inertia. Studies combining orbital dynamics with simple climate models demonstrate that the size of circumstellar habitable zones depends on a planet’s climate inertia. The higher a climate’s resilience to variations in the incident light, the higher the chances for a planet to remain in a habitable state. In systems like α Centauri, a low climate inertia shrinks the habitable zone by 50%.