FROGDAB Week 2 – The Big Bois – 07/02/2022

Hello again friends!

April here, back again to tell you all about what the Frog Dabbers have been up to this week. We’ve got some good stuff to report so I hope you’re all seat-belted in. First off we’re going to look in on Fenn, who’s found some interesting methods of calculating the masses of black holes:

When the speed separates idk – Velocity Dispersion

So Fenn, tell me all about your method.

Whilst I was doing some research into methods of calculating black hole masses I came across this very interesting method of relating the velocity dispersion to the mass of a black hole.

The velocity dispersion? What’s that??

It’s the dispersion (the spread of values) of the velocities around the mean velocity for an astronomical object! In our case it’s for the galaxy surrounding the black hole.

Interesting stuff! So how can we use that to calculate the masses of black holes?

So by plugging the values of sigma (the velocity dispersion) into [Equation 1 or 2], we can calculate the mass!

Thanks Fenn! So the equations we can use are as follows:

Equation 1 – Relation between the mass of a black hole and the velocity dispersion of the galaxy around it, taken from https://iopscience.iop.org/article/10.1088/0004-637X/698/1/198/pdf
Equation 2 – Secondary Relation between the mass of a black hole and the velocity dispersion of the galaxy surrounding it, taken from https://arxiv.org/pdf/1112.1078.pdf

This is a super convenient relation because, as it happens, we have a whole column in our data set full of sigma values for the black holes. All we have to do is plug them in and we get the masses of all the black holes with available velocity dispersion data!
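In python, “plugging in a whole column” really is just a couple of lines. As a sketch, here is the first relation linked above (the Gültekin et al. 2009 fit, log₁₀(M/M☉) = 8.12 + 4.24 log₁₀(σ/200 km/s)); if you use Equation 2 instead, only the coefficients change:

```python
import numpy as np

def msigma_mass(sigma_kms, alpha=8.12, beta=4.24):
    """Black hole mass (in solar masses) from the M-sigma relation.

    Coefficients are the Gultekin et al. (2009) fit:
        log10(M_BH / M_sun) = alpha + beta * log10(sigma / 200 km/s)
    sigma_kms can be a single value or a whole column of dispersions.
    """
    sigma_kms = np.asarray(sigma_kms, dtype=float)
    return 10.0 ** (alpha + beta * np.log10(sigma_kms / 200.0))

# At the relation's pivot point (sigma = 200 km/s) the mass is 10**8.12,
# i.e. about 130 million solar masses:
print(f"{msigma_mass(200.0):.3e}")
```

Feed it the whole sigma column from the data set and you get the whole mass column back in one go.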

Meanwhile – Luminosity Masses

Finding the masses of black holes from their luminosity in specific bands, however, is not as easy. Our first step is to find the different flux (and thereby luminosity) values of the spectra from the black holes. Once again, by pure coincidence, someone has already done half of the work for us:

This is the ESO Archive website (https://archive.eso.org/scienceportal/home?data_collection=LEGA-C):

It’s a very convenient tool for matching the Lega-C data we already have to actual spots in the sky. All we do is input the coordinates of a data point from our data set into the search bar at the top. ESO then finds available spectral data near that point (usually within hundredths or even thousandths of an arcsecond), which means we can obtain the flux values for all the data points we have information on.
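Under the hood this kind of positional matching is just an angular-separation check. A minimal sketch (the coordinate values here are made up for illustration):

```python
import math

def angular_separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions (degrees in, arcsec out).

    Uses the Vincenty formula, which stays accurate even for the tiny
    separations we care about here.
    """
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2)
        - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = (math.sin(dec1) * math.sin(dec2)
           + math.cos(dec1) * math.cos(dec2) * math.cos(dra))
    return math.degrees(math.atan2(num, den)) * 3600.0

# Two positions a hundred-thousandth of a degree apart (made-up values):
sep = angular_separation_arcsec(150.1234, 2.2345, 150.12341, 2.23451)
print(f"{sep:.4f} arcsec")  # a few hundredths of an arcsecond
```

If the separation comes out under your tolerance (say a tenth of an arcsecond), the spectrum and the data point are the same object.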

Thanks to the excellent work of Adrien and Jacob we were able to attach the names of the spectrum data files to the coordinates of the data points. This is very useful in coding terms as it means we can easily iterate through all the data points and find the corresponding spectral data. Speaking of which:

That spectra looks pretty cool!

Jonathan Head – 2022
(This would end up being the last time Jonathan found spectra cool)

Figure 3 shows what one of the spectra looks like: it’s a big ol’ mess of different wavelengths having different fluxes (and this is one of the spectra with a high signal to noise ratio, imagine what one with a low ratio looks like). It doesn’t mean much to us in its current state as the redshift hasn’t been taken into account. As astronomical objects move closer or further away, their emitted wavelengths get either compressed or stretched respectively. This means the spectra we collect with our telescopes arrive at the wrong wavelengths compared to what’s actually being emitted. We can fix this though, through equation 3:

So if we divide all of the observed wavelengths by z+1 and then zoom in on whichever peak we want to look at (in this case it’s Magnesium), we get this:
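The de-redshifting step is a one-liner in python. A quick sketch (the Mg II rest wavelength of ~2798 Ångströms is a standard value; the redshift here is just an example):

```python
import numpy as np

def to_rest_frame(wavelengths_obs, z):
    """Undo the redshift: divide observed wavelengths by (1 + z).

    This is equation 3 in action; emission peaks land back at the
    wavelengths the object actually emitted them at.
    """
    return np.asarray(wavelengths_obs, dtype=float) / (1.0 + z)

# The Mg II peak is emitted at roughly 2798 Angstroms; at a redshift of
# z = 0.8 we observe it stretched out to about 5036 Angstroms.
observed = 5036.4
print(f"{to_rest_frame(observed, 0.8):.1f}")  # back to ~2798 Angstroms
```

Run it over the whole wavelength array of a spectrum and every peak slides back to its rest-frame position at once.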

But what do we do with this information? You’ll have to tune in next week to find out!

Auf Wiedersehen ma dudes 🙂

FROGDAB Week 1 – Astronomical – 31/01/2022

When we first began our task, I do not think we understood the astronomical task in front of us. I mean, we understood the astronomy behind it, but once we opened the data files in TOPCAT, hundreds of thousands of data points stared back at us, astronomical in size. Only a supercomputer would be able to analyze all of this data in the time we had, and my laptop, which struggles just to open the file explorer, wasn’t going to cut it. We needed a solution; conveniently, we are geniuses B).

“Space is really big”

April Dalby

“It sure is”

Atlas Patrick

There are many different ways of calculating the values of different attributes of black holes. But rather than list them all here just for you to forget them, I’ll show you which equation we were using at the time and how we went about calculating the variables.

Overcomplicated – Luminosity Calculations

First is the following equation, which calculates the mass of a black hole from its luminosity and the FWHM (full width at half maximum) of a peak in its spectrum:

Broad Spectral Lines in AGNs and supermassive black hole mass measurements – Luka Č. Popović
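We won’t reproduce the paper’s exact coefficients here, but single-epoch estimators of this kind generally look like M ∝ L^b × FWHM², so the calculation sketches out like this (the coefficients a and b below are illustrative placeholders, not the paper’s values):

```python
def virial_mass(lum_1e44, fwhm_kms, a=6.91, b=0.5):
    """Single-epoch virial black hole mass (in solar masses).

    Generic shape of these estimators:
        M_BH = 10**a * (lambda*L_lambda / 1e44 erg/s)**b
                     * (FWHM / 1000 km/s)**2
    NOTE: a and b here are illustrative placeholders; the real values
    depend on which emission line is being used for the calibration.
    """
    return (10.0 ** a) * (lum_1e44 ** b) * (fwhm_kms / 1000.0) ** 2

# A source at 1e44 erg/s with a 4000 km/s broad line, for example:
print(f"{virial_mass(1.0, 4000.0):.2e}")
```

The important structural point is that the FWHM enters squared, so a broader line means a much heavier black hole for the same luminosity.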

Okay seems easy enough so we just find those values from our datasets and calculate us some masses, easy peasy.

Hang on a second. But wh- th- there’s how many?!?!

THERE’S A HUNDRED AND NINETY FIVE THOUSAND DATA POINTS?!?!?

Our first job was clear: reduce this data set to the usable data points, creating a much easier path towards testing our overall hypothesis (whilst also preventing a few fires from starting in the Astro Lab). We found a few ways to do this:

  1. Match up data from both the Lega-C and COSMOS data sets, so that we only keep points we have all the data for. This can be done by matching up the Right Ascension (RA) and Declination (DEC) coordinates from the different data sets; TOPCAT can do this with its match function, and save the resulting file. This reduced our data from over 195,000 points to just over 3,700. Whilst this is significantly less data, it still isn’t very reliable so…
  2. Further cut down the data via their signal to noise ratio (SNR). The useful data coming from an astronomical object is the signal, whilst the not so useful data is the noise; the ratio of the two gives an idea of how reliable the data being analyzed is. Typically we want an SNR greater than 1, as this means there is more signal than noise. By running a quick python script to remove any data points with an SNR less than 1, we can cut the data down to about 2,400 points. Again this is great, but the data could still be more useful to us…
  3. Remove any values which have no flux in the F band. For our equations we need the spectra in the X-ray range, which corresponds to Chandra’s F band filter. The data is much more useful to us if it has a significant flux in the F band, so we can use this cut to remove any data points with little to no flux. This finally brings our data down to just 143 points, a much more manageable and reliable data set.
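Steps 2 and 3 boil down to two boolean filters on the table. A minimal sketch with pandas (the column names and values here are placeholders, not the real ones from our data set):

```python
import pandas as pd

# Toy stand-in for the matched Lega-C x COSMOS table; the real column
# names will differ, these are placeholders for illustration.
data = pd.DataFrame({
    "snr":         [0.4, 2.5, 8.1, 1.7],
    "flux_f_band": [0.0, 3.2e-15, 0.0, 5.6e-15],
})

# Step 2: keep only rows with more signal than noise (SNR > 1)...
data = data[data["snr"] > 1]

# ...and step 3: keep only rows with a measurable F band (X-ray) flux.
data = data[data["flux_f_band"] > 0]

print(len(data))  # 2 of the 4 toy rows survive both cuts
```

Each filter is a single line, which is why a “quick python script” really is quick once the matched table exists.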

And so those were our tasks for the week; python scripts take a while to write (and April and Jonathan had to learn an entire new module of python). Conveniently, however, there are 6 people in our group, so this isn’t all we got up to…

Get outta here – Accretion Rates

Another important quality of black holes is their ability to disperse material in their environments. Whilst their immense gravity leads to most of the material being pulled in, it’s also possible for material to perform a gravity assist (aka gravity slingshot) around a black hole and be propelled away in other directions.

Gravity assists are very convenient things in astrophysics; they are used practically all the time in astronautics to give objects more speed and direction. Famously, one was used to slingshot Voyager 1 past Jupiter towards Saturn.

Gif of Voyager 1’s path, as it slingshots past Jupiter (blue) towards Saturn (green)

The same can happen to matter around black holes: it accelerates via the black hole’s gravity and is flung back out into the surrounding environment. This is something we want to investigate as well: whether the amount of matter being flung back out has an effect on the star formation rate within the galaxy. Conveniently there are equations we can use to calculate the accretion rates of black holes, such as this one:

Equation showing the link between Accretion rate and the bolometric luminosity, taken from https://arxiv.org/pdf/1909.11672.pdf

The bolometric luminosity is related to the luminosity in the X-ray band, which is a value we can find in our combined data sets, and as such we can calculate the accretion rates:
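We won’t reproduce the paper’s exact relation here, but accretion-rate formulas of this kind usually take the shape Ṁ = L_bol / (η c²), with a radiative efficiency η ~ 0.1 and L_bol obtained by scaling the X-ray luminosity with a bolometric correction factor. A sketch with illustrative numbers (the correction factor k_bol below is an assumption, not the paper’s value):

```python
# Accretion rate from the bolometric luminosity, M_dot = L_bol / (eta c^2),
# assuming a standard radiative efficiency eta ~ 0.1. The bolometric
# correction factor k_bol applied to the X-ray luminosity is illustrative.
C = 2.998e10          # speed of light, cm/s
M_SUN = 1.989e33      # solar mass, g
SEC_PER_YR = 3.156e7  # seconds in a year

def accretion_rate(l_xray_erg_s, k_bol=20.0, eta=0.1):
    """Accretion rate in solar masses per year from an X-ray luminosity."""
    l_bol = k_bol * l_xray_erg_s          # bolometric correction
    mdot_g_s = l_bol / (eta * C**2)       # grams per second
    return mdot_g_s * SEC_PER_YR / M_SUN  # solar masses per year

print(f"{accretion_rate(1e44):.3f}")  # a few tenths of a solar mass per year
```

Since the input is just the X-ray luminosity column from the combined data set, one pass through this function gives the accretion rate for every remaining data point.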

You’re probably thinking: “Wow guys, those correlate so well”, and you’re right, but that’s because one is calculated from the other, with the deviation from linearity coming from the changes in redshift.

You can make the graph rainbow

David Sobral – 2022

So there we go, that’s what the Frog Dabbers got up to this week! Join us next week, where we’ll write some code to calculate the masses of black holes from both the H-Beta and Magnesium lines. Also, Fenn will drop in to tell us all about their method of calculating masses.

Farewell ’till then

April

FROG DAB: the Formation Rate Of stars within Galaxies Due to the Aftermath of Blackholes

Hello there, we’re the Frog Dabbers, and this is our blog!!

My name is April, and over the next few weeks I’ll be taking you on an adventure in physics, showing you the incredible work my team and I have done investigating one of the most interesting phenomena in the known universe … Black Holes.

“This is where the fun begins”

Anakin Skywalker
Star Wars: Episode III
Revenge of the Sith

But who are we? And why should you listen to a word we say? Well, this introductory post will hopefully inform and convince you of the answers to those questions. We are the Frog Dabbers, a group of genius physicists with only one goal in mind: to answer all the questions currently stumping physicists around the world. Or at least some of them, idk, depends how much time we have left.

From Left to Right – Fenn, Adrien, Jonathan, April, Atlas, and Jacob

Look there we are, the coolest kids in school if ever there were some, but let’s put a name to those faces:

  • Jonathan Head – Learner of the Theory, Calculator of the Maths
  • Adrien Vitart – Checker of the Errors, Aider to the Analyzer
  • Atlas Patrick – Thinker of the Knowledge, Aider to the Code
  • April Dalby – Scriber of the Code, Blogger of the Blogs
  • Jacob Hughes – Reader of the Data, Knower of the Sky
  • Fenn Leppard – Writer of Reports, Coder of the Code

Our task at hand is a lengthy one: we need to analyze the many different attributes of the black holes which lie at the centers of galaxies and answer some important questions. Do black holes have a positive, negative or no effect on star formation in galaxies? What aspect of black holes, if any, causes these effects? Just how many errors in April’s work is Adrien going to find? Only time will tell…

Conveniently we have many tools at hand. There are hundreds of thousands of data points for black holes in both the Lega-C and Z-Cosmos data sets and thanks to our theory experts we already have some methods laid out for us. Those Black Holes don’t stand a chance against the Frog Dabbers, and now our journey begins.

“These guys are the smartest people I have worked with!”

David Sobral in 10 weeks

Do stay tuned, dear reader, as my weekly updates on our research blow the limits of human knowledge sky high.

Fare thee well

Week 1 – Astronomical – https://xgalweb.wordpress.com/2022/03/01/frogdab-week-1-astronomical-31-01-2022/

Week 2 – The Big Bois – https://xgalweb.wordpress.com/2022/03/01/frogdab-week-2-the-big-bois-07-02-2022/