We recently published an article on the confirmation of two large-scale filaments along the sightline of the blazar 1ES 1553+113, which correspond to Warm-Hot Intergalactic Medium (WHIM) absorption features in the X-ray and far ultraviolet. We used the WISE-SuperCOSMOS photometric catalog to map the cosmic web in the direction of the blazar, and found significant filaments at redshifts of z = 0.23 ± 0.02 and z = 0.31 ± 0.02, which roughly align with the absorption redshifts. A third X-ray absorption feature, at z = 0.133, did not have any corresponding structure in the photometric catalog.
We are putting the cosmic-web maps and code online on GitHub, and hope that people can make use of them for their own research. Keep an eye out for several projects using these data in the coming months!
A draft of our Faraday complexity paper that I posted about previously, where we use a convolutional neural network (CNN) to classify complex polarized sources, is finally finished and can be found here (faraday_ML). We used an inception model with 1-D convolutional layers to classify Faraday spectra as either simple or complex. The best network is shown below, and more details can be found in the draft.
Full Convolutional Network
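The draft has the actual architecture; as a rough illustration of the idea, here is a minimal sketch of a 1-D inception-style block in Keras. The layer counts, kernel widths, spectrum length, and two-channel (Q, U) input are my assumptions for the example, not the network from the paper.

```python
# Illustrative sketch of a 1-D "inception" classifier for Faraday spectra
# (simple vs. complex). All sizes and the (Q, U) two-channel input are
# assumptions for this example, not the paper's exact network.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def inception_block_1d(x, filters=16):
    """Parallel 1-D convolutions with different kernel widths, concatenated."""
    b1 = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    b2 = layers.Conv1D(filters, 9, padding="same", activation="relu")(x)
    b3 = layers.Conv1D(filters, 23, padding="same", activation="relu")(x)
    return layers.Concatenate()([b1, b2, b3])

# A Faraday spectrum sampled at 200 Faraday depths, 2 channels (Q and U).
inputs = layers.Input(shape=(200, 2))
x = inception_block_1d(inputs)
x = layers.MaxPooling1D(2)(x)
x = inception_block_1d(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # probability of "complex"
model = Model(inputs, outputs)

probs = model.predict(np.random.randn(4, 200, 2), verbose=0)
```

The parallel branches with different kernel widths are what make it "inception-like": narrow kernels pick up sharp, thin peaks in the Faraday spectrum, while wide kernels respond to broad or blended structure.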
Our paper on the cosmic-web has now been accepted to the Monthly Notices of the Royal Astronomical Society (MNRAS). Although we didn’t detect any significant emission from the cosmic-web, we were able to set primordial magnetic field limits comparable to the state-of-the-art CMB limits. You can find the paper on arXiv here.
I’m excited to say that the paper describing the cross-correlation of the low-frequency sky with tracers of the cosmic web has now been accepted for publication! Tessa Vernstrom did a wonderful job; these are the tightest constraints on the synchrotron cosmic web yet.
Can we teach a machine to recognize what we can’t? This may sound like an obvious yes to astronomers, because astronomers are constantly working in regimes where the signal is the same order of magnitude as the noise, and we often need to manipulate the data to extract a measurement that was not apparent to our own eyes. To someone working in machine learning, where the goal is often to teach (or train) a machine to perform simple recognition tasks that humans do effortlessly, the answer would probably be “not without a lot of data” — and they’d be right, too. What happens, however, when we don’t have enough examples of a type of object to train on, or even scarier, when the experts have a difficult time recognizing the objects even when they know they are there?
My colleagues and I are facing this problem right now. To discuss the problem, let me give a little background.
The newly constructed Australian Square Kilometre Array Pathfinder Telescope (ASKAP) will be one of the most powerful survey radio telescopes in the world, operating between 700 and 1800 MHz in both continuum (total brightness) and spectral-line (brightness as a function of frequency) modes. The telescope will conduct a dedicated polarization survey (called POSSUM), with the primary goal of detecting Faraday rotation measures (RMs) from background radio sources. The RM, which measures the integrated product of thermal electron density and magnetic field along the line of sight, can hold the key to unraveling several mysteries of cosmic magnetism. One problem we face in preparing a catalog of RMs for polarized sources is distinguishing simple “Faraday thin” sources from more complex ones — “Faraday thick” sources or those with multiple components. Below I show an example of a simple Faraday thin source, where the polarization angle of the (radio) light has been rotated by a simple cloud of material.
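For the thin-screen case the physics is compact: RM is the line-of-sight integral of electron density times parallel magnetic field, and the polarization angle rotates linearly with wavelength squared, chi(lambda^2) = chi_0 + RM * lambda^2. A toy numerical version, with entirely made-up cloud properties:

```python
# Toy illustration of Faraday rotation by a single "Faraday thin" screen.
# The factor 0.812 converts n_e [cm^-3] * B_parallel [microgauss] * dl [pc]
# into a rotation measure in rad/m^2. All numbers here are made up.
import numpy as np

n_e = 1e-3        # thermal electron density [cm^-3]  (assumed value)
B_par = 1.0       # line-of-sight magnetic field [microgauss]  (assumed value)
path_pc = 1e4     # path length through the cloud [pc]  (assumed value)
RM = 0.812 * n_e * B_par * path_pc   # [rad/m^2]

# For a thin screen the polarization angle is linear in wavelength squared:
#   chi(lambda^2) = chi_0 + RM * lambda^2
freqs = np.linspace(700e6, 1800e6, 8)   # ASKAP band [Hz]
lam2 = (2.998e8 / freqs) ** 2           # wavelength squared [m^2]
chi0 = 0.0
chi = chi0 + RM * lam2                  # rotated polarization angle [rad]

print(f"RM = {RM:.2f} rad/m^2")
```

Fitting a straight line to chi against lambda^2 recovers the RM exactly for this single-screen case — which is exactly the assumption that breaks down for complex sources.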
The problem is, it has been shown that a polarized source with two closely spaced Faraday thin components (which is therefore complex) can look like a single Faraday thin source — and, further, the RM that is measured will differ from the two individual RMs (and it might not be their average, either). I’ll describe next how I think we can approach this problem with deep convolutional neural networks.
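The two-component degeneracy is easy to see numerically. A minimal sketch, with illustrative (not fitted) source parameters: build the complex polarization P(lambda^2) = p1 exp(2i(chi1 + RM1 lambda^2)) + p2 exp(2i(chi2 + RM2 lambda^2)), run a crude RM synthesis (a direct Fourier-style sum over lambda^2), and see where the peak of the dirty Faraday spectrum lands.

```python
# Two closely spaced Faraday thin components can blend into what looks like
# a single thin source, with a measured RM that need not equal either
# component's RM (nor their average). All parameters are illustrative.
import numpy as np

freqs = np.linspace(700e6, 1800e6, 300)   # ASKAP-like band [Hz]
lam2 = (2.998e8 / freqs) ** 2             # wavelength squared [m^2]

def two_component(p1, rm1, chi1, p2, rm2, chi2):
    """Complex polarization of two Faraday thin components."""
    return (p1 * np.exp(2j * (chi1 + rm1 * lam2))
            + p2 * np.exp(2j * (chi2 + rm2 * lam2)))

P = two_component(1.0, 10.0, 0.0, 0.7, 25.0, 1.0)

# Crude RM synthesis: dirty Faraday spectrum via a direct sum over lam2,
# derotated to the mean lambda^2 as is conventional.
phi = np.linspace(-100, 100, 2001)        # trial Faraday depths [rad/m^2]
l2_0 = lam2.mean()
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - l2_0))) for p in phi])

peak_rm = phi[np.argmax(np.abs(F))]
print(f"Peak of the blended Faraday spectrum: {peak_rm:.1f} rad/m^2 "
      f"(components at 10 and 25)")
```

Because the separation of the two components (15 rad/m^2 here) is smaller than the resolution in Faraday depth set by the lambda^2 coverage of the band, the two peaks merge into one — which is precisely what makes this a hard classification problem, and a tempting one for a CNN.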
I’m excited to say that I’ll be teaching an impromptu course this spring on “Astrophysical Machine Learning”. It’s impromptu because I didn’t expect to teach it; it just worked out that a lot of students in my engineering physics course this fall got interested when I showed my work in class. Right now there are only about six students enrolled, with several more sitting in. I’m putting the course webpage online here, and the students will be sharing their work (with each other at first) on GitHub. I haven’t used GitHub for any of my work yet, so I’m excited to learn as we progress.
A side benefit for me (and the students) is that we’ll quickly be breaking into groups and working on real research problems, many of them centered around ML applications for source-finding and classification of radio sources. I’m the chair of the “Cosmic-Web” Key Science Project for the Evolutionary Map of the Universe (EMU) survey to be conducted with the ASKAP telescope. Of particular interest to me are source-finding algorithms for diffuse sources (see below), where it is often difficult to find and characterize them when there are embedded compact sources. Below is an example of a diffuse source (a simple cluster radio halo) with background point sources embedded within, taken from a simulation of what the EMU survey will be capable of. Early science for ASKAP is happening right now, so the time is right to test some of this out on real data!