Tuesday, May 5, 2009

Compressed Sensing and Wiener Filtering

Today I'm trying to expand my understanding of how we can best remove contaminant signals from the data we take with the Precision Array for Probing the Epoch of Reionization (PAPER). There is a specific problem I want to make sure we can solve for PAPER. Foregrounds to our signal, particularly synchrotron radiation, are expected to be very smooth with frequency. The idea put forth by the MWA and LOFAR groups is that by observing the same spatial harmonics at multiple frequencies, we should be able to fit and remove such smooth components, suppressing them relative to the cosmic reionization signal we are looking for. However, generating overlapping coverage of spatial harmonics as a function of frequency is expensive. My intuition is that since foregrounds do not have spatial structure that changes dramatically with frequency, we shouldn't need to sample a given spatial harmonic very finely in frequency to get the suppression we want. This would allow us to spread our antennas out a little more and measure the sky at a wider variety of spatial modes.
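To make that intuition concrete for myself, here is a toy calculation (all of the numbers--the band, spectral index, and fit order--are invented for illustration) showing that a handful of coarse frequency samples of a single mode is already enough to fit and subtract a smooth, power-law-like spectrum:

```python
import numpy as np

# Toy illustration: a synchrotron-like foreground is smooth in frequency, so a
# low-order fit at a handful of coarse frequency samples removes most of it.
# All numbers (band, spectral index, fit order) are invented for illustration.
freqs = np.linspace(0.120, 0.180, 8)           # GHz; only 8 coarse samples of this mode
foreground = 100.0 * (freqs / 0.150) ** -2.5   # smooth power law, synchrotron-like
eor = 0.01 * np.random.randn(freqs.size)       # stand-in for the spectrally rough EoR term
data = foreground + eor

# Fit a low-order polynomial in log-log space and subtract it.
coeffs = np.polyfit(np.log(freqs), np.log(data), deg=2)
smooth_model = np.exp(np.polyval(coeffs, np.log(freqs)))
residual = data - smooth_model                 # smooth component suppressed in this toy case

print(np.std(foreground), np.std(residual))
```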

In many ways, our problem is analogous to what was done with the Cosmic Microwave Background (CMB). For foreground removal in CMB work, Tegmark and Efstathiou (1996) begin with the assumption that foregrounds can be described as the product of a spatial term and a spectral (frequency) term. This allows them to construct Wiener filters that use the internal degrees of freedom of the data, together with a foreground model and a weighting factor based on the noisiness of the data, to remove that foreground. For the most part this is standard Wiener filtering, except that they have to be careful about what the filter does to their power spectrum: Wiener filtering suppresses power, so they apply a normalization factor to correct for that bias. Tegmark (1998) goes on to generalize this technique to foregrounds that vary slowly with frequency. I'm still wading through these papers, but they seem directly applicable to what we are doing, and they seem to confirm my suspicion that synchrotron emission should be well enough behaved that sparse frequency coverage of a wavemode is all we need to suppress it.
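As a sanity check on the formalism, here is a minimal sketch of the frequency-frequency Wiener filter idea applied to a single spatial mode. The covariance models below (a diagonal signal term, a rank-one smooth-spectrum foreground, white noise) and all the numbers are placeholders of my own, not the models from the paper:

```python
import numpy as np

# Toy frequency-frequency Wiener filter for one spatial mode: data d = s + f + n,
# with the filtered estimate s_hat = S (S + F + N)^{-1} d.  The covariance shapes
# below are placeholders of mine, not the Tegmark & Efstathiou (1996) models.
nfreq = 8
freqs = np.linspace(0.120, 0.180, nfreq)

S = 1e-4 * np.eye(nfreq)                  # signal: spectrally rough (toy: uncorrelated channels)
spec = (freqs / 0.150) ** -2.5            # assumed smooth foreground spectral shape
F = 1e4 * np.outer(spec, spec)            # foreground: rank one, fully correlated in frequency
N = 1e-3 * np.eye(nfreq)                  # white noise

W = S @ np.linalg.inv(S + F + N)          # Wiener filter matrix

rng = np.random.default_rng(0)
d = (np.sqrt(1e-4) * rng.standard_normal(nfreq)     # signal realization
     + 100.0 * spec                                  # smooth foreground
     + np.sqrt(1e-3) * rng.standard_normal(nfreq))   # noise
s_hat = W @ d                                         # foreground-suppressed signal estimate
```

Even with the foreground term set to zero, W would not be the identity: the filter shrinks the estimate toward zero wherever the noise is comparable to the signal, which is the power suppression their normalization factor is meant to undo.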

Another tactic I am investigating is compressed sensing, which I was alerted to in talks by Scaife and Schwardt at the SKA Imaging Workshop in Socorro this past April. The landmark paper on this principle seems to be Donoho (2006), which shows that the compressibility of a signal (its being sparse in some choice of basis) is a sufficient regularization criterion to faithfully reconstruct the signal from a small number of samples. In a way, this technique has an element of Occam's Razor in it--it tries to find a solution, in some optimal basis, that needs the fewest non-zero numbers to agree with the measured data. At least, that's my take on it without having finished the paper.
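My working mental model of what this buys you, as a toy numerical example (the sizes, sparsity level, and regularization weight are all arbitrary): recover a sparse vector from many fewer random projections than unknowns by solving an L1-regularized least-squares problem, here with plain iterative soft-thresholding:

```python
import numpy as np

# Toy compressed-sensing recovery: a k-sparse signal of length n is measured with
# m << n random projections and recovered by iterative soft-thresholding (ISTA),
# a simple solver for the L1-regularized least-squares problem.  The sizes and
# regularization weight are arbitrary illustration values.
rng = np.random.default_rng(1)
n, m, k = 256, 64, 8
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                                   # only m measurements of the n unknowns

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / (largest singular value)^2
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))           # gradient step on the quadratic data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold enforces sparsity

print(np.max(np.abs(x - x_true)))                # should be small if recovery succeeded
```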

The relevance of compressed sensing to image deconvolution is explored in Wiaux et al. (2009), and it seems to be powerful. I'm excited by this deconvolution approach because it meshes well with the intuitive approach I've been taking, which was to use wavelets and a Markov Chain Monte Carlo optimizer to find the model with the fewest components that reproduces our data to within the noise. Compressed sensing seems to be exactly this idea, but it is agnostic about the basis chosen rather than mandating one, such as wavelets. Anyway, this technique may also be relevant to our foreground-removal problem, because we might be able to use it to construct the minimal foreground model implied by our data. For synchrotron emission, whose spatial structure should vary smoothly with frequency, I envision that this could yield a maximally smooth model, letting us use sparse frequency coverage to remove the foreground emission to the extent that it is possible to do so.
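To sketch how the same machinery looks for deconvolution (this is only my own schematic, not the algorithm of Wiaux et al.), the forward operator becomes convolution with the dirty beam and the sky is assumed sparse in the pixel basis:

```python
import numpy as np

# Schematic sparse deconvolution: the forward operator is convolution with a toy
# dirty beam, and the sky is assumed sparse in the pixel basis.  This is my own
# miniature version of the idea, not the algorithm from Wiaux et al. (2009).
rng = np.random.default_rng(2)
npix = 128
sky = np.zeros(npix)
sky[rng.choice(npix, size=5, replace=False)] = rng.uniform(1.0, 3.0, size=5)  # a few point sources

psf = np.sinc((np.arange(npix) - npix // 2) / 4.0)   # toy dirty beam with sidelobes
conv = lambda v: np.convolve(v, psf, mode='same')    # symmetric beam: roughly self-adjoint
dirty = conv(sky) + 0.01 * rng.standard_normal(npix)

lam = 0.05
step = 1.0 / np.abs(np.fft.fft(psf)).max() ** 2      # rough bound on the operator norm
model = np.zeros(npix)
for _ in range(300):
    z = model - step * conv(conv(model) - dirty)     # gradient step on the residual
    model = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # keep only significant components
```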
