Why use digital filters rather than simply manipulating signals in the frequency domain and then recovering them back into the time domain?

digital-filter, fft, filter

I'm quite a novice in signal processing and I know this question may be too broad. But I would still like to hear hints from experts.

I was taught to use the butter (Butterworth filter design, i.e. the maximally flat magnitude filter) and filtfilt (zero-phase digital filtering) functions for bandpass filtering of EEG (electroencephalogram) signals in MATLAB offline (i.e. after the recording is complete). This way you avoid the "delay" (phase distortion) that a causal digital filter inevitably introduces; in other words, you get zero-phase filtering.
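For concreteness, this is roughly what that offline workflow looks like. The sampling rate, filter order, and band edges below are just placeholders (not values from any particular recording), and eeg_raw stands for one channel of recorded data:

    % Offline zero-phase bandpass; Fs, the order and the band edges are
    % placeholders, and eeg_raw is one channel of recorded EEG data.
    Fs = 250;                                       % assumed sampling rate (Hz)
    [b, a] = butter(4, [8 30]/(Fs/2), 'bandpass');  % Butterworth bandpass, 8-30 Hz
    eeg_filtered = filtfilt(b, a, eeg_raw);         % forward-backward run -> zero phase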

Then someone asked me why we cannot simply use fft (the fast Fourier transform) to get the frequency-domain representation of the signal, set the unwanted frequency components to zero, and then use ifft (the inverse fast Fourier transform) to recover the filtered data in the time domain, for the same purpose. This manipulation in the frequency domain sounded simpler and perfectly reasonable to me, and I couldn't really explain why it isn't the usual approach.
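In code, what they proposed would look roughly like this (same placeholder sampling rate and passband as above; the mirrored band has to be kept as well so the result stays real):

    % Sketch of the fft/ifft approach: zero every bin outside the passband
    % (and outside its negative-frequency mirror), then transform back.
    Fs = 250;  N = length(eeg_raw);
    f = (0:N-1) * Fs / N;                           % frequency of each FFT bin
    keep = (f >= 8 & f <= 30) | (f >= Fs-30 & f <= Fs-8);
    X = fft(eeg_raw);
    X(~keep) = 0;                                   % discard the unwanted bins
    eeg_brickwall = real(ifft(X));                  % back to the time domain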

What are the advantages and disadvantages of using the simple fft/ifft method for bandpass filtering? Why do people prefer to use FIR or IIR digital filters?

For example, is the fft/ifft method more prone to spectral leakage or ripple than established digital filters? Does it also suffer from phase delay? Is there a way to visualize its impulse response for comparison?

Best Answer

The main reason that frequency-domain processing isn't done directly is the latency involved. To take, say, an FFT of a signal, you first have to record the entire time-domain signal, beginning to end, before you can convert it to the frequency domain. Then you can do your processing, convert the result back to the time domain, and play it. Even if the two conversions and the signal processing in between are effectively instantaneous, you don't get the first output sample until the last input sample has been recorded. But you can get "ideal" frequency-domain results if you're willing to put up with this. For example, a 3-minute song recorded at 44100 samples/second works out to roughly 8-million-point transforms, which is not a big deal on a modern CPU.
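As a quick sanity check of those numbers (the exact timing obviously depends on your machine):

    Fs = 44100;              % samples per second
    N  = 3*60*Fs;            % a 3-minute song -> 7,938,000 samples, i.e. ~8 million
    x  = randn(N, 1);        % stand-in for the recorded audio
    tic; X = fft(x); toc     % a single ~8-million-point FFT finishes in well under a second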

You might be tempted to break the time-domain signal into smaller, fixed-size blocks of data and process them individually, reducing the latency to the length of a block. However, this doesn't work because of "edge effects" — the samples at either end of a given block won't line up properly with the corresponding samples of the adjacent blocks, creating objectionable artifacts in the results.

This happens because of assumptions that are implicit in the conversion between the time domain and the frequency domain. For example, the FFT and IFFT "assume" that the data is cyclic; in other words, that identical copies of the block being processed come immediately before and after it in time. Since this is generally not true, you get the artifacts.
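You can see this directly in MATLAB. The sketch below applies the same brick-wall FFT "filter" to a test signal once as a whole and once in independent one-second blocks; the two results disagree mostly near the block boundaries, where the implicit cyclic wrap-around contaminates the samples. The signal, sampling rate, and passband are arbitrary choices for illustration:

    Fs = 250;  blockLen = Fs;  nBlocks = 8;  N = nBlocks*blockLen;
    t = (0:N-1)/Fs;
    x = sin(2*pi*20*t) + randn(1, N);               % 20 Hz tone buried in noise

    % Brick-wall 8-30 Hz "filter" applied to the whole signal at once
    f = (0:N-1)*Fs/N;
    keep = (f >= 8 & f <= 30) | (f >= Fs-30 & f <= Fs-8);
    X = fft(x);  X(~keep) = 0;
    y_whole = real(ifft(X));

    % The same operation applied to one-second blocks, each processed on its own
    y_blocks = zeros(1, N);
    for k = 1:nBlocks
        idx = (k-1)*blockLen + (1:blockLen);
        fb = (0:blockLen-1)*Fs/blockLen;
        keepb = (fb >= 8 & fb <= 30) | (fb >= Fs-30 & fb <= Fs-8);
        Xb = fft(x(idx));  Xb(~keepb) = 0;
        y_blocks(idx) = real(ifft(Xb));
    end

    % The disagreement is concentrated at the block edges
    plot(t, y_blocks - y_whole);
    xlabel('time (s)');  ylabel('block-wise minus whole-signal result');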

Time-domain processing may have its own issues, but the fact that you can control the latency and that it doesn't produce periodic artifacts makes it a clear winner in most real-time signal-processing applications.

(This is an expanded version of my previous answer.)