
I completed an online Digital Signal Processing course without doing the optional programming assignments, which I am attempting now that the course is over. Several test inputs are provided, along with the expected outputs. My problem is that for every test case my output values match the expected values only up to several decimal places; after that they diverge. For the first value, for example, the expected output is:

9.065305906994007046e+01
I get:
9.065305906729875574e+01

And for the last (257th):

1.717718363464842923e+01
I get:
1.717715652594918652e+01

The agreement at the beginning of the list is about 9 decimal places; by the end of the list it is down to about 5 places. I have tried different FFT implementations without improvement. The results look so close that I suspect there is some cumulative-error detail I am missing. This is the function I used (Python, with numpy and scipy):

    import numpy

    def scaled_fft_db(x):
        # a) Weight the input data with a 512-point Hann window.
        N = x.size
        window = numpy.hanning(N)
        xx = x * window
        # b) Compute the DFT of the windowed input, normalized by N.
        A = numpy.fft.fft(xx) / N
        # c) Keep the first 257 values (DC up to Nyquist) of the spectrum.
        XX = abs(A[:257])
        # Convert the magnitude to dB and normalize so that the
        # maximum value is 96 dB.
        X = 20 * numpy.log10(XX)
        X = X + (96 - max(X))
        return X
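
For reference, this is roughly how I quantify the mismatch against the provided reference file (a minimal sketch: the file names are the ones from my comparison mentioned in the comments, and the loadtxt/allclose calls and the tolerance are only illustrative):

    import numpy

    # Reference output shipped with the assignment and my own output file.
    expected = numpy.loadtxt('testOutput1.txt')
    actual = numpy.loadtxt('my_testOutput1.txt')

    diff = numpy.abs(actual - expected)
    print('max absolute error: %e' % diff.max())
    print('max relative error: %e' % (diff / numpy.abs(expected)).max())

    # A relative tolerance around 1e-5 passes for the two bins quoted above.
    print(numpy.allclose(actual, expected, rtol=1e-5))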

Any suggestions, please?

  • Those values are well within the useful tolerances for practical applications. I could imagine that this is just roundoff error differences between different implementations. May 17, 2016 at 5:07
  • Are you comparing to a theoretical value or to a different implementation? If a different implementation, did it use the same numerical precision? For example, did you create x using np.asarray(data, 'float32'), while the other was 'float64' or vice versa? Are you reading the input data from a file, and was it truncated to 8 digits, but calculated with the full 16 digits?
    – Neapolitan
    May 17, 2016 at 5:22
  • fft is pretty accurate. Given white noise, x = np.random.rand(512), then np.linalg.norm(np.fft.ifft(np.fft.fft(x)) - x) == 7.1510584008774975e-15. Gaussian white noise x = np.random.randn(512) is slightly worse, with norm == 2.3548886213717641e-14. (See the sketch after these comments.)
    – Neapolitan
    May 17, 2016 at 5:29
  • Thanks, Mark and Neapolitan. May 17, 2016 at 20:59
  • I called my function like this: rate, data = scipy.io.wavfile.read('./data/testinput1.wav'), then scaled_output1 = scaled_fft_db(data), then np.savetxt('my_testOutput1.txt', scaled_output1). Three sets of input signals and expected test outputs are provided, and all show the same problem described above. I am comparing 'my_testOutput1.txt' with 'testOutput1.txt', for example. May 17, 2016 at 21:19
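
A minimal sketch of the checks suggested in the comments above (the explicit cast to float64 is an assumption to test, not part of the original function; the round-trip test mirrors the numbers Neapolitan quotes):

    import numpy as np
    import scipy.io.wavfile

    # Check the precision of the input data: wav files are often stored as
    # int16, so cast explicitly to float64 before windowing (an assumption
    # to test, not something shown in the original scaled_fft_db).
    rate, data = scipy.io.wavfile.read('./data/testinput1.wav')
    print(data.dtype)
    x = np.asarray(data, dtype=np.float64)

    # FFT round-trip accuracy check from the comments: the error of a
    # forward/inverse transform of 512 samples of white noise is on the
    # order of 1e-15 to 1e-14 in the 2-norm.
    noise = np.random.rand(512)
    print(np.linalg.norm(np.fft.ifft(np.fft.fft(noise)) - noise))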
