
Signals and Systems

Problem Set: Power Spectral Density and White Noise

Problem Set

For all questions, the time signals x[n] are real-valued and wide-sense stationary.

    Problem 1

Show that the auto-correlation function is even, that is,

R_{xx}[k] = R_{xx}[-k] .

    Problem 2

Show that the power spectral density function is even, that is,

S_{xx}(\omega) = S_{xx}(-\omega) .

    Problem 3

    Show that the power spectral density is real.

    Problem 4

Show that the power spectral density is nonnegative, that is,

S_{xx}(\omega) \geq 0 .

    Note: This is not an easy question!

    Problem 5

Let

y[n] = \frac{x[n] + x[n-1]}{2} ,

where x[n] is white noise. Calculate the auto-correlation function R_{yy}[k] and the power spectral density S_{yy}(\omega).

    Problem 6

Let

y[n] = x_1[n] + x_2[n] ,

where x_1[n] and x_2[n] are independent and zero mean, implying that

E(x_1[n_1]\, x_2[n_2]) = 0 \quad \forall\, n_1, n_2 .

Show that

S_{yy}(\omega) = S_{x_1 x_1}(\omega) + S_{x_2 x_2}(\omega) .


Problem 7

Show that the cross-correlation function satisfies

R_{xy}[k] = R_{yx}[-k]

and that

S_{xy}(\omega) = S_{yx}(-\omega) .

    Problem 8

Come up with a simple example where the power spectral density S_{xy}(\omega) is complex.

    Problem 9

Using Matlab, calculate the auto-correlation function and the power spectral density of the signal

x[n] = \sin\!\left(\frac{10\pi n}{N}\right) + \mathcal{N}(0,1) ,

where N = 1024 is the number of elements of x, and \mathcal{N}(0,1) is Gaussian noise with mean zero and variance one.

Note: When numerically calculating the auto-correlation function of an N-sample signal, we can use the following definition:

\frac{1}{N} \sum_{n=0}^{N-1} x[n]\, x[n-k] ,

which assumes that x[n] is periodic. An example of code that performs this summation appears in the file white noise.m, available on the course website. When calculating the power spectral density, use the Discrete Fourier Transform (i.e. the Matlab command fft).

Plot the signal, its auto-correlation, and its power spectral density. Confirm that the power spectral density is real and positive. (Hint: numerical errors may result in small imaginary values, so you may wish to use the command real.) Also calculate the signal's DFT and compare the square of the absolute value of the Fourier coefficients with the power spectral density. By what factor do they differ? Can you explain this?


Sample Solutions

    Problem 1 (Solution)

The auto-correlation function is defined as

R_{xx}[k] = E(x[n]\, x[n-k]) ,

which gives us

R_{xx}[-k] = E(x[n]\, x[n+k]) .

Substituting m = n+k we get

R_{xx}[-k] = E(x[m-k]\, x[m]) ,

which is exactly R_{xx}[k].
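
As a quick numerical illustration (not part of the original solution), the circular auto-correlation estimate introduced in Problem 9 can be used to check this evenness on a white-noise realization; the variable names and the choice of N below are arbitrary:

>> N = 1024; x = randn(N,1);                         % a white-noise realization
>> xx = [x; x];                                      % duplicated copy for circular indexing
>> r = zeros(N,1);
>> for k = 1:N, r(k) = sum(x .* xx(k:N+k-1))/N; end  % circular estimate of Rxx at lags 0..N-1
>> max(abs(r(2:N) - r(N:-1:2)))                      % lag k vs. lag -k (= N-k): round-off only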

    Problem 2 (Solution)

The power spectral density is the Fourier Transform of the auto-correlation function:

S_{xx}(\omega) = \sum_{k=-\infty}^{+\infty} R_{xx}[k]\, e^{-j\omega k} ,

which gives us

S_{xx}(-\omega) = \sum_{k=-\infty}^{+\infty} R_{xx}[k]\, e^{+j\omega k} .

Substituting l = -k we get

S_{xx}(-\omega) = \sum_{l=-\infty}^{+\infty} R_{xx}[-l]\, e^{-j\omega l} .

Using the result of Problem 1 (R_{xx}[k] = R_{xx}[-k]), this is identical to

S_{xx}(-\omega) = \sum_{l=-\infty}^{+\infty} R_{xx}[l]\, e^{-j\omega l} ,

which is S_{xx}(\omega).

    Problem 3 (Solution)

We can rewrite the power spectral density as

S_{xx}(\omega) = \sum_{k=-\infty}^{-1} R_{xx}[k]\, e^{-j\omega k} + R_{xx}[0] + \sum_{k=1}^{\infty} R_{xx}[k]\, e^{-j\omega k} .

Again using the result that R_{xx}[k] = R_{xx}[-k], this is identical to

S_{xx}(\omega) = R_{xx}[0] + \sum_{k=1}^{\infty} R_{xx}[k] \left( e^{-j\omega k} + e^{+j\omega k} \right) ,

where the complex-conjugate pair can be rewritten as

S_{xx}(\omega) = R_{xx}[0] + \sum_{k=1}^{\infty} R_{xx}[k]\, (2 \cos(\omega k)) .

Since x[n] is real-valued, every R_{xx}[k] is real, and hence S_{xx}(\omega) is real.
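
Continuing the numerical sketch from Problem 1 (again, not part of the original solutions, and reusing N and r from that sketch), the DFT of the circular auto-correlation estimate should come out real and even, matching Problems 2 and 3:

>> s = fft(r);                                % PSD estimate at the N DFT frequencies
>> max(abs(imag(s)))                          % Problem 3: only numerical round-off expected
>> max(abs(real(s(2:N)) - real(s(N:-1:2))))   % Problem 2: S(w) = S(-w), up to round-off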


Problem 4 (Solution)

First proof

We may consider the auto-correlation as an infinite sum,

R_{xx}[k] = E(x[n]\, x[n-k]) = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x[n]\, x[n-k] .

Using the definition of the power spectral density we may then write

S_{xx}(\omega) = \sum_{k=-\infty}^{\infty} R_{xx}[k]\, e^{-j\omega k}
             = \sum_{k=-\infty}^{\infty} \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x[n]\, x[n-k]\, e^{-j\omega k}
             = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x[n] \sum_{k=-\infty}^{\infty} x[n-k]\, e^{-j\omega k}
             = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x[n] \sum_{m=-\infty}^{\infty} x[m]\, e^{-j\omega (n-m)}
             = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x[n]\, e^{-j\omega n} \sum_{m=-\infty}^{\infty} x[m]\, e^{j\omega m} .

Now note that, for every N \geq 0,

\frac{1}{2N+1} > 0 .

Using this, and the fact that x[n] is a real-valued sequence, we find that

\operatorname{sgn}(S_{xx}(\omega)) = \operatorname{sgn}\!\left( \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n} \sum_{m=-\infty}^{\infty} x[m]\, e^{j\omega m} \right)
                                   = \operatorname{sgn}(X(\omega)\, X(-\omega))
                                   = \operatorname{sgn}(X(\omega)\, X^{*}(\omega))
                                   = \operatorname{sgn}\!\left(|X(\omega)|^{2}\right) ,

which is never negative, and hence S_{xx}(\omega) \geq 0.
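
For a finite record this argument reduces to the familiar fact that a squared magnitude cannot be negative; a two-line check (not part of the original solution; N and the noise realization are arbitrary):

>> N = 1024; x = randn(N,1);
>> P = abs(fft(x)).^2 / N;   % finite-length analogue of |X(w)|^2 / (2N+1)
>> min(P)                    % nonnegative by construction, as the proof requires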

    Second proof

Assume that the signal x[n] is the input signal into a system. Now suppose our system is an ideal bandpass filter H(\omega) of the following form:

[Figure: magnitude response |H(\omega)| of an ideal bandpass filter, equal to 1 on a band of width \Delta\omega centred at \omega_0 (and its mirror at -\omega_0) and 0 elsewhere.]

When applying a linear time-invariant system to our input signal, the power spectral density of the output is

S_{yy}(\omega) = |H(\omega)|^{2}\, S_{xx}(\omega) .

We now apply the inverse Fourier Transform to calculate the auto-correlation R_{yy}[0] of the output signal of the bandpass filter:

R_{yy}[0] = \frac{1}{2\pi} \int_{-\pi}^{\pi} |H(\omega)|^{2}\, S_{xx}(\omega)\, d\omega
          = \frac{1}{\pi} \int_{\omega_0 - \Delta\omega/2}^{\omega_0 + \Delta\omega/2} S_{xx}(\omega)\, d\omega ,

where the factor 1/\pi rather than 1/2\pi accounts for the mirrored passband at -\omega_0, using the evenness of S_{xx}(\omega). Since the output signal is real, R_{yy}[0] = E(y^{2}[n]) must be nonnegative. If we let \Delta\omega become small, the integral is approximately

R_{yy}[0] \approx \frac{\Delta\omega}{\pi}\, S_{xx}(\omega_0) .

For R_{yy}[0] to be nonnegative, S_{xx}(\omega_0) must be nonnegative. Since \omega_0 was arbitrarily chosen, this must hold for all \omega. Therefore,

S_{xx}(\omega) \geq 0 .

    Problem 5 (Solution)

The auto-correlation function is

R_{yy}[k] = E(y[n]\, y[n-k])
          = E\!\left( \frac{x[n] + x[n-1]}{2} \cdot \frac{x[n-k] + x[n-k-1]}{2} \right)
          = \frac{1}{4}\, E\big( x[n]x[n-k] + x[n]x[n-k-1] + x[n-1]x[n-k] + x[n-1]x[n-k-1] \big) .

Since these expectations depend only on the difference of the indices (x[n] is wide-sense stationary), this can be rewritten as

R_{yy}[k] = \frac{1}{4}\big( 2E(x[n]x[n-k]) + E(x[n]x[n-k-1]) + E(x[n]x[n-k+1]) \big) ,

which, assuming x to be white noise with unit variance (R_{xx}[k] = \delta[k]), gives

R_{yy}[k] = \begin{cases} 0 & \text{for } |k| \geq 2 , \\ 1/4 & \text{for } |k| = 1 , \\ 1/2 & \text{for } k = 0 . \end{cases}


The Fourier Transform is then easy to apply:

S_{yy}(\omega) = \sum_{k=-\infty}^{+\infty} R_{yy}[k]\, e^{-j\omega k}
              = \frac{1}{4} e^{j\omega} + \frac{1}{2} + \frac{1}{4} e^{-j\omega}
              = \frac{1}{2} + \frac{1}{2} \cos(\omega) .
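
A sample-average check of these values (not in the original solution; the record length of 10^6 samples and the zero initial sample are arbitrary choices, and the averages deviate from the exact values on the order of 1/sqrt(N)):

>> N = 1e6; x = randn(N,1);              % unit-variance white noise
>> y = (x + [0; x(1:N-1)]) / 2;          % y[n] = (x[n] + x[n-1])/2, taking x[0] = 0
>> mean(y .* y)                          % approximately Ryy[0] = 1/2
>> mean(y(2:N) .* y(1:N-1))              % approximately Ryy[1] = 1/4
>> mean(y(3:N) .* y(1:N-2))              % approximately Ryy[2] = 0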

    Problem 6 (Solution)

The auto-correlation function is

R_{yy}[k] = E(y[n]\, y[n-k])
          = E\big( (x_1[n] + x_2[n])(x_1[n-k] + x_2[n-k]) \big)
          = E\big( x_1[n]x_1[n-k] + x_1[n]x_2[n-k] + x_2[n]x_1[n-k] + x_2[n]x_2[n-k] \big) .

Because x_1 and x_2 are independent and zero mean, the two middle terms are zero and we can rewrite the correlation function as

R_{yy}[k] = E(x_1[n]x_1[n-k]) + E(x_2[n]x_2[n-k]) = R_{x_1 x_1}[k] + R_{x_2 x_2}[k] .

Due to the linearity of the Fourier Transform,

S_{yy}(\omega) = S_{x_1 x_1}(\omega) + S_{x_2 x_2}(\omega) .
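
A numerical illustration (not in the original solution): for one long realization the sample auto-correlations of two independently generated signals add, up to cross terms of order 1/sqrt(N); the filtered-noise choice for x2 is arbitrary. Taking the DFT then gives the additivity of the spectral densities by linearity.

>> N = 1e6;
>> x1 = randn(N,1);                              % zero-mean white noise
>> x2 = filter([1 1]/2, 1, randn(N,1));          % an independent, zero-mean second signal
>> y  = x1 + x2;
>> mean(y .* y) - (mean(x1.*x1) + mean(x2.*x2))  % sample Ryy[0] - (Rx1x1[0] + Rx2x2[0]): small
>> mean(y(2:N).*y(1:N-1)) - (mean(x1(2:N).*x1(1:N-1)) + mean(x2(2:N).*x2(1:N-1)))  % lag 1: small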

    Problem 7 (Solution)

We write down the definition of the cross-correlation, use the fact that reindexing does not change the expectation, and see that this immediately produces the desired result:

R_{xy}[k] = E(x[n]\, y[n-k]) = E(x[n+k]\, y[n]) = R_{yx}[-k] .

We use this result to prove the relationship between the cross-power spectral densities:

S_{xy}(\omega) = \sum_{k=-\infty}^{\infty} R_{xy}[k]\, e^{-j\omega k}
              = \sum_{k=-\infty}^{\infty} R_{yx}[-k]\, e^{-j\omega k}
              = \sum_{m=-\infty}^{\infty} R_{yx}[m]\, e^{j\omega m}
              = S_{yx}(-\omega) .
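
The frequency-domain relation can also be checked with cross-periodograms built from the DFT (not in the original solution; the filtered-noise choice for y and the lengths are arbitrary, and the estimator below is the circular-lag analogue of the definitions used here):

>> N = 4096; x = randn(N,1);
>> y = filter([1 0.5], 1, x);            % some jointly stationary partner signal
>> Sxy = fft(x) .* conj(fft(y)) / N;     % cross-spectral estimate matching Rxy[k] = E(x[n]y[n-k])
>> Syx = fft(y) .* conj(fft(x)) / N;
>> max(abs(Sxy(2:N) - Syx(N:-1:2)))      % Sxy(w) vs. Syx(-w): round-off only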


Problem 8 (Solution)

Assuming real signals x and y, R_{xy}[k] is always real. Therefore, signals resulting in a complex power spectral density must fulfill

R_{xy}[k] \neq R_{xy}[-k] ,

meaning that the decomposition used in Problem 3 does not eliminate the complex components. A simple example of this can be constructed assuming x[n] to be white noise with unit variance and

y[n] = x[n-1] ,

resulting in

R_{xy}[1] = E(x[n]\, x[n-2]) = 0 ,
R_{xy}[-1] = E(x[n]\, x[n]) = 1 .

It is easy to show that R_{xy}[k] is zero for all k \neq -1. The power spectral density is then

S_{xy}(\omega) = \sum_{k=-\infty}^{+\infty} R_{xy}[k]\, e^{-j\omega k} = 1 \cdot e^{j\omega} = \cos(\omega) + j \sin(\omega) .
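
This example is easy to reproduce numerically (not in the original solution). With a circular shift standing in for the delay, the cross-spectral estimate from the Problem 7 sketch works out exactly to (|X(w)|^2/N) e^{jw}, which is visibly complex:

>> N = 1024; x = randn(N,1);
>> y = circshift(x, 1);                             % y[n] = x[n-1] (circular shift keeps the estimate exact)
>> Sxy = fft(x) .* conj(fft(y)) / N;                % same cross-spectral estimate as in the Problem 7 sketch
>> w = 2*pi*(0:N-1)'/N;
>> max(abs(Sxy - (abs(fft(x)).^2/N) .* exp(1j*w)))  % equals (|X|^2/N) e^{jw} up to round-off
>> max(abs(imag(Sxy)))                              % clearly nonzero: the cross-spectrum is complex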

    Problem 9 (Solution)

    The following are Matlab commands. We begin by creating the signal and plotting it:

    >> N = 1024;

    >> n = 1:N; n = n(:);

    >> x = sin(10*pi*n/N) + randn(N,1);

    >> figure; plot(x);

Now we use some code from white noise.m (see the announcement of 12 Nov on the class website) to estimate the signal's auto-correlation function. We then plot it:

    >> xx = [x;x];

    >> r = zeros(N,1);

>> for k = 1:N, r(k) = sum(x.*xx(k:N+k-1)); end

    >> r = r/N;

    >> figure; plot(r);

We now calculate its power spectral density, and check that it is real and positive:

    >> s = fft(r);

    >> max(abs(imag(s)))

    >> min(real(s))

The second command above returned 2.1839e-11. We assume that this is small enough to be explained by numerical error, and that the power spectral density is therefore real. The last command returned 6.2887, which confirms that the power spectral density is positive. To compare the square of the absolute value of the Fourier coefficients with the power spectral density, we do:


>> plot(abs(fft(x)).^2./real(s))

We see from this that the square of the absolute value of the Fourier coefficients is exactly N times larger than the power spectral density. When Matlab calculates the DFT with fft, it does not use the divisor N. (This may be seen by typing help fft at the Matlab command prompt.) An inspection of the first proof of Problem 4 shows that, using this definition of the DFT, the only difference between the power spectral density and the square of the absolute value of the Fourier coefficients is the factor of N calculated above.
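
A one-line check of this factor, continuing the Matlab session above (with x, N, and s as computed there):

>> max(abs(abs(fft(x)).^2 - N*real(s)))   % small compared with the coefficients themselves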
