Advanced Process Control
Tutorial Problem Set 2
Development of Control Relevant Models through System Identification


1. Consider the time series

       x(k) = β1 + β2 k + w(k)

   where β1 and β2 are known constants and w(k) is a white noise process with variance σ².

   (a) Show that the mean of the moving average process

           y(k) = 1/(2p+1) Σ_{j=-p}^{p} x(k-j)

       is β1 + β2 k. Is x(k) a stationary process?

   (b) Find a transformation that produces a stationary process starting from x(k).
       (Hint: consider a transformation using the backward difference operator, i.e. z(k) = (1 - q^{-1}) x(k).)

2. Show that the autocovariance function

       r(s, k) = E[(v(s) - v̄(s))(v(k) - v̄(k))] = E[v(s) v(k)] - v̄(s) v̄(k)

   where E[v(s)] = v̄(s).

3. For a moving average process of the form

       x(k) = (1/2) w(k-2) + w(k-1) + 2 w(k) - (1/2) w(k+1)

   where the w(k) are independent with zero mean and variance σ_w², determine the autocovariance and autocorrelation functions as a function of the lag τ = s - k.

4. Estimate the autocorrelation of the finite sequence u = {1, 2, 3, 4, 5, 6}. Comment on the relationship between r_{u,u}(τ) and r_{u,u}(-τ).

5. If h = {1, 2, 3, 4} and u = {5, 6, 7, 8}, estimate the cross-correlation r_{hu}.

6. Consider the two series

       x(k) = w(k)
       y(k) = w(k) - θ w(k-1) + u(k)

   where w(k) and u(k) are independent zero mean white noise sequences with variances σ_w² and σ_u², respectively, and θ is an unspecified constant.

   (a) Express the autocorrelation function ρ_y(τ) of the sequence {y(k)} for τ = ±1, ±2, ... as a function of σ_w², σ_u², and θ.

   (b) Determine the cross-correlation function ρ_{x,y}(τ) relating {x(k)} and {y(k)}.

   (c) Show that {x(k)} and {y(k)} are jointly stationary. (Series with constant means and autocovariance and cross-covariance functions depending only on the lag τ are said to be jointly stationary.)

7. Consider a moving average process

       v(k) = e(k) + c1 e(k-1) + c2 e(k-2)        (1)

   where {e(k)} is a zero mean white noise process with variance σ². Show that the stochastic process {v(k)} has zero mean and auto-correlation

       R_v(0) = E[v(k) v(k)]   = (1 + c1² + c2²) σ²        (2)
       R_v(1) = E[v(k) v(k-1)] = (c1 + c1 c2) σ²           (3)
       R_v(2) = E[v(k) v(k-2)] = c2 σ²                     (4)
       R_v(k) = 0 for k > 2                                (5)

   Note that {v(k)} is a typical example of colored noise.

8. Consider an ARX model of the form

       y(k) = a y(k-1) + b u(k-1) + e(k)        (6)

   It is desired to estimate the model parameters (a, b) using the measurement data set {y(k) : k = 0, 1, ..., N} collected from an experiment in which the input sequence {u(k) : k = 0, 1, ..., N} was injected into the system.

   (a) Show that the least square estimate of the parameters generated from the input-output data is given by

           [ Σ y(k-1)²         Σ y(k-1) u(k-1) ] [ a ]   [ Σ y(k) y(k-1) ]
           [ Σ y(k-1) u(k-1)   Σ u(k-1)²       ] [ b ] = [ Σ y(k) u(k-1) ]        (7)

       where all summations are from k = 1 to N.

   (b) When the data length is large (i.e. N → ∞), show that equation (7) is equivalent to

           [ E[y(k-1)²]          E[y(k-1) u(k-1)] ] [ a ]   [ E[y(k) y(k-1)] ]
           [ E[y(k-1) u(k-1)]    E[u(k-1)²]       ] [ b ] = [ E[y(k) u(k-1)] ]        (8)

       or

           [ R_y(0)    R_yu(0) ] [ a ]   [ R_y(1)  ]
           [ R_yu(0)   R_u(0)  ] [ b ] = [ R_yu(1) ]        (9)

       where R_y(·) represents the auto-correlation function and R_yu(·) represents the cross-correlation function.

   (c) Defining the regressor vector

           φ(k) = [ y(k-1)  u(k-1) ]^T        (10)
           θ = [ a  b ]^T                     (11)

       show that equation (7) can be written as

           E[φ(k) φ(k)^T] θ = E[φ(k) y(k)]        (12)

       Hint: Show that, as N → ∞,

           (1/N) Φ^T Φ → E[φ(k) φ(k)^T] ,   (1/N) Φ^T Y → E[φ(k) y(k)]

       where

           Φ = [ φ(1)^T ; φ(2)^T ; ... ; φ(N)^T ] ,   Y = [ y(1) ; y(2) ; ... ; y(N) ]
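A minimal numerical sketch of Problem 8 in Python/NumPy. The simulation values (a = 0.7, b = 1.2, the noise level, and the data length) are illustrative assumptions, not taken from the problem; the script simply generates input-output data from a first-order ARX system and solves the normal equations (7) directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a first-order ARX system; the parameter values are illustrative only.
N = 2000
a_true, b_true = 0.7, 1.2
u = rng.standard_normal(N + 1)          # white-noise test input u(0..N)
e = 0.1 * rng.standard_normal(N + 1)    # white equation noise e(0..N)
y = np.zeros(N + 1)
for k in range(1, N + 1):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + e[k]

# Regressor matrix with rows phi(k)^T = [y(k-1), u(k-1)], k = 1..N (cf. eq. (10))
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]

# Normal equations (7): (Phi^T Phi) [a, b]^T = Phi^T Y
a_hat, b_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print("estimated a, b:", a_hat, b_hat)
```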
9. Generalize the results of the previous problem for a general ARX model of the form

       y(k) = a1 y(k-1) + ... + an y(k-n) + b1 u(k-1) + ... + bn u(k-n) + e(k)        (13)

10. Model conversions

    (a) Consider an OE model of the form

            y(k) = [ 2 q^{-1} / (1 - 0.6 q^{-1}) ] u(k) + v(k)

        Using long division, convert the model into the following form

            y(k) = h1 u(k-1) + ... + hn u(k-n) + v(k)

        where n is selected such that terms with |h_i| < 0.01 are neglected. How many terms are required, and what can you say about |h_n| as n increases? The resulting model is called a finite impulse response (FIR) model and the h_i are called impulse response coefficients (why?). A numerical sketch of this long-division step is given after this problem.

    (b) Consider an OE model of the form

            y(k) = [ 2 q^{-1} / (1 - 1.5 q^{-1}) ] u(k) + v(k)

        Can you find an FIR model for this system? Justify your answer.

    (c) Consider an AR model of the form

            v(k) = [ 1 / (1 - 0.5 q^{-1}) ] e(k)

        where {e(k)} is a zero mean white noise signal with unit variance. Using long division, convert the model into the moving average (MA) form

            v(k) = e(k) + h1 e(k-1) + ... + hn e(k-n)

        where n is selected such that terms with |h_i| < 0.01 are neglected.

    (d) Consider an AR model of the form

            v(k) = [ 1 / ((1 - 0.5 q^{-1})(1 - 0.25 q^{-1})) ] e(k)

        Using long division, convert the model into moving average (MA) form.

    (e) Consider an AR model of the form

            v(k) = [ 1 / (1 - q^{-1}) ] e(k)

        Using long division, is it possible to convert the model into moving average (MA) form?
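A sketch of the long-division step in Problem 10(a), in Python. The helper name impulse_response and the generic coefficient recursion are illustrative choices (not from the problem set); the 0.01 truncation threshold follows the problem statement.

```python
def impulse_response(num, den, tol=0.01, max_terms=200):
    """Expand num(q^-1)/den(q^-1) into impulse-response coefficients h_0, h_1, ...
    by long division.  num and den are coefficient lists in increasing powers of
    q^-1 (den[0] nonzero).  Trailing coefficients smaller than tol are discarded."""
    h = []
    for i in range(max_terms):
        # den * h = num  =>  h[i] = (num[i] - sum_{j>=1} den[j] * h[i-j]) / den[0]
        acc = num[i] if i < len(num) else 0.0
        for j in range(1, min(i, len(den) - 1) + 1):
            acc -= den[j] * h[i - j]
        h.append(acc / den[0])
    while h and abs(h[-1]) < tol:   # truncate: drop small trailing terms
        h.pop()
    return h

# Problem 10(a): G(q) = 2 q^{-1} / (1 - 0.6 q^{-1}), so h_i = 2 * 0.6^(i-1) for i >= 1
h = impulse_response([0.0, 2.0], [1.0, -0.6])
print(len(h) - 1, "input terms retained:", [round(c, 4) for c in h[1:]])
```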
11. Consider a process governed by an FIR equation of the form

        y(k) = h1 u(k-1) + h2 u(k-2) + e(k)        (14)

    where {e(k)} is a sequence of independent normal random variables with zero mean and standard deviation σ.

    (a) Determine the estimates of (h1, h2) when the input signal {u(k)} is a step input introduced at k = 0.

    (b) Make the same investigation as part (a) when the input signal {u(k)} is white noise with unit variance.

12. Consider data generated by the discrete time system

        System: y(k) = h1 u(k-1) + h2 u(k-2) + e(k)        (15)

    where {e(k)} is a sequence of independent normal N(0, 1) random variables. Assume that the parameter h of the model

        Model: y(k) = h u(k)        (16)

    is determined by least squares.

    (a) Determine the estimates obtained for large observation sets when the input u(k) is a step function. (This is a simple illustration of the problem of fitting a low order model to data generated by a complex system. The result obtained will critically depend on the character of the input signal.)

    (b) Make the same investigation as part (a) when the input signal is white noise with unit variance.

13. Consider an FIR model of the form

        y(k) = h1 u(k-1) + ... + hN u(k-N) + v(k)        (17)

    Show that the least square estimates of the impulse response coefficients are given by equation (12), where

        φ(k) = [ u(k-1) ... u(k-N) ]^T        (18)
        θ = [ h1 ... hN ]^T                   (19)

    In other words, generalize the results of Problem 8 to a general FIR model.

14. If it is desired to identify the parameters of FIR model (17), taking clues from the previous problem, what is the requirement on the rank of the matrix E[φ(k) φ(k)^T]? This condition is called persistency of excitation.

15. For an FIR model, show that the parameter estimates are unbiased if {v(k)} is a zero mean sequence.

16. Consider the discrete time system given by equation (6), where the input signal {u(k)} and the noise {e(k)} are sequences of independent random variables with zero mean and standard deviations σ and λ, respectively. Determine the covariance of the parameter estimates obtained for large observation sets.

17. Consider the discrete time system given by

        y(k) = a0 y(k-1) + b0 u(k-1) + e(k) + c0 e(k-1)        (20)

    where the input signal {u(k)} and the noise {e(k)} are sequences of independent random variables with zero mean and standard deviations σ and λ, respectively. Assume that a model of the form

        y(k) = a y(k-1) + b u(k-1) + ε(k)        (21)

    is estimated by least squares. Determine the asymptotic values of the estimates when

    (a) {u(k)} is a zero mean white noise process with standard deviation σ

    (b) {u(k)} is a step input of magnitude σ

    (c) In particular, compare the estimated values (a, b) with the true values (a0, b0) for the following system

            a0 = 0.8 ; b0 = 1 ; c0 = 0.5        (22)

        for the cases (a) σ = 1, λ = 0.1 and (b) σ = 1, λ = σ. By comparing the estimates for cases (a) and (b) with the true values, what can you conclude about the effect of the signal to noise ratio (σ²/λ²) on the parameter estimates?

18. Consider a discrete time model

        v(k) = a + b k + e(k)        (23)

    where {e(k)} is a sequence of independent normal random variables with zero mean and standard deviation σ. Determine the least square estimates of the model parameters and the covariance of the estimates. Discuss the behavior of the estimates as the number of data points increases.

19. Consider data generated by

        y(k) = b + e(k) ;  k = 1, 2, ..., N        (24)

    where {e(k) : k = 1, 3, 4, ...} is a sequence of independent random variables. Furthermore, assume that there is a large error at k = 2, i.e., e(2) = A, where A is a large number. Determine the estimate obtained and discuss how it depends on A. (This is a simple example that shows how sensitive the least square estimate is with respect to occasional large errors.)

20. Suppose that we wish to identify a plant that is operating in closed loop as follows

        Plant dynamics       : y(k) = a y(k-1) + b u(k-1) + e(k)        (25)
        Feedback control law : u(k) = -K y(k)                           (26)

    where {e(k)} is a sequence of independent normal random variables with zero mean and standard deviation σ.

    (a) Show that we cannot identify the parameters (a, b) from observations of y and u, even when the controller gain K is known.

    (b) Assume that an external independent perturbation is introduced in the input signal

            u(k) = -K y(k) + r(k)        (27)

        where {r(k)} is a sequence of independent zero mean normal random variables. Show that it is now possible to recover estimates of the open loop model parameters using the closed loop data. (Note: Here {r(k)} has been taken as a zero mean white noise sequence to simplify the analysis. In practice, an independent PRBS signal is added to the manipulated input to make the model parameters identifiable under closed loop conditions.)
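A simulation sketch for Problem 20, in Python/NumPy, under assumed illustrative values (a = 0.8, b = 1.0, K = 0.5 and the noise levels are choices, not given in the problem). Without the dither, u(k-1) = -K y(k-1) exactly, so the two regressor columns are collinear and least squares cannot separate (a, b); adding r(k) restores identifiability.

```python
import numpy as np

rng = np.random.default_rng(1)

def closed_loop_ls(K, sigma_r, a=0.8, b=1.0, sigma_e=0.5, N=5000):
    """Least-squares ARX(1,1) fit from closed-loop data generated by
    y(k) = a y(k-1) + b u(k-1) + e(k) with u(k) = -K y(k) + r(k).
    sigma_r = 0 reproduces the unidentifiable case of Problem 20(a)."""
    y = np.zeros(N + 1)
    u = np.zeros(N + 1)
    for k in range(1, N + 1):
        y[k] = a * y[k - 1] + b * u[k - 1] + sigma_e * rng.standard_normal()
        u[k] = -K * y[k] + sigma_r * rng.standard_normal()
    Phi = np.column_stack([y[:-1], u[:-1]])
    # With sigma_r = 0 the columns of Phi are exactly collinear, so Phi^T Phi is
    # singular; lstsq still returns *a* (minimum-norm) solution, but not (a, b).
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta

print("no dither  :", closed_loop_ls(K=0.5, sigma_r=0.0))
print("with dither:", closed_loop_ls(K=0.5, sigma_r=1.0))
```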
21. The English mathematician Richardson proposed the following simple model for the arms race between two countries

        x(k+1) = a x(k) + b y(k) + f        (28)
        y(k+1) = c x(k) + d y(k) + g        (29)

    where x(k) and y(k) are the yearly expenditures on arms of the two nations and (a, b, c, d, f, g) are model parameters. The following data has been obtained from the World Armaments and Disarmaments Year Book 1982. Determine the parameters of the model by least squares and investigate the stability of the model.

22. Consider an ARMA model of the form

        y(k) = -a y(k-1) + e(k) + c e(k-1)        (30)

    which is equivalent to

        y(k) = H(q) e(k) = [ (1 + c q^{-1}) / (1 + a q^{-1}) ] e(k)        (31)

    where {e(k)} is a sequence of independent normal random variables with zero mean and standard deviation σ. Develop a one step ahead predictor for ŷ(k+1|k) which uses only the current and the past measurements of y.

23. Consider an ARMAX model of the form

        y(k) = -a y(k-1) + b u(k-1) + e(k) + c e(k-1)        (32)

    which is equivalent to

        y(k) = G(q) u(k) + H(q) e(k) = [ b q^{-1} / (1 + a q^{-1}) ] u(k) + [ (1 + c q^{-1}) / (1 + a q^{-1}) ] e(k)        (33)

    where {e(k)} is a sequence of independent normal random variables with zero mean and standard deviation σ. Develop a one step ahead predictor for ŷ(k+1|k) which uses only the current and the past measurements of y.

24. Consider the Box-Jenkins model

        y(k) = G(q) u(k) + H(q) e(k)

        G(q) = (q + b) / (q + a) ;   H(q) = (q + c) / (q + d)

    Derive the one step prediction

        ŷ(k|k-1) = [H(q)]^{-1} G(q) u(k) + [ 1 - (H(q))^{-1} ] y(k)
        y(k) = ŷ(k|k-1) + e(k)

    and express the dynamics of ŷ(k|k-1) as a time domain difference equation.

25. Consider the moving average (MA) process

        y(k) = H(q) e(k)        (34)
        H(q) = 1 - 1.1 q^{-1} + 0.3 q^{-2}        (35)

    Compute H^{-1}(q) as an infinite expansion by long division and develop an auto-regressive model of the form

        e(k) = H^{-1}(q) y(k)        (36)

    This model facilitates estimation of the noise e(k) based on current and past measurements of y(k).

26. Given an ARMAX model of the form

        y(k) = [ B(q)/A(q) ] u(k) + [ C(q)/A(q) ] e(k) = [ 0.1 q^{-1} / (1 - 0.9 q^{-1}) ] u(k) + [ (1 - 0.2 q^{-1}) / (1 - 0.9 q^{-1}) ] e(k)        (37)

    rearrange this model as

        y(k) = [ C^{-1}(q) B(q) / (C^{-1}(q) A(q)) ] u(k) + [ 1 / (C^{-1}(q) A(q)) ] e(k)        (38)

    Compute C^{-1}(q) as an infinite expansion by long division and truncate the expansion after a finite number of terms when the coefficients become small, i.e.

        C^{-1}_T(q) ≅ 1 + c1 q^{-1} + ... + cn q^{-n}        (39)

    Using this truncated C^{-1}_T(q), express the model in the ARX form

        y(k) = [ B̄(q)/Ā(q) ] u(k) + [ 1/Ā(q) ] e(k)        (40)

        Ā(q) = C^{-1}_T(q) A(q) ;   B̄(q) = C^{-1}_T(q) B(q)        (41)

    This simple calculation illustrates how a low order ARMAX model can be approximated by a high order ARX model.
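A sketch of the truncation step in Problem 26, in Python/NumPy. The 0.01 cut-off mirrors the threshold used in Problem 10 and is an assumption here; the script expands 1/C(q) by long division, truncates it, and forms the high-order ARX polynomials Ā(q) and B̄(q) as polynomial products via np.convolve.

```python
import numpy as np

# Problem 26: A(q) = 1 - 0.9 q^{-1}, B(q) = 0.1 q^{-1}, C(q) = 1 - 0.2 q^{-1}
A = np.array([1.0, -0.9])
B = np.array([0.0, 0.1])
C = np.array([1.0, -0.2])

# Truncated expansion of 1/C(q): 1/(1 - 0.2 q^{-1}) = 1 + 0.2 q^{-1} + 0.2^2 q^{-2} + ...
# For this first-order C(q), each coefficient is 0.2 times the previous one.
tol = 0.01
c_inv = [1.0]
while abs(c_inv[-1] * (-C[1])) >= tol:
    c_inv.append(c_inv[-1] * (-C[1]))
C_inv_T = np.array(c_inv)

# ARX polynomials (eq. (41)): A_bar = C_inv_T * A, B_bar = C_inv_T * B (polynomial products)
A_bar = np.convolve(C_inv_T, A)
B_bar = np.convolve(C_inv_T, B)
print("C_inv_T:", C_inv_T)
print("A_bar  :", A_bar)
print("B_bar  :", B_bar)
```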
27. Consider the transfer functions

        G1(q) = (q - 0.5)(q + 0.5) / [ (q - 1)(q² - 1.5q + 0.7) ]
        G2(q) = (q - 0.2)(q + 0.2) / [ (q - 1)(q² - 1.5q + 0.7) ]
        H(q)  = (q - 0.8) / (q² - 1.5q + 0.7)

    Derive state-space realizations using the observable canonical form for the following systems (cases (a) to (d)):

    (a) y(k) = G1(q) u1(k) + v(k)

    (b) y(k) = G1(q) u1(k) + G2(q) u2(k) + v(k)

    (c) y(k) = G1(q) u1(k) + H(q) e(k)

    (d) y(k) = G1(q) u1(k) + G2(q) u2(k) + H(q) e(k)

    (e) Given that the sequence {e(k)} is a zero mean white noise sequence with standard deviation equal to 0.5, express the resulting state space models for cases (c) and (d) in the form

            x(k+1) = Φ x(k) + Γ u(k) + w(k)
            y(k) = C x(k) + e(k)

        and estimate the covariance of the white noise sequence {w(k)}.

    (f) Derive a state-space realization using the controllable canonical form for case (a).

28. Derive a state realization for

        [ y1(k) ]                         [ q + 0.5    q - 1.5 ] [ u1(k) ]   [ v1(k) ]
        [ y2(k) ]  =  1/(q² - 1.5q + 0.8) [ q - 0.5    q + 1.5 ] [ u2(k) ] + [ v2(k) ]

    in controllable and observable canonical forms.

29. A system is represented by

        G(s) = 3 / [ (s + 4)(s + 1) ]

    (a) Derive continuous time state-space realizations

            dx/dt = A x + B u ;   y = C x

        in (i) controllable canonical form and (ii) observable canonical form.

    (b) Convert each of the continuous time state space models into the discrete state space form

            x(k+1) = Φ x(k) + B u(k) ;   y(k) = C x(k)

        Is the canonical structure in continuous time preserved after discretization? Show that both discrete realizations have identical transfer function G(q).

    (c) If the canonical structures are not preserved after discretization, derive discrete state realizations in (i) controllable canonical form and (ii) observable canonical form starting from G(q).

30. Derive state realizations for

        [ y1(t) ]                       [ s + 1.5    s - 2 ] [ u1(t) ]
        [ y2(t) ]  =  1/(s² + 3s + 2)   [ s - 3      s + 2 ] [ u2(t) ]

    in controllable and observable canonical forms.
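A sketch, in Python/NumPy, of building controllable and observable canonical-form matrices for a strictly proper SISO transfer function, which can serve as a numerical check for hand derivations in Problems 27 to 30. The function name and the particular companion-matrix convention are assumptions; textbooks differ in how the states are ordered.

```python
import numpy as np

def canonical_forms(num, den):
    """Controllable and observable canonical realizations of a strictly proper
    SISO transfer function num(q)/den(q).  Coefficients are given in descending
    powers of q, den is monic of degree n, and num has degree < n.  One common
    sign/ordering convention is used; other texts permute the states."""
    n = len(den) - 1
    a = np.asarray(den[1:], dtype=float)     # a1 ... an
    b = np.zeros(n)
    b[n - len(num):] = num                   # pad numerator to length n: b1 ... bn
    # Controllable (controller) canonical form: top-row companion matrix
    Ac = np.vstack([-a, np.hstack([np.eye(n - 1), np.zeros((n - 1, 1))])])
    Bc = np.zeros((n, 1))
    Bc[0, 0] = 1.0
    Cc = b.reshape(1, n)
    # Observable canonical form is the dual (transposed) realization
    Ao, Bo, Co = Ac.T, Cc.T, Bc.T
    return (Ac, Bc, Cc), (Ao, Bo, Co)

# Example: H(q) = (q - 0.8) / (q^2 - 1.5 q + 0.7) from Problem 27
ctrl, obsv = canonical_forms([1.0, -0.8], [1.0, -1.5, 0.7])
print("controllable form A:\n", ctrl[0])
print("observable form A:\n", obsv[0])
```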