
Thursday, January 22, 2015


Laplace Transform

Contents:

    • From Continuous Fourier Transform to Laplace Transform
    • Region of Convergence (ROC)
    • Zeros and Poles of the Laplace Transform
    • Properties of ROC
    • Properties of Laplace Transform
    • Laplace Transform of Typical Signals
    • Representation of LTI Systems by Laplace Transform
    • LTI Systems Characterized by LCCDEs

The Laplace transform is a widely used integral transform in mathematics and electrical
engineering, named after Pierre-Simon Laplace, that transforms a function of time into a function of complex frequency. The inverse Laplace transform takes a complex frequency domain function and yields a function defined in the time domain. The Laplace transform is related to the Fourier transform, but whereas the Fourier transform expresses a function or signal as a superposition of sinusoids, the Laplace transform expresses a function, more generally, as a superposition of moments. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or of synthesizing a new system based on a set of specifications.

    • Region of Convergence (ROC)

    Whether the Laplace transform $X(s)$ of a signal $x(t)$ exists or not depends on the complex variable $s$ as well as the signal itself. All complex values of $s$ for which the integral in the definition converges form a region of convergence (ROC) in the s-plane. $X(s)$ exists if and only if the argument $s$ is inside the ROC. As the imaginary part $\omega=Im[s]$ of the complex variable $s=\sigma+j\omega$ has no effect in terms of the convergence, the ROC is determined solely by the real part $\sigma=Re[s]$.
    Example 1: The Laplace transform of $x(t)=e^{-at}u(t)$ is: 
    \begin{displaymath}X(s)={\cal L}[x(t)]=\int_0^\infty e^{-at} e^{-st} dt
=\int_0^\infty e^{-at} e^{-(\sigma+j\omega)t} dt
=-\frac{1}{a+\sigma+j\omega}\; e^{-(a+\sigma+j\omega)t} \bigg\vert _0^\infty \end{displaymath}

    For this integral to converge, we need to have 
    \begin{displaymath}a+\sigma > 0 \;\;\;\;\mbox{ or }\;\;\;\; \sigma=Re[s] > -a \end{displaymath}

    and the Laplace transform is 
    \begin{displaymath}X(s)=\frac{1}{(\sigma+a)+j\omega}=\frac{1}{s+a} \end{displaymath}

    As a special case where $a=0$, $x(t)=u(t)$, and we have 
    \begin{displaymath}{\cal L}[u(t)]=\frac{1}{s},\;\;\;\;\;\;\sigma=Re[s]>0 \end{displaymath}
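    The convergence condition can be checked numerically. The sketch below (plain Python; the sample values $a=2$ and $s=1+2j$ are arbitrary choices inside the ROC) approximates the Laplace integral by a Riemann sum and compares it against the closed form $1/(s+a)$:

```python
import cmath

# Riemann-sum approximation of L[e^{-at}u(t)] at a sample point inside
# the ROC, compared against the closed form 1/(s+a).
a = 2.0
s = 1.0 + 2.0j           # Re[s] = 1 > -a, inside the ROC
dt, T = 1e-3, 30.0       # step size and truncation of the integral (arbitrary)

X_num = sum(cmath.exp(-(a + s) * k * dt) for k in range(int(T / dt))) * dt
X_closed = 1.0 / (s + a)
print(abs(X_num - X_closed))   # small discretization error
```

    Repeating this with $Re[s] < -a$ makes the integrand blow up, which is exactly the ROC condition derived above.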

    Example 2: The Laplace transform of a signal $x(t)=-e^{-at}u(-t)$ is: 
    \begin{displaymath}X(s)=-\int_{-\infty}^0 e^{-at} e^{-st} dt
=-\int_{-\infty}^0 e^{-(a+\sigma+j\omega)t} dt
=\frac{1}{a+\sigma+j\omega}\;e^{-(a+\sigma+j\omega)t} \bigg\vert _{-\infty}^0 \end{displaymath}

    Only when 
    \begin{displaymath}a+\sigma < 0 \;\;\;\;\;\mbox{ or }\;\;\;\;\; \sigma=Re[s] < -a \end{displaymath}

    will the integral converge, and Laplace transform $X(s)$ is 
    \begin{displaymath}X(s)=\frac{1}{a+\sigma+j\omega}=\frac{1}{a+s} \end{displaymath}

    Again as a special case, when $a=0$ and $x(t)=-u(-t)$, we have 
    \begin{displaymath}{\cal L}[-u(-t)]=\frac{1}{s},\;\;\;\;\;\sigma=Re[s]<0 \end{displaymath}

    Comparing the two examples above we see that two different signals may have identical Laplace transform $X(s)$, but different ROC. In the first case above, the ROC is $Re[s]>0$, and in the second case, the ROC is $Re[s]<0$. To determine the time signal $x(t)$ by the inverse Laplace transform, we need the ROC as well as $X(s)$.
    Example 3: 
    \begin{displaymath}x(t)=e^{-a\vert t\vert}=e^{-at}u(t)+e^{at}u(-t) \end{displaymath}

    The Laplace transform is linear, and $X(s)$ is the sum of the transforms for the two terms: 
    \begin{displaymath}{\cal L}[e^{-at}u(t)]=\frac{1}{s+a},\;\;\;\;\;(\sigma>-a),\;\;\;\;\;
{\cal L}[e^{at}u(-t)]=\frac{-1}{s-a},\;\;\;\;\;(\sigma<a) \end{displaymath}

    If $a>0$, i.e., $x(t)$ decays when $\vert t\vert \rightarrow \infty$, the intersection of the two ROCs is $-a<\sigma<a$, and we have: 
    \begin{displaymath}{\cal L}[x(t)]=\frac{1}{s+a}-\frac{1}{s-a}=\frac{-2a}{s^2-a^2} \end{displaymath}

    However, if $a<0$, i.e., $x(t)$ grows without bound as $\vert t\vert \rightarrow \infty$, the intersection of the two ROCs is an empty set and the Laplace transform does not exist.
    Example 4: 
    \begin{displaymath}x(t)=[e^{-2t}+e^{-t}cos(3t)]u(t)
=[e^{-2t}+\frac{1}{2}e^{-(1-j3)t}+\frac{1}{2}e^{-(1+j3)t}]u(t) \end{displaymath}

    The Laplace transform of this signal is 
    \begin{displaymath}X(s)=\int_0^\infty [e^{-2t}+\frac{1}{2}e^{-(1-j3)t}
+\frac{1}{2}e^{-(1+j3)t}]e^{-st} dt
=\frac{1}{s+2}+\frac{1/2}{s+(1-j3)}+\frac{1/2}{s+(1+j3)}
=\frac{2s^2+5s+12}{(s^2+2s+10)(s+2)} \end{displaymath}

    This $X(s)$ exists only if the Laplace transforms of all three individual terms exist, i.e., the conditions for the three integrals to converge are simultaneously satisfied: 
    \begin{displaymath}Re[s]>-2,\;\;\;\;\;Re[s]>-1,\;\;\;\;\;Re[s]>-1 \end{displaymath}

    i.e., $Re[s]>-1$.
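    The partial-fraction identity used above can be spot-checked numerically. A minimal sketch (the sample points are arbitrary values inside the ROC $Re[s]>-1$):

```python
# Spot-check: 1/(s+2) + (1/2)/(s+1-3j) + (1/2)/(s+1+3j)
# should equal (2s^2+5s+12)/((s^2+2s+10)(s+2)) for all s in the ROC.
def lhs(s):
    return 1/(s + 2) + 0.5/(s + 1 - 3j) + 0.5/(s + 1 + 3j)

def rhs(s):
    return (2*s**2 + 5*s + 12) / ((s**2 + 2*s + 10) * (s + 2))

for s in (0.5, 1 + 1j, 3 - 2j):
    assert abs(lhs(s) - rhs(s)) < 1e-12
print("partial-fraction identity holds at sampled points")
```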
    Example 5: 
    \begin{displaymath}{\cal L}[\delta(t-T)]=\int_{-\infty}^\infty \delta(t-T) e^{-st} dt=e^{-sT} \end{displaymath}

    As the Laplace integration converges independent of $s$, the ROC is the entire s-plane. In particular, when $T=0$, we have 
    \begin{displaymath}{\cal L}[\delta(t)]=1 \end{displaymath}
    • Zeros and Poles of the Laplace transform
    All Laplace transforms in the above examples are rational, i.e., they can be written as a ratio of polynomials of variable $s$ in the general form 
    \begin{displaymath}
X(s)=\frac{N(s)}{D(s)}=\frac{\sum_{k=0}^M b_k s^k}{\sum_{k=0}^N a_k s^k}
=\frac{b_M}{a_N}\;\frac{\prod_{k=1}^M (s-s_{z_k})}{\prod_{k=1}^N (s-s_{p_k})}
\end{displaymath}

    • $N(s)$ is the numerator polynomial of order $M$ with roots $s_{z_k}, (k=1,2, \cdots, M)$,
    • $D(s)$ is the denominator polynomial of order $N$ with roots $s_{p_k}, (k=1,2, \cdots, N)$.
    In general, we assume the order of the numerator polynomial is lower than that of the denominator polynomial, i.e., $M < N$. If this is not the case, we can always expand $X(s)$ into multiple terms so that $M < N$ holds for each of the terms.
    Example 1: 
    \begin{displaymath}X(s)=\frac{s^2-2s+2}{s^3+5 s^2+12 s+8}
=\frac{s^2-2s+2}{(s+1)(s^2+4s+8)}
=\frac{[s-(1+j)][s-(1-j)]}{[s-(-1)][s-(-2+2j)][s-(-2-2j)]} \end{displaymath}

    Two zeros: $s_{z1}=1+j$ and $s_{z2}=1-j$.
    Three poles: $s_{p1}=-1$, $s_{p2}=-2+2j$, and $s_{p3}=-2-2j$.
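    The claimed factorization can be verified by evaluating the numerator and denominator polynomials at the listed roots. A quick check in plain Python:

```python
# Evaluate numerator and denominator at the roots claimed in the text;
# both should vanish (to within rounding).
N = lambda s: s**2 - 2*s + 2
D = lambda s: s**3 + 5*s**2 + 12*s + 8

zeros = (1 + 1j, 1 - 1j)
poles = (-1 + 0j, -2 + 2j, -2 - 2j)
assert all(abs(N(z)) < 1e-12 for z in zeros)
assert all(abs(D(p)) < 1e-12 for p in poles)
print("zeros and poles confirmed")
```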
    Example 2: 
    \begin{displaymath}X(s)=\frac{s^2-3}{s+2} \end{displaymath}

    As the order of the numerator $M=2$ is higher than that of the denominator $N=1$, we expand it into the following terms 
    \begin{displaymath}X(s)=\frac{s^2-3}{s+2}=A+Bs+\frac{C}{s+2} \end{displaymath}

    and get 
    \begin{displaymath}s^2-3=(A+Bs)(s+2)+C=Bs^2+(A+2B)s+(2A+C) \end{displaymath}

    Equating the coefficients for terms $s^k$ $(k=0, 1, \cdots, M)$ on both sides, we get 
    \begin{displaymath}B=1,\;\;\;A+2B=0, \;\;\; 2A+C=-3 \end{displaymath}

    Solving this equation system, we get coefficients 
    \begin{displaymath}A=-2, \;\;\; B=1, \;\;\; C=1 \end{displaymath}

    and 
    \begin{displaymath}X(s)=s-2+\frac{1}{s+2} \end{displaymath}

    Alternatively, the same result can be obtained more easily by a long division $(s^2-3) \div (s+2)$.
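    That long division can be sketched in code. Below is a minimal polynomial long-division routine (coefficients listed from the highest power down; the function name `polydiv` is my own choice), applied to $(s^2-3) \div (s+2)$:

```python
# Minimal polynomial long division: divide num by den, both given as
# coefficient lists from the highest power down.
def polydiv(num, den):
    num = list(num)
    quotient = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        quotient.append(c)
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)                 # leading coefficient is now zero
    return quotient, num           # quotient and remainder coefficients

quot, rem = polydiv([1, 0, -3], [1, 2])   # (s^2 - 3) / (s + 2)
print(quot, rem)   # [1.0, -2.0] [1.0]  ->  X(s) = s - 2 + 1/(s+2)
```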
    The zeros and poles of a rational $X(s)=N(s)/D(s)$ are defined as
    • Zero: Each root $s_z$ of the numerator polynomial, for which $X(s)\bigg\vert _{s=s_z}=X(s_z)=0$, is a zero of $X(s)$. If the order of $D(s)$ exceeds that of $N(s)$ (i.e., $N>M$), then $X(\infty)=0$, i.e., there is a zero at infinity:

      \begin{displaymath}\frac{b_1s+b_0}{a_2s^2+a_1s+a_0} \bigg\vert _{s \rightarrow \infty} =0 \end{displaymath}


    • Pole: Each root $s_p$ of the denominator polynomial, for which $X(s)\bigg\vert _{s=s_p}=X(s_p)=\infty$, is a pole of $X(s)$. If the order of $N(s)$ exceeds that of $D(s)$ (i.e., $M>N$), then $X(\infty)=\infty$, i.e., there is a pole at infinity:

      \begin{displaymath}\frac{b_2s^2+b_1s+b_0}{a_1s+a_0} \bigg\vert _{s \rightarrow \infty} \rightarrow \infty \end{displaymath}


    On the s-plane, zeros and poles are indicated by o and x, respectively. Obviously, all poles are outside the ROC. Essential properties of an LTI system can be obtained graphically from the ROC and from the zeros and poles of its transfer function $X(s)$ on the s-plane.

    • Properties of ROC

    The existence of Laplace transform $X(s)$ of a given $x(t)$ depends on whether the transform integral converges 
    \begin{displaymath}X(s)=\int_{-\infty}^\infty x(t)e^{-st} dt
=\int_{-\infty}^\infty x(t)e^{-\sigma t} e^{-j\omega t} dt < \infty \end{displaymath}

    which in turn depends on the duration and magnitude of $x(t)$ as well as the real part $Re[s]=\sigma$ of $s$ (the imaginary part $Im[s]=\omega$ determines the frequency of a sinusoid, which is bounded and has no effect on the convergence of the integral).
    Right sided signals: $x(t)=x(t)u(t-t_0)$ may have infinite duration for $t>0$, and a positive $\sigma>0$ tends to attenuate $x(t)e^{-\sigma t}$ as $t \rightarrow \infty$.
    Left sided signals: $x(t)=x(t)u(t_0-t)$ may have infinite duration for $t<0$, and a negative $\sigma<0$ tends to attenuate $x(t)e^{-\sigma t}$ as $t \rightarrow -\infty$.
    Based on these observations, we can get the following properties for the ROC:
    • If $x(t)$ is absolutely integrable and of finite duration, then the ROC is the entire s-plane (the Laplace transform integral is finite, i.e., $X(s)$ exists, for any $s$).
    • The ROC of $X(s)$ consists of strips parallel to the $j\omega$-axis in the s-plane.
    • If $x(t)$ is right sided and $Re[s]=\sigma_0$ is in the ROC, then any $s$ to the right of $\sigma_0$ (i.e., $Re[s]>\sigma_0$) is also in the ROC, i.e., ROC is a right sided half plane.
    • If $x(t)$ is left sided and $Re[s]=\sigma_0$ is in the ROC, then any $s$ to the left of $\sigma_0$ (i.e., $Re[s]<\sigma_0$) is also in the ROC, i.e., ROC is a left sided half plane.
    • If $x(t)$ is two-sided, then the ROC is the intersection of the two one-sided ROCs corresponding to the two one-sided components of $x(t)$. This intersection can be either a vertical strip or an empty set.
    • If $X(s)$ is rational, then its ROC does not contain any poles (by definition $X(s)\bigg\vert _{s=s_p}=\infty$ does not exist). The ROC is bounded by the poles or extends to infinity.
    • If $X(s)$ is a rational Laplace transform of a right sided function $x(t)$, then the ROC is the half plane to the right of the rightmost pole; if $X(s)$ is a rational Laplace transform of a left sided function $x(t)$, then the ROC is the half plane to the left of the leftmost pole.
    • A signal $x(t)$ is absolutely integrable, i.e., its Fourier transform $X(j\omega)$ exists (first Dirichlet condition, assuming the other two are satisfied), if and only if the ROC of the corresponding Laplace transform $X(s)$ contains the imaginary axis $Re[s]=0$ or $s=j\omega$.
    Example 1: Consider the Laplace transform of a two-sided signal $x(t)=e^{-b\vert t\vert}$
    \begin{displaymath}X(s)={\cal L}[x(t)]={\cal L}[e^{-b\vert t\vert}]
={\cal L}[e^{-bt}u(t)]+{\cal L}[e^{bt}u(-t)] \end{displaymath}

    The Laplace transform of the two components can be obtained from the two examples discussed above. From example 1, we get 
    \begin{displaymath}{\cal L}[e^{-bt}u(t)]=\frac{1}{s+b},\;\;\;\;\;Re[s]>-b \end{displaymath}

    and letting $b=-a$ in Example 2, we have 
    \begin{displaymath}{\cal L}[e^{bt}u(-t)]={\cal L}[e^{-at}u(-t)]=-\frac{1}{s+a}=-\frac{1}{s-b},
\;\;\;\;\;Re[s]<-a=b \end{displaymath}

    Combining the two components, we have 
    \begin{displaymath}{\cal L}[e^{-b\vert t\vert}]=\frac{1}{s+b}-\frac{1}{s-b}=\frac{-2b}{s^2-b^2},
\;\;\;\;\;-b<Re[s]<b \end{displaymath}

    Whether $X(s)$ exists or not depends on $b$. If $b>0$, i.e., $x(t)$ decays exponentially as $\vert t\vert \rightarrow \infty$, then the ROC is the strip between $-b$ and $b$ and $X(s)$ exists. But if $b<0$, i.e., $x(t)$ grows exponentially as $\vert t\vert \rightarrow \infty$, then the ROC is an empty set and $X(s)$ does not exist.
    Example 2: Given the following Laplace transform, find the corresponding signal: 
    \begin{displaymath}X(s)=\frac{1}{(s+1)(s+2)}=\frac{1}{s+1}-\frac{1}{s+2} \end{displaymath}

    There are three possible ROCs determined by the two poles $s_{p_1}=-1$ and $s_{p_2}=-2$:
    • The half plane to the right of the rightmost pole $s_{p_1}=-1$, with the corresponding right sided time function

      \begin{displaymath}x(t)=[e^{-t}-e^{-2t}] u(t) \end{displaymath}
    • The half plane to the left of the leftmost pole $s_{p_2}=-2$, with the corresponding left sided time function

      \begin{displaymath}x(t)=[-e^{-t}+e^{-2t}] u(-t) \end{displaymath}
    • The vertical strip between the two poles $-2 < Re[s] < -1$, with the corresponding two sided time function

      \begin{displaymath}x(t)=-e^{-t}u(-t)-e^{-2t}u(t) \end{displaymath}
    In particular, note that only the first ROC includes the $j\omega$-axis, so only the corresponding time function has a Fourier transform; the Fourier transform does not exist in the other two cases.
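    The right-sided candidate can be confirmed by transforming it forward again numerically. A rough sketch (the sample point $s=1$, step size, and truncation are arbitrary choices):

```python
import math

# Forward-transform the right-sided candidate x(t) = (e^{-t} - e^{-2t})u(t)
# numerically and compare with X(s) = 1/((s+1)(s+2)) at a sample s = 1.
s, dt, T = 1.0, 1e-3, 30.0
X_num = sum((math.exp(-k*dt) - math.exp(-2*k*dt)) * math.exp(-s*k*dt)
            for k in range(int(T / dt))) * dt
X_closed = 1 / ((s + 1) * (s + 2))
assert abs(X_num - X_closed) < 1e-3
print("right-sided candidate matches X(s) at s = 1")
```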

    • Properties of Laplace Transform

    The Laplace transform has a set of properties in parallel with those of the Fourier transform. The difference is that we need to pay special attention to the ROCs. In the following, we always assume 
    \begin{displaymath}{\cal L}[x(t)]=X(s),\;\;\;\;ROC=R_x,\;\;\;\;\;\mbox{and}\;\;\;\;\;\;
{\cal L}[y(t)]=Y(s),\;\;\;\;ROC=R_y \end{displaymath}

    • Linearity

      \begin{displaymath}{\cal L}[a x(t)+b y(t)]=aX(s)+bY(s), \;\;\;\;ROC \supseteq (R_x \cap R_y) \end{displaymath}


      ($A \supseteq B$ means set $A$ contains or equals set $B$, i.e., $A$ is a superset of $B$, or $B$ is a subset of $A$.) It is obvious that the ROC of the linear combination of $x(t)$ and $y(t)$ should be the intersection of their individual ROCs $R_x \cap R_y$, in which both $X(s)$ and $Y(s)$ exist. But also note that in some cases when zero-pole cancellation occurs, the ROC of the linear combination could be larger than $R_x \cap R_y$, as shown in the example below.
      Example: Let

      \begin{displaymath}X(s)={\cal L}[x(t)]=\frac{1}{s+1},\;\;\;\;Re[s]>-1,\;\;\;\;\;\;\;\;
Y(s)={\cal L}[y(t)]=\frac{1}{(s+1)(s+2)},\;\;\;\;Re[s]>-1 \end{displaymath}


      then

      \begin{displaymath}{\cal L}[x(t)-y(t)]=\frac{1}{s+1}-\frac{1}{(s+1)(s+2)}
=\frac{s+1}{(s+1)(s+2)}=\frac{1}{s+2}, \;\;\;\;Re[s]>-2 \end{displaymath}


      We see that the ROC of the combination is larger than the intersection of the ROCs of the two individual terms.
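    This cancellation is easy to confirm numerically by sampling both expressions at a few points away from the poles (the sample points are arbitrary):

```python
# Sample both sides at a few points away from the poles; the difference
# X(s) - Y(s) should equal 1/(s+2) everywhere.
for s in (0.0, 2 + 1j, -1.5 + 0.5j):
    combined = 1/(s + 1) - 1/((s + 1) * (s + 2))
    assert abs(combined - 1/(s + 2)) < 1e-12
print("pole-zero cancellation verified")
```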
    • Time Shifting

      \begin{displaymath}{\cal L}[x(t-t_0)]=e^{-t_0s} X(s),\;\;\;\;ROC=R_x \end{displaymath}


    • Shifting in s-Domain

      \begin{displaymath}{\cal L}[e^{s_0t}x(t)]=X(s-s_0),\;\;\;\;ROC=R_x+Re[s_0] \end{displaymath}


      Note that the ROC is shifted by $s_0$, i.e., it is shifted vertically by $Im[s_0]$ (with no effect on the ROC) and horizontally by $Re[s_0]$.
    • Time Scaling

      \begin{displaymath}{\cal L}[x(at)]=\frac{1}{\vert a\vert} X(\frac{s}{a}),\;\;\;\;ROC=aR_x \end{displaymath}


      Note that the ROC is horizontally scaled by $a$, which could be either positive ($a>0$) or negative ($a<0$), in which case both the signal $x(t)$ and the ROC of its Laplace transform are horizontally flipped.
    • Conjugation

      \begin{displaymath}{\cal L}[x^*(t)]=X^*(s^*), \;\;\;\;\;ROC=R_x \end{displaymath}


      Proof: 

      \begin{displaymath}X^*(s^*)=[\int_{-\infty}^\infty x(t)e^{-s^*t} dt ]^*
=\int_{-\infty}^\infty x^*(t)e^{-st} dt={\cal L}[x^*(t)] \end{displaymath}


    • Convolution

      \begin{displaymath}{\cal L}[x(t)*y(t)]=X(s) Y(s), \;\;\;\;ROC \supseteq (R_x \cap R_y) \end{displaymath}


      Note that the ROC of the convolution could be larger than the intersection of $R_x$ and $R_y$, due to possible pole-zero cancellation caused by the convolution, similar to the linearity property.
      Example: Assume

      \begin{displaymath}X(s)={\cal L}[x(t)]=\frac{s+1}{s+2},\;\;\;Re[s]>-2\;\;\;\mbox{and}\;\;\;\;
Y(s)={\cal L}[y(t)]=\frac{s+2}{s+1},\;\;\;Re[s]>-1 \end{displaymath}


      then

      \begin{displaymath}{\cal L}[x(t)*y(t)]=X(s) Y(s)=1,\;\;\;\;\mbox{ROC is the entire s-plane} \end{displaymath}


    • Differentiation in Time Domain

      \begin{displaymath}{\cal L}[\frac{d}{dt}x(t)]=sX(s), \;\;\;\;\;ROC \supseteq R_x \end{displaymath}


      This can be proven by differentiating the inverse Laplace transform:

      \begin{displaymath}
\frac{d}{dt}x(t)=\frac{1}{j2\pi}\int_{\sigma-j\infty}^{\sigma+j\infty}X(s)\frac{d}{dt}e^{st}ds
=\frac{1}{j2\pi}\int_{\sigma-j\infty}^{\sigma+j\infty}sX(s)e^{st}ds
\end{displaymath}


      In general, we have

      \begin{displaymath}{\cal L}[ \frac{d^n}{dt^n}x(t)]=s^n X(s) \end{displaymath}


      Again, multiplying $X(s)$ by $s$ may cause pole-zero cancellation, and therefore the resulting ROC may be larger than $R_x$.
      Example: Given

      \begin{displaymath}{\cal L}[u(t)]=1/s,\;\;\;\; Re[s]>0 \end{displaymath}


      we have:

      \begin{displaymath}{\cal L}[\frac{d}{dt} u(t)]={\cal L}[\delta(t)]=1,\;\;\;\;\;
{\cal L}[\frac{d^n}{dt^n}\delta(t)]=s^n, \;\;\;\;\;\;\;
\mbox{ROC is the entire s-plane} \end{displaymath}


    • Differentiation in s-Domain

      \begin{displaymath}{\cal L}[t x(t)]=-\frac{d}{ds}X(s),\;\;\;\; ROC=R_x \end{displaymath}


      This can be proven by differentiating the Laplace transform:

      \begin{displaymath}\frac{d}{ds}X(s)=\int_{-\infty}^\infty x(t) \frac{d}{ds} e^{-st} dt
=\int_{-\infty}^\infty (-t) x(t) e^{-st} dt \end{displaymath}


      Repeating this process, we get

      \begin{displaymath}{\cal L}[t^n x(t)]=(-1)^n\frac{d^n}{ds^n}X(s),\;\;\;\; ROC=R_x \end{displaymath}


    • Integration in Time Domain

      \begin{displaymath}{\cal L}[\int_{-\infty}^t x(\tau) d\tau ]=\frac{X(s)}{s},
\;\;\;\;ROC \supseteq (R_x\cap \{Re[s]>0\}) \end{displaymath}


      This can be proven by realizing that

      \begin{displaymath}x(t)*u(t)=\int_{-\infty}^\infty x(\tau) u(t-\tau) d\tau
=\int_{-\infty}^t x(\tau) d\tau \end{displaymath}


      and therefore by convolution property we have

      \begin{displaymath}{\cal L}[x(t)*u(t)]=X(s)\frac{1}{s} \end{displaymath}


      Also note that as the ROC of ${\cal L}[u(t)]=1/s$ is the right half plane $Re[s]>0$, the ROC of $X(s)/s$ is the intersection of the two individual ROCs $R_x \cap \{Re[s]>0\}$, except if pole-zero cancellation occurs (e.g., when $x(t)=d\delta(t)/dt$ with $X(s)=s$), in which case the ROC is the entire s-plane.
    • Laplace Transform of Typical Signals

    • $\delta(t)$ and $\delta(t-\tau)$

      \begin{displaymath}{\cal L}[\delta(t)]=\int_{-\infty}^\infty \delta(t)e^{-st}dt=e^0=1,
\;\;\;\;\mbox{all $s$} \end{displaymath}


      Moreover, due to time shifting property, we have

      \begin{displaymath}{\cal L}[\delta(t-\tau)]=e^{-s\tau},\;\;\;\;\mbox{all $s$} \end{displaymath}


    • $u(t)$, $t\;u(t)$, $t^n\;u(t)$
      Due to the property of time domain integration, we have

      \begin{displaymath}{\cal L}[u(t)]={\cal L}[\int_{-\infty}^t \delta(\tau) d\tau]=\frac{1}{s},
\;\;\;\;Re[s]>0 \end{displaymath}


      Applying the s-domain differentiation property to the above, we have

      \begin{displaymath}{\cal L}[tu(t)]=-\frac{d}{ds}[\frac{1}{s}]=\frac{1}{s^2},
\;\;\;\;Re[s]>0 \end{displaymath}


      and in general

      \begin{displaymath}{\cal L}[t^n u(t)]=\frac{n!}{s^{n+1}},\;\;\;\;Re[s]>0 \end{displaymath}
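      These transforms can be sanity-checked by numerical integration. A short sketch (the sample point $s=2$, step size, and truncation are arbitrary choices):

```python
import math

# Riemann-sum check of L[t^n u(t)] = n!/s^{n+1} at a real sample point.
s, dt, T = 2.0, 1e-3, 40.0
for n in (0, 1, 2, 3):
    integral = sum((k*dt)**n * math.exp(-s*k*dt)
                   for k in range(int(T / dt))) * dt
    assert abs(integral - math.factorial(n) / s**(n + 1)) < 1e-2
print("checked n = 0, 1, 2, 3")
```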


    • $e^{-at}u(t)$, $te^{-at}u(t)$
      Applying the s-domain shifting property to

      \begin{displaymath}{\cal L}[u(t)]=\frac{1}{s},\;\;\;\;Re[s]>0 \end{displaymath}


      we have

      \begin{displaymath}{\cal L}[e^{-at}u(t)]=\frac{1}{s+a},\;\;\;\;Re[s]>-a \end{displaymath}


      Applying the same property to

      \begin{displaymath}{\cal L}[t^n u(t)]=\frac{n!}{s^{n+1}},\;\;\;\;Re[s]>0 \end{displaymath}


      we have

      \begin{displaymath}{\cal L}[t^n e^{-at}u(t)]=\frac{n!}{(s+a)^{n+1}},\;\;\;\;Re[s]>-a \end{displaymath}


    • $e^{-j\omega_0 t}u(t)$, $sin(\omega_0 t)u(t)$, $cos(\omega_0 t)u(t)$
      Replacing $a$ in the known transform

      \begin{displaymath}{\cal L}[e^{-at}u(t)]=\frac{1}{s+a},\;\;\;\;Re[s]>-Re[a] \end{displaymath}


      by $a=\pm j\omega_0$, we get

      \begin{displaymath}{\cal L}[e^{-j\omega_0 t}u(t)]=\frac{1}{s+j\omega_0},\;\;\;Re[s]>0
\;\;\;\mbox{and}\;\;\;
{\cal L}[e^{j\omega_0 t}u(t)]=\frac{1}{s-j\omega_0},\;\;\;\;Re[s]>0 \end{displaymath}


      and therefore

      \begin{displaymath}{\cal L}[cos(\omega_0 t)u(t)]=\frac{1}{2}{\cal L}[e^{j\omega_0 t}u(t)+e^{-j\omega_0 t}u(t)]
=\frac{1}{2}[\frac{1}{s-j\omega_0}+\frac{1}{s+j\omega_0}]=\frac{s}{s^2+\omega_0^2} \end{displaymath}


      and

      \begin{displaymath}{\cal L}[sin(\omega_0 t)u(t)]=\frac{1}{2j}{\cal L}[e^{j\omega_0 t}u(t)-e^{-j\omega_0 t}u(t)]
=\frac{1}{2j}[\frac{1}{s-j\omega_0}-\frac{1}{s+j\omega_0}]=\frac{\omega_0}{s^2+\omega_0^2} \end{displaymath}


    • $t\;cos(\omega_0 t)u(t)$, $t\;sin(\omega_0 t)u(t)$
      Replacing $a$ in the known transform

      \begin{displaymath}{\cal L}[te^{-at}u(t)]=\frac{1}{(s+a)^2},\;\;\;Re[s]>-a \end{displaymath}


      by $a=\pm j\omega_0$, we get

      \begin{displaymath}{\cal L}[te^{-j\omega_0 t}u(t)]=\frac{1}{(s+j\omega_0)^2},\;\;\;Re[s]>0,
\;\;\;\;{\cal L}[te^{j\omega_0 t}u(t)]=\frac{1}{(s-j\omega_0)^2},
\;\;\;Re[s]>0 \end{displaymath}


      Furthermore, we have

      \begin{displaymath}
{\cal L}[t\;cos(\omega_0 t) u(t)]=\frac{1}{2}{\cal L}[t\;(e^{j\omega_0 t}+e^{-j\omega_0 t})u(t)]
=\frac{1}{2}[\frac{1}{(s-j\omega_0)^2}+\frac{1}{(s+j\omega_0)^2}]
=\frac{s^2-\omega_0^2}{(s^2+\omega_0^2)^2}
\end{displaymath}


      and

      \begin{displaymath}
{\cal L}[t\;sin(\omega_0 t) u(t)]=\frac{1}{2j}{\cal L}[t\;(e^{j\omega_0 t}-e^{-j\omega_0 t})u(t)]
=\frac{1}{2j}[\frac{1}{(s-j\omega_0)^2}-\frac{1}{(s+j\omega_0)^2}]
=\frac{2s\omega_0}{(s^2+\omega_0^2)^2}
\end{displaymath}


    • $e^{-at}cos(\omega_0 t) u(t)$, $e^{-at}sin(\omega_0 t) u(t)$
      Applying the s-domain shifting property to

      \begin{displaymath}{\cal L}[cos(\omega_0 t)u(t)]=\frac{s}{s^2+\omega_0^2} \end{displaymath}


      and

      \begin{displaymath}{\cal L}[sin(\omega_0 t)u(t)]=\frac{\omega_0}{s^2+\omega_0^2} \end{displaymath}


      we get, respectively

      \begin{displaymath}{\cal L}[e^{-at}cos(\omega_0 t)u(t)]=\frac{s+a}{(s+a)^2+\omega_0^2} \end{displaymath}


      and

      \begin{displaymath}{\cal L}[e^{-at}sin(\omega_0 t)u(t)]=\frac{\omega_0}{(s+a)^2+\omega_0^2} \end{displaymath}
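    As a closing check on this table of typical signals, the damped-cosine pair can be verified numerically (the sample values $a=1$, $\omega_0=3$, $s=2$ are arbitrary choices with $Re[s]>-a$):

```python
import math

# Riemann-sum check of L[e^{-at} cos(w0 t) u(t)] = (s+a)/((s+a)^2 + w0^2).
a, w0, s = 1.0, 3.0, 2.0
dt, T = 1e-4, 30.0
X_num = sum(math.exp(-a*k*dt) * math.cos(w0*k*dt) * math.exp(-s*k*dt)
            for k in range(int(T / dt))) * dt
X_closed = (s + a) / ((s + a)**2 + w0**2)
assert abs(X_num - X_closed) < 1e-3
print("damped-cosine transform verified")
```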
    • Representation of LTI Systems by Laplace Transform

    Due to its convolution property, the Laplace transform is a powerful tool for analyzing LTI systems: 
    \begin{displaymath}y(t)=h(t)*x(t) \stackrel{{\cal L}}{\longleftrightarrow} Y(s)=H(s)X(s) \end{displaymath}

    Also, if an LTI system can be described by a linear constant coefficient differential equation (LCCDE), the Laplace transform can convert the differential equation to an algebraic equation due to the time derivative property: 
    \begin{displaymath}{\cal L}[ \frac{d^n}{dt^n}x(t)]=s^n X(s) \end{displaymath}

    We first consider how an LTI system can be represented in the Laplace domain.
    • Causality of LTI systems
      An LTI system is causal if its output $y(t)$ depends only on the current and past input $x(t)$ (but not the future). Assuming the system is initially at rest with zero output $y(t)\bigg\vert _{t<0}=0$, then its response $y(t)=h(t)$ to an impulse $x(t)=\delta(t)$ applied at $t=0$ is zero for $t<0$, i.e., $h(t)=h(t) u(t)$. Its response to a general input $x(t)$ is:

      \begin{displaymath}y(t)=h(t)*x(t)=\int_{-\infty}^\infty h(\tau) x(t-\tau) d\tau
=\int_0^\infty h(\tau) x(t-\tau) d\tau
\end{displaymath}


      Due to the properties of the ROC, we have: if an LTI system is causal, then the ROC of $H(s)$ is a right-sided half plane. In particular, if $H(s)$ is rational, $H(s)=N(s)/D(s)$, then the system is causal if and only if its ROC is the half plane to the right of the rightmost pole and the order of the numerator $N(s)$ is no greater than that of the denominator $D(s)$, so that the ROC is a right-sided half plane without any poles (even at $s \rightarrow \infty$).
      Example 0: Given the impulse response $h(t)=\delta(t\pm 1)$ of an LTI system, find $H(s)$:

      \begin{displaymath}H(s)={\cal L}[\delta(t\pm 1)]=\int_{-\infty}^\infty \delta(t\pm 1)e^{-st}dt = e^{\pm s} \end{displaymath}


      Consider each of the two cases:
      • When $h(t)=\delta(t+1)$, $H(s)=e^s$ can be considered as a special polynomial (Taylor series expansion):

        \begin{displaymath}H(s)=e^s=1+s+\frac{1}{2}s^2+\cdots+\frac{1}{n!}s^n+\cdots \end{displaymath}


        As this numerator polynomial has infinite order, greater than that of the denominator (zero), there is a pole at $s=\infty$; the ROC is not a right-sided half plane, and $h(t)$ is not causal.
      • When $h(t)=\delta(t-1)$, we have:

        \begin{displaymath}H(s)=e^{-s}=\frac{1}{e^s}=\frac{1}{1+s+\frac{1}{2}s^2+\cdots+\frac{1}{n!}s^n+\cdots} \end{displaymath}


        As the order of the denominator polynomial is infinite, greater than that of the numerator (zero), there is no pole at $s=\infty$; the ROC is a right-sided half plane, and $h(t)$ is causal.
    • Stability of LTI systems
      An LTI system is stable if its response to any bounded input is also bounded for all $t$:

      \begin{displaymath}\mbox{if}\;\;\vert x(t)\vert<B_x\;\;\;\mbox{then}\;\;\;\vert y(t)\vert<\infty \end{displaymath}


      As the output and input of an LTI system are related by convolution, we have:

      \begin{displaymath}y(t)=h(t)*x(t)=\int_{-\infty}^\infty h(\tau) x(t-\tau) d\tau<\infty \end{displaymath}


      and

      \begin{displaymath}\vert y(t)\vert = \vert\int_{-\infty}^\infty h(\tau) x(t-\tau) d\tau\vert
\le \int_{-\infty}^\infty \vert h(\tau)\vert\,\vert x(t-\tau)\vert d\tau
<B_x\;\int_{-\infty}^\infty \vert h(\tau)\vert d\tau <\infty \end{displaymath}


      which obviously requires:

      \begin{displaymath}
\int_{-\infty}^\infty \vert h(\tau)\vert d\tau <\infty
\end{displaymath}


      In other words, if the impulse response function $h(t)$ of an LTI system is absolutely integrable, then the system is stable. We can show that this condition is also necessary, i.e., the impulse response of any stable LTI system is absolutely integrable. Now we have:
      An LTI system is stable if and only if its impulse response is absolutely integrable, i.e., the frequency response function $H(j\omega)$ exists, i.e., the ROC of its transfer function $H(s)$ contains the $j\omega$-axis:

      \begin{displaymath}H(s)\bigg\vert _{s=j\omega}=H(j\omega)={\cal F}[h(t)] \end{displaymath}


    • Causal and stable LTI systems
      Combining the two properties above, we have:
      A causal LTI system with a rational transfer function $H(s)$ is stable if and only if all poles of $H(s)$ are in the left half of the s-plane, i.e., the real parts of all poles are negative:

      \begin{displaymath}Re[s_p]<0\;\;\;\;\;\;\;\mbox{(for all $s_p$)} \end{displaymath}
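      This criterion translates directly into code: given the pole locations of a rational transfer function of a causal system, stability is just a sign check on the real parts. A minimal sketch (the helper name `is_stable` and the pole lists are illustrative; the first list is the pole set from the zeros-and-poles Example 1 earlier):

```python
# Stability as a sign check on pole real parts (for a causal system with
# rational H(s)). Pole lists below are illustrative samples.
def is_stable(poles):
    return all(p.real < 0 for p in poles)

print(is_stable([-1 + 0j, -2 + 2j, -2 - 2j]))   # True: all in left half plane
print(is_stable([1 + 1j, -3 + 0j]))             # False: one pole has Re[s_p] > 0
```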


    Example 1: The transfer function of an LTI is 
    \begin{displaymath}H(s)=\frac{1}{a+s} \end{displaymath}

    As shown before, without specifying the ROC, this $H(s)$ could be the Laplace transform of one of the two possible time signals $h(t)$.
    • If the ROC is $Re[s]>-a$, then $h(t)=e^{-at}u(t)$ is causal: for $a>0$ the $j\omega$-axis is inside the ROC (stable); for $a<0$ it is outside (unstable).
    • If the ROC is $Re[s]<-a$, then $h(t)=-e^{-at}u(-t)$ is anti-causal: for $a>0$ the $j\omega$-axis is outside the ROC (unstable); for $a<0$ it is inside (stable).
    Example 2: The transfer function of an LTI is 
    \begin{displaymath}H(s)=\frac{e^{s\tau}}{s+1},\;\;\;\;\;\; Re[s]>-1 \end{displaymath}

    This is a time-shifted version of ${\cal L}[e^{-t}u(t)]=1/(s+1)$, and the corresponding impulse response is: 
    \begin{displaymath}h(t)=e^{-(t+\tau)}u(t+\tau) \end{displaymath}

    If $\tau>0$, then $h(t) \ne 0$ during the interval $-\tau < t <0$, so the system is not causal, although its ROC is a right half plane. This example serves as a counterexample showing that while all causal systems' ROCs are right half planes, it is not true that any right half plane ROC corresponds to a causal system. However, if $H(s)$ is rational, then the system is causal if and only if its ROC is a right half plane.
    Alternatively, as shown in Example 0, we have: 
    \begin{displaymath}e^{s\tau}=1+s\tau+\frac{1}{2}(s\tau)^2+\cdots \end{displaymath}

    Now $H(s)$ can still be considered as a rational function of $s$ with a numerator polynomial of order $M=\infty$, which is greater than that of the denominator $N=1$, i.e., $H(s)$ has a pole at $s=\infty$, so its ROC cannot be a right-sided half plane free of poles, and therefore the system is not causal. On the other hand, if $\tau<0$, then this polynomial appears in the denominator, there is no pole at $s=\infty$, the ROC is a right-sided half plane, and the system is causal.
    • LTI Systems Characterized by Differential Equations

    If an LTI system can be described by an LCCDE in time domain 
    \begin{displaymath}
\sum_{k=0}^N a_k \frac{d^k y(t)}{dt^k} =\sum_{k=0}^M b_k \frac{d^k x(t)}{dt^k}
\end{displaymath}

    then after taking Laplace transform of the LCCDE, it can be represented as an algebraic equation in the $s$ domain 
    \begin{displaymath}Y(s)[\sum_{k=0}^N a_k s^k]=X(s)[\sum_{k=0}^M b_k s^k] \end{displaymath}

    and its transfer function is rational 
    \begin{displaymath}
H(s)=\frac{Y(s)}{X(s)}=\frac{\sum_{k=0}^M b_k s^k}{\sum_{k=0}^N a_k s^k}
=\frac{b_M}{a_N}\;\frac{\prod_{k=1}^M (s-s_{z_k})}{\prod_{k=1}^N (s-s_{p_k})}
=K \; \frac{N(s)}{D(s)}
\end{displaymath}

    where $K=b_M/a_N$ is a constant, $s_{z_k}, (k=1,2, \cdots, M)$ are the zeros of $H(s)$ (roots of the numerator polynomial $N(s)$), and $s_{p_k}, (k=1,2, \cdots, N)$ are the poles of $H(s)$ (roots of the denominator polynomial $D(s)$). The LCCDE alone does not completely specify the relationship between $x(t)$ and $y(t)$, as additional information such as the initial conditions is needed. Similarly, the transfer function $H(s)$ does not completely specify the system. For example, the same $H(s)$ with different ROCs will represent different systems (e.g., causal or anti-causal).
    Example 1: A circuit consisting of an inductor $L$ and a resistor $R$, with input voltage $x(t)=v(t)$ applied to the two elements in series, can be described by an LCCDE: 
    \begin{displaymath}v(t)=v_R(t)+v_L(t)=R\;i(t)+L\frac{d}{dt} i(t) \end{displaymath}

    Taking Laplace transform of this equation, we get 
    \begin{displaymath}V(s)=V_R(s)+V_L(s)=RI(s)+LsI(s)=(R+sL) I(s) \end{displaymath}

    • If the output is the current $i(t)$ through the RL circuit, then the ratio between the output $I(s)$ and the input $V(s)$ is the admittance of the circuit:

      \begin{displaymath}H_I(s)=G(s)=\frac{1}{Z(s)}=\frac{I(s)}{V(s)}=\frac{1}{R+sL}
=\frac{1}{L}\;\frac{1}{s+1/\tau}\end{displaymath}


      where $\tau=L/R$, and $Z(s)=1/G(s)$ is the impedance of the circuit. In time domain, we have:

      \begin{displaymath}h_I(t)=\frac{1}{L} e^{-t/\tau} u(t) \end{displaymath}
    • If the output is the voltage across the resistor $v_R(t)$, then the transfer function of the system (a voltage divider) is

      \begin{displaymath}H_R(s)=\frac{V_R(s)}{V(s)}=\frac{R}{R+sL}=\frac{R/L}{R/L+s}=\frac{1/\tau}{s+1/\tau} \end{displaymath}


      where $\tau\stackrel{\triangle}{=}L/R > 0$. In time domain, the impulse response of the system is

      \begin{displaymath}h_R(t)=\frac{1}{\tau} e^{-t/\tau} u(t) \end{displaymath}


    • If the output is the voltage across the inductor $v_L(t)$, then the transfer function is

      \begin{displaymath}H_L(s)=\frac{V_L(s)}{V(s)}=\frac{sL}{R+sL}=\frac{s}{1/\tau+s}
=1-\frac{1/\tau}{s+1/\tau}=1-H_R(s) \end{displaymath}


      with impulse response in time domain:

      \begin{displaymath}h_L(t)=\delta(t)-\frac{1}{\tau} e^{-t/\tau} u(t) \end{displaymath}
    As the ROCs of both $H_R(s)$ and $H_L(s)$ are the same half plane to the right of the only pole $s_p=-1/\tau$ on the negative side of the real axis, the $j\omega$-axis is contained in ROC and the corresponding frequency response function exists: 
    \begin{displaymath}H_R(j\omega)=H_R(s)\bigg\vert _{s=j\omega}=\frac{1/\tau}{j\omega+1/\tau},\;\;\;\;\;\;
H_L(j\omega)=H_L(s)\bigg\vert _{s=j\omega}=\frac{j\omega}{j\omega+1/\tau} \end{displaymath}

    Example 2: A voltage $v(t)$ is applied as the input to a resistor $R$, a capacitor $C$ and an inductor $L$ connected in series. According to Kirchhoff's voltage law, the system can be described by a differential equation in the time domain: 
    \begin{displaymath}
v(t)=v_L(t)+v_R(t)+v_C(t)=L\frac{d}{dt}\;i(t)+R\;i(t)
+\frac{1}{C}\int_{-\infty}^t i(\tau)d\tau
\end{displaymath}

    or an algebraic equation in s-domain: 
    \begin{displaymath}V(s)=V_L(s)+V_R(s)+V_C(s)=[sL+R+\frac{1}{sC}]I(s) \end{displaymath}

    If the current $i(t)$ through the circuit is treated as the output, then the transfer function of the system is the reciprocal of the overall impedance $Z(s)$: 
    \begin{displaymath}H(s)=\frac{I(s)}{V(s)}=\frac{1}{Z(s)},\;\;\;\;\;\;
Z(s)=\frac{V(s)}{I(s)}=sL+R+\frac{1}{sC} \end{displaymath}

    where the overall impedance $Z(s)$ of the circuit is composed of the individual impedances of the three elements 
    \begin{displaymath}Z_L(s)=sL,\;\;\;\;Z_R(s)=R,\;\;\;\;\;Z_C(s)=\frac{1}{sC} \end{displaymath}

    resistor $R$: time domain $v=R\,i$; s-domain $V_R=IR$; impedance $Z_R=R$
    capacitor $C$: time domain $i=C\frac{dv}{dt}$; s-domain $V_C=I/sC$; impedance $Z_C=1/sC$
    inductor $L$: time domain $v=L\frac{di}{dt}$; s-domain $V_L=IsL$; impedance $Z_L=sL$
    If the output is the voltage across one of the three elements ($V_L$, $V_R$, or $V_C$), the transfer function $H(s)$ can be easily obtained by treating the series circuit as a voltage divider:
    • Output is voltage across the capacitor $v_C(t)$

    \begin{displaymath}H_C(s)=\frac{1/sC}{Ls+R+1/sC}=\frac{1/LC}{s^2+(R/L)s+(1/LC)} \end{displaymath}

    • Output is voltage across the resistor $v_R(t)$

    \begin{displaymath}H_R(s)=\frac{R}{Ls+R+1/sC}=\frac{(R/L)s}{s^2+(R/L)s+(1/LC)} \end{displaymath}

    • Output is voltage across the inductor $v_L(t)$

    \begin{displaymath}H_L(s)=\frac{sL}{Ls+R+1/sC}=\frac{s^2}{s^2+(R/L)s+(1/LC)} \end{displaymath}
    If we define 
    \begin{displaymath}\zeta \stackrel{\triangle}{=}\frac{R}{2}\sqrt{\frac{C}{L}},\;\;\;\;\;
\omega_n \stackrel{\triangle}{=}\frac{1}{\sqrt{LC}} \end{displaymath}

    the common denominator of the transfer functions can be written in standard (canonical) form 
    \begin{displaymath}s^2+(R/L)s+(1/LC)=s^2+2\zeta\omega_n s+\omega_n^2=(s-p_1)(s-p_2) \end{displaymath}

    with two roots 
    \begin{displaymath}p_{1,2}=(-\zeta \pm \sqrt{\zeta^2-1})\omega_n
=(-\zeta \pm j \sqrt{1-\zeta^2})\omega_n \end{displaymath}

    (the second expression applies in the underdamped case $\zeta < 1$, where the two poles form a complex conjugate pair)

    and the transfer functions above can be written in standard forms:
    Output across C:

    \begin{displaymath}H_C(s)=\frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2} \end{displaymath}


    with two poles $p_1, p_2$ and no zeros.
    Output across R:

    \begin{displaymath}H_R(s)=\frac{2\zeta\omega_n s}{s^2+2\zeta\omega_n s+\omega_n^2} \end{displaymath}


    with two poles $p_1, p_2$ and one zero at the origin.
    Output across L:

    \begin{displaymath}H_L(s)=\frac{s^2}{s^2+2\zeta\omega_n s+\omega_n^2}
\end{displaymath}


    with two poles $p_1, p_2$ and two repeated zeros at the origin.
    As will be discussed later, the magnitude and phase of the corresponding frequency response function $H(j\omega)$ can be qualitatively determined in the s-plane, and it turns out that the three transfer functions behave like low-pass, band-pass and high-pass filters, respectively. Moreover, when the magnitude of the common real part $-\zeta \omega_n$ of the two complex conjugate poles is small (i.e., $0<\zeta < 0.5$), there will be a narrow peak around $\omega=\omega_n$ in all three frequency responses.
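    The low-/band-/high-pass behavior can be checked numerically by evaluating the three canonical transfer functions on the $j\omega$-axis (a sketch; the values of $\zeta$ and $\omega_n$ are assumed for illustration):

```python
import numpy as np

zeta, wn = 0.2, 1.0                          # assumed values (underdamped)
den = [1, 2 * zeta * wn, wn**2]              # s^2 + 2*zeta*wn*s + wn^2

def gain(num, w):
    """|H(jw)| for H(s) = num(s)/den(s), given polynomial coefficient lists."""
    jw = 1j * np.asarray(w)
    return np.abs(np.polyval(num, jw) / np.polyval(den, jw))

g_C_low  = gain([wn**2], 0.01)               # H_C: passes low frequencies (~1)
g_R_mid  = gain([2 * zeta * wn, 0], wn)      # H_R: gain exactly 1 at w = wn
g_L_high = gain([1, 0, 0], 100.0)            # H_L: passes high frequencies (~1)
```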
    Example 3: System identification: find $h(t)$ and $H(s)$ of an LTI system, given the input $x(t)$ and output $y(t)$: 
    \begin{displaymath}\left\{ \begin{array}{l}
x(t)=e^{-3t}u(t) \\
y(t)=h(t)*x(t)=(e^{-t}-e^{-2t})u(t)\end{array} \right. \end{displaymath}

    In the s-domain, the input and output signals are 
    \begin{displaymath}X(s)=\frac{1}{s+3},\;\;\;\;\;\;R_X:\;\;Re[s]>-3 \end{displaymath}


    \begin{displaymath}Y(s)=\frac{1}{s+1}-\frac{1}{s+2}=\frac{1}{(s+1)(s+2)},\;\;\;\;\;R_Y:\;Re[s]>-1\end{displaymath}

    The transfer function can therefore be obtained 
    \begin{displaymath}H(s)=\frac{Y(s)}{X(s)}=\frac{s+3}{(s+1)(s+2)}=\frac{s+3}{s^2+3s+2}
=\frac{2}{s+1}-\frac{1}{s+2} \end{displaymath}
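    The partial fraction expansion above can be confirmed symbolically (a sketch using sympy):

```python
import sympy as sp

s = sp.symbols('s')
H = (s + 3) / ((s + 1) * (s + 2))

# Partial fraction expansion of H(s)
expanded = sp.apart(H, s)
```

Here `sp.apart` returns the sum of simple fractions $2/(s+1)-1/(s+2)$, matching the residues computed by hand.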

    This system $H(s)$ has two poles $p_1=-1$ and $p_2=-2$ and therefore there are three possible ROCs:
    • $R_H:\;\; Re[s]<-2$, $h(t)$ is left sided (anti-causal, unstable);
    • $R_H:\;\;-2<Re[s]<-1$, $h(t)$ is two sided (non-causal, unstable);
    • $R_H:\;\;Re[s]>-1$, $h(t)$ is right sided (causal, stable).
    We need to determine which of these ROCs is true for $H(s)$. As the ROC of a product is the intersection of the ROCs of the factors (without zero-pole cancellation): 
    \begin{displaymath}Y(s)=H(s)X(s),\;\;\;\;\;\;\;\;\;R_Y=R_H \; \bigcap \; R_X \end{displaymath}

    the ROC of $H(s)$ must be the third one above, the only choice whose intersection with $R_X$ yields $R_Y$, and we have: 
    \begin{displaymath}h(t)={\cal L}^{-1}\left[\frac{2}{s+1}-\frac{1}{s+2}\right]
=(2 e^{-t}-e^{-2t}) u(t) \end{displaymath}

    The equation for $H(s)$ above can be written as: 
    \begin{displaymath}Y(s)(s^2+3s+2)=X(s)(s+3) \end{displaymath}

    Its inverse Laplace transform is the LCCDE of the system: 
    \begin{displaymath}
\frac{d^2}{dt^2}y(t)+ 3\frac{d}{dt}y(t)+ 2y(t)=\frac{d}{dt}x(t)+ 3x(t)
\end{displaymath}
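    The identified impulse response can also be sanity-checked by direct numerical convolution, confirming that $h(t)*x(t)$ reproduces the given output (a sketch; the step size and time horizon are arbitrary choices):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)

x = np.exp(-3 * t)                        # x(t) = e^{-3t} u(t)
h = 2 * np.exp(-t) - np.exp(-2 * t)       # identified h(t) = (2e^{-t} - e^{-2t}) u(t)
y_expected = np.exp(-t) - np.exp(-2 * t)  # given y(t) = (e^{-t} - e^{-2t}) u(t)

# Riemann-sum approximation of the convolution integral h(t)*x(t)
y_num = np.convolve(h, x)[:len(t)] * dt
max_err = np.max(np.abs(y_num - y_expected))
```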

    • System Algebra and Block Diagram

    The Laplace transform converts many time-domain operations, such as differentiation, integration, convolution, and time shifting, into algebraic operations in the s-domain. Moreover, the behavior of complex systems composed of a set of interconnected LTI systems can also be easily analyzed in the s-domain. We first consider some simple interconnections of LTI systems. 
    • Parallel combination: If the system is composed of two LTI systems with $h_1(t)$ and $h_2(t)$ connected in parallel:
    [Figure blockdiagram2.gif: parallel combination of two LTI systems]
    \begin{displaymath}y(t)=h_1(t)*x(t)+h_2(t)*x(t)=[h_1(t)+h_2(t)]*x(t)=h(t)*x(t) \end{displaymath}


      where $h(t)$ is the overall impulse response: 
      \begin{displaymath}h(t)=h_1(t)+h_2(t),\;\;\;\;\;\mbox{or}\;\;\;\;\;\;H(s)=H_1(s)+H_2(s) \end{displaymath}


    • Series combination: If the system is composed of two LTI systems with $h_1(t)$ and $h_2(t)$ connected in series:
    [Figure blockdiagram1.gif: series combination of two LTI systems]
    \begin{displaymath}y(t)=h_2(t)*[h_1(t)*x(t)]=[h_2(t)*h_1(t)]*x(t)=h(t)*x(t) \end{displaymath}


      where $h(t)$ is the overall impulse response: 
      \begin{displaymath}h(t)=h_1(t)*h_2(t)=h_2(t)*h_1(t),\;\;\;\;\;\mbox{or}\;\;\;\;\;\;
H(s)=H_1(s)H_2(s)=H_2(s)H_1(s) \end{displaymath}
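      These combination rules reduce to polynomial arithmetic on the numerator and denominator coefficients; a minimal sketch with two assumed first-order subsystems $H_1(s)=1/(s+1)$ and $H_2(s)=1/(s+2)$:

```python
import numpy as np

n1, d1 = [1], [1, 1]   # H1(s) = 1/(s+1)
n2, d2 = [1], [1, 2]   # H2(s) = 1/(s+2)

# Series: H = H1*H2 -> multiply numerators and denominators
series_num = np.polymul(n1, n2)                                   # 1
series_den = np.polymul(d1, d2)                                   # s^2 + 3s + 2

# Parallel: H = H1 + H2 -> common denominator d1*d2
parallel_num = np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1)) # 2s + 3
parallel_den = np.polymul(d1, d2)
```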


    • Feedback system:
    [Figure blockdiagram3.gif: feedback system]
    This is a feedback system composed of an LTI system with $h_1(t)$ in the forward path and another LTI system $h_2(t)$ in the feedback path. Its output $y(t)$ can be found implicitly in the time domain: 
    \begin{displaymath}y(t)=h_1(t)*e(t)=h_1(t)*[x(t)+h_2(t)*y(t)] \end{displaymath}


    or in s-domain
    \begin{displaymath}Y(s)=H_1(s)E(s)=H_1(s)[X(s)+H_2(s)Y(s)] \end{displaymath}


    While it is difficult to solve the equation in the time domain to find an explicit expression for $h(t)$ such that $y(t)=h(t)*x(t)$, it is easy to solve the algebraic equation in the s-domain to find $Y(s)$: 
    \begin{displaymath}Y(s)[1-H_1(s)H_2(s)]=H_1(s) X(s) \end{displaymath}


    and the transfer function can be obtained
    \begin{displaymath}H(s)=\frac{Y(s)}{X(s)}=\frac{H_1(s)}{1-H_1(s)H_2(s)}
=\frac{H_1(s)}{1+[-H_2(s)H_1(s)]} \end{displaymath}


    The feedback could be either positive or negative. For the latter, there will be a negative sign in front of $h_2(t)$ and $H_2(s)$ of the feedback path so that $e(t)=x(t)-h_2(t)*y(t)$ and
    \begin{displaymath}H(s)=\frac{Y(s)}{X(s)}=\frac{H_1(s)}{1+H_1(s)H_2(s)} \end{displaymath}

    Example 1: A first order LTI system 
    \begin{displaymath}\frac{d}{dt}y(t)+3y(t)=\dot{y}(t)+3 y(t)=x(t),\;\;\;\;\mbox{or}\;\;\;\;\;
\dot{y}(t)=x(t)-3 y(t) \end{displaymath}

    which can be represented in the block diagram shown below:
    [Figure blockdiagram5.gif: block diagram of the first order system]
    Alternatively, the system can be described in s-domain by its transfer function: 
    \begin{displaymath}H(s)=\frac{Y(s)}{X(s)}=\frac{1}{s+3}=\frac{1/s}{1+3/s} \end{displaymath}

    Comparing this $H(s)$ with the transfer function of the feedback system, we see that a first order system can be represented as a feedback system with $H_1(s)=1/s$ (an integrator implementable by an operational amplifier) in the forward path, and $H_2(s)=3$ (a feedback coefficient) in the negative feedback path.
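    The same algebra can be checked with polynomial coefficients: closing a negative feedback loop around the integrator $H_1(s)=1/s$ with gain $H_2(s)=3$ recovers $H(s)=1/(s+3)$ (a minimal sketch):

```python
import numpy as np

n1, d1 = [1], [1, 0]   # forward path H1(s) = 1/s (integrator)
n2, d2 = [3], [1]      # negative feedback path H2(s) = 3

# Closed loop H = H1/(1 + H1*H2): num = n1*d2, den = d1*d2 + n1*n2
num = np.polymul(n1, d2)
den = np.polyadd(np.polymul(d1, d2), np.polymul(n1, n2))
# -> num = [1], den = [1, 3], i.e. H(s) = 1/(s+3)
```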
    Example 2: Consider a second order system with transfer function 
    \begin{displaymath}
H(s)=\frac{1}{s^2+3s+2}=\frac{1}{s+1}\;\frac{1}{s+2}=\frac{1}{s+1}-\frac{1}{s+2}
\end{displaymath}

    These three expressions of $H(s)$ correspond to three different block diagram representations of the system. The last two expressions are, respectively, the cascade and parallel forms composed of two sub-systems, and they can be easily implemented as shown below:
    [Figure blockdiagram7.gif: cascade and parallel form realizations]
    Alternatively, the first expression, a direct form, can also be used. To do so, we first consider a general $H(s)=Y(s)/X(s)=1/(s^2+as+b)$, i.e., 
    \begin{displaymath}s^2Y(s)+asY(s)+bY(s)=X(s),\;\;\;\;\;\mbox{or}\;\;\;\;\;\;
s^2Y(s)=X(s)-asY(s)-bY(s) \end{displaymath}

    Given $s^2Y(s)$, we can first obtain $sY(s)$ by an integrator $1/s$, and then obtain the output $Y(s)$ from $sY(s)$ by another integrator $1/s$. We see that this system can be represented as a feedback system with two negative feedback paths of $a=3$ from $sY(s)$ and $b=2$ from $Y(s)$.
    [Figure blockdiagram4.gif: direct form realization with two feedback paths]
    Example 3: A second order system with transfer function 
    \begin{displaymath}H(s)=\frac{cs^2+ds+e}{s^2+as+b}=\frac{1}{s^2+as+b}(cs^2+ds+e) \end{displaymath}

    This system can be represented as a cascade of two systems 
    \begin{displaymath}Z(s)=H_1(s)X(s)=\frac{1}{s^2+as+b}X(s) \end{displaymath}

    and 
    \begin{displaymath}Y(s)=H_2(s)Z(s)=(cs^2+ds+e)Z(s) \end{displaymath}

    The first system $H_1(s)$ can be implemented by two integrators with proper feedback paths as shown in the previous example, and the second system is a linear combination of $s^2Z(s)$, $sZ(s)$ and $Z(s)$, all of which are available along the forward path of the first system. The overall system can therefore be represented as shown below.
    [Figure blockdiagram6.gif: realization of the overall system]
    Obviously the block diagram of this example can be generalized to represent any system with a rational transfer function: 
    \begin{displaymath}
H(s)=\frac{\sum_{k=0}^M b_k s^k}{\sum_{k=0}^N a_k s^k}\;\;\;\;(M \le N)
\end{displaymath}

    If $M>N$, $H(s)$ can be separated into several terms (by long division) which can be individually implemented and then combined to generate the overall output $Y(s)$.
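    This direct-form (integrator chain) realization is what scipy's `tf2ss` produces for a rational transfer function; a sketch for the $H(s)=1/(s^2+3s+2)$ of Example 2:

```python
import numpy as np
from scipy import signal

num, den = [1], [1, 3, 2]          # H(s) = 1/(s^2 + 3s + 2)

# Controllable canonical (direct form) state-space realization:
# two integrators -> a 2x2 state matrix A
A, B, C, D = signal.tf2ss(num, den)

# Round-trip back to a transfer function to confirm the realization
num_back, den_back = signal.ss2tf(A, B, C, D)
```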

    • Initial and Final Value Theorems

    A right sided signal's initial value $x(0)\stackrel{\triangle}{=}
\lim_{t \rightarrow 0} x(t)$ and final value $x(\infty)\stackrel{\triangle}{=}
\lim_{t \rightarrow \infty} x(t)$ (if finite) can be found from its Laplace transform $X(s)$ by the following theorems:
    • Initial value theorem: 
      \begin{displaymath}x(0)=\lim_{s\rightarrow \infty}sX(s) \end{displaymath}

    • Final value theorem: 
      \begin{displaymath}x(\infty)=\lim_{s\rightarrow 0}sX(s) \end{displaymath}

    Proof: As $x(t)=x(t)u(t)=0$ for $t<0$, we have 
    \begin{displaymath}{\cal UL}\left[\frac{d}{dt}x(t)\right]=\int_0^\infty \left[\frac{d}{dt}x(t)\right] e^{-st} dt
=\int_0^\infty e^{-st}\,dx(t)
=x(t)e^{-st}\bigg\vert _0^\infty+s\int_0^\infty x(t) e^{-st}dt=sX(s)-x(0) \end{displaymath}

    • When $s\rightarrow 0$, the above equation becomes

      \begin{displaymath}\lim_{s\rightarrow 0} \int_0^\infty \left[\frac{d}{dt}x(t)\right] e^{-st} dt
=\int_0^\infty dx(t)=x(\infty)-x(0)=\lim_{s\rightarrow 0}[sX(s)-x(0)]
\end{displaymath}


      i.e.,

      \begin{displaymath}\lim_{s\rightarrow 0}sX(s)=x(\infty) \end{displaymath}
    • When $s \rightarrow \infty$, we have

      \begin{displaymath}\lim_{s\rightarrow \infty} \int_0^\infty \left[\frac{d}{dt}x(t)\right] e^{-st} dt
=0=\lim_{s\rightarrow \infty}[sX(s)-x(0)]
\end{displaymath}


      i.e.,

      \begin{displaymath}\lim_{s\rightarrow \infty}sX(s)=x(0) \end{displaymath}
    However, whether a given function $x(t)$ has a final value or not depends on the locations of the poles of its transform $X(s)$. Consider the following cases:
    • If there are poles in the right half of the s-plane, $x(t)$ contains exponentially growing terms and is therefore unbounded, so $x(\infty)$ does not exist.
    • If there are pairs of complex conjugate poles on the imaginary axis, $x(t)$ contains sinusoidal components and $x(\infty)$ is not defined.
    • If there are poles in the left half of the s-plane, $x(t)$ contains exponentially decaying terms that make no contribution to the final value.
    • Only when there is a single pole at the origin of the s-plane does $x(t)$ contain a constant (DC) component, which is the final value, the steady state of the signal.
    Based on the above observation, the final value theorem can also be obtained by taking the partial fraction expansion of the given transform $X(s)$
    \begin{displaymath}X(s)=\sum_{i=0}^n \frac{C_i}{s-p_i}=\frac{C_0}{s}+\sum_{i=1}^n \frac{C_i}{s-p_i} \end{displaymath}

    where the $p_i$'s are the poles (assumed simple), and $p_0=0$ by assumption. The corresponding signal in the time domain is: 
    \begin{displaymath}x(t)=\sum_{i=0}^n C_ie^{p_it}=C_0+\sum_{i=1}^n C_ie^{p_it},\;\;\;\;\;\;\;\;t>0 \end{displaymath}

    All terms except the first one represent exponentially decaying/growing or sinusoidal components of the signal. Multiplying both sides of the equation for $X(s)$ by $s$ and letting $s\rightarrow 0$, we get: 
    \begin{displaymath}\lim_{s\rightarrow 0} s X(s)=\lim_{s\rightarrow 0} \left[C_0+\sum_{i=1}^n \frac{s C_i}{s-p_i}\right]
=C_0 \end{displaymath}

    We see that all terms become zero except the first term $C_0$. If all poles $p_i,\;i=1,2,\cdots,n$, are in the left half of the s-plane, their corresponding signal components in the time domain decay to zero, leaving only the first term $C_0$, the final value $x(\infty)$.
    Example 1: 
    \begin{displaymath}X(s)=\frac{1}{s(s+2)} \end{displaymath}

    First find $x(t)$
    \begin{displaymath}x(t)={\cal L}^{-1}[X(s)]={\cal L}^{-1}\left[\frac{1}{2}\left(\frac{1}{s}-\frac{1}{s+2}\right)\right]
=\frac{1}{2}( 1-e^{-2t} )u(t) \end{displaymath}

    When $t \rightarrow \infty$, we get $x(\infty)=1/2$. Next we apply the final value theorem: 
    \begin{displaymath}x(\infty)=s X(s)\big\vert _{s=0}=\frac{1}{s+2}\bigg\vert _{s=0}=\frac{1}{2} \end{displaymath}
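    Example 1's final value can be checked symbolically with sympy (a minimal sketch):

```python
import sympy as sp

s = sp.symbols('s')
X = 1 / (s * (s + 2))

# Final value theorem: x(oo) = lim_{s->0} s X(s)
final_value = sp.limit(s * X, s, 0)          # -> 1/2

# Caution: the theorem produces a number even when it does not apply,
# as with the right-half-plane pole of Example 2 below
bogus = sp.limit(s / (s * (s - 2)), s, 0)    # -> -1/2, but x(t) is unbounded
```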

    Example 2: 
    \begin{displaymath}X(s)=\frac{1}{s(s-2)} \end{displaymath}

    According to the final value theorem, we have 
    \begin{displaymath}x(\infty)=s X(s)\big\vert _{s=0}=-\frac{1}{2} \end{displaymath}

    However, as the inverse Laplace transform 
    \begin{displaymath}x(t)={\cal L}^{-1}[ X(s) ]={\cal L}^{-1}\left[ \frac{1}{2}\left(\frac{1}{s-2}-\frac{1}{s}\right)\right]=
\frac{1}{2}[e^{2t}-1]u(t) \end{displaymath}

    is unbounded (the first term grows exponentially), so the final value does not exist; the theorem does not apply here because $X(s)$ has a pole at $s=2$ in the right half of the s-plane.
    The final value theorem can also be used to find the DC gain of the system, the ratio between the output and input in steady state when all transient components have decayed. Assuming the input is a unit step function $x(t)=u(t)$, so that $X(s)=1/s$, the final value of the output, i.e., its steady state, is the DC gain of the system: 
    \begin{displaymath}\mbox{DC gain}=\lim_{s\rightarrow 0} \left[s H(s) \frac{1}{s}\right]=\lim_{s\rightarrow 0}H(s) \end{displaymath}

    Example 3: 
    \begin{displaymath}H(s)=\frac{s+2}{s^2+2s+10} \end{displaymath}

    The DC gain, i.e., the steady state of the step response as $t \rightarrow \infty$, can be found as 
    \begin{displaymath}\lim_{s\rightarrow 0} H(s)=0.2 \end{displaymath}
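    A numerical cross-check (sketch): the unit-step response of this $H(s)$ should settle at the DC gain 0.2, since the poles $-1\pm 3j$ give decaying transients.

```python
import numpy as np
from scipy import signal

H = signal.TransferFunction([1, 2], [1, 2, 10])   # H(s) = (s+2)/(s^2+2s+10)

t = np.linspace(0, 15, 1500)       # long enough for e^{-t} transients to vanish
t_out, y = signal.step(H, T=t)

dc_gain = y[-1]                    # steady-state value of the step response
```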
