Table of Contents
                            Signals and Systems
	with MATLAB® Computing
		and Simulink® Modeling
		Fifth Edition
		Steven T. Karris
	Preface
	Chapter 2
		The Laplace Transformation
			This chapter begins with an introduction to the Laplace transformation, definitions, and properties of the Laplace transformation. The initial value and final value theorems are also discussed and proved. It continues with the derivation of the Laplac...
			2.1 Definition of the Laplace Transformation
				The two-sided or bilateral Laplace Transform pair is defined as
				(2.1)  F(s) = L{f(t)} = ∫ from −∞ to +∞ f(t) e^(−st) dt
				(2.2)  f(t) = L⁻¹{F(s)} = (1/(2πj)) ∫ from σ−j∞ to σ+j∞ F(s) e^(st) ds
				where F(s) denotes the Laplace transform of the time function f(t), L⁻¹{F(s)} denotes the Inverse Laplace transform, and s is a complex variable whose real part is σ and imaginary part ω, that is, s = σ + jω.
				In most problems, we are concerned with values of time t greater than some reference time, say t = t₀ = 0, and since the initial conditions are generally known, the two-sided Laplace transform pair of (2.1) and (2.2) simplifies to the unilateral or one-sided Lap...
				(2.3)  F(s) = L{f(t)} = ∫ from 0 to ∞ f(t) e^(−st) dt
				(2.4)  f(t) = L⁻¹{F(s)} = (1/(2πj)) ∫ from σ−j∞ to σ+j∞ F(s) e^(st) ds
				The Laplace Transform of (2.3) has meaning only if the integral converges (reaches a limit), that is, if
				(2.5)  |∫ from 0 to ∞ f(t) e^(−st) dt| < ∞
				To determine the conditions that will ensure us that the integral of (2.3) converges, we rewrite (2.5) as
				(2.6)  |∫ from 0 to ∞ f(t) e^(−σt) e^(−jωt) dt| < ∞
				The term e^(−jωt) in the integral of (2.6) has magnitude of unity, i.e., |e^(−jωt)| = 1, and thus the condition for convergence becomes
				(2.7)  |∫ from 0 to ∞ f(t) e^(−σt) dt| < ∞
				Fortunately, in most engineering applications the functions f(t) are of exponential order, that is, |f(t)| ≤ k e^(σ₀t) for some constants k and σ₀. Then, we can express (2.7) as,
				(2.8)  |∫ from 0 to ∞ f(t) e^(−σt) dt| < ∫ from 0 to ∞ k e^(σ₀t) e^(−σt) dt
				and we see that the integral on the right side of the inequality sign in (2.8) converges if σ > σ₀. Therefore, we conclude that if f(t) is of exponential order, L{f(t)} exists if
				(2.9)  Re{s} = σ > σ₀
				where Re{s} denotes the real part of the complex variable s.
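For a concrete check of condition (2.9): SymPy's laplace_transform returns the transform together with its abscissa of convergence σ₀. The Python sketch below is an added cross-check, not part of the original text, which uses MATLAB:

```python
from sympy import symbols, exp, laplace_transform

t = symbols('t', positive=True)   # time variable for the one-sided transform
s = symbols('s')                  # complex frequency variable

# f(t) = e^{2t} is of exponential order with sigma0 = 2, so its
# transform exists for Re{s} = sigma > 2.
F, sigma0, cond = laplace_transform(exp(2*t), t, s)
print(F)        # 1/(s - 2)
print(sigma0)   # 2, i.e., the transform converges for Re{s} > 2
```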
				Evaluation of the integral of (2.4) involves contour integration in the complex plane, and thus, it will not be attempted in this chapter. We will see in the next chapter that many Laplace transforms can be inverted with the use of a few standard pai...
				In our subsequent discussion, we will denote transformation from the time domain to the complex frequency domain, and vice versa, as
				(2.10)  f(t) ⇔ F(s)
			2.2 Properties and Theorems of the Laplace Transform
				The most common properties and theorems of the Laplace transform are presented in Subsections 2.2.1 through 2.2.13 below.
				2.2.1 Linearity Property
				2.2.2 Time Shifting Property
				2.2.3 Frequency Shifting Property
				2.2.4 Scaling Property
				2.2.5 Differentiation in Time Domain Property
				2.2.6 Differentiation in Complex Frequency Domain Property
				2.2.7 Integration in Time Domain Property
				2.2.8 Integration in Complex Frequency Domain Property
				2.2.9 Time Periodicity Property
				2.2.10 Initial Value Theorem
				2.2.11 Final Value Theorem
				2.2.12 Convolution in Time Domain Property
				2.2.13 Convolution in Complex Frequency Domain Property
			2.3 Laplace Transforms of Common Functions of Time
				In this section, we will derive the Laplace transform of common functions of time. They are presented in Subsections 2.3.1 through 2.3.11 below.
				2.3.1 Laplace Transform of the Unit Step Function
				2.3.2 Laplace Transform of the Ramp Function
				TABLE 2.1 Summary of Laplace Transform Properties and Theorems
				TABLE 2.2 Laplace Transform Pairs for Common Functions
			2.4 Laplace Transform of Common Waveforms
				In this section, we will present procedures for deriving the Laplace transform of common waveforms using the transform pairs of Tables 2.1 and 2.2. The derivations are described in Subsections 2.4.1 through 2.4.5 below.
				2.4.1 Laplace Transform of a Pulse
				2.4.2 Laplace Transform of a Linear Segment
				2.4.3 Laplace Transform of a Triangular Waveform
				2.4.4 Laplace Transform of a Rectangular Periodic Waveform
				2.4.5 Laplace Transform of a Half-Rectified Sine Waveform
			2.5 Using MATLAB for Finding the Laplace Transforms of Time Functions
				We can use the MATLAB function laplace to find the Laplace transform of a time function. For examples, please type
				help laplace
				at MATLAB’s command prompt.
				We will be using this function extensively in the subsequent chapters of this book.
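Outside MATLAB, the same lookups can be reproduced with SymPy's laplace_transform function. The sketch below is added for readers without the Symbolic Math Toolbox; the function names in it are SymPy's, not MATLAB's:

```python
from sympy import symbols, exp, sin, laplace_transform

t = symbols('t', positive=True)
s = symbols('s')

# A few common transform pairs, computed symbolically:
for f in (t*exp(-2*t), sin(3*t), t**2):
    F = laplace_transform(f, t, s, noconds=True)
    print(f, '->', F)   # 1/(s+2)^2, 3/(s^2+9), 2/s^3 respectively
```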
			2.6 Summary
				· The two-sided or bilateral Laplace Transform pair is defined as
				F(s) = L{f(t)} = ∫ from −∞ to +∞ f(t) e^(−st) dt
				f(t) = L⁻¹{F(s)} = (1/(2πj)) ∫ from σ−j∞ to σ+j∞ F(s) e^(st) ds
				where F(s) denotes the Laplace transform of the time function f(t), L⁻¹{F(s)} denotes the Inverse Laplace transform, and s is a complex variable whose real part is σ and imaginary part ω, that is, s = σ + jω.
				· The unilateral or one-sided Laplace transform is defined as
				F(s) = L{f(t)} = ∫ from 0 to ∞ f(t) e^(−st) dt
				· We denote transformation from the time domain to the complex frequency domain, and vice versa, as f(t) ⇔ F(s)
				· The linearity property states that c1 f1(t) + c2 f2(t) + … + cn fn(t) ⇔ c1 F1(s) + c2 F2(s) + … + cn Fn(s)
				· The time shifting property states that f(t − a) u0(t − a) ⇔ e^(−as) F(s)
				· The frequency shifting property states that e^(−at) f(t) ⇔ F(s + a)
				· The scaling property states that f(at) ⇔ (1/a) F(s/a)
				· The differentiation in time domain property states that f′(t) ⇔ s F(s) − f(0⁻)
				Also,
				f″(t) ⇔ s^2 F(s) − s f(0⁻) − f′(0⁻)
				and in general
				f^(n)(t) ⇔ s^n F(s) − s^(n−1) f(0⁻) − s^(n−2) f′(0⁻) − … − f^(n−1)(0⁻)
				where the terms f(0⁻), f′(0⁻), f″(0⁻), and so on, represent the initial conditions.
				· The differentiation in complex frequency domain property states that t f(t) ⇔ −dF(s)/ds
				and in general,
				t^n f(t) ⇔ (−1)^n d^n F(s)/ds^n
				· The integration in time domain property states that ∫ from −∞ to t f(τ) dτ ⇔ F(s)/s + f(0⁻)/s
				· The integration in complex frequency domain property states that f(t)/t ⇔ ∫ from s to ∞ F(s) ds
				provided that the limit lim (t→0) f(t)/t exists.
				· The time periodicity property states that, for f(t) periodic with period T,
				f(t + nT) ⇔ (∫ from 0 to T f(t) e^(−st) dt) / (1 − e^(−sT))
				· The initial value theorem states that lim (t→0) f(t) = lim (s→∞) s F(s) = f(0⁻)
				· The final value theorem states that lim (t→∞) f(t) = lim (s→0) s F(s) = f(∞)
				· Convolution in the time domain corresponds to multiplication in the complex frequency domain, that is, f1(t)*f2(t) ⇔ F1(s) F2(s)
				· Convolution in the complex frequency domain, divided by 2πj, corresponds to multiplication in the time domain. That is, f1(t) f2(t) ⇔ (1/(2πj)) F1(s)*F2(s)
				· The Laplace transform properties and theorems are summarized in Table 2.1, Page 2-13.
				· The Laplace transforms of some common functions of time are shown in Table 2.2, Page 2-22.
				· We can use the MATLAB function laplace to find the Laplace transform of a time function
			2.7 Exercises
				1. Derive the Laplace transform of the following time domain functions:
				a. b. c. d. e.
				2. Derive the Laplace transform of the following time domain functions:
				a. b. c. d. e.
				3. Derive the Laplace transform of the following time domain functions:
				a. b.
				c. d.
				e. Be careful with this! Comment and you may skip derivation.
				4. Derive the Laplace transform of the following time domain functions:
				a. b. c.
				d. e.
				5. Derive the Laplace transform of the following time domain functions:
				a. b. c.
				d. e.
				6. Derive the Laplace transform of the following time domain functions:
				a. b. c. d. e.
				7. Derive the Laplace transform of the following time domain functions:
				a. b. c. d. e.
				8. Derive the Laplace transform for the sawtooth waveform below.
				9. Derive the Laplace transform for the full-rectified waveform below.
				Write a simple MATLAB script that will produce the waveform above.
			2.8 Solutions to End-of-Chapter Exercises
				1. From the definition of the Laplace transform or from Table 2.2, Page 2-22, we obtain:
				a b. c. d. e.
				2. From the definition of the Laplace transform or from Table 2.2, Page 2-22, we obtain:
				a. b. c. d. e.
				3.
				a. From Table 2.2, Page 2-22, and the linearity property, we obtain
				b. and
				c. d. e.
				The answer for part (e) looks suspicious because and the Laplace transform is unilateral, that is, there is one-to-one correspondence between the time domain and the complex frequency domain. The fallacy with this procedure is that we assumed that if...
				syms s t; 2*laplace(sin(4*t)/cos(4*t))
				4. From (2.22), Page 2-6,
				a.
				b.
				c.
				d.
				e.
				and
				5.
				a.
				b.
				c.
				d.
				e.
				6.
				a.
				b.
				c.
				Thus,
				and
				d.
				e.
				7.
				a.
				but to find we must first show that the limit exists. Since , this condition is satisfied and thus . From tables of integrals, . Then, and the constant of integration is evaluated from the final value theorem. Thus,
				and
				b.
				From (a) above, and since , it follows that
				c.
				From (a) above and since , it follows that
				or
				d.
				, , and from tables of integrals,
				. Then, and the constant of integration is evaluated from the final value theorem. Thus,
				and using we obtain
				e.
				, , and from tables of integrals . Then, and the constant of integration is evaluated from the final value theorem. Thus,
				and using , we obtain
				8.
				This is a periodic waveform with period , and its Laplace transform is
				(1)
				and from (2.41), Page 2-14, and limits of integration to , we obtain
				Adding and subtracting in the last expression above, we obtain
				By substitution into (1) we obtain
				9.
				This is a periodic waveform with period and its Laplace transform is
				From tables of integrals,
				Then,
				The full-rectified waveform can be produced with the MATLAB script
				t=0:pi/16:4*pi; x=sin(t); plot(t,abs(x)); axis([0 4*pi 0 1])
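The time periodicity property used in Exercises 8 and 9 is easy to verify numerically: integrate one period of f(t)e^(−st) and divide by 1 − e^(−sT). The Python sketch below is an added cross-check, not from the book; it uses the standard closed form coth(πs/2)/(s^2 + 1) for the full-rectified sine |sin t|:

```python
import numpy as np

s = 1.0        # test value of (real) s
T = np.pi      # period of |sin t|

# One-period integral of f(t) e^{-st}, trapezoidal rule:
t = np.linspace(0.0, T, 20001)
y = np.abs(np.sin(t))*np.exp(-s*t)
I0 = np.sum(0.5*(y[1:] + y[:-1]))*(t[1] - t[0])

F_periodic = I0/(1.0 - np.exp(-s*T))                  # periodicity property
F_closed = (1.0/np.tanh(np.pi*s/2.0))/(s**2 + 1.0)    # coth(pi s/2)/(s^2 + 1)
print(F_periodic, F_closed)   # both approx 0.5452
```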
	Chapter 3
		The Inverse Laplace Transformation
			This chapter is a continuation of the Laplace transformation topic of the previous chapter and presents several methods of finding the Inverse Laplace Transformation. The partial fraction expansion method is explained thoroughly and it is illustrated ...
			3.1 The Inverse Laplace Transform Integral
				The Inverse Laplace Transform Integral was stated in the previous chapter; it is repeated here for convenience.
				(3.1)
			3.2 Partial Fraction Expansion
				Quite often the Laplace transform expressions are not in recognizable form, but in most cases they appear in a rational form of s, that is,
				(3.2)
				(3.3)
				(3.4)
				3.2.1 Distinct Poles
				3.2.2 Complex Poles
				3.2.3 Multiple (Repeated) Poles
			3.3 Case where F(s) is Improper Rational Function
				Our discussion thus far was based on the condition that F(s) is a proper rational function. However, if F(s) is an improper rational function, that is, if m ≥ n, we must first divide the numerator N(s) by the denominator D(s) to obtain an expression of the form
				(3.50)
				(3.51)
				(3.52)
				(3.53)
				(3.54)
			3.4 Alternate Method of Partial Fraction Expansion
				Partial fraction expansion can also be performed with the method of clearing the fractions, that is, making the denominators of both sides the same, then equating the numerators. As before, we assume that  is a proper rational function. If not, we fi...
				(3.55)
				(3.56)
				(3.57)
				(3.58)
				(3.59)
				(3.60)
				(3.61)
				(3.62)
				(3.63)
				(3.64)
				(3.65)
				(3.66)
				(3.67)
				(3.68)
				(3.69)
				(3.70)
				(3.71)
				(3.72)
			3.5 Summary
				· The Inverse Laplace Transform Integral, defined as
				f(t) = L⁻¹{F(s)} = (1/(2πj)) ∫ from σ−j∞ to σ+j∞ F(s) e^(st) ds
				is difficult to evaluate because it requires contour integration using complex variables theory.
				· For most engineering problems we can refer to Tables of Properties, and Common Laplace transform pairs to lookup the Inverse Laplace transform.
				· The partial fraction expansion method offers a convenient means of expressing Laplace transforms in a recognizable form from which we can obtain the equivalent time-domain functions.
				· If the highest power m of the numerator N(s) is less than the highest power n of the denominator D(s), i.e., m < n, F(s) is said to be expressed as a proper rational function. If m ≥ n, F(s) is an improper rational function.
				· The Laplace transform F(s) must be expressed as a proper rational function before applying the partial fraction expansion. If F(s) is an improper rational function, that is, if m ≥ n, we must first divide the numerator N(s) by the denominator D(s) to obtain an expressio...
				· In a proper rational function, the roots of the numerator N(s) are called the zeros of F(s) and the roots of the denominator D(s) are called the poles of F(s).
				· The partial fraction expansion method can be applied whether the poles of F(s) are distinct, complex conjugates, repeated, or a combination of these.
				· When F(s) is expressed as
				F(s) = r1/(s − p1) + r2/(s − p2) + … + rn/(s − pn)
				r1, r2, …, rn are called the residues and p1, p2, …, pn are the poles of F(s).
				· The residues and poles of a rational function of polynomials can be found easily using the MATLAB residue(a,b) function. The direct term is always empty (has no value) whenever F(s) is a proper rational function.
				· We can use the MATLAB factor(s) symbolic function to convert the denominator polynomial form of F(s) into a factored form.
				· We can use the MATLAB collect(s) and expand(s) symbolic functions to convert the denominator factored form of F(s) into a polynomial form.
				· The method of clearing the fractions is an alternate method of partial fraction expansion.
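The MATLAB residue(a,b) workflow described above has a close analogue in SciPy. The sketch below uses an example F(s) of my own, not one from the text, to expand F(s) = (s + 3)/(s^2 + 3s + 2):

```python
from scipy.signal import residue

num = [1, 3]      # numerator coefficients of F(s), highest power first
den = [1, 3, 2]   # denominator coefficients

r, p, k = residue(num, den)
print(r)   # residues
print(p)   # poles: -1 and -2
print(k)   # direct term: empty, since F(s) is a proper rational function
# Hence F(s) = 2/(s + 1) - 1/(s + 2), so f(t) = 2 e^{-t} - e^{-2t}.
```

As in MATLAB, the direct polynomial term k is empty whenever F(s) is a proper rational function.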
			3.6 Exercises
				1. Find the Inverse Laplace transform of the following:
				a. b. c. d. e.
				2. Find the Inverse Laplace transform of the following:
				a. b. c.
				d. e.
				3. Find the Inverse Laplace transform of the following:
				a. b. Hint:
				c. d. e.
				4. Use the Initial Value Theorem to find f(0) given that the Laplace transform of f(t) is
				Compare your answer with that of Exercise 3(c).
				5. It is known that the Laplace transform has two distinct poles, one at , the other at . It also has a single zero at , and we know that . Find and .
			3.7 Solutions to End-of-Chapter Exercises
				1.
				a. b. c.
				d.
				e.
				2.
				a.
				b.
				c. Using the MATLAB factor(s) function we obtain:
				syms s; factor(s^2+3*s+2), factor(s^3+5*s^2+10.5*s+9)
				ans = (s+2)*(s+1)
				ans = 1/2*(s+2)*(2*s^2+6*s+9)
				Then,
				d.
				e.
				3.
				a.
				b.
				c.
				d.
				e.
				4. The initial value theorem states that lim (t→0) f(t) = lim (s→∞) s F(s). Then,
				The value is the same as in the time domain expression that we found in Exercise 3(c).
				5. We are given that and . Then,
				Therefore,
				that is,
				and we observe that
	Chapter 4
		Circuit Analysis with Laplace Transforms
			This chapter consists of applications of the Laplace transform. Several examples are presented to illustrate how the Laplace transformation is applied to circuit analysis. Complex impedance, complex admittance, and transfer functions are defined.
			4.1 Circuit Transformation from Time to Complex Frequency
				In this section we will show the voltage-current relationships for the three elementary circuit networks, i.e., resistive, inductive, and capacitive in the time and complex frequency domains. They are shown in Subsections 4.1.1 through 4.1.3 below.
				4.1.1 Resistive Network Transformation
				4.1.2 Inductive Network Transformation
				4.1.3 Capacitive Network Transformation
			4.2 Complex Impedance Z(s)
				Consider the s-domain RLC series circuit of Figure 4.11, where the initial conditions are assumed to be zero.
				For this circuit, the sum R + sL + 1/(sC) represents the total opposition to current flow. Then,
				Vs(s) = (R + sL + 1/(sC)) I(s)
				and defining the ratio Vs(s)/I(s) as Z(s), we obtain
				Z(s) = Vs(s)/I(s) = R + sL + 1/(sC)
				and thus, the current can be found from the relation (4.16) below.
				(4.16)  I(s) = Vs(s)/Z(s)
				where
				Z(s) = R + sL + 1/(sC)
				We recall that s = σ + jω. Therefore, Z(s) is a complex quantity, and it is referred to as the complex input impedance of an s-domain RLC series circuit. In other words, Z(s) is the ratio of the voltage excitation Vs(s) to the current response I(s) under zero state (zero initial conditions).
				For the network of Figure 4.12, all values are in Ω (ohms). Find Z(s) using:
				a. nodal analysis
				b. successive combinations of series and parallel impedances
				Solution:
				a.
				We will first find , and we will compute using (4.15). We assign the voltage at node as shown in Figure 4.13.
				By nodal analysis,
				The current is now found as
				and thus,
				b.
				The impedance Z(s) can also be found by successive combinations of series and parallel impedances, as it is done with series and parallel resistances. For convenience, we denote the network devices as Z1, Z2, Z3, and Z4, shown in Figure 4.14.
				To find the equivalent impedance Z(s), looking to the right of terminals a and b, we start on the right side of the network and we proceed to the left combining impedances as we would combine resistances, where the symbol ∥ denotes parallel combination. Then,
				We observe that (4.19) is the same as (4.18).
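Series and parallel reductions of this kind can also be scripted symbolically. The SymPy sketch below is a generic resistor-inductor illustration of mine, not the network of Figure 4.14:

```python
from sympy import symbols, simplify

s, R, L = symbols('s R L', positive=True)

Z_R = R      # resistor
Z_L = s*L    # complex inductive impedance sL

# Parallel combination: Z_R || Z_L = (Z_R * Z_L)/(Z_R + Z_L)
Z = simplify(Z_R*Z_L/(Z_R + Z_L))
print(Z)   # L*R*s/(L*s + R)
```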
			4.3 Complex Admittance Y(s)
				Consider the s-domain GLC parallel circuit of Figure 4.15 where the initial conditions are zero.
				For the circuit of Figure 4.15,
				I(s) = (G + 1/(sL) + sC) V(s)
				Defining the ratio I(s)/V(s) as Y(s), we obtain
				Y(s) = I(s)/V(s) = G + 1/(sL) + sC = 1/Z(s)
				and thus the voltage can be found from
				V(s) = I(s)/Y(s)
				where
				Y(s) = G + 1/(sL) + sC
				We recall that s = σ + jω. Therefore, Y(s) is a complex quantity, and it is referred to as the complex input admittance of an s-domain GLC parallel circuit. In other words, Y(s) is the ratio of the current excitation I(s) to the voltage response V(s) under zero state (zero initial conditions).
				Compute Z(s) and Y(s) for the circuit of Figure 4.16. All values are in Ω (ohms). Verify your answers with MATLAB.
				Solution:
				It is convenient to represent the given circuit as shown in Figure 4.17.
				where
				Then,
				Check with MATLAB:
				syms s; % Define symbolic variable s
				z1 = 13*s + 8/s; z2 = 5*s + 10; z3 = 20 + 16/s; z = z1 + z2 * z3 / (z2+z3)
				z =
				13*s+8/s+(5*s+10)*(20+16/s)/(5*s+30+16/s)
				z10 = simplify(z)
				z10 =
				(65*s^4+490*s^3+528*s^2+400*s+128)/s/(5*s^2+30*s+16)
				pretty(z10)
				(65 s^4 + 490 s^3 + 528 s^2 + 400 s + 128) / (s (5 s^2 + 30 s + 16))
				The complex input admittance Y(s) is found by taking the reciprocal of Z(s), that is, Y(s) = 1/Z(s).
			4.4 Transfer Functions
				In an s-domain circuit, the ratio of the output voltage Vout(s) to the input voltage Vin(s) under zero state conditions is of great interest in network analysis. This ratio is referred to as the voltage transfer function and it is denoted as Gv(s), that is,
				Gv(s) = Vout(s)/Vin(s)
				Similarly, the ratio of the output current Iout(s) to the input current Iin(s) under zero state conditions is called the current transfer function denoted as Gi(s), that is,
				Gi(s) = Iout(s)/Iin(s)
				The current transfer function of (4.25) is rarely used; therefore, from now on, the transfer function will have the meaning of the voltage transfer function, i.e.,
				G(s) = Vout(s)/Vin(s)
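As a quick illustration of a voltage transfer function, consider a plain series R-C voltage divider with the output taken across the capacitor. This is a generic example added here, not a circuit from the text:

```python
from sympy import symbols, simplify

s, R, C = symbols('s R C', positive=True)

Z1 = R          # series resistor
Z2 = 1/(s*C)    # capacitor, output taken across it

# Voltage division: G(s) = Vout(s)/Vin(s) = Z2/(Z1 + Z2)
G = simplify(Z2/(Z1 + Z2))
print(G)   # 1/(C*R*s + 1), a first-order low-pass transfer function
```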
				Derive an expression for the transfer function for the circuit of Figure 4.18, where represents the internal resistance of the applied (source) voltage , and represents the resistance of the load that consists of , , and .
				Solution:
				No initial conditions are given, and even if they were, we would disregard them since the transfer function was defined as the ratio of the output voltage to the input voltage under zero initial conditions. The circuit is shown in Figure 4.19.
				The transfer function is readily found by application of the voltage division expression of the circuit of Figure 4.19. Thus,
				Therefore,
				Compute the transfer function G(s) for the circuit of Figure 4.20 in terms of the circuit constants R1, R2, R3, C1, and C2. Then, replace the complex variable s with jω, and the circuit constants with their numerical values, and plot the magnitude |Vout/Vin| versus radian frequency ω.
				Solution:
				The complex frequency domain equivalent circuit is shown in Figure 4.21.
				Next, we write nodal equations at nodes 1 and 2. At node 1,
				At node 2,
				Since the voltage at the inverting input of the op amp is zero (virtual ground), we express (4.29) as
				and by substitution of (4.30) into (4.28), rearranging, and collecting like terms, we obtain:
				or
				To simplify the denominator of (4.31), we use the MATLAB script below with the given values of the resistors and the capacitors.
				syms s; % Define symbolic variable s
				R1=2*10^5; R2=4*10^4; R3=5*10^4; C1=25*10^(-9); C2=10*10^(-9);...
				DEN=R1*((1/R1+1/R2+1/R3+s*C1)*(s*R3*C2)+1/R2); simplify(DEN)
				ans =
				1/200*s+188894659314785825/75557863725914323419136*s^2+5
				188894659314785825/75557863725914323419136 % Simplify coefficient of s^2
				ans =
				2.5000e-006
				1/200 % Simplify coefficient of s
				ans =
				0.0050
				Therefore,
				By substitution of s with jω we obtain
				We use MATLAB to plot the magnitude of (4.32) on a semilog scale with the following script:
				w=1:10:10000; Gs=-1./(2.5.*10.^(-6).*w.^2-5.*j.*10.^(-3).*w+5);...
				semilogx(w,abs(Gs)); xlabel('Radian Frequency w'); ylabel('|Vout/Vin|');...
				title('Magnitude Vout/Vin vs. Radian Frequency'); grid
				The plot is shown in Figure 4.22. We observe that the given op amp circuit is a second-order low-pass filter whose cutoff frequency () occurs at about .
			4.5 Using the Simulink Transfer Fcn Block
				The Simulink Transfer Fcn block shown above implements a transfer function where the input and the output can be expressed in transfer function form as
				Let us reconsider the active low-pass filter op amp circuit of Figure 4.21, Page 4-15 where we found that the transfer function is
				and for simplicity, let , and . By substitution into (4.34) we obtain
				Next, we let the input be the unit step function u0(t), and as we know from Chapter 2, u0(t) ⇔ 1/s. Therefore,
				To find , we perform partial fraction expansion, and for convenience, we use the MATLAB residue function as follows:
				num=-1; den=[1 3 1 0];[r p k]=residue(num,den)
				r =
				-0.1708
				1.1708
				-1.0000
				p =
				-2.6180
				-0.3820
				0
				k =
				[]
				Therefore,
				The plot for is obtained with the following MATLAB script, and it is shown in Figure 4.23.
				t=0:0.01:10; ft=-1+1.171.*exp(-0.382.*t)-0.171.*exp(-2.618.*t); plot(t,ft); grid
				The same plot can be obtained using the Simulink model of Figure 4.24, where in the Function Block Parameters dialog box for the Transfer Fcn block we enter for the numerator, and  for the denominator, and in the Function Block Parameters dialog box ...
			4.6 Summary
				· The Laplace transformation provides a convenient method of analyzing electric circuits since integrodifferential equations in the time domain are transformed to algebraic equations in the complex frequency domain.
				· In the complex frequency domain, the terms sL and 1/(sC) are called complex inductive impedance and complex capacitive impedance respectively. Likewise, the terms sC and 1/(sL) are called complex capacitive admittance and complex inductive admittance respectively.
				· The expression
				Z(s) = R + sL + 1/(sC)
				is a complex quantity, and it is referred to as the complex input impedance of an s-domain RLC series circuit.
				· In the s-domain, the current I(s) can be found from I(s) = Vs(s)/Z(s).
				· The expression
				Y(s) = G + 1/(sL) + sC
				is a complex quantity, and it is referred to as the complex input admittance of an s-domain GLC parallel circuit.
				· In the s-domain, the voltage V(s) can be found from V(s) = Is(s)/Y(s).
				· In an s-domain circuit, the ratio of the output voltage Vout(s) to the input voltage Vin(s) under zero state conditions is referred to as the voltage transfer function and it is denoted as G(s), that is, G(s) = Vout(s)/Vin(s).
			4.7 Exercises
				1. In the circuit below, switch has been closed for a long time, and opens at . Use the Laplace transform method to compute for .
				2. In the circuit below, switch has been closed for a long time, and opens at . Use the Laplace transform method to compute for .
				3. Use mesh analysis and the Laplace transform method, to compute and for the circuit below, given that and .
				4. For the circuit below,
				a. compute the admittance
				b. compute the value of when , and all initial conditions are zero.
				5. Derive the transfer functions for the networks (a) and (b) below.
				6. Derive the transfer functions for the networks (a) and (b) below.
				7. Derive the transfer functions for the networks (a) and (b) below.
				8. Derive the transfer function for the networks (a) and (b) below.
				9. Derive the transfer function for the network below. Using MATLAB, plot versus frequency in Hertz, on a semilog scale.
			4.8 Solutions to End-of-Chapter Exercises
				1. At t = 0⁻, the switch is closed, and the circuit is as shown below where the resistor is shorted out by the inductor.
				Then,
				and thus the initial condition has been established as
				For all t > 0, the t-domain and s-domain circuits are as shown below.
				From the circuit on the right side above we obtain
				2. At t = 0⁻, the switch is closed and the circuit is as shown below.
				Then,
				and
				Therefore, the initial condition is
				For all t > 0, the circuit is as shown below.
				Then,
				3. The circuit is shown below where , , and
				Then,
				and in matrix form
				Using the MATLAB script below, we obtain the values of the currents.
				Z=[z1+z2 -z2; -z2 z2+z3]; Vs=[1/s -2/s]'; Is=Z\Vs; fprintf(' \n');...
				disp('Is1 = '); pretty(Is(1)); disp('Is2 = '); pretty(Is(2))
				Is1 =
				(s^2 + 2 s - 1) / (2 s^3 + 9 s^2 + 6 s + 3)
				Is2 =
				- (4 s^2 + s + 1) / ((2 s^3 + 9 s^2 + 6 s + 3) conj(s))
				Therefore,
				(1)
				(2)
				We use MATLAB to express the denominators of (1) and (2) as a product of a linear and a quadratic term.
				p=[2 9 6 3]; r=roots(p); fprintf(' \n'); disp('root1 ='); disp(r(1));...
				disp('root2 ='); disp(r(2)); disp('root3 ='); disp(r(3)); disp('root2 + root3 ='); disp(r(2)+r(3));...
				disp('root2 * root3 ='); disp(r(2)*r(3))
				root1 =
				-3.8170
				root2 =
				-0.3415 + 0.5257i
				root3 =
				-0.3415 - 0.5257i
				root2 + root3 =
				-0.6830
				root2 * root3 =
				0.3930
				and with these values (1) is written as
				(3)
				Multiplying every term by the denominator and equating numerators we obtain
				Equating s^2, s, and constant terms, we obtain
				We will use MATLAB to find these residues.
				A=[1 1 0; 0.683 3.817 1; 0.393 0 3.817]; B=[1 2 -1]'; r=A\B; fprintf(' \n');...
				fprintf('r1 = %5.2f \t',r(1)); fprintf('r2 = %5.2f \t',r(2)); fprintf('r3 = %5.2f',r(3))
				r1 = 0.48 r2 = 0.52 r3 = -0.31
				By substitution of these values into (3) we obtain
				(4)
				By inspection, the Inverse Laplace of the first term on the right side of (4) is
				(5)
				The second term on the right side of (4) requires some manipulation. Therefore, we will use the MATLAB ilaplace(s) function to find the Inverse Laplace as shown below.
				syms s t
				IL=ilaplace((0.52*s-0.31)/(s^2+0.68*s+0.39));
				pretty(IL)
				- (1217/4900) 14^(1/2) exp(-(17/50) t) sin((7/50) 14^(1/2) t)
				+ (13/25) exp(-(17/50) t) cos((7/50) 14^(1/2) t)
				Thus,
				Next, we will find . We found earlier that
				and following the same procedure we obtain
				(6)
				Multiplying every term by the denominator and equating numerators we obtain
				Equating s^2, s, and constant terms, we obtain
				We will use MATLAB to find these residues.
				A=[1 1 0; 0.683 3.817 1; 0.393 0 3.817]; B=[-4 -1 -1]'; r=A\B; fprintf(' \n');...
				fprintf('r1 = %5.2f \t',r(1)); fprintf('r2 = %5.2f \t',r(2)); fprintf('r3 = %5.2f',r(3))
				r1 = -4.49 r2 = 0.49 r3 = 0.20
				By substitution of these values into (6) we obtain
				(7)
				By inspection, the Inverse Laplace of the first term on the right side of (7) is
				(8)
				The second term on the right side of (7) requires some manipulation. Therefore, we will use the MATLAB ilaplace(s) function to find the Inverse Laplace as shown below.
				syms s t
				IL=ilaplace((0.49*s+0.20)/(s^2+0.68*s+0.39)); pretty(IL)
				(167/9800) 14^(1/2) exp(-(17/50) t) sin((7/50) 14^(1/2) t)
				+ (49/100) exp(-(17/50) t) cos((7/50) 14^(1/2) t)
				Thus,
				4.
				a. Mesh 1:
				or
				(1)
				Mesh 2:
				(2)
				Addition of (1) and (2) yields
				or
				and thus
				b. With we obtain
				5.
				Network (a):
				and thus
				Network (b):
				and thus
				Both of these networks are first-order low-pass filters.
				6.
				Network (a):
				and
				Network (b):
				and
				Both of these networks are first-order high-pass filters.
				7.
				Network (a):
				and thus
				This network is a second-order band-pass filter.
				Network (b):
				and
				This network is a second-order band-elimination (band-reject) filter.
				8.
				Network (a):
				Let Z1 denote the input impedance and Zf the feedback impedance. For inverting op amps, Vout(s)/Vin(s) = −Zf(s)/Z1(s), and thus
				This network is a first-order active low-pass filter.
				Network (b):
				Let Z1 denote the input impedance and Zf the feedback impedance. For inverting op-amps, Vout(s)/Vin(s) = −Zf(s)/Z1(s), and thus
				This network is a first-order active high-pass filter.
				9.
				At Node 1:
				(1)
				At Node 2:
				and since , we express the last relation above as
				(2)
				At Node 3:
				(3)
				From (1)
				(4)
				From (2)
				and with (4)
				(5)
				By substitution of (4) and (5) into (3) we obtain
				and thus
				By substitution of the given values and after simplification we obtain
				We use the MATLAB script below to plot this function.
				w=1:10:10000; s=j.*w; Gs=7.83.*10.^7./(s.^2+1.77.*10.^4.*s+5.87.*10.^7);...
				semilogx(w,abs(Gs)); xlabel('Radian Frequency w'); ylabel('|Vout/Vin|');...
				title('Magnitude Vout/Vin vs. Radian Frequency'); grid
				The plot above indicates that this circuit is a second-order low-pass filter.
	Chapter 5
		State Variables and State Equations
			5.1 Expressing Differential Equations in State Equation Form
			5.2 Solution of Single State Equations
			5.3 The State Transition Matrix
			5.4 Computation of the State Transition Matrix
				5.4.1 Distinct Eigenvalues (Real or Complex)
				5.4.2 Multiple (Repeated) Eigenvalues
			5.5 Eigenvectors
			5.6 Circuit Analysis with State Variables
			5.7 Relationship between State Equations and Laplace Transform
			5.8 Summary
			5.9 Exercises
			5.10 Solutions to End-of-Chapter Exercises
	Chapter 6
		The Impulse Response and Convolution
			This chapter begins with the definition of the impulse response, that is, the response of a circuit that is subjected to the excitation of the impulse function. Then, it defines convolution and how it is applied to circuit analysis. Evaluation of the ...
			6.1 The Impulse Response in Time Domain
				In this section we will discuss the impulse response of a network, that is, the output (voltage or current) of a network when the input is the delta function. Of course, the output can be any voltage or current that we choose as the output. The compu...
				We learned in the previous chapter that the state equation
				(6.1)
				has the solution
				(6.2)
				Therefore, with initial condition x(0) = 0, and with the input u(t) = δ(t), the solution of (6.2) reduces to
				(6.3)
				Using the sifting property of the delta function, i.e.,
				(6.4)
				and denoting the impulse response as h(t), we obtain
				(6.5)
				where the unit step function u0(t) is included to indicate that this relation holds for t > 0.
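With the output written as y = Cx (my notation for this sketch), the impulse response of (6.5) can be evaluated numerically through a matrix exponential. The A, b, and C below are an arbitrary two-state example of mine, not a circuit from the text:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical state matrix
b = np.array([[0.0], [1.0]])               # input vector
C = np.array([[1.0, 0.0]])                 # output row vector

def h(t):
    """Impulse response h(t) = C e^{At} b, valid for t >= 0."""
    return (C @ expm(A*t) @ b).item()

# For this A, b, C the transfer function is 1/(s^2 + 3s + 2),
# so h(t) = e^{-t} - e^{-2t}:
print(h(1.0), np.exp(-1) - np.exp(-2))
```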
		Example 6.1
			Compute the impulse response h(t) of the series RC circuit of Figure 6.1 in terms of the constants R and C, where the response is considered to be the voltage across the capacitor, and vC(0⁻) = 0. Then, compute the current through the capacitor.
				Figure 6.1. Circuit for Example 6.1
			Solution:
				We assign currents iC(t) and iR(t) with the directions shown in Figure 6.2, and we apply KCL.
				Then,
				or
				(6.6)
				We assign the state variable
				Then,
				and (6.6) is written as
				or
				(6.7)
				Equation (6.7) has the form
				and as we found in (6.5),
				For this example,
				and
				Therefore,
				or
				(6.8)
				The current can now be computed from
				Thus,
				Using the sampling property of the delta function, we obtain
				(6.9)
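A quick numerical check of Example 6.1: the impulse response of the series RC circuit with the output across the capacitor is h(t) = (1/RC) e^(−t/RC) u0(t), which has unit area. The sample values of R and C below are my own:

```python
import numpy as np

R, Cap = 1.0e3, 1.0e-6    # 1 kOhm, 1 uF (illustrative values)
tau = R*Cap               # time constant RC = 1 ms

t = np.linspace(0.0, 20*tau, 200001)
h = (1.0/tau)*np.exp(-t/tau)    # impulse response of the series RC circuit

# Trapezoidal-rule area under h(t):
area = np.sum(0.5*(h[1:] + h[:-1]))*(t[1] - t[0])
print(area)   # approx 1.0
```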
		Example 6.2
			For the circuit of Figure 6.3, compute the impulse response h(t) given that the initial conditions are zero, that is, iL(0⁻) = 0 and vC(0⁻) = 0.
				Figure 6.3. Circuit for Example 6.2
			Solution:
				This is the same circuit as that of Example 5.10, Chapter 5, Page 5-22, where we found that
				and
				The impulse response is obtained from (6.5), Page 6-1, that is,
				then,
				(6.10)
				In Example 5.10, Chapter 5, Page 5-22, we defined
				and
				Then,
				or
				(6.11)
				Of course, this answer is not the same as that of Example 5.10, because the inputs and initial conditions were defined differently.
			6.2 Even and Odd Functions of Time
				A function f(t) is an even function of time if the following relation holds.
				(6.12)  f(−t) = f(t)
				that is, if in an even function we replace t with −t, the function f(t) does not change. Thus, polynomials with even exponents only, and with or without constants, are even functions. For instance, the cosine function is an even function because it can be wri...
				Other examples of even functions are shown in Figure 6.4.
				A function f(t) is an odd function of time if the following relation holds.
				(6.13)  f(−t) = −f(t)
				that is, if in an odd function we replace t with −t, we obtain the negative of the function f(t). Thus, polynomials with odd exponents only, and no constants, are odd functions. For instance, the sine function is an odd function because it can be written as t...
				Other examples of odd functions are shown in Figure 6.5.
				We observe that for odd functions, f(0) = 0. However, the reverse is not always true; that is, if f(0) = 0, we should not conclude that f(t) is an odd function. An example of this is the even function f(t) = t^2 in Figure 6.4.
				The product of two even or two odd functions is an even function, and the product of an even function times an odd function, is an odd function.
				Henceforth, we will denote an even function with the subscript e, and an odd function with the subscript o. Thus, f_e(t) and f_o(t) will be used to represent even and odd functions of time respectively.
				For an even function f_e(t),
				∫[-T, T] f_e(t) dt = 2 ∫[0, T] f_e(t) dt (6.14)
				and for an odd function f_o(t),
				∫[-T, T] f_o(t) dt = 0 (6.15)
				A function f(t) that is neither even nor odd can be expressed as
				f_e(t) = (1/2)[f(t) + f(-t)] (6.16)
				or as
				f_o(t) = (1/2)[f(t) - f(-t)] (6.17)
				Addition of (6.16) with (6.17) yields
				f(t) = f_e(t) + f_o(t) (6.18)
				that is, any function of time can be expressed as the sum of an even and an odd function.
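				Relation (6.18) is easy to verify numerically. The sketch below (Python/NumPy rather than the book's MATLAB; f(t) = e^t is chosen arbitrarily as a function that is neither even nor odd) forms the even and odd parts per (6.16) and (6.17) and confirms that their sum reproduces f(t).

```python
import numpy as np

t = np.linspace(-3.0, 3.0, 601)        # grid symmetric about t = 0
f = np.exp(t)                          # neither even nor odd

fe = 0.5 * (f + np.exp(-t))            # even part (1/2)[f(t) + f(-t)] = cosh(t)
fo = 0.5 * (f - np.exp(-t))            # odd part  (1/2)[f(t) - f(-t)] = sinh(t)

assert np.allclose(fe + fo, f)         # f(t) = fe(t) + fo(t)
assert np.allclose(fe, fe[::-1])       # fe(-t) = fe(t)
assert np.allclose(fo, -fo[::-1])      # fo(-t) = -fo(t)
assert np.allclose(fe, np.cosh(t)) and np.allclose(fo, np.sinh(t))
```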
		Example 6.3
			Determine whether the delta function is an even or an odd function of time.
			Solution:
				Let f(t) be an arbitrary function of time that is continuous at t = t0. Then, by the sifting property of the delta function,
				and for t0 = 0,
				Also,
				and
				As stated earlier, an odd function evaluated at zero is zero, that is, f_o(0) = 0. Therefore, from the last relation above,
				(6.19)
				and this indicates that the product f_o(t)δ(t) is an odd function of t. Then, since f_o(t) is odd, it follows that δ(t) must be an even function of t for (6.19) to hold.
			6.3 Convolution
				Consider a network whose input is the delta function δ(t), and whose output is the impulse response h(t). We can represent the input-output relationship as the block diagram shown below.
				In general,
				Next, we let u(t) be any input whose value at t = τ is u(τ). Then,
				Multiplying both sides by the constant u(τ), integrating from -∞ to +∞, and making use of the fact that the delta function is even, i.e., δ(t - τ) = δ(τ - t), we obtain
				Using the sifting property of the delta function, we find that the second integral on the left side reduces to u(t), and thus
				The integral
				y(t) = ∫[-∞, +∞] u(τ)h(t - τ) dτ = ∫[-∞, +∞] u(t - τ)h(τ) dτ (6.20)
				is known as the convolution integral; it states that if we know the impulse response of a network, we can compute the response to any input u(t) using either of the integrals of (6.20).
				The convolution integral is usually represented as u(t)*h(t) or h(t)*u(t), where the asterisk (*) denotes convolution.
				In Section 6.1, we found the expression for the impulse response h(t) for a single input. Therefore, if we know h(t), we can use the convolution integral to compute the response to any input u(t) using the relation
				(6.21)
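				The derivation above says that the delta function is the identity element of convolution: convolving any input with δ(t) returns the input itself. A discrete sketch (Python/NumPy; the narrow unit-area pulse approximates δ(t)) illustrates this, together with the commutativity used in (6.20).

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)
u = np.sin(2 * np.pi * t) + t            # an arbitrary input signal

delta = np.zeros_like(t)
delta[0] = 1.0 / dt                      # unit-area pulse approximating delta(t)

y = np.convolve(u, delta)[:t.size] * dt  # u(t) * delta(t)
assert np.allclose(y, u)                 # convolving with delta returns the input
assert np.allclose(np.convolve(delta, u)[:t.size] * dt, y)  # commutativity
```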
			6.4 Graphical Evaluation of the Convolution Integral
				The convolution integral is more conveniently evaluated by the graphical evaluation method. The procedure is best illustrated with the following examples.
		Example 6.4
			The signals h(t) and u(t) are as shown in Figure 6.6. Compute u(t)*h(t) using the graphical evaluation method.
				Figure 6.6. Signals for Example 6.4
			Solution:
				The convolution integral states that
				u(t)*h(t) = ∫[-∞, +∞] u(t - τ)h(τ) dτ (6.22)
				where τ is a dummy variable of integration, that is, u(τ) and h(τ) are considered to be the same as u(t) and h(t). We form u(t - τ) by first constructing the image of u(τ); this is shown as u(-τ) in Figure 6.7.
				Next, we form u(t - τ) by shifting u(-τ) to the right by some value t as shown in Figure 6.8.
				Now, evaluation of the convolution integral
				entails multiplication of u(t - τ) by h(τ) for each value of t, and computation of the area from -∞ to +∞. Figure 6.9 shows the product u(t - τ)h(τ) as point A moves to the right.
				We observe that at t = 0 the area under the product is zero. Shifting u(t - τ) to the right so that t > 0, we obtain the sketch of Figure 6.10 where the integral of the product is denoted by the shaded area, and it increases as point A moves further to the right.
				The maximum area is obtained when point A reaches t = 1 as shown in Figure 6.11.
				Using the convolution integral, we find that the area as a function of time is
				y(t) = t - t^2/2, 0 ≤ t ≤ 1 (6.23)
				Figure 6.12 shows how the area increases during the interval 0 ≤ t ≤ 1. This is not an exponential increase; it is the quadratic function in (6.23), and each point on the curve of Figure 6.12 represents the area under the convolution integral.
				Evaluating (6.23) at t = 1, we obtain
				y(1) = 1 - 1/2 = 1/2 (6.24)
				The plot for the interval 0 ≤ t ≤ 1 is shown in Figure 6.13.
				As we continue shifting u(t - τ) to the right, the area starts decreasing, and it becomes zero at t = 2, as shown in Figure 6.14.
				The plot of Figure 6.13 was obtained with the MATLAB script below.
				t1=0:0.01:1; x=t1-t1.^2./2; plot(t1,x); axis([0 1 0 0.5]); grid
				Using the convolution integral, we find that the area for the interval 1 ≤ t ≤ 2 is
				y(t) = t^2/2 - 2t + 2, 1 ≤ t ≤ 2 (6.25)
				Thus, for 1 ≤ t ≤ 2, the area decreases in accordance with (6.25).
				Evaluating (6.25) at t = 2, we find that y(2) = 0. For t > 2, the product is zero since there is no overlap between these two signals. The convolution of these signals for 0 ≤ t ≤ 2 is shown in Figure 6.15.
				The plot of Figure 6.15 was obtained with the MATLAB script below.
				t1=0:0.01:1; x=t1-t1.^2./2;...
				t2=1:0.01:2; y=t2.^2./2-2.*t2+2; plot(t1,x,t2,y); axis([0 2 0 0.5]); grid
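				The closed-form segments (6.23) and (6.25) can be cross-checked with a discrete convolution. In the Python/NumPy sketch below, np.convolve approximates the convolution integral of the two signals of Figure 6.6 (a unit pulse and h(t) = 1 - t on 0 ≤ t < 1, as implied by the MATLAB plots above).

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
u = np.ones_like(t)              # unit pulse, 0 <= t < 1
h = 1.0 - t                      # h(t) = 1 - t, 0 <= t < 1

y = np.convolve(u, h) * dt       # covers 0 <= t < 2

i, j = int(0.5 / dt), int(1.5 / dt)
assert abs(y[i] - (0.5 - 0.5**2 / 2)) < 1e-2          # (6.23) at t = 0.5
assert abs(y[j] - (1.5**2 / 2 - 2 * 1.5 + 2)) < 1e-2  # (6.25) at t = 1.5
assert abs(y.max() - 0.5) < 1e-2                      # (6.24): peak of 1/2 at t = 1
```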
		Example 6.5
			The signals h(t) and u(t) are as shown in Figure 6.16. Compute u(t)*h(t) using the graphical evaluation method.
				Figure 6.16. Signals for Example 6.5
			Solution:
				Following the same procedure as in the previous example, we form u(t - τ) by first constructing the image of u(τ). This is shown as u(-τ) in Figure 6.17.
				Next, we form u(t - τ) by shifting u(-τ) to the right by some value t as shown in Figure 6.18.
				As in the previous example, evaluation of the convolution integral
				entails multiplication of u(t - τ) by h(τ) for each value of t, and computation of the area from -∞ to +∞. Figure 6.19 shows the product u(t - τ)h(τ) as point A moves to the right.
				We observe that at t = 0 the area under the product is zero. Shifting u(t - τ) to the right so that t > 0, we obtain the sketch of Figure 6.20 where the integral of the product is denoted by the shaded area, and it increases as point A moves further to the right.
				The maximum area is obtained when point A reaches t = 1 as shown in Figure 6.21.
				Its value for 0 ≤ t ≤ 1 is
				y(t) = 1 - e^(-t), 0 ≤ t ≤ 1 (6.26)
				Evaluating (6.26) at t = 1, we obtain
				y(1) = 1 - e^(-1) ≈ 0.632 (6.27)
				The plot for the interval 0 ≤ t ≤ 1 is shown in Figure 6.22.
				As we continue shifting u(t - τ) to the right, the area starts decreasing. As shown in Figure 6.23, it approaches zero as t becomes large but never reaches the value of zero.
				The plot of Figure 6.22 was obtained with the MATLAB script below.
				t1=0:0.01:1; x=1-exp(-t1); plot(t1,x); axis([0 1 0 0.8]); grid
				Therefore, for the time interval t > 1, we have
				y(t) = (e - 1)e^(-t) ≈ 1.718e^(-t), t > 1 (6.28)
				Evaluating (6.28) at t = 1, we find that y(1) ≈ 0.632, in agreement with (6.27).
				For t > 1, the area approaches zero as t → ∞. The convolution of these signals for 0 ≤ t ≤ 2 is shown in Figure 6.24.
				The plot of Figure 6.24 was obtained with the MATLAB script below.
				t1=0:0.01:1; x=1-exp(-t1);...
				t2=1:0.01:2; y=1.718.*exp(-t2); plot(t1,x,t2,y); axis([0 2 0 0.8]); grid
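				As in the previous example, the segments (6.26) and (6.28) can be cross-checked numerically. The Python/NumPy sketch below convolves a unit pulse with h(t) = e^(-t) (the signals implied by the MATLAB plots above) and samples the result on both sides of t = 1.

```python
import numpy as np

dt = 0.001
u = np.ones(int(1.0 / dt))              # unit pulse, 0 <= t < 1
h = np.exp(-np.arange(0.0, 8.0, dt))    # e^{-t}, truncated far into its tail

y = np.convolve(u, h) * dt

k1, k2 = int(1.0 / dt), int(2.0 / dt)
assert abs(y[k1] - (1.0 - np.exp(-1.0))) < 5e-3         # (6.26) at t = 1: ~0.632
assert abs(y[k2] - (np.e - 1.0) * np.exp(-2.0)) < 5e-3  # (6.28) at t = 2: 1.718 e^{-2}
```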
		Example 6.6
			Perform the convolution u(t)*h(t), where u(t) and h(t) are as shown in Figure 6.25.
				Figure 6.25. Signals for Example 6.6
			Solution:
				We will use the convolution integral
				(6.29)
				The computation steps are as in the two previous examples, and are evident from the sketches of Figures 6.26 through 6.29.
				Figure 6.26 shows the formation of u(-τ).
				Figure 6.27 shows the formation of u(t - τ) and its convolution with h(τ) over the first interval.
				For the first interval,
				(6.30)
				Figure 6.28 shows the convolution of u(t - τ) with h(τ) over the second interval.
				For the second interval,
				(6.31)
				Figure 6.29 shows the convolution of u(t - τ) with h(τ) over the third interval.
				For the third interval,
				(6.32)
				From (6.30), (6.31), and (6.32), we obtain the waveform of Figure 6.30 that represents the convolution of the signals u(t) and h(t).
				In summary, the procedure for the graphical evaluation of the convolution integral is as follows:
				1. We substitute u(t) and h(t) with u(τ) and h(τ) respectively.
				2. We fold (form the mirror image of) u(τ) or h(τ) about the vertical axis to obtain u(-τ) or h(-τ).
				3. We slide u(-τ) or h(-τ) to the right a distance t to obtain u(t - τ) or h(t - τ).
				4. We multiply the two functions to obtain the product u(t - τ)h(τ), or u(τ)h(t - τ).
				5. We integrate this product by varying t from -∞ to +∞.
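				The five steps above can be implemented directly. The Python/NumPy sketch below folds, slides, multiplies, and integrates sampled signals exactly as described, and agrees with the library routine np.convolve.

```python
import numpy as np

def graphical_convolution(u, h, dt):
    """Fold-slide-multiply-integrate evaluation of the convolution integral."""
    n = u.size + h.size - 1
    y = np.zeros(n)
    h_folded = h[::-1]                        # step 2: fold h(tau) into h(-tau)
    for k in range(n):                        # step 3: slide by t = k*dt
        lo, hi = max(0, k - h.size + 1), min(k + 1, u.size)
        # steps 4 and 5: multiply the overlapping samples and sum (integrate)
        seg = h_folded[h.size - 1 - k + lo : h.size - 1 - k + hi]
        y[k] = np.sum(u[lo:hi] * seg) * dt
    return y

dt = 0.01
t = np.arange(0.0, 1.0, dt)
u, h = np.ones_like(t), 1.0 - t               # the signals of Example 6.4
y = graphical_convolution(u, h, dt)
assert np.allclose(y, np.convolve(u, h) * dt)
```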
			6.5 Circuit Analysis with the Convolution Integral
				We can use the convolution integral in circuit analysis as illustrated by the following example.
		Example 6.7
			For the circuit of Figure 6.31, use the convolution integral to find the capacitor voltage when the input is the unit step function u0(t) and the initial condition is zero.
				Figure 6.31. Circuit for Example 6.7
			Solution:
				Before we apply the convolution integral, we must know the impulse response of this circuit. The circuit of Figure 6.31 was analyzed in Example 6.1, Page 6-2, where we found that
				(6.33)
				With the given values, (6.33) reduces to
				(6.34)
				Next, we use the graphical evaluation of the convolution integral as shown in Figures 6.32 through 6.34.
				The formation of u0(-τ) is shown in Figure 6.32.
				Figure 6.33 shows the formation of u0(t - τ).
				Figure 6.34 shows the convolution u0(t)*h(t).
				Therefore, for t > 0, we obtain
				(6.35)
				and the convolution is shown in Figure 6.35.
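				The same computation is easy to reproduce numerically. The Python/NumPy sketch below (with an illustrative time constant RC = 0.5 s, and assuming the series RC impulse response h(t) = (1/RC)e^(-t/RC) of Example 6.1) convolves h(t) with the unit step and recovers the familiar charging curve.

```python
import numpy as np

dt = 1e-4
RC = 0.5                                   # illustrative time constant, seconds
t = np.arange(0.0, 5.0, dt)
h = (1.0 / RC) * np.exp(-t / RC)           # assumed RC impulse response
u = np.ones_like(t)                        # unit step input u0(t)

vc = np.convolve(u, h)[:t.size] * dt       # capacitor voltage by convolution
assert np.allclose(vc, 1.0 - np.exp(-t / RC), atol=5e-3)  # charging curve
assert abs(vc[-1] - 1.0) < 1e-3            # steady-state value of 1 volt
```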
			6.6 Summary
				· The impulse response h(t) is the output (voltage or current) of a network when the input is the delta function δ(t).
				· The determination of the impulse response assumes zero initial conditions.
				· A function f(t) is an even function of time if f(-t) = f(t).
				· A function f(t) is an odd function of time if f(-t) = -f(t).
				· The product of two even or two odd functions is an even function, and the product of an even function times an odd function, is an odd function.
				· A function f(t) that is neither even nor odd can be expressed as
				f_e(t) = (1/2)[f(t) + f(-t)]
				or as
				f_o(t) = (1/2)[f(t) - f(-t)]
				where f_e(t) denotes an even function and f_o(t) denotes an odd function.
				· Any function of time can be expressed as the sum of an even and an odd function, that is,
				f(t) = f_e(t) + f_o(t)
				· The delta function is an even function of time.
				· The integral
				y(t) = ∫[-∞, +∞] u(τ)h(t - τ) dτ
				or
				y(t) = ∫[-∞, +∞] u(t - τ)h(τ) dτ
				is known as the convolution integral.
				· If we know the impulse response of a network, we can compute the response to any input with the use of the convolution integral.
				· The convolution integral is usually denoted as u(t)*h(t) or h(t)*u(t), where the asterisk (*) denotes convolution.
				· The convolution integral is more conveniently evaluated by the graphical evaluation method.
			6.7 Exercises
				1. Compute the impulse response h(t) in terms of the constants R and L for the circuit below. Then, compute the voltage across the inductor.
				2. Repeat Example 6.4, Page 6-8, by forming h(t - τ) instead of u(t - τ), that is, use the convolution integral
				3. Repeat Example 6.5, Page 6-12, by forming h(t - τ) instead of u(t - τ).
				4. Compute u(t)*h(t) given that
				5. For the series circuit shown below, the response is the current i_L(t) through the inductor. Use the convolution integral to find the response when the input is the unit step u0(t).
				6. Compute for the network shown below using the convolution integral, given that .
				7. Compute for the network shown below given that . Using MATLAB, plot for the time interval .
				Hint: Use the result of Exercise 6.
			6.8 Solutions to End-of-Chapter Exercises
				1.
				Letting the inductor current be the state variable, the above relation is written as
				and this has the form dx/dt = ax + bu. Its solution is
				and from (6.5), Page 6-1,
				The voltage across the inductor is found from
				and using the sampling property of the delta function, the above relation reduces to
				2.
				From the plots above we observe that the area reaches the maximum value of 1/2 at t = 1, and then decreases to zero at t = 2. Alternately, using the convolution integral we obtain
				where the signals are as defined in Example 6.4. Then, for 0 ≤ t ≤ 1,
				and we observe that at t = 1, the area reaches its maximum value of 1/2.
				Next, for 1 ≤ t ≤ 2,
				and we observe that at t = 2, the area is zero.
				3.
				From the plots above we observe that the area reaches its maximum value at t = 1, and then decreases exponentially to zero as t → ∞. Alternately, using the convolution integral we obtain
				where the signals are as defined in Example 6.5. Then, for 0 ≤ t ≤ 1,
				For t > 1,
				4.
				From tables of integrals,
				and thus
				Check:
				f(t) = 2t - 1 + e^(-2t), F(s) = 4/(s^3 + 2s^2)
				syms s t; ilaplace(4/(s^3+2*s^2))
				ans =
				2*t-1+exp(-2*t)
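				The same check can be made without MATLAB; for instance, SymPy's laplace_transform confirms that the time function returned by ilaplace above indeed transforms back to 4/(s^3 + 2s^2).

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
f = 2 * t - 1 + sp.exp(-2 * t)                   # the ilaplace result above
F = sp.laplace_transform(f, t, s, noconds=True)  # forward Laplace transform
assert sp.simplify(F - 4 / (s**3 + 2 * s**2)) == 0
```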
				5.
				To use the convolution integral, we must first find the impulse response. It was found in Exercise 1 as
				and with the given values,
				When the input is the unit step u0(t),
				6.
				We will first compute the impulse response, that is, the output when the input is the delta function δ(t). Then, by KVL
				and with
				or
				By comparison with the form dx/dt = ax + bu, we identify the constants a and b.
				From (6.5)
				Now, we compute the output by convolving the impulse response with the given input. The remaining steps are as in Example 6.5 and are shown below.
				7.
				From Exercise 6,
				Then, for this circuit,
				The plot for the time interval is shown below.
				The plot above was obtained with the MATLAB script below.
				t1=0:0.01:1; x=exp(-t1); axis([0 1 0 1]);...
				t2=1:0.01:5; y=-1.718.*exp(-t2); axis([1 5 0 1]); plot(t1,x,t2,y); grid
	Chapter 8
		The Fourier Transform
			This chapter introduces the Fourier Transform, also known as the Fourier Integral. The definition, theorems, and properties are presented and proved. The Fourier transforms of the most common functions are derived, the system function is defined, and its application to circuit analysis is illustrated with examples.
			8.1 Definition and Special Forms
				We recall that the Fourier series for periodic functions of time, such as those we discussed in the previous chapter, produce discrete line spectra with nonzero values only at specific frequencies referred to as harmonics. However, other functions of interest are not periodic, and their spectra cannot be described by a Fourier series.
				We may think of a non-periodic signal as one arising from a periodic signal in which the period extends from -∞ to +∞. Then, for a signal that is a function of time with period from -∞ to +∞, we form the integral
				(8.1)
				(8.2)
				(8.3)
				(8.4)
				(8.5)
			8.2 Special Forms of the Fourier Transform
				The time function f(t) is, in general, complex, and thus we can express it as the sum of its real and imaginary parts, that is, as
				(8.6)
				(8.7)
				(8.8)
				(8.9)
				(8.10)
				(8.11)
				(8.12)
				(8.13)
				(8.14)
				TABLE 8.1 Time Domain and Frequency Domain Correspondence (Refer to Tables 8.2 - 8.7)
				TABLE 8.2 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.3 - 8.7)
				TABLE 8.3 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.4 - 8.7)
				TABLE 8.4 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.5 - 8.7)
				TABLE 8.5 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.6 - 8.7)
				TABLE 8.6 Time Domain and Frequency Domain Correspondence (Refer also to Table 8.7)
				TABLE 8.7 Time Domain and Frequency Domain Correspondence (Completed Table)
			8.3 Properties and Theorems of the Fourier Transform
				The most common properties and theorems of the Fourier transform are described in Subsections 8.3.1 through 8.3.14 below.
				8.3.1 Linearity
				8.3.2 Symmetry
				8.3.3 Time Scaling
				8.3.4 Time Shifting
				8.3.5 Frequency Shifting
				8.3.6 Time Differentiation
				8.3.7 Frequency Differentiation
				8.3.8 Time Integration
				8.3.9 Conjugate Time and Frequency Functions
				8.3.10 Time Convolution
				8.3.11 Frequency Convolution
				8.3.12 Area Under f(t)
				8.3.13 Area Under F(ω)
				8.3.14 Parseval’s Theorem
			8.4 Fourier Transform Pairs of Common Functions
				The Fourier transform pairs of the most common functions are described in Subsections 8.4.1 through 8.4.9 below.
				8.4.1 The Delta Function Pair
				TABLE 8.8 Fourier Transform Properties and Theorems
			8.5 Derivation of the Fourier Transform from the Laplace Transform
				If a time function f(t) is zero for t ≤ 0, we can obtain the Fourier transform of f(t) from the one-sided Laplace transform of f(t) by substitution of s with jω.
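				As a numerical illustration of this substitution (Python/SciPy; f(t) = e^(-at)u0(t) with a = 1, checked at the single frequency ω = 2, both chosen arbitrarily), the Fourier integral evaluated directly agrees with the Laplace transform 1/(s + a) evaluated at s = jω.

```python
import numpy as np
from scipy.integrate import quad

a, w = 1.0, 2.0   # f(t) = e^{-at} u0(t), checked at omega = 2 (illustrative)

# Direct Fourier integral: F(w) = integral_0^inf e^{-at} e^{-jwt} dt
re, _ = quad(lambda t: np.exp(-a * t) * np.cos(w * t), 0, np.inf)
im, _ = quad(lambda t: -np.exp(-a * t) * np.sin(w * t), 0, np.inf)
F_direct = re + 1j * im

F_laplace = 1.0 / (1j * w + a)   # Laplace transform 1/(s+a) with s -> jw
assert abs(F_direct - F_laplace) < 1e-6
```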
				Example 8.1
				Solution:
				Example 8.2
				Solution:
				Example 8.3
				Solution:
			8.6 Fourier Transforms of Common Waveforms
				In this section, we will derive the Fourier transform of some common time domain waveforms. These are described in Subsections 8.6.1 through 8.6.6 below.
				8.6.1 The Transform of
				8.6.2 The Transform of
				8.6.3 The Transform of
				8.6.4 The Transform of
				8.6.5 The Transform of a Periodic Time Function with Period T
				8.6.6 The Transform of the Periodic Time Function
			8.7 Using MATLAB for Finding the Fourier Transform of Time Functions
				MATLAB has the built-in fourier and ifourier functions to compute the Fourier transform and its inverse. Their descriptions and examples can be displayed with the help fourier and help ifourier commands. In Examples 8.4 through 8.7 we present some Fourier transform applications using these functions.
				Example 8.4
				Example 8.5
				Example 8.6
				Example 8.7
				TABLE 8.9 Common Fourier transform pairs
			8.8 The System Function and Applications to Circuit Analysis
				We recall from Chapter 6 that, by definition, the convolution integral is
				(8.101)
				(8.102)
				(8.103)
				(8.104)
				Example 8.8
				Solution:
				Example 8.9
				Solution:
				Example 8.10
				Solution:
				Example 8.11
				Solution:
			8.9 Summary
				· The Fourier transform is defined as
				· The Inverse Fourier transform is defined as
				· The Fourier transform is, in general, a complex function. We can express it as the sum of its real and imaginary components, or in exponential form as
				· We often use the following notations to express the Fourier transform and its inverse.
				· If f(t) is real, F(ω) is, in general, complex.
				· If f(t) is real and even, F(ω) is also real and even.
				· If f(t) is real and odd, F(ω) is imaginary and odd.
				· If f(t) is imaginary, F(ω) is, in general, complex.
				· If f(t) is imaginary and even, F(ω) is also imaginary and even.
				· If f(t) is imaginary and odd, F(ω) is real and odd.
				· If , is real.
				· The linearity property states that
				· The symmetry property states that
				· The scaling property states that
				· The time shifting property states that
				· The frequency shifting property states that
				· The Fourier transforms of the modulated signals f(t)cos(ω0 t) and f(t)sin(ω0 t) are
				· The time differentiation property states that
				· The frequency differentiation property states that
				· The time integration property states that
				· If F(ω) is the Fourier transform of the complex function f(t), then,
				· The time convolution property states that
				· The frequency convolution property states that
				· The area under a time function f(t) is equal to the value of its Fourier transform evaluated at ω = 0. In other words,
				· The value of a time function f(t), evaluated at t = 0, is equal to the area under its Fourier transform F(ω) times 1/(2π). In other words,
				· Parseval’s theorem states that
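				Parseval's theorem can be spot-checked numerically. In the sketch below (Python/SciPy), f(t) = e^(-t)u0(t) is chosen as an illustration, so that |F(ω)|^2 = 1/(1 + ω^2); the time-domain and frequency-domain energies both equal 1/2.

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t} u0(t)  =>  F(w) = 1/(1 + jw),  |F(w)|^2 = 1/(1 + w^2)
E_time, _ = quad(lambda t: np.exp(-2.0 * t), 0, np.inf)          # |f(t)|^2 energy
E_freq, _ = quad(lambda w: 1.0 / (1.0 + w**2), -np.inf, np.inf)  # |F(w)|^2 energy

assert abs(E_time - E_freq / (2 * np.pi)) < 1e-6   # Parseval: both sides are 1/2
```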
				· The delta function and its Fourier transform are as shown below.
				· The unity time function and its Fourier transform are as shown below.
				· The Fourier transform of the complex time function is as indicated below.
				· The Fourier transforms of the time functions , and are as shown below.
				· The signum function and its Fourier transform are as shown below.
				· The unit step function and its Fourier transform are as shown below.
				· The Fourier transform pairs of , , and are as follows:
				· If a time function f(t) is zero for t ≤ 0, we can obtain the Fourier transform of f(t) from the one-sided Laplace transform of f(t) by substitution of s with jω.
				· If a time function f(t) is zero for t ≥ 0 and nonzero for t < 0, we use the substitution
				to obtain the Fourier transform of f(t) from the one-sided Laplace transform of f(-t).
				· The pulse function and its Fourier transform are as shown below.
				· The Fourier transform of a periodic time function with period is
				· The Fourier transform of a periodic train of equidistant delta functions in the time domain, is a periodic train of equally spaced delta functions in the frequency domain.
				· The system function and the impulse response form the Fourier transform pair
				and
			8.10 Exercises
				1. Prove that
				2. Compute
				3. Sketch the time and frequency waveforms of
				4. Derive the Fourier transform of
				5. Derive the Fourier transform of
				6. Derive the Fourier transform of
				7. For the circuit below, use the Fourier transform method to compute .
				8. The input-output relationship in a certain network is
				Use the Fourier transform method to compute given that .
				9. In a bandpass filter, the lower and upper cutoff frequencies are , and respectively. Compute the energy of the input, and the percentage that appears at the output, if the input signal is volts.
				10. In Subsection 8.6.1, Page 8-27, we derived the Fourier transform pair
				Compute the percentage of the energy of contained in the interval of .
				11. In Subsection 8.6.2, Page 8-28, we derived the Fourier transform pair
				Use the fplot MATLAB command to plot the Fourier transform of this rectangular waveform.
			8.11 Solutions to End-of-Chapter Exercises
				1.
				and since at , whereas at , , we replace the limits of integration with and . Then,
				2.
				From tables of integrals
				Then,
				With the upper limit of integration we obtain
				To evaluate the lower limit of integration, we apply L’Hôpital’s rule, i.e.,
				and thus
				Check:
				and since
				it follows that
				3.
				From Subsection 8.6.4, Page 8-30,
				and using the MATLAB script below,
				fplot('cos(x)',[-2*pi 2*pi -1.2 1.2])
				fplot('sin(x)./x',[-20 20 -0.4 1.2])
				we obtain the plots below.
				4.
				From Subsection 8.6.1, Page 8-27,
				and from the time shifting property,
				Then,
				or
				5.
				From tables of integrals or integration by parts,
				Then,
				and multiplying both the numerator and denominator by we obtain
				We observe that since is real and odd, is imaginary and odd.
				Alternate Solution:
				The waveform of is the derivative of the waveform of and thus . From Subsection 8.6.1, Page 8-27,
				From the frequency differentiation property, transform pair (8.48), Page 8-13,
				or
				Then,
				6.
				We denote the given waveform as , that is,
				We observe that is the integral of . Therefore, we will find , and by integration we will find .
				We begin by finding the Fourier Transform of the pulse denoted as , and using and the time shifting and linearity properties, we will find .
				Using the time shifting property
				the Fourier transform of the left pulse of is
				Likewise, the Fourier transform of the right pulse of is
				and using the linearity property we obtain
				This curve is shown below and it is created with the following MATLAB script:
				fplot('sin(x./2).^2./x',[0 16*pi 0 0.5])
				Now, we find of the triangular waveform of with the use of the integration property by multiplying by . Thus,
				We can plot by letting . Then, simplifies to the form
				This curve is shown below and it is created with the following MATLAB script.
				fplot('(sin(x)./x).^2',[-8*pi 8*pi 0 1])
				7.
				By KCL
				Taking the Fourier transform of both sides we obtain
				and since
				,
				and
				where
				Then,
				and
				Next, using the sifting property of , we simplify the above to
				8.
				Taking the Fourier transform of both sides we obtain
				We use the following MATLAB script for partial fraction expansion where we let .
				syms s; collect((s+1)*(s+2)*(s+3))
				ans =
				s^3+6*s^2+11*s+6
				num=[0 0 0 20]; den=[1 6 11 6]; [num,den]=residue(num,den); fprintf(' \n');...
				fprintf('r1 = %4.2f \t', num(1)); fprintf('p1 = %4.2f', den(1)); fprintf(' \n');...
				fprintf('r2 = %4.2f \t', num(2)); fprintf('p2 = %4.2f', den(2)); fprintf(' \n');...
				fprintf('r3 = %4.2f \t', num(3)); fprintf('p3 = %4.2f', den(3))
				r1 = 10.00 p1 = -3.00
				r2 = -20.00 p2 = -2.00
				r3 = 10.00 p3 = -1.00
				Then,
				and thus
				9.
				The input energy in joules is
				and the Fourier transform of the input is
				The energy at the output for the frequency interval or is
				and from tables of integrals
				Then,
				fprintf(' \n'); fprintf('atan(6*pi) = %4.2f \t', atan(6*pi)); fprintf('atan(2*pi) = %4.2f', atan(2*pi))
				atan(6*pi) = 1.52 atan(2*pi) = 1.41
				and thus
				Therefore, the percentage of the input appearing at the output is
				10.
				First, we compute the total energy of the pulse in terms of .
				and since is an even function,
				Next, we denote the energy in the frequency interval as in the frequency domain we obtain
				and since is an even function,
				(1)
				For simplicity, we let . Then, and . Also, when , , and when or , . With these substitutions we express (1) as
				(2)
				But the last integral in (2) is an improper integral and does not appear in tables of integrals. Therefore, we will attempt to simplify (2) using integration by parts. We start with the familiar relation
				from which
				or
				Letting and , it follows that and . With these substitutions (2) is written as
				(3)
				The last integral in (3) is also an improper integral. Fortunately, some handbooks of mathematical tables include numerical values of the integral
				for arguments of in the interval . Then, replacing with we obtain , , , and for , whereas for , . Then, by substitution into (3) we obtain
				(4)
				From Table 5.3 of Handbook of Mathematical Functions, 1972 Edition, Dover Publications, with or we obtain
				and thus (4) reduces to
				Therefore, the percentage of the output for the frequency interval is
				Since this computation involves numerical integration, we can obtain the same result much faster and easier with MATLAB as follows:
				First, we define the function fourierxfm1 and we save it as an .m file as shown below. This file must be created with MATLAB’s editor (or any other editor) and saved as an .m file in a drive that has been added to MATLAB’s path.
				function y1=fourierxfm1(x)
				x=x+(x==0)*eps; % This statement avoids the sin(0)/0 value.
				% It says that if x=0, then (x==0) = 1
				% but if x is not zero, then (x==0) = 0
				% and eps is approximately equal to 2.2e-16
				% It is used to avoid division by zero.
				y1=sin(x)./x;
				Then, at MATLAB’s Command prompt, we write and execute the program below.
				% The quad function below performs numerical integration from 0 to 2*pi
				% using a form of Simpson's rule of numerical integration.
				value1=quad('fourierxfm1',0,2*pi)
				value1 =
				1.4182
				We could also have used numerical integration with the integral
				thereby avoiding the integration by parts procedure. Shown below are the function fourierxfm2 which is saved as an .m file and the program execution using this function.
				function y2=fourierxfm2(x)
				x=x+(x==0)*eps;
				y2=(sin(x)./x).^2;
				and after this file is saved, we execute the statement below observing that the limits of integration are from to .
				value2=quad('fourierxfm2',0,pi)
				value2 =
				1.4182
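				The two MATLAB quad computations above can be reproduced in Python with SciPy. In fact, the first integral is the sine integral Si(2π), which the integration-by-parts argument shows is also the value of the second integral; both come out to 1.4182.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

v1, _ = quad(lambda x: np.sinc(x / np.pi), 0, 2 * np.pi)    # sin(x)/x on [0, 2*pi]
v2, _ = quad(lambda x: np.sinc(x / np.pi) ** 2, 0, np.pi)   # (sin(x)/x)^2 on [0, pi]

assert abs(v1 - sici(2 * np.pi)[0]) < 1e-7   # v1 is the sine integral Si(2*pi)
assert abs(v1 - 1.4182) < 1e-3               # matches the MATLAB value1
assert abs(v2 - 1.4182) < 1e-3               # matches the MATLAB value2
```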
				11.
				fplot('abs(2.*exp(-j.*w)*(sin(w)/w))',[0 4*pi 0 2])
	Chapter 9
		Discrete-Time Systems and the Z Transform
			This chapter is devoted to discrete-time systems and introduces the one-sided Z Transform. The definition, theorems, and properties are discussed, and the Z transforms of the most common discrete-time functions are derived. The discrete transfer function is also defined, and its use is illustrated with examples.
			9.1 Definition and Special Forms
				The Z transform performs the transformation from the domain of discrete-time signals to another domain which we call the z-domain. It is used with discrete-time signals in the same way the Laplace and Fourier transforms are used with continuous-time signals. The Z transform F(z) of a discrete-time function f[n] is defined as
				(9.1)
				and the Inverse Z transform is defined as
				(9.2)
				We can obtain a discrete-time waveform from an analog (continuous, or with a finite number of discontinuities) signal by multiplying it by a train of impulses. We denote the continuous signal as f(t), and the train of impulses as
				(9.3)
				Multiplication of f(t) by the train of impulses produces the signal defined as
				(9.4)
				These signals are shown in Figure 9.1.
				Of course, after the multiplication, the only values of f(t) which are not zero are those for which t = nT, and thus we can express (9.4) as
				(9.5)
				Next, we recall from Chapter 2 that the Laplace transform pairs for the delta function are δ(t) ⇔ 1 and δ(t - T) ⇔ e^(-sT). Therefore, taking the Laplace transform of both sides of (9.5), we obtain
				(9.6)
				Relation (9.6), with the substitution z = e^(sT), becomes the same as (9.1), and like s, z is also a complex variable.
				The Z and Inverse Z transforms are denoted as
				(9.7)
				and
				(9.8)
				The function F(z), as defined in (9.1), is a series of complex numbers and converges outside the circle of radius R, that is, it converges (approaches a limit) when |z| > R. In complex variables theory, the radius R is known as the radius of absolute convergence.
				In the complex z-plane, the region of convergence is the set of z for which the magnitude of F(z) is finite, and the region of divergence is the set of z for which the magnitude of F(z) is infinite.
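				The region of convergence is easy to see for the geometric sequence f[n] = a^n (derived in Subsection 9.3.1), whose transform is z/(z - a) for |z| > |a|. The Python/NumPy sketch below sums the defining series at a point inside the region of convergence and compares it with the closed form (a = 0.8 and z = 1.5 + j0.5 are illustrative values).

```python
import numpy as np

a = 0.8                           # geometric sequence f[n] = a^n, n >= 0
z = 1.5 + 0.5j                    # |z| > |a|: inside the region of convergence
assert abs(z) > abs(a)

n = np.arange(0, 2000)
F_series = np.sum((a / z) ** n)   # partial sum of F(z) = sum a^n z^{-n}
F_closed = z / (z - a)            # closed form z/(z - a)
assert abs(F_series - F_closed) < 1e-9
```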
			9.2 Properties and Theorems of the Z Transform
				The properties and theorems of the Z transform are similar to those of the Laplace transform. In this section, we will state and prove the most common properties and theorems of the Z transform, listed in Subsections 9.2.1 through 9.2.12 below.
				9.2.1 Linearity
				9.2.2 Shift of in the Discrete-Time Domain
				9.2.3 Right Shift in the Discrete-Time Domain
				9.2.4 Left Shift in the Discrete-Time Domain
				9.2.5 Multiplication by in the Discrete-Time Domain
				9.2.6 Multiplication by in the Discrete-Time Domain
				9.2.7 Multiplication by and in the Discrete-Time Domain
				9.2.8 Summation in the Discrete-Time Domain
				9.2.9 Convolution in the Discrete-Time Domain
				9.2.10 Convolution in the Discrete-Frequency Domain
				9.2.11 Initial Value Theorem
				9.2.12 Final Value Theorem
				TABLE 9.1 Properties and Theorems of the Z transform
				Property / Theorem
				Time Domain
				Z transform
				Linearity
				Shift of
				Right Shift
				Left Shift
				Multiplication by
				Multiplication by
				Multiplication by n
				Multiplication by
				Summation in Time
				Time Convolution
				Frequency Convolution
				Initial Value Theorem
				Final Value Theorem
			9.3 The Z Transform of Common Discrete-Time Functions
				In this section, we will derive the Z transforms of the most common discrete-time functions in Subsections 9.3.1 through 9.3.5 below.
				9.3.1 The Transform of the Geometric Sequence
				9.3.2 The Transform of the Discrete-Time Unit Step Function
				9.3.3 The Transform of the Discrete-Time Exponential Sequence
				9.3.4 The Transform of the Discrete-Time Cosine and Sine Functions
				9.3.5 The Transform of the Discrete-Time Unit Ramp Function
			9.4 Computation of the Z Transform with Contour Integration
				Let the Laplace transform of a continuous time function be given, along with the Z transform of the corresponding sampled time function. It is shown in complex variables theory that the Z transform can be derived from the Laplace transform by the use of the contour integral in (9.57)
				(9.57)
				where C is a contour enclosing all singularities (poles) of the integrand, and v is a dummy variable for s. We can compute the Z transform of a discrete-time function using the transformation
				(9.58)
				TABLE 9.2 The Z transform of common discrete-time functions
			9.5 Transformation Between s and z Domains
				It is shown in complex variables textbooks that every function of a complex variable maps (transforms) one plane to another plane. In this section, we will investigate the mapping of the plane of the complex variable s into the plane of the complex variable z.
				Let us reconsider expressions (9.6) and (9.1), Pages 9-2 and 9-1 respectively, which are repeated here for convenience.
				(9.62)
				and
				(9.63)
				By comparison of (9.62) with (9.63),
				(9.64)
				Thus, the variables and are related as
				(9.65)
				and
				(9.66)
				Therefore,
				(9.67)
				Since s and z are both complex variables, relation (9.67) allows the mapping (transformation) of regions of the s-plane into the z-plane. We find this transformation by recalling that s = σ + jω and therefore, expressing z in magnitude-phase form and using (9.65), we obtain
				(9.68)
				where,
				(9.69)
				and
				(9.70)
				Since
				the period T defines the sampling frequency f_s = 1/T. Then, ω_s = 2πf_s or ω_s = 2π/T, and
				Therefore, we express (9.70) as
				(9.71)
				and by substitution of (9.69) and (9.71) into (9.68), we obtain
				(9.72)
				The quantity e^(jωT) in (9.72) defines the unit circle; therefore, let us examine the behavior of z when σ is negative, zero, or positive.
				Case I : When σ is negative, from (9.69) we see that |z| < 1, and thus the left half of the s-plane maps inside the unit circle of the z-plane; for different negative values of σ, we obtain concentric circles with radius less than unity.
				Case II : When σ is positive, from (9.69) we see that |z| > 1, and thus the right half of the s-plane maps outside the unit circle of the z-plane; for different positive values of σ, we obtain concentric circles with radius greater than unity.
				Case III : When σ is zero, from (9.72) we see that |z| = 1, and all values of z lie on the circumference of the unit circle. For illustration purposes, we have mapped several fractional values of the sampling radian frequency ω_s, and these are shown in Table 9.3.
From Table 9.3, we see that the portion of the jω axis for the interval 0 ≤ ω ≤ ωs in the s-plane maps on the circumference of the unit circle in the z-plane, as shown in Figure 9.5. Thus, in digital signal processing the unit circle represents frequencies from zero to the sampling frequency.
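The three cases above can be verified numerically from z = e^(sT). The book's computations use MATLAB; the quick check below is an equivalent sketch in Python, with an arbitrarily assumed sampling period T = 0.1 s.

```python
import cmath

T = 0.1  # assumed sampling period in seconds (arbitrary choice)

# Map three s-plane points with negative, zero, and positive real parts.
for sigma in (-5.0, 0.0, 5.0):
    s = complex(sigma, 20.0)     # s = sigma + j*omega
    z = cmath.exp(s * T)         # z = e^(sT)
    if abs(abs(z) - 1.0) < 1e-9:
        region = "on"
    else:
        region = "inside" if abs(z) < 1.0 else "outside"
    print(f"sigma = {sigma:5.1f}  ->  |z| = {abs(z):.4f}  ({region} the unit circle)")
```

As expected, the point with σ < 0 lands inside the unit circle, σ = 0 lands on it, and σ > 0 lands outside.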
				TABLE 9.3 Mapping of multiples of sampling frequency
			9.6 The Inverse Z Transform
The Inverse Z transform enables us to extract f[n] from F(z). It can be found by any of the following three methods:
				a. Partial Fraction Expansion
				b. The Inversion Integral
				c. Long Division of polynomials
				These methods are described in Subsections 9.6.1 through 9.6.3 below.
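Method (c), long division, amounts to synthetic division of the numerator polynomial by the denominator polynomial in descending powers of z, producing the coefficients of z^(-n), i.e., the samples f[n]. As a sketch (in Python rather than the book's MATLAB), the hypothetical F(z) = z/(z - 0.5) below expands into f[n] = (0.5)^n.

```python
def long_division(num, den, n_terms):
    """Expand num(z)/den(z) (descending powers of z) in powers of z^-1."""
    coeffs = []
    r = list(num) + [0.0] * max(0, len(den) - len(num))  # padded remainder
    for _ in range(n_terms):
        q = r[0] / den[0]
        coeffs.append(q)
        # Subtract q*den(z) from the remainder, then shift one power of z^-1.
        r = [r[i] - q * den[i] for i in range(len(den))][1:] + [0.0]
    return coeffs

# F(z) = z / (z - 0.5): num = z, den = z - 0.5 (hypothetical example)
print(long_division([1.0, 0.0], [1.0, -0.5], 5))
# -> [1.0, 0.5, 0.25, 0.125, 0.0625], i.e. f[n] = (0.5)^n
```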
				9.6.1 Partial Fraction Expansion
				9.6.2 The Inversion Integral
				9.6.3 Long Division of Polynomials
				TABLE 9.4 Methods of Evaluation of the Inverse Z transform
			9.7 The Transfer Function of Discrete-Time Systems
				The discrete-time system of Figure 9.12 below can be described by the linear difference equation (9.106).
				(9.106)
where a_i and b_i are constant coefficients. In a compact form, relation (9.106) is expressed as
				(9.107)
Assuming that all initial conditions are zero, taking the Z transform of both sides of (9.106), and using the Z transform pair f[n − m] ↔ z^(−m) F(z),
				we obtain
				(9.108)
				(9.109)
				(9.110)
We define the discrete-time system transfer function H(z) as
				(9.111)
				and by substitution of (9.110) into (9.109), we obtain
				(9.112)
The discrete-time impulse response h[n] is the response to the input x[n] = δ[n], and since Z{δ[n]} = 1,
we can find the discrete-time impulse response h[n] by taking the Inverse Z transform of the discrete transfer function H(z), that is,
				(9.113)
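Since h[n] is the inverse Z transform of H(z), it can also be generated by driving the difference equation with a unit impulse. A sketch in Python (the book works in MATLAB) for the hypothetical first-order system y[n] = 0.5 y[n−1] + x[n], whose transfer function is H(z) = z/(z − 0.5) and whose impulse response is h[n] = (0.5)^n:

```python
def impulse_response(n_terms):
    """Drive the hypothetical system y[n] = 0.5*y[n-1] + x[n] with delta[n]."""
    h, y_prev = [], 0.0
    for n in range(n_terms):
        x = 1.0 if n == 0 else 0.0   # discrete unit impulse delta[n]
        y = 0.5 * y_prev + x
        h.append(y)
        y_prev = y
    return h

print(impulse_response(5))   # [1.0, 0.5, 0.25, 0.125, 0.0625] = (0.5)^n
```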
				Example 9.10
				Solution:
			9.8 State Equations for Discrete-Time Systems
				As with continuous time systems, we choose state variables for discrete-time systems, either from block diagrams that relate the input-output information, or directly from a difference equation.
				Consider the block diagram of Figure 9.15.
				We learned in Chapter 5 that the state equations representing this continuous time system are
				(9.124)
				In a discrete-time block diagram, the integrator is replaced by a delay device. The analogy between an integrator and a unit delay device is shown in Figure 9.17.
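With the unit delay in place of the integrator, the discrete-time state equations x[n+1] = A x[n] + b u[n], y[n] = c x[n] can be iterated directly. The matrices below are hypothetical, and the sketch is in Python with NumPy rather than the book's MATLAB.

```python
import numpy as np

# Hypothetical discrete-time state-space model
A = np.array([[0.0, 1.0],
              [-0.5, 1.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

x = np.zeros(2)          # zero initial conditions
y = []
for n in range(5):
    u = 1.0              # unit step input
    y.append(c @ x)      # output y[n] = c x[n]
    x = A @ x + b * u    # the unit delay advances the state: x[n+1]
print(y)
```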
				Example 9.11
				Solution:
				Example 9.12
				Solution:
			9.9 Summary
· The Z transform performs the transformation from the domain of discrete-time signals to another domain which we call the z-domain. It is used with discrete-time signals, the same way the Laplace and Fourier transforms are used with continuous-time signals.
· The one-sided Z transform F(z) of a discrete-time function f[n] is defined as
				and it is denoted as
				· The Inverse Z transform is defined as
				and it is denoted as
				· The linearity property of the Z transform states that
· The shifting of f[n]u0[n], where u0[n] is the discrete unit step function, produces the Z transform pair
· The right shifting of f[n − m] allows use of non-zero values for n < 0 and produces the Z transform pair
For m = 1, this transform pair reduces to
and for m = 2, reduces to
· The left shifting of f[n + m], where m is a positive integer, produces the Z transform pair
For m = 1, the above expression reduces to
and for m = 2, reduces to
· Multiplication by a^n produces the Z transform pair
· Multiplication by e^(−naT) produces the Z transform pair
· Multiplications by n and n² produce the Z transform pairs
				· The summation property of the Z transform states that
· Convolution in the discrete-time domain corresponds to multiplication in the z-domain, that is,
· Multiplication in the discrete-time domain corresponds to convolution in the z-domain, that is,
				· The initial value theorem of the Z transform states that
				· The final value theorem of the Z transform states that
				· The Z transform of the geometric sequence
				is
				· The Z transform of the discrete unit step function shown below
				is
				· The Z transform of the discrete exponential sequence
				is
				for
· The Z transforms of the discrete-time functions f[n] = cos naT and f[n] = sin naT are, respectively,
· The Z transform of the discrete unit ramp f[n] = n u0[n] is
				· The Z transform can also be found by means of the contour integral
				and the residue theorem.
· The variables s and z are related as
z = e^(sT)
and
s = (1/T) ln z
				· The relation
allows the mapping (transformation) of regions of the s-plane to the z-plane.
				· The Inverse Z transform can be found by partial fraction expansion, the inversion integral, and long division of polynomials.
· The discrete-time system transfer function H(z) is defined as
· The input X(z) and output Y(z) are related by the system transfer function H(z) as
· The discrete-time impulse response h[n] and the discrete transfer function H(z) are related as
				· The discrete-time state equations are
				and the general form of the solution is
				· The MATLAB c2d function converts the continuous time state space equation
				to the discrete-time state space equation
				· The MATLAB d2c function converts the discrete-time state equation
				to the continuous time state equation
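For a scalar continuous-time system dx/dt = a x + b u, the zero-order-hold discretization performed by c2d has the closed form Ad = e^(aT) and Bd = b(e^(aT) − 1)/a. A quick numerical sketch in Python, with hypothetical values of a, b, and T:

```python
import math

# Continuous-time scalar system dx/dt = a*x + b*u, sampled with period T
a, b, T = -2.0, 1.0, 0.1     # hypothetical values

# Zero-order-hold discretization (the conversion c2d performs):
Ad = math.exp(a * T)                   # discrete state coefficient e^(aT)
Bd = (math.exp(a * T) - 1.0) / a * b   # discrete input coefficient
print(Ad, Bd)
```

The inverse conversion (what d2c performs) recovers a = ln(Ad)/T.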
			9.10 Exercises
				1. Find the Z transform of the discrete-time pulse defined as
				2. Find the Z transform of where is defined as in Exercise 1.
				3. Prove the following Z transform pairs:
				a. b. c.
				d. e.
				4. Use the partial fraction expansion to find given that
				5. Use the partial fraction expansion method to compute the Inverse Z transform of
				6. Use the Inversion Integral to compute the Inverse Z transform of
				7. Use the long division method to compute the first 5 terms of the discrete-time sequence whose Z transform is
				8.
				a. Compute the transfer function of the difference equation
				b. Compute the response when the input is
				9. Given the difference equation
				a. Compute the discrete transfer function
				b. Compute the response to the input
				10. A discrete-time system is described by the difference equation
				where
				a. Compute the transfer function
				b. Compute the impulse response
				c. Compute the response when the input is
				11. Given the discrete transfer function
				write the difference equation that relates the output to the input .
			9.11 Solutions to End-of-Chapter Exercises
				1.
				By the linearity property
				2.
				and from Exercise 1,
				Then,
				or
				3.
				a.
				b.
				and since is zero for all except , it follows that
				c.
				From (9.40), Page 9-14,
				(1)
Differentiating (1) with respect to z and multiplying by −z, we obtain
				(2)
				Also, from the multiplication by property
				(3)
				and from (2) and (3)
				(4)
				we observe that for (4) above reduces to
				d.
				From (9.40), Page 9-14,
				(1)
and taking the second derivative of (1) with respect to z, we obtain
				(2)
				Also, from the multiplication by property
				(3)
				From Exercise 9.3(c), relation (2)
				(4)
				and by substitution of (2) and (4) into (3) we obtain
				We observe that for the above reduces to
				e.
				Let and we know that
				(1)
				The term represents the sum of the first values, including , of and thus it can be written as the summation
				Since summation in the discrete-time domain corresponds to integration in the continuous time domain, it follows that
				where represents the discrete unit ramp. Now, from the summation in time property,
				and with (1) above
				and thus
				4.
				First, we multiply the numerator and denominator by to eliminate the negative exponents of .
				Then,
				or
				Since
				it follows that
				5.
				(1)
				and clearing of fractions yields
				(2)
				With (2) reduces to from which
				With (2) reduces to from which
				With (2) reduces to
				or from which
				By substitution into (1) and multiplication by we obtain
				Using the transforms , , and we obtain
				Check with MATLAB:
				syms z n; Fz=z^2/((z+1)*(z-0.75)^2); iztrans(Fz)
				ans =
				-16/49*(-1)^n+16/49*(3/4)^n+4/7*(3/4)^n*n
				6.
				Multiplication by yields
				From (9.87), Page 9-32,
				and for this exercise,
				Next, we examine to find out if there are any values of for which there is a pole at the origin. We observe that for there is a second order pole at because
				Also, for there is a simple pole at . But for the only poles are and . Then, following the same procedure as in Example 9.12, for we obtain:
				The first term on the right side of the above expression has a pole of order 2 at ; therefore, we must evaluate the first derivative of
				at . Thus, for , it reduces to
				For , it reduces to
				or
				For there are no poles at , that is, the only poles are at and . Therefore,
				for .
				We can express for all as
where the coefficients of and are the residues that were found for and at . The coefficient is multiplied by to emphasize that this value exists only for , and the coefficient is multiplied by to emphasize that this value exists only for .
				Check with MATLAB:
				syms z n; Fz=(z^3+2*z^2+1)/(z*(z-1)*(z-0.5)); iztrans(Fz)
				ans =
				2*charfcn[1](n)+6*charfcn[0](n)+8-13*(1/2)^n
				7.
				Multiplication of each term by yields
				The long division of the numerator by the denominator is shown below.
				Therefore,
				(1)
				Also,
				(2)
				Equating like terms on the right sides of (1) and (2) we obtain
				8.
				a.
				Taking the Z transform of both sides we obtain
				and thus
				b.
				Then,
				or
				(1)
				By substitution into (1) and multiplication by we obtain
				Recalling that and we obtain
				9.
				a.
				Taking the Z transform of both sides we obtain
				and thus
				b.
				Then,
				or
				(1)
				By substitution into (1) and multiplication by we obtain
				Recalling that and we obtain
				10.
				a.
				Taking the Z transform of both sides we obtain
				and thus
				b.
				c.
				11.
				Multiplication of each term by yields
				and taking the Inverse Z transform, we obtain
	Chapter 10
		The DFT and the FFT Algorithm
This chapter begins with the actual computation of frequency spectra for discrete time systems. For brevity, we will use the acronyms DFT for the Discrete Fourier Transform and FFT for the Fast Fourier Transform algorithm, respectively. The definition, theorems, and properties are also discussed, and several examples are presented to illustrate their uses.
		10.1 The Discrete Fourier Transform (DFT)
In the Fourier series topic, Chapter 7, we learned that a periodic and continuous time function results in a non-periodic and discrete frequency function. Next, in the Fourier transform topic, Chapter 8, we saw that a non-periodic and continuous time function produces a non-periodic and continuous frequency function.
			In this chapter we will see that a periodic and discrete time function results in a periodic and discrete frequency function. For convenience, we summarize these facts in Table 10.1.
TABLE 10.1 Characteristics of Fourier and Z transforms
Transform                     Time Function              Frequency Function
Fourier Series                Continuous, Periodic       Discrete, Non-Periodic
Fourier Transform             Continuous, Non-Periodic   Continuous, Non-Periodic
Z transform                   Discrete, Non-Periodic     Continuous, Periodic
Discrete Fourier Transform    Discrete, Periodic         Discrete, Periodic
In our subsequent discussion we will denote a discrete time signal as x[n], and its discrete frequency transform as X[m].
				Let us consider again the definition of the Z transform, that is,
				(10.1)
Its value on the unit circle of the z-plane is
				(10.2)
				This represents an infinite summation; it must be truncated before it can be computed. Let this truncated version be represented by
				(10.3)
where N represents the number of points that are equally spaced in the interval 0 to 2π on the unit circle of the z-plane, and
for m = 0, 1, 2, ..., N − 1. We refer to relation (10.3) as the DFT of x[n].
				The Inverse DFT is defined as
				(10.4)
for n = 0, 1, 2, ..., N − 1.
In general, the discrete frequency transform X[m] is complex, and thus we can express it as
				(10.5)
for m = 0, 1, 2, ..., N − 1.
				Since
				(10.6)
				we can express (10.3) as
				(10.7)
For n = 0, (10.7) reduces to X[m] = x[0]. Then, the real part can be computed from
				(10.8)
				and the imaginary part from
				(10.9)
We observe that the summation in (10.8) and (10.9) is from n = 1 to n = N − 1 since x[0] appears in (10.8).
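Relation (10.3) can be evaluated by direct summation and compared against a library FFT. The book's examples use MATLAB's fft function; the equivalent sketch below is in Python with NumPy.

```python
import numpy as np

def dft(x):
    """Direct evaluation of X[m] = sum_n x[n] e^(-j 2 pi m n / N)."""
    N = len(x)
    return np.array([sum(x[n] * np.exp(-2j * np.pi * m * n / N)
                         for n in range(N))
                     for m in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft(x), np.fft.fft(x)))   # True
```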
				Example 10.1
				Solution:
				Example 10.2
				Solution:
				Example 10.3
				Solution:
		10.2 Even and Odd Properties of the DFT
			The discrete time and frequency functions are defined as even or odd in accordance with the following relations:
			(10.20)
			(10.21)
			(10.22)
			(10.23)
			In Chapter 8, we developed Table 8-7, Page 8-8, showing the even and odd properties of the Fourier transform. Table 10.2 shows the even and odd properties of the DFT.
			TABLE 10.2 Even and Odd Properties of the DFT
Discrete Time Sequence f[n]    Discrete Frequency Sequence F[m]
Real                           Complex: Real part is Even, Imaginary Part is Odd
Real and Even                  Real and Even
Real and Odd                   Imaginary and Odd
Imaginary                      Complex: Real part is Odd, Imaginary Part is Even
Imaginary and Even             Imaginary and Even
Imaginary and Odd              Real and Odd
				The even and odd properties of the DFT shown in Table 10.2 can be proved by methods similar to those that we have used for the continuous Fourier transform. For instance, to prove the first entry we expand
into its real and imaginary parts using Euler's identity, and we equate these with the real and imaginary parts of X[m]. Now, since the real part contains the cosine and the imaginary part contains the sine function, and cos(−x) = cos x while sin(−x) = −sin x, this entry is proved.
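The first entry of Table 10.2 can also be checked numerically: for a real sequence x[n], the DFT satisfies X[N − m] = X*[m], so its real part is even and its imaginary part is odd in the mod-N sense. A sketch in Python with NumPy; the random test sequence is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # a real discrete-time sequence
X = np.fft.fft(x)

# For real x[n]: X[N-m] = conj(X[m]), i.e. Re X is even and Im X is odd.
N = len(x)
m = np.arange(1, N)
print(np.allclose(X[N - m], np.conj(X[m])))   # True
```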
		10.3 Common Properties and Theorems of the DFT
			The most common properties and theorems of the DFT are presented in Subsections 10.3.1 through 10.3.5 below. For brevity, we will denote the DFT and Inverse DFT as follows:
			(10.24)
			and
			(10.25)
			10.3.1 Linearity
				(10.26)
where x[n] ↔ X[m] and y[n] ↔ Y[m], and a and b are arbitrary constants.
				Proof:
				The proof is readily obtained by using the definition of the DFT.
			10.3.2 Time Shift
				(10.27)
				Proof:
				By definition,
and if x[n] is shifted to the right by k sampled points for k > 0, we must change the lower and upper limits of the summation from 0 to k, and from N − 1 to N + k − 1, respectively. Then, replacing x[n] with x[n − k] in the definition above, we obtain
				(10.28)
Now, we let n − k = μ; then n = μ + k, and when n = k, μ = 0. Therefore, the above relation becomes
				(10.29)
We must remember, however, that although the magnitudes of the frequency components are not affected by the shift, a phase shift of 2πkm/N radians is introduced as a result of the time shift. To prove this, let us consider the relation y[n] = x[n − k]. Taking the DFT of both sides of this relation, we obtain
				(10.30)
Since both X[m] and Y[m] are complex quantities, they can be expressed in magnitude and phase angle form as
				and
				By substitution of these into (10.30), we obtain
				(10.31)
and since |Y[m]| = |X[m]|, it follows that
				(10.32)
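The time-shift property is easy to confirm numerically: a circular shift by k samples multiplies X[m] by e^(−j2πkm/N) and leaves the magnitudes unchanged. A sketch in Python with NumPy, using an arbitrary 8-point sequence:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
N, k = len(x), 3
y = np.roll(x, k)                    # y[n] = x[n-k], indices taken mod N

X, Y = np.fft.fft(x), np.fft.fft(y)
m = np.arange(N)
phase = np.exp(-2j * np.pi * k * m / N)

print(np.allclose(Y, X * phase))          # True: shift introduces the phase factor
print(np.allclose(np.abs(Y), np.abs(X)))  # True: magnitudes are unaffected
```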
			10.3.3 Frequency Shift
				(10.33)
				Proof:
				(10.34)
and we observe that the last term on the right side of (10.34) is the same as X[m] except that m is replaced with m − k. Therefore,
				(10.35)
			10.3.4 Time Convolution
				(10.36)
				Proof:
				Since
				then,
				(10.37)
Next, interchanging the order of the indices n and k in the lower limit of the summation, and also changing the range of summation from N − 1 to N + k − 1 for the bracketed term on the right side of (10.37), we obtain
				(10.38)
				Now, from the time shifting theorem,
				(10.39)
				and by substitution into (10.38),
				(10.40)
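The time-convolution theorem can be verified numerically: the DFT of the circular convolution of two sequences equals the product of their DFTs. A sketch in Python with NumPy, with arbitrary 4-point sequences:

```python
import numpy as np

def circular_convolution(x, h):
    """y[n] = sum_k x[k] h[(n-k) mod N], computed by direct summation."""
    N = len(x)
    return np.array([sum(x[k] * h[(n - k) % N] for k in range(N))
                     for n in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, -1.0, 0.5])

lhs = np.fft.fft(circular_convolution(x, h))
rhs = np.fft.fft(x) * np.fft.fft(h)
print(np.allclose(lhs, rhs))   # True
```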
			10.3.5 Frequency Convolution
				(10.41)
				Proof:
				The proof is obtained by taking the Inverse DFT, changing the order of the summation, and letting .
		10.4 The Sampling Theorem
The sampling theorem, also known as Shannon's Sampling Theorem, states that if a continuous time function f(t) is band-limited with its highest frequency component less than W, then f(t) can be completely recovered from its sampled values if the sampling frequency is equal to or greater than 2W.
For example, if we assume that the highest frequency component in a signal is , this signal must be sampled at or higher so that it can be completely specified by its sampled values. If the sampling frequency remains the same, i.e., , and the highest frequency component is increased, the sampling theorem is violated and aliasing will occur.
A typical digital signal processing system contains a low-pass analog filter, often called a pre-sampling filter, to ensure that the highest frequency allowed into the system will be equal to or less than half the sampling rate so that the signal can be recovered.
If a signal is not band-limited, or if the sampling rate is too low, the spectral components of the signal will overlap one another, and this condition is called aliasing. To avoid aliasing, we must increase the sampling rate.
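Aliasing is easy to demonstrate: two sinusoids whose frequencies differ by the sampling frequency produce identical sample values. A sketch in Python, with an assumed sampling frequency of 10 Hz:

```python
import math

fs = 10.0            # hypothetical sampling frequency in Hz
f1, f2 = 2.0, 12.0   # f2 = f1 + fs, so f2 aliases onto f1

# Sampling both sinusoids at t = n/fs yields identical sequences.
s1 = [math.cos(2 * math.pi * f1 * n / fs) for n in range(8)]
s2 = [math.cos(2 * math.pi * f2 * n / fs) for n in range(8)]
print(max(abs(a - b) for a, b in zip(s1, s2)))   # ~0 (within roundoff)
```

Once sampled, the 12 Hz component is indistinguishable from the 2 Hz component, which is why the pre-sampling filter must remove it beforehand.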
A discrete time signal may have an infinite length; in this case, it must be limited to a finite interval before it is sampled. We can terminate the signal at the desired finite number of terms by multiplying it by a window function. There are several window functions, such as the rectangular, triangular, Hamming, Hanning, and Kaiser windows; however, we must choose a suitable window function, or the sequence will be terminated abruptly.
A third problem that may arise in using the DFT results from the fact that the spectrum of the DFT is not continuous; it is a discrete function whose spectrum consists of integer multiples of the fundamental frequency. It is possible, however, that some significant frequency component lies between two spectral lines and goes undetected; this is called the picket-fence effect. It can be alleviated by adding zeros at the end of the discrete signal, thereby changing the period, which in turn changes the location of the spectral lines.
			To gain a better understanding of the sampling frequency , Nyquist frequency , number of samples , and the periods in the time and frequency domains, we will adopt the following notations:
			These quantities are shown in Figure 10.6. Thus, we have the relations
			(10.42)
			Example 10.4
				The period of a periodic discrete time function is millisecond, and it is sampled at equally spaced points. It is assumed that with this number of samples, the sampling theorem is satisfied and thus there will be no aliasing.
				a. Compute the period of the frequency spectrum in .
				b. Compute the interval between frequency components in .
				c. Compute the sampling frequency
			d. Compute the Nyquist frequency
			Solution:
				For this example, and . Therefore, the time between successive time components is
				Then,
				a. the period of the frequency spectrum is
				b. the interval between frequency components is
				c. the sampling frequency is
				d. the Nyquist frequency must be equal or less than half the sampling frequency, that is,
		10.5 Number of Operations Required to Compute the DFT
Let us consider a signal whose highest (Nyquist) frequency is , the sampling frequency is , and 1024 samples are taken, i.e., N = 1024. The time required to compute the entire DFT would be
			(10.43)
To compute the number of operations required to complete this task, let us expand the N-point DFT defined as
			(10.44)
			Then,
			(10.45)
			and it is worth remembering that
			(10.46)
Since W_N is a complex number, the computation of any frequency component X[m] requires N complex multiplications and N complex additions, that is, 2N complex arithmetic operations are required to compute any frequency component of X[m]. If we assume that x[n] is real, then only N/2 of the X[m] components are unique, and the computation of the entire frequency spectrum requires on the order of N² complex arithmetic operations.
			Fortunately, many of the terms in (10.45) are unity. Moreover, because of some symmetry properties, the number of complex operations can be reduced considerably. This is possible with the algorithm known as FFT (Fast Fourier Transform) that was devel...
		10.6 The Fast Fourier Transform (FFT)
In this section, we will be making extensive use of the complex rotating vector
W_N = e^(−j2π/N)  (10.47)
and the additional properties of W_N in (10.48) below.
			(10.48)
			We rewrite the array of (10.45) in matrix form as shown in (10.49) below.
			(10.49)
			This is a complex Vandermonde matrix and it is expressed in a more compact form as
			(10.50)
			The algorithm that was developed by Cooley and Tukey, is based on matrix decomposition methods, where the matrix in (10.50) is factored into smaller matrices, that is,
			(10.51)
where L is chosen as L = log2 N, or N = 2^L.
Each row of the matrices on the right side of (10.51) contains only two non-zero terms, unity and W^k. Then, the vector X[m] is obtained from
			(10.52)
The FFT computation begins with the rightmost matrix in (10.52). This matrix operates on vector x[n], producing a new vector, and each component of this new vector is obtained by one multiplication and one addition. This is because there are only two non-zero elements on a given row, and one of them is unity. Since there are L = log2 N matrices, the total number of complex operations is of the order N log2 N.
			Under those assumptions, we construct Table 10.3 to compare the percentage of computations achieved by the use of FFT versus the DFT.
			TABLE 10.3 DFT and FFT Computations
N       DFT (N^2)   FFT (N log2 N)   FFT/DFT %
8             64            24            37.5
16           256            64            25
32          1024           160            15.6
64          4096           384             9.4
128        16384           896             5.5
256        65536          2048             3.1
512       262144          4608             1.8
1024     1048576         10240             1
2048     4194304         22528             0.5
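The entries of Table 10.3 follow from counting N² operations for the direct DFT and N log2 N for the FFT. The short Python sketch below regenerates the table (the book's own computations are done in MATLAB):

```python
import math

# Reproduce the operation counts of Table 10.3: the direct DFT needs N^2
# complex operations, the FFT on the order of N*log2(N).
for N in (8, 16, 32, 64, 128, 256, 512, 1024, 2048):
    dft_ops = N ** 2
    fft_ops = N * int(math.log2(N))
    ratio = 100.0 * fft_ops / dft_ops
    print(f"{N:5d}  {dft_ops:8d}  {fft_ops:6d}  {ratio:5.1f}%")
```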
				A plethora of FFT algorithms has been developed and published. They are divided into two main categories:
				Category I
				a. Decimation in Time
				b. Decimation in Frequency
				Category II
				a. In-Place
				b. Natural Input-Output (Double-Memory Technique)
				To define Category I, we need to refer to the DFT and Inverse DFT definitions. They are repeated below for convenience.
				(10.53)
				and
				(10.54)
We observe that (10.53) and (10.54) differ only by the factor 1/N, and the replacement of W_N with W_N^(−1). If the DFT algorithm is developed in terms of the direct DFT of (10.53), it is referred to as decimation in time, and if it is developed in terms of the Inverse DFT of (10.54), it is referred to as decimation in frequency. In the latter case, the vector
W_N = e^(−j2π/N)
is replaced by its complex conjugate
W_N^(−1) = e^(j2π/N)
that is, the sine terms are reversed in sign, and the multiplication by the factor 1/N can be done either at the input or output.
				The Category II algorithm schemes are described in the Table 10.4 along with their advantages and disadvantages.
			TABLE 10.4 In-Place and Natural Input-Output algorithms
In-Place: the result of a computation of a new vector is stored in the same memory location as the result of the previous computation. Advantage: eliminates the need for intermediate storage and memory requirements. Disadvantage: the output appears in an unnatural order and must be re-ordered.
Natural Input-Output (Double Memory): the output appears in the same (natural) order as the input. Advantage: no re-ordering is required. Disadvantage: requires more internal memory to preserve the natural order.
				Now, we will explain how the unnatural order occurs and how it can be re-ordered.
Consider the discrete time sequence x[n]; its DFT is found from
				(10.55)
We assume that N is a power of 2 and thus it is divisible by 2. Then, we can decompose the sequence x[n] into two subsequences: one which contains the even-indexed components, and one which contains the odd-indexed components. In other words, we choose these as
				and
for n = 0, 1, 2, ..., N/2 − 1
Each of these subsequences has a length of N/2 and thus, their DFTs are, respectively,
				(10.56)
				and
				(10.57)
				where
				(10.58)
For an 8-point DFT, N = 8. Expanding (10.55) for N = 8, we obtain
				(10.59)
				Expanding (10.56) for and recalling that , we obtain
				(10.60)
				Expanding also (10.57) for and using , we obtain
				(10.61)
The vector W is the same in (10.59), (10.60) and (10.61), and W = W_8 = e^(−j2π/8). Then,
				Multiplying both sides of (10.61) by , we obtain
				(10.62)
				and from (10.59), (10.60) and (10.62), we observe that
				(10.63)
				for .
Continuing the process, we decompose each subsequence into its even- and odd-indexed parts. These are sequences of length N/4.
				Denoting their DFTs as and , and using the relation
				(10.64)
				for , we obtain
				(10.65)
				and
				(10.66)
				The sequences of (10.65) and (10.66) cannot be decomposed further. They justify the statement made earlier, that each computation produces a vector where each component of this vector, for  is obtained by one multiplication and one addition. This is ...
				Substitution of (10.65) and (10.66) into (10.60), yields
				(10.67)
				Likewise, can be decomposed into DFTs of length 2; then, can be computed from
				(10.68)
for m = 0, 1, 2, ..., N − 1. Of course, this procedure can be extended for any N that is divisible by 2.
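The even/odd decomposition derived above is the radix-2 identity X[m] = X_even[m] + W_N^m X_odd[m] for the first N/2 components, with a sign change for the remaining half. It can be checked numerically; a sketch in Python with NumPy, using an arbitrary 8-point sequence:

```python
import numpy as np

x = np.arange(8, dtype=float)     # an arbitrary 8-point sequence
N = len(x)
X = np.fft.fft(x)

E = np.fft.fft(x[0::2])           # N/2-point DFT of the even-indexed terms
O = np.fft.fft(x[1::2])           # N/2-point DFT of the odd-indexed terms
m = np.arange(N // 2)
W = np.exp(-2j * np.pi * m / N)   # W_N^m

print(np.allclose(X[:N // 2], E + W * O))   # True
print(np.allclose(X[N // 2:], E - W * O))   # True
```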
Figure 10.5 shows the signal flow graph of a decimation in time, in-place FFT algorithm for N = 8, where the input is shuffled in accordance with the above procedure. The subscript N in W_N has been omitted for clarity.
In the signal flow graph of Figure 10.5, the input appears in Column 0. The 2-, 4-, and 8-point FFTs are in Columns 1, 2, and 3, respectively.
In simplified form, the output of the node associated with row R and column C, indicated as Y(R, C), is found from
				(10.69)
where 0 ≤ R ≤ 7, 1 ≤ C ≤ 3, and the exponent m in W^m is indicated on the signal flow graph.
The binary input words and the bit-reversed words applicable to this signal flow graph are shown in Table 10.5.
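The reversed-bit words of Table 10.5 can be generated programmatically. A sketch in Python; the helper bit_reverse is, of course, not part of the book's text:

```python
def bit_reverse(n, bits):
    """Reverse the low `bits` bits of n, e.g. 001 -> 100 for bits=3."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (n & 1)
        n >>= 1
    return r

# Shuffled input order for the 8-point signal flow graph of Figure 10.5
order = [bit_reverse(n, 3) for n in range(8)]
print(order)   # [0, 4, 2, 6, 1, 5, 3, 7] -> x[0], x[4], x[2], x[6], ...
```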
				We will illustrate the use of (10.69) with the following example.
				Example 10.5
			TABLE 10.5 Binary words for the signal flow graph of Figure 10.5
n    Binary Word    Reversed-Bit Word    Input Order
0    000            000                  x[0]
1    001            100                  x[4]
2    010            010                  x[2]
3    011            110                  x[6]
4    100            001                  x[1]
5    101            101                  x[5]
6    110            011                  x[3]
7    111            111                  x[7]
				Solution:
		10.7 Summary
· The N-point DFT is defined as
where N represents the number of points that are equally spaced in the interval 0 to 2π on the unit circle of the z-plane, and m = 0, 1, 2, ..., N − 1.
· The N-point Inverse DFT is defined as
for n = 0, 1, 2, ..., N − 1.
· In general, the discrete frequency transform X[m] is complex, and it is expressed as
The real part Re{X[m]} can be computed from
and the imaginary part Im{X[m]} from
			· We can use the fft(x) function to compute the DFT, and the ifft(x) function to compute the Inverse DFT.
· The term e^(−j2π/N) is a rotating vector, where the range 0 ≤ θ ≤ 2π is divided into N equal segments, and it is customary to represent it as W_N, that is,
			and consequently
			Accordingly, the DFT pair is normally denoted as
			and
			· The correspondence between and is denoted as
· If x[n] is an N-point real discrete time function, only N/2 of the frequency components of X[m] are unique.
			· The discrete time and frequency functions are defined as even or odd, in accordance with the relations
			· The even and odd properties of the DFT are similar to those of the continuous Fourier transform and are listed in Table 10.2.
			· The linearity property of the DFT states that
			· The time shift property of the DFT states that
			· The frequency shift property of the DFT states that
			· The time convolution property of the DFT states that
			· The frequency convolution property of the DFT states that
· The sampling theorem, also known as Shannon's Sampling Theorem, states that if a continuous time function f(t) is band-limited with its highest frequency component less than W, then f(t) can be completely recovered from its sampled values if the sampling frequency is equal to or greater than 2W.
· A typical digital signal processing system contains a low-pass analog filter, often called a pre-sampling filter, to ensure that the highest frequency allowed into the system will be equal to or less than half the sampling rate so that the signal can be recovered.
· If a signal is not band-limited, or if the sampling rate is too low, the spectral components of the signal will overlap one another, and this condition is called aliasing.
· If a discrete time signal has an infinite length, we can terminate the signal at a desired finite number of terms by multiplying it by a window function. However, we must choose a suitable window function; otherwise, the sequence will be terminated abruptly.
· If in a discrete time signal some significant frequency component lies between two spectral lines and goes undetected, the picket-fence effect is produced. This effect can be alleviated by adding zeros at the end of the discrete signal, thereby changing the period, which in turn changes the location of the spectral lines.
			· The number of operations required to compute the DFT can be significantly reduced by the FFT algorithm.
· The Category I FFT algorithms are classified either as decimation in time or decimation in frequency. Decimation in time implies that the DFT algorithm is developed in terms of the direct DFT, whereas decimation in frequency implies that it is developed in terms of the Inverse DFT.
			· The Category II FFT algorithms are classified either as in-place or natural input-output. In-place refers to the process where the result of a computation of a new vector is stored in the same memory location as the result of the previous computat...
· The FFT algorithms are usually shown in a signal flow graph. In some signal flow graphs the input is shuffled and the output is natural, and in others the input is natural and the output is shuffled. These combinations occur in both decimation in time and decimation in frequency algorithms.
		10.8 Exercises
			1. Compute the DFT of the sequence ,
			2. A square waveform is represented by the discrete time sequence
			and
			Use MATLAB to compute and plot the magnitude of this sequence.
			3. Prove that
			a.
			b.
			4. The signal flow below is a decimation in time, natural-input, shuffled-output type FFT algorithm. Using this graph and relation (10.69), compute the frequency component . Verify that this is the same as that found in Example 10.5.
			5. The signal flow graph below is a decimation in frequency, natural input, shuffled output type FFT algorithm. There are two equations that relate successive columns. The first is
			and it is used with the nodes where two dashed lines terminate on them.
			The second equation is
and it is used with the nodes where two solid lines terminate on them. The numbers inside the circles denote the power of W_N, and the minus (−) sign below serves as a reminder that the bracketed term of the second equation involves a subtraction. Using ...
			6. Plot the Fourier transform of the rectangular pulse shown below, using the MATLAB fft function. Then, use the ifft function to verify that the inverse transformation produces the rectangular pulse.
7. Plot the Fourier transform of the triangular pulse shown below using the MATLAB fft function. Then, use the ifft function to verify that the inverse transformation produces the triangular pulse.
		10.9 Solutions to End-of-Chapter Exercises
			1.
			where and , . Then,
			for
			:
			:
			:
			:
			Check with MATLAB:
			fn=[1 1 -1 -1]; Fm=fft(fn)
			Fm =
			0 2.0000 - 2.0000i 0 2.0000 + 2.0000i
			2.
			and
			fn=[1 1 1 1 -1 -1 -1 -1]; magXm=abs(fft(fn)); fprintf(' \n');...
			fprintf('magXm0 = %4.2f \t', magXm(1)); fprintf('magXm1 = %4.2f \t', magXm(2));...
			fprintf('magXm2 = %4.2f \t', magXm(3)); fprintf('magXm3 = %4.2f \t', magXm(4)); fprintf('\n');...
			fprintf('magXm4 = %4.2f \t', magXm(5)); fprintf('magXm5 = %4.2f \t', magXm(6));...
			fprintf('magXm6 = %4.2f \t', magXm(7)); fprintf('magXm7 = %4.2f \t', magXm(8))
			magXm0 = 0.00 magXm1 = 5.23 magXm2 = 0.00 magXm3 = 2.16
			magXm4 = 0.00 magXm5 = 2.16 magXm6 = 0.00 magXm7 = 5.23
			The MATLAB stem command can be used to plot discrete sequence data. For this Exercise we use the script
			fn=[1 1 1 1 -1 -1 -1 -1]; stem(abs(fft(fn)))
			and we obtain the plot below.
			3.
			From the frequency shift property of the DFT
			(1)
			Then,
			(2)
			Adding (1) and (2) and multiplying the sum by we obtain
			and thus
			Likewise, subtracting (2) from (1) and multiplying the difference by we obtain
			4.
			(1)
			where
			and
			Going backwards (to the left) we find that
			and by substitution into (1)
			(2)
			From the DFT definition
			(3)
By comparison, we see that the first 4 terms of (3) are the same as the first, second, fourth, and sixth terms of (2) since , that is, , , and so on. The remaining terms in (2) and (3) are also the same since and thus , , , and .
			5.
			We are asked to compute only. However, we will derive all equations as we did in Example 10.5.
			Column 1 (C=1):
			(1)
			Column 2 (C=2):
			(2)
			Column 3 (C=3):
			(3)
			(4)
			where
			and
			From (1)
			and by substitution into (4)
			(5)
			From Exercise 4,
			(6)
			Since , and (see proof below), we see that , , , , , and . Therefore, (5) and (6) are the same.
			Proof that :
			6.
			The rectangular pulse is produced with the MATLAB script below.
			x=[linspace(-2,-1,100) linspace(-1,1,100) linspace(1,2,100)];...
			y=[linspace(0,0,100) linspace(1,1,100) linspace(0,0,100)]; plot(x,y)
			and the FFT is produced with
			plot(x, fft(y))
			The Inverse FFT is produced with
			plot(x,ifft(fft(y)))
			The original rectangular pulse, its FFT, and the Inverse FFT are shown below.
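A NumPy equivalent of this exercise is sketched below: sample the rectangular pulse, transform it with fft, and confirm that ifft recovers the original samples (the 400-point grid is an illustrative choice):

```python
import numpy as np

# Sample a rectangular pulse of width 2 on [-2, 2], take its FFT,
# and verify that the inverse FFT recovers the original samples.
x = np.linspace(-2, 2, 400)
y = np.where(np.abs(x) <= 1, 1.0, 0.0)   # rectangular pulse
Y = np.fft.fft(y)
y_back = np.fft.ifft(Y)
print(np.allclose(y_back.real, y))  # True: the round trip recovers the pulse
```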
			7.
			The triangular pulse is produced with the MATLAB script below.
			x=linspace(-1,1,100); y=[linspace(0,1,50) linspace(1,0,50)]; plot(x,y)
			and the FFT is produced with
			plot(x, fft(y))
			The Inverse FFT is produced with
			plot(x,ifft(fft(y)))
			The original triangular pulse, its FFT, and the Inverse FFT are shown below.
	Chapter 11
		Analog and Digital Filters
			This chapter is an introduction to analog and digital filters. It begins with the basic analog filters, transfer functions, and frequency response. The magnitude characteristics of Butterworth and Chebyshev filters and conversion of analog to equivale...
			11.1 Filter Types and Classifications
				Analog filters are defined over a continuous range of frequencies. They are classified as low-pass, high-pass, band-pass and band-elimination (stop-band). The ideal magnitude characteristics of each are shown in Figure 11.1. The ideal characteristics...
				Another, less frequently mentioned filter is the all-pass or phase-shift filter. It has a constant magnitude response, but its phase varies with frequency. Please refer to Exercise 4, Page 11-94, at the end of this chapter.
				A digital filter, in general, is a computational process, or algorithm that converts one sequence of numbers representing the input signal into another sequence representing the output signal. Accordingly, a digital filter can perform functions as di...
				Analog filter functions have been used extensively as prototype models for designing digital filters and, therefore, we will present them first.
			11.2 Basic Analog Filters
				An analog filter can also be classified as passive or active. Passive filters consist of passive devices such as resistors, capacitors and inductors. Active filters are, generally, operational amplifiers with resistors and capacitors connected to the...
				11.2.1 RC Low-Pass Filter
				11.2.2 RC High-Pass Filter
				11.2.3 RLC Band-Pass Filter
				11.2.4 RLC Band-Elimination Filter
			11.3 Low-Pass Analog Filter Prototypes
				In this section, we will use the analog low-pass filter as a basis. We will see later that, using transformations, we can derive high-pass and the other types of filters from a basic low-pass filter. We will discuss the Butterworth, Chebyshev Type I,...
				The first step in the design of an analog low-pass filter is to derive a suitable magnitude-squared function , and from it derive a function such that
				(11.11)
				(11.12)
				Example 11.1
				Solution:
				Example 11.2
				Solution:
				11.3.1 Butterworth Analog Low-Pass Filter Design
				TABLE 11.1 Values for the coefficients in (11.31)
				TABLE 11.2 Factored forms for Butterworth low-pass filters
				TABLE 11.3 Coefficients for Butterworth low-pass filter designs
				TABLE 11.4 Ratio of conventional cutoff frequency to ripple width frequency
			11.4 High-Pass, Band-Pass, and Band-Elimination Filter Design
				Transformation methods have been developed where a low-pass filter can be converted to another type of filter simply by transforming the complex variable s. These transformations are listed in Table 11.5 where  is the cutoff frequency of a low-pass fi...
				Example 11.12
				Solution:
				TABLE 11.5 Filter transformations
			Power of
			Numerator
			Denominator
				In all of the above examples, we have shown the magnitude, but not the phase response of each filter type. However, we can use the MATLAB function bode(num,den) to generate both the magnitude and phase responses of any transfer function describing th...
				Example 11.17
				Solution:
				TABLE 11.6 Advantages / Disadvantages of different types of filters
			11.5 Digital Filters
			11.6 Digital Filter Design with Simulink
			11.7 Summary
			11.8 Exercises
			11.9 Solutions to End-of-Chapter Exercises
Appendix A Signals and Systems Fifth
Appendix B Signals and Systems Fifth
	Appendix B
		B.1 Simulink and its Relation to MATLAB
		B.2 Simulink Demos
Appendix C Signals and Systems Fifth
	Appendix C
		A Review of Complex Numbers
			This appendix is a review of the algebra of complex numbers. The basic operations are defined and illustrated with several examples. Applications using Euler’s identities are presented, and the exponential and polar forms are discussed and illustrat...
			C.1 Definition of a Complex Number
				In the language of mathematics, the square root of minus one is denoted as i, that is, i = √−1. In the electrical engineering field, we denote i as j to avoid confusion with current i. Essentially, j is an operator that produces a 90-degree counterclockwise rotatio...
				Note: In our subsequent discussion, we will designate the x-axis (abscissa) as the real axis, and the y-axis (ordinate) as the imaginary axis with the understanding that the “imaginary” axis is just as “real” as the real axis. In other words,...
				An imaginary number is the product of a real number, say r, by the operator j. Thus, r is a real number and jr is an imaginary number.
				A complex number is the sum (or difference) of a real number and an imaginary number. For example, the number A = x + jy, where x and y are both real numbers, is a complex number. Then, x = Re{A} and y = Im{A}, where Re{A} denotes the real part of A, and Im{A} the imaginary part of A.
				By definition, two complex numbers A = a + jb and B = c + jd, where a, b, c, and d are real numbers, are equal if and only if their real parts are equal, and also their imaginary parts are equal. Thus, A = B if and only if a = c and b = d.
			C.2 Addition and Subtraction of Complex Numbers
				The sum of two complex numbers has a real component equal to the sum of the real components, and an imaginary component equal to the sum of the imaginary components. For subtraction, we change the signs of the components of the subtrahend and we perf...
				and
				then
				and
				Example C.1
				Solution:
			C.3 Multiplication of Complex Numbers
				Complex numbers are multiplied using the rules of elementary algebra, and making use of the fact that . Thus, if
				and
				then
				and since j² = −1, it follows that
				(C.1)
				Example C.2
				Solution:
				Example C.3
				Solution:
				Example C.4
				Solution:
			C.4 Division of Complex Numbers
				When performing division of complex numbers, it is desirable to obtain the quotient separated into a real part and an imaginary part. This procedure is called rationalization of the quotient, and it is done by multiplying the denominator by its conju...
				(C.2)
				In (C.2), we multiplied both the numerator and denominator by the conjugate of the denominator to eliminate the j operator from the denominator of the quotient. Using this procedure, we see that the quotient is easily separated into a real and an ima...
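The rationalization procedure can be sketched in Python's complex arithmetic; the values 3 + j4 and 1 + j2 are illustrative, not taken from the text:

```python
# Rationalization of a complex quotient as in (C.2): multiply numerator and
# denominator by the conjugate of the denominator so the denominator is real.
a, b = 3.0, 4.0                       # numerator  a + jb
c, d = 1.0, 2.0                       # denominator c + jd
num = complex(a, b) * complex(c, -d)  # (a + jb)(c - jd)
den = c**2 + d**2                     # (c + jd)(c - jd) = c^2 + d^2, purely real
manual = complex(num.real / den, num.imag / den)
direct = complex(a, b) / complex(c, d)  # Python's division performs the same steps
print(manual, direct)
```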
				Example C.5
				Solution:
			C.5 Exponential and Polar Forms of Complex Numbers
				The relations
				e^(jθ) = cos θ + j sin θ    (C.3)
				and
				e^(−jθ) = cos θ − j sin θ    (C.4)
				are known as Euler’s identities.
				Multiplying (C.3) by the real positive constant C we obtain:
				(C.5)
				This expression represents a complex number, say , and thus
				(C.6)
				where the left side of (C.6) is the exponential form, and the right side is the rectangular form.
				Equating real and imaginary parts in (C.5) and (C.6), we obtain
				(C.7)
				Squaring and adding the expressions in (C.7), we obtain
				Then,
				or
				(C.8)
				Also, from (C.7)
				or
				(C.9)
				To convert a complex number from rectangular to exponential form, we use the expression
				x + jy = √(x² + y²)·e^(jθ), where θ = tan⁻¹(y/x)    (C.10)
				To convert a complex number from exponential to rectangular form, we use the expressions
				C·e^(jθ) = C cos θ + jC sin θ    (C.11)
				The polar form is essentially the same as the exponential form but the notation is different, that is,
				(C.12)
				where the left side of (C.12) is the exponential form, and the right side is the polar form.
				We must remember that the phase angle is always measured with respect to the positive real axis, and rotates in the counterclockwise direction.
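These conversions can be sketched in Python; the value 3 + j4 is illustrative, not an example from the text:

```python
import cmath
import math

# Rectangular-to-polar conversion per (C.10), and back per (C.11)
A = complex(3, 4)
C = abs(A)                  # magnitude, sqrt(x^2 + y^2)
theta = cmath.phase(A)      # angle measured from the positive real axis
x = C * math.cos(theta)     # back to rectangular form
y = C * math.sin(theta)
print(C, math.degrees(theta))  # 5.0 and about 53.13 degrees
```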
				Example C.6
				Solution:
				Example C.7
				Solution:
				Example C.8
				Solution:
				Example C.9
				Solution:
Appendix D Signals and Systems Fifth
	Appendix D
		Matrices and Determinants
			This appendix is an introduction to matrices and matrix operations. Determinants, Cramer’s rule, and Gauss’s elimination method are reviewed. Some definitions and examples are not applicable to the material presented in this text, but are included...
			D.1 Matrix Definition
				A matrix is a rectangular array of numbers such as those shown below.
				In general form, a matrix A is denoted as
				The numbers aij are the elements of the matrix, where the index i indicates the row and j indicates the column in which each element is positioned. For instance, a43 indicates the element positioned in the fourth row and third column.
				A matrix of m rows and n columns is said to be a matrix of order m × n.
				If m = n, the matrix is said to be a square matrix of order m (or n). Thus, if a matrix has five rows and five columns, it is said to be a square matrix of order 5.
				In a square matrix, the elements a11, a22, …, ann are called the main diagonal elements. Alternately, we say that the matrix elements a11, a22, …, ann are located on the main diagonal.
				† The sum of the diagonal elements of a square matrix A is called the trace of A.
				† A matrix in which every element is zero, is called a zero matrix.
			D.2 Matrix Operations
				Two matrices A = [aij] and B = [bij] are equal, that is, A = B, if and only if aij = bij for every i and j.
				Two matrices are said to be conformable for addition (subtraction), if they are of the same order .
				If and are conformable for addition (subtraction), their sum (difference) will be another matrix with the same order as and , where each element of is the sum (difference) of the corresponding elements of and , that is,
				Compute and given that
				and
				and
				Check with MATLAB:
				A=[1 2 3; 0 1 4]; B=[2 3 0; -1 2 5]; % Define matrices A and B
				A+B, A-B % Add A and B, then Subtract B from A
				ans = 3 5 3
				-1 3 9
				ans = -1 -1 3
				1 -1 -1
				Check with Simulink:
				If k is any scalar (a positive or negative number), and not [k] which is a 1 × 1 matrix, then multiplication of a matrix A by the scalar k is the multiplication of every element of A by k.
				Multiply the matrix
				by
				a.
				b.
				a.
				b.
				Check with MATLAB:
				k1=5; k2=(-3 + 2*j); % Define scalars k1 and k2
				A=[1 -2; 2 3]; % Define matrix A
				k1*A, k2*A % Multiply matrix A by scalar k1, then by scalar k2
				ans = 5 -10
				10 15
				ans =
				-3.0000+ 2.0000i 6.0000- 4.0000i
				-6.0000+ 4.0000i -9.0000+ 6.0000i
				Two matrices and are said to be conformable for multiplication in that order, only when the number of columns of matrix is equal to the number of rows of matrix . That is, the product  (but not ) is conformable for multiplication only if  is an  matr...
				For the product we have:
				For matrix multiplication, the operation is row by column. Thus, to obtain the product , we multiply each element of a row of by the corresponding element of a column of ; then, we add these products.
				Matrices and are defined as
				and
				Compute the products and
				The dimensions of matrices and are respectively ; therefore the product is feasible, and will result in a , that is,
				The dimensions for and are respectively and therefore, the product is also feasible. Multiplication of these will produce a matrix as follows:
				Check with MATLAB:
				C=[2 3 4]; D=[1 -1 2]'; % Define matrices C and D. Observe that D is a column vector
				C*D, D*C % Multiply C by D, then multiply D by C
				ans = 7
				ans = 2 3 4
				-2 -3 -4
				4 6 8
				Division of one matrix by another, is not defined. However, an analogous operation exists, and it will become apparent later in this appendix when we discuss the inverse of a matrix.
			D.3 Special Forms of Matrices
				† A square matrix is said to be upper triangular when all the elements below the diagonal are zero. The matrix of (D.4) is an upper triangular matrix. In an upper triangular matrix, not all elements above the diagonal need to be non-zero.
				† A square matrix is said to be lower triangular, when all the elements above the diagonal are zero. The matrix of (D.5) is a lower triangular matrix. In a lower triangular matrix, not all elements below the diagonal need to be non-zero.
				† A square matrix is said to be diagonal, if all elements are zero, except those in the diagonal. The matrix of (D.6) is a diagonal matrix.
				† A diagonal matrix is called a scalar matrix if a11 = a22 = … = ann = k, where k is a scalar. The matrix of (D.7) is a scalar matrix.
				A scalar matrix with k = 1 is called an identity matrix I. Shown below are identity matrices of several orders.
				The MATLAB eye(n) function displays an identity matrix. For example,
				eye(4) % Display a 4 by 4 identity matrix
				ans = 1 0 0 0
				0 1 0 0
				0 0 1 0
				0 0 0 1
				Likewise, the eye(size(A)) function, produces an identity matrix whose size is the same as matrix . For example, let matrix be defined as
				A=[1 3 1; -2 1 -5; 4 -7 6] % Define matrix A
				A =
				1 3 1
				-2 1 -5
				4 -7 6
				Then,
				eye(size(A))
				displays
				ans =
				1 0 0
				0 1 0
				0 0 1
				† The transpose of a matrix A, denoted as Aᵀ, is the matrix that is obtained when the rows and columns of matrix A are interchanged. For example, if
				In MATLAB, we use the apostrophe (') operator to obtain the transpose of a matrix. Thus, for the above example,
				A=[1 2 3; 4 5 6] % Define matrix A
				A =
				1 2 3
				4 5 6
				A' % Display the transpose of A
				ans =
				1 4
				2 5
				3 6
				† A symmetric matrix A is a matrix such that Aᵀ = A, that is, the transpose of matrix A is the same as A. An example of a symmetric matrix is shown below.
				† If a matrix has complex numbers as elements, the matrix obtained from by replacing each element by its conjugate, is called the conjugate of , and it is denoted as , for example,
				MATLAB has two built-in functions which compute the complex conjugate of a number. The first, conj(x), computes the complex conjugate of any complex number, and the second, conj(A), computes the conjugate of a matrix . Using MATLAB with the matrix  d...
				A = [1+2j j; 3 2-3j] % Define and display matrix A
				A =
				1.0000 + 2.0000i 0 + 1.0000i
				3.0000 2.0000 - 3.0000i
				conj_A=conj(A) % Compute and display the conjugate of A
				conj_A =
				1.0000 - 2.0000i 0 - 1.0000i
				3.0000 2.0000 + 3.0000i
				† A square matrix A such that Aᵀ = −A is called skew-symmetric. For example,
				Therefore, matrix above is skew symmetric.
				† A square matrix A such that (A*)ᵀ = A is called Hermitian. For example,
				Therefore, matrix above is Hermitian.
				† A square matrix A such that (A*)ᵀ = −A is called skew-Hermitian. For example,
				Therefore, matrix above is skew-Hermitian.
			D.4 Determinants
				Let matrix be defined as the square matrix
				then, the determinant of A, denoted as detA, is defined as
				The determinant of a square matrix of order n is referred to as determinant of order n.
				Let A be a determinant of order 2, that is,
				Then,
				Matrices and are defined as
				and
				Compute and .
				Check with MATLAB:
				A=[1 2; 3 4]; B=[2 -1; 2 0]; % Define matrices A and B
				det(A), det(B) % Compute the determinants of A and B
				ans = -2
				ans = 2
				Let A be a matrix of order 3, that is,
				then, detA is found from
				A convenient method to evaluate a determinant of order 3 is to write the first two columns to the right of the matrix, and add the products formed by the diagonals from upper left to lower right; then subtract the products formed by the diagonals f...
				This method works only with second and third order determinants. To evaluate higher order determinants, we must first compute the cofactors; these will be defined shortly.
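The diagonal rule can be sketched directly in Python and checked against a library determinant; the 3 × 3 matrix below is the matrix A used in the MATLAB check that follows in the text:

```python
import numpy as np

# The "repeat the first two columns" diagonal rule for a 3rd-order
# determinant, checked against numpy.linalg.det
A = np.array([[2, 3, 5],
              [1, 0, 1],
              [2, 1, 0]])
(a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
det_rule = (a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            - a31 * a22 * a13 - a32 * a23 * a11 - a33 * a21 * a12)
print(det_rule, np.linalg.det(A))  # both 9
```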
				Compute and if matrices and are defined as
				and
				or
				Likewise,
				or
				Check with MATLAB:
				A=[2 3 5; 1 0 1; 2 1 0]; det(A) % Define matrix A and compute detA
				ans = 9
				B=[2 -3 -4; 1 0 -2; 0 -5 -6];det(B)
				% Define matrix B and compute detB
				ans = -18
			D.5 Minors and Cofactors
				Let matrix be defined as the square matrix of order as shown below.
				If we remove the elements of its ith row and jth column, the remaining (n − 1)-square matrix is called the minor of A, and it is denoted as Mij.
				The signed minor (−1)^(i+j)·Mij is called the cofactor of aij, and it is denoted as αij.
				Matrix is defined as
				Compute the minors , , and the cofactors , and .
				and
				The remaining minors
				and cofactors
				are defined similarly.
				Compute the cofactors of matrix defined as
				It is useful to remember that the signs of the cofactors follow the pattern below
				that is, the cofactors on the diagonals have the same sign as their minors.
				Let A be a square matrix of any size; the value of the determinant of A is the sum of the products obtained by multiplying each element of any row or any column by its cofactor.
				Matrix is defined as
				Compute the determinant of using the elements of the first row.
				Check with MATLAB:
				A=[1 2 -3; 2 -4 2; -1 2 -6]; det(A) % Define matrix A and compute detA
				ans = 40
				We must use the above procedure to find the determinant of a matrix of order 4 or higher. Thus, a fourth-order determinant can first be expressed as the sum of the products of the elements of its first row by their cofactors, as shown below.
				Determinants of order five or higher can be evaluated similarly.
				Compute the value of the determinant of the matrix defined as
				Using the above procedure, we will multiply each element of the first column by its cofactor. Then,
				Next, using the procedure of Example D.5 or Example D.8, we find
				, , ,
				and thus
				We can verify our answer with MATLAB as follows:
				A=[ 2 -1 0 -3; -1 1 0 -1; 4 0 3 -2; -3 0 0 1]; delta = det(A)
				delta = -33
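The cofactor (Laplace) expansion along the first column, as used above, can be written as a short recursive routine; this is a Python sketch applied to the same 4 × 4 matrix:

```python
import numpy as np

# Cofactor expansion along the first column, implemented recursively
def det_cofactor(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):
        # minor: delete row i and column 0, then expand recursively
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
        total += (-1) ** i * A[i, 0] * det_cofactor(minor)
    return total

A = [[ 2, -1, 0, -3],
     [-1,  1, 0, -1],
     [ 4,  0, 3, -2],
     [-3,  0, 0,  1]]
print(det_cofactor(A))  # -33.0, matching delta above
```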
				Some useful properties of determinants are given below.
				Property 1: If all elements of one row or one column are zero, the determinant is zero. An example of this is the determinant of the cofactor above.
				Property 2: If all the elements of one row or column are m times the corresponding elements of another row or column, the determinant is zero. For example, if
				then,
				Here, is zero because the second column in is times the first column.
				Check with MATLAB:
				A=[2 4 1; 3 6 1; 1 2 1]; det(A)
				ans = 0
				Property 3: If two rows or two columns of a matrix are identical, the determinant is zero. This follows from Property 2 with .
			D.6 Cramer’s Rule
				Let us consider the systems of the three equations below
				and let
				Cramer’s rule states that the unknowns x, y, and z can be found from the relations
				provided that the determinant D (delta) is not zero.
				We observe that the numerators of (D.31) are determinants that are formed from D by the substitution of the known values , , and , for the coefficients of the desired unknown.
				Cramer’s rule applies to systems of two or more equations.
				If (D.30) is a homogeneous set of equations, that is, if , then, are all zero as we found in Property 1 above. Then, also.
				Use Cramer’s rule to find , , and if
				and verify your answers with MATLAB.
				Rearranging the unknowns , and transferring known values to the right side, we obtain
				By Cramer’s rule,
				Using relation (D.31) we obtain
				We will verify with MATLAB as follows:
				% The following code will compute and display the values of v1, v2 and v3.
				format rat % Express answers in ratio form
				B=[2 -1 3; -4 -3 -2; 3 1 -1]; % The elements of the determinant D of matrix B
				delta=det(B); % Compute the determinant D of matrix B
				d1=[5 -1 3; 8 -3 -2; 4 1 -1]; % The elements of D1
				detd1=det(d1); % Compute the determinant of D1
				d2=[2 5 3; -4 8 -2; 3 4 -1]; % The elements of D2
				detd2=det(d2); % Compute the determinant of D2
				d3=[2 -1 5; -4 -3 8; 3 1 4]; % The elements of D3
				detd3=det(d3); % Compute the determinant of D3
				v1=detd1/delta; % Compute the value of v1
				v2=detd2/delta; % Compute the value of v2
				v3=detd3/delta; % Compute the value of v3
				%
				disp('v1=');disp(v1); % Display the value of v1
				disp('v2=');disp(v2); % Display the value of v2
				disp('v3=');disp(v3); % Display the value of v3
				v1= 17/7
				v2= -34/7
				v3= -11/7
				These are the same values as in (D.34)
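Cramer's rule can also be sketched in NumPy: replace each column of the coefficient matrix with the right-hand side and divide determinants. The system is the one of this example:

```python
import numpy as np

# Cramer's rule, relation (D.31): each unknown is det(Dk)/det(D),
# where Dk has its k-th column replaced by the right-hand side b
D = np.array([[ 2., -1.,  3.],
              [-4., -3., -2.],
              [ 3.,  1., -1.]])
b = np.array([5., 8., 4.])
delta = np.linalg.det(D)
v = []
for k in range(3):
    Dk = D.copy()
    Dk[:, k] = b                        # substitute b for the k-th column
    v.append(np.linalg.det(Dk) / delta)
print(v)  # approximately [17/7, -34/7, -11/7]
```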
			D.7 Gaussian Elimination Method
				We can find the unknowns in a system of two or more equations also by the Gaussian elimination method. With this method, the objective is to eliminate one unknown at a time. This can be done by multiplying the terms of any of the equations of the sys...
				Use the Gaussian elimination method to find , , and of the system of equations
				As a first step, we add the first equation of (D.35) with the third to eliminate the unknown v2 and we obtain the equation
				Next, we multiply the third equation of (D.35) by 3, and we add it with the second to eliminate , and we obtain the equation
				Subtraction of (D.37) from (D.36) yields
				Now, we can find the unknown from either (D.36) or (D.37). By substitution of (D.38) into (D.36) we obtain
				Finally, we can find the last unknown from any of the three equations of (D.35). By substitution into the first equation we obtain
				These are the same values as those we found in Example D.10.
				The Gaussian elimination method works well if the coefficients of the unknowns are small integers, as in Example D.11. However, it becomes impractical if the coefficients are large or fractional numbers.
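The elimination steps above can be sketched as a short routine; this Python version (forward elimination with back substitution, no pivoting, which is adequate for small systems with nonzero pivots) is applied to the same system as Example D.10:

```python
import numpy as np

# Minimal Gaussian elimination with back substitution (no pivoting)
def gauss_solve(A, b):
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):                     # forward elimination
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x_sol = gauss_solve([[2, -1, 3], [-4, -3, -2], [3, 1, -1]], [5, 8, 4])
print(x_sol)  # approximately [17/7, -34/7, -11/7]
```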
			D.8 The Adjoint of a Matrix
				Let us assume that is an n square matrix and is the cofactor of . Then the adjoint of , denoted as , is defined as the n square matrix below.
				We observe that the cofactors of the elements of the ith row (column) of are the elements of the ith column (row) of .
				Compute if Matrix is defined as
			D.9 Singular and Non-Singular Matrices
				An n-square matrix A is called singular if detA = 0; if detA ≠ 0, A is called non-singular.
				Matrix is defined as
				Determine whether this matrix is singular or non-singular.
				Therefore, matrix is singular.
			D.10 The Inverse of a Matrix
				If A and B are n-square matrices such that AB = BA = I, where I is the identity matrix, B is called the inverse of A, denoted as B = A⁻¹, and likewise, A is called the inverse of B, that is, A = B⁻¹.
				If a matrix A is non-singular, we can compute its inverse from the relation
				A⁻¹ = (1/detA)·adjA    (D.44)
				Matrix is defined as
				Compute its inverse, that is, find
				Here, , and since this is a non-zero value, it is possible to compute the inverse of using (D.44).
				From Example D.12,
				Then,
				Check with MATLAB:
				A=[1 2 3; 1 3 4; 1 4 3], invA=inv(A) % Define matrix A and compute its inverse
				A = 1 2 3
				1 3 4
				1 4 3
				invA = 3.5000 -3.0000 0.5000
				-0.5000 0 0.5000
				-0.5000 1.0000 -0.5000
				Multiplication of a matrix A by its inverse A⁻¹ produces the identity matrix I, that is,
				AA⁻¹ = I or A⁻¹A = I    (D.47)
				Prove the validity of (D.47) for the Matrix defined as
				Proof:
				Then,
				and
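The same verification can be sketched numerically; this NumPy check of (D.47) uses the matrix of Example D.15, since the matrix of this proof is not reproduced in the excerpt:

```python
import numpy as np

# Numerical check of (D.47), A * inv(A) = I, for the matrix of Example D.15
A = np.array([[1., 2., 3.],
              [1., 3., 4.],
              [1., 4., 3.]])
invA = np.linalg.inv(A)
print(np.allclose(A @ invA, np.eye(3)))   # True
print(np.allclose(invA @ A, np.eye(3)))   # True
```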
			D.11 Solution of Simultaneous Equations with Matrices
				Consider the relation
				AX = B    (D.48)
				where A and B are matrices whose elements are known, and X is a matrix (a column vector) whose elements are the unknowns. We assume that A and X are conformable for multiplication.
				Multiplication of both sides of (D.48) by A⁻¹ yields:
				A⁻¹AX = IX = X = A⁻¹B    (D.49)
				or
				X = A⁻¹B    (D.50)
				Therefore, we can use (D.50) to solve any set of simultaneous equations that have solutions. We will refer to this method as the inverse matrix method of solution of simultaneous equations.
				For the system of the equations
				compute the unknowns using the inverse matrix method.
				In matrix form, the given set of equations is where
				Then,
				or
				Next, we find the determinant , and the adjoint .
				Therefore,
				and with relation (D.53) we obtain the solution as follows:
				To verify our results, we could use MATLAB’s inv(A) function, and then multiply A⁻¹ by B. However, it is easier to use the matrix left-division operation X = A\B; this is MATLAB’s solution of the matrix equation AX = B, where matrix X is the same size as mat...
				A=[2 3 1; 1 2 3; 3 1 2]; B=[9 6 8]'; X=A \ B
				X = 1.9444
				1.6111
				0.2778
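The left-division operation has a direct NumPy counterpart; this sketch solves the same system with numpy.linalg.solve, which likewise avoids forming inv(A) explicitly:

```python
import numpy as np

# NumPy equivalent of the MATLAB left-division X = A\B above
A = np.array([[2., 3., 1.],
              [1., 2., 3.],
              [3., 1., 2.]])
b = np.array([9., 6., 8.])
X = np.linalg.solve(A, b)
print(X)  # approximately [1.9444, 1.6111, 0.2778]
```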
				For the electric circuit of Figure D.2,
				the loop equations are
				Use the inverse matrix method to compute the values of the currents , , and
				For this example, the matrix equation is or , where
				The next step is to find . It is found from the relation
				Therefore, we must find the determinant and the adjoint of . For this example, we find that
				Then,
				and
				Check with MATLAB:
				R=[10 -9 0; -9 20 -9; 0 -9 15]; V=[100 0 0]'; I=R\V; fprintf(' \n');...
				fprintf('I1 = %4.2f \t', I(1)); fprintf('I2 = %4.2f \t', I(2)); fprintf('I3 = %4.2f \t', I(3)); fprintf(' \n')
				I1 = 22.46 I2 = 13.85 I3 = 8.31
				We can also use subscripts to address the individual elements of the matrix. Accordingly, the MATLAB script above could also have been written as:
				R(1,1)=10; R(1,2)=-9; % No need to make an entry for R(1,3) since it is zero.
				R(2,1)=-9; R(2,2)=20; R(2,3)=-9; R(3,2)=-9; R(3,3)=15; V=[100 0 0]'; I=R\V; fprintf(' \n');...
				fprintf('I1 = %4.2f \t', I(1)); fprintf('I2 = %4.2f \t', I(2)); fprintf('I3 = %4.2f \t', I(3)); fprintf(' \n')
				I1 = 22.46 I2 = 13.85 I3 = 8.31
				Spreadsheets also have the capability of solving simultaneous equations with real coefficients using the inverse matrix method. For instance, we can use Microsoft Excel’s MINVERSE (Matrix Inversion) and MMULT (Matrix Multiplication) functions, to o...
				The procedure is as follows:
				1. We begin with a blank spreadsheet and in a block of cells, say B3:D5, we enter the elements of matrix R as shown in Figure D.2. Then, we enter the elements of matrix in G3:G5.
				2. Next, we compute and display the inverse of , that is, . We choose B7:D9 for the elements of this inverted matrix. We format this block for number display with three decimal places. With this range highlighted and making sure that the cell marker ...
				=MINVERSE(B3:D5)
				and we press the Ctrl-Shift-Enter keys simultaneously. We observe that appears in these cells.
				3. Now, we choose the block of cells G7:G9 for the values of the current . As before, we highlight them, and with the cell marker positioned in G7, we type the formula
				=MMULT(B7:D9,G3:G5)
				and we press the Ctrl-Shift-Enter keys simultaneously. The values of then appear in G7:G9.
				For the phasor circuit of Figure D.4
				the current can be found from the relation
				and the voltages and can be computed from the nodal equations
				and
				Compute, and express the current in both rectangular and polar forms by first simplifying like terms, collecting, and then writing the above relations in matrix form as , where , , and
				The matrix elements are the coefficients of and . Simplifying and rearranging the nodal equations of (D.60) and (D.61), we obtain
				Next, we write (D.62) in matrix form as
				where the matrices , , and are as indicated.
				We will use MATLAB to compute the voltages and , and to do all other computations. The script is shown below.
				Y=[0.0218-0.005j -0.01; -0.01 0.03+0.01j]; I=[2; 1.7j]; V=Y\I; % Define Y, I, and find V
				fprintf('\n'); % Insert a line
				disp('V1 = '); disp(V(1)); disp('V2 = '); disp(V(2)); % Display values of V1 and V2
				V1 = 1.0490e+002 + 4.9448e+001i
				V2 = 53.4162 + 55.3439i
				Next, we find from
				R3=100; IX=(V(1)-V(2))/R3 % Compute the value of IX
				IX = 0.5149 - 0.0590i
				This is the rectangular form of . For the polar form we use the MATLAB script
				magIX=abs(IX), thetaIX=angle(IX)*180/pi % Compute the magnitude and the angle in degrees
				magIX = 0.5183
				thetaIX = -6.5326
				Therefore, in polar form
				We can also find the current using a Simulink model, and to simplify the model we first derive the Thevenin equivalent in Figure D.4 to that shown in Figure D.5.
				By application of Thevenin’s theorem, the electric circuit of Figure D.4 can be simplified to that shown in Figure D.5.
				Next, we let , , , and . Application of the voltage division expression yields
				Now, we use the model in Figure D.6 to convert all quantities from the rectangular to the polar form, perform the addition and multiplication operations, display the output voltage in both polar and rectangular forms, and show the output voltage on a...
				Spreadsheets have limited capabilities with complex numbers, and thus we cannot use them to compute matrices that include complex numbers in their elements as in Example D.18.
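By contrast, NumPy solves complex-valued systems directly, so the nodal equations of Example D.18 need no special handling; Y, I, and R3 = 100 ohms in this sketch are taken from the MATLAB script above:

```python
import numpy as np

# Complex nodal analysis of Example D.18 in NumPy
Y = np.array([[0.0218 - 0.005j, -0.01],
              [-0.01, 0.03 + 0.01j]])
I = np.array([2, 1.7j])
V = np.linalg.solve(Y, I)          # node voltages V1, V2
IX = (V[0] - V[1]) / 100           # current through R3 = 100 ohms
print(IX, abs(IX), np.degrees(np.angle(IX)))  # about 0.515 - 0.059j, 0.518, -6.53 deg
```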
Appendix E Signals and Systems Fifth
	Appendix E
		E.1 Window Function Defined
		E.2 Common Window Functions
			E.2.1 Rectangular Window Function
			E.2.2 Triangular Window Function
			E.2.3 Hanning Window Function
			E.2.4 Hamming Window Function
			E.2.5 Blackman Window Function
			E.2.6 Kaiser Family of Window Functions
		E.3 Other Window Functions
		E.4 Fourier Series Method for Approximating an FIR Amplitude Response
Appendix F Signals and Systems Fifth
	Appendix F
		F.1 Cross Correlation
		F.2 Autocorrelation
Appendix G Signals and Systems Fifth
	Appendix G
		G.1 Describing Functions
	References and Suggestions for Further Study
	A. The following publications by The MathWorks, are highly recommended for further study. They are available from The MathWorks, 3 Apple Hill Drive, Natick, MA, 01760, www.mathworks.com.
	1. Getting Started with MATLAB
	2. Using MATLAB
	3. Using MATLAB Graphics
	4. Using Simulink
	5. Sim Power Systems
	6. Fixed-Point Toolbox
	7. Simulink Fixed-Point
	8. Real-Time Workshop
	9. Signal Processing Toolbox
	10. Getting Started with Signal Processing Blockset
	11. Signal Processing Blockset
	12. Control System Toolbox
	13. Stateflow
	For the complete list of all of The MathWorks products and MATLAB / Simulink based books, please refer to:
	http://www.mathworks.com/index.html?ref=pt
	B. Other references indicated in footnotes throughout this text, are listed below.
	1. Introduction to Simulink with Engineering Applications, ISBN 978-1-934404-21-8
	2. Introduction to Stateflow with Applications, ISBN 978-1-934404-07-2
	3. Digital Circuit Analysis and Design with Simulink Applications and Introduction to CPLDs and FPGAs, Second Edition, ISBN 978-1-934404-05-8
	4. Electronic Devices and Amplifier Circuits with MATLAB Applications, Second Edition, ISBN 978-1- 934404-13-3
	5. Circuit Analysis I with MATLAB Applications and Simulink / SimPower Systems Modeling, ISBN 978-1- 934404-17-1
	6. Circuit Analysis II with MATLAB Applications and Simulink / SimPower Systems Modeling, ISBN 978- 1-934404-19-5
	7. Mathematics for Business, Science, and Technology, Third Edition, ISBN 978-1-934404-01-0
	8. Numerical Analysis Using MATLAB and Excel, Third Edition ISBN 978-1-934404-03-4
	C. Signals & Systems and Signal Processing Texts
	Many Signals & Systems and Signal Processing texts authored and published by others can be found on the Internet.
S&S Fifth Index
                        
Document Text Contents
Page 34

Fourier Transform Pairs of Common Functions

Likewise, the Fourier transform of the shifted delta function is

    δ(t − t₀) ⇔ e^(−jωt₀)    (8.61)

We will use the notation f(t) ↔ F(ω) to show the time domain to frequency domain correspondence. Thus, (8.60) may also be denoted as in Figure 8.1.

TABLE 8.8 Fourier Transform Properties and Theorems

Property                    f(t)                           F(ω)
Linearity                   a₁f₁(t) + a₂f₂(t) + …          a₁F₁(ω) + a₂F₂(ω) + …
Symmetry                    F(t)                           2πf(−ω)
Time Scaling                f(at)                          (1/|a|) F(ω/a)
Time Shifting               f(t − t₀)                      F(ω) e^(−jωt₀)
Frequency Shifting          e^(jω₀t) f(t)                  F(ω − ω₀)
Time Differentiation        dⁿf(t)/dtⁿ                     (jω)ⁿ F(ω)
Frequency Differentiation   (−jt)ⁿ f(t)                    dⁿF(ω)/dωⁿ
Time Integration            ∫_−∞^t f(τ) dτ                 F(ω)/(jω) + πF(0)δ(ω)
Conjugate Functions         f∗(t)                          F∗(−ω)
Time Convolution            f₁(t)∗f₂(t)                    F₁(ω)·F₂(ω)
Frequency Convolution       f₁(t)·f₂(t)                    (1/(2π)) F₁(ω)∗F₂(ω)
Area under f(t)             F(0) = ∫_−∞^∞ f(t) dt
Area under F(ω)             f(0) = (1/(2π)) ∫_−∞^∞ F(ω) dω
Parseval's Theorem          ∫_−∞^∞ |f(t)|² dt = (1/(2π)) ∫_−∞^∞ |F(ω)|² dω
Signals and Systems with MATLAB ® Computing and Simulink ® Modeling, Fifth Edition 8−17
Copyright © Orchard Publications
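The entries in Table 8.8 are easy to spot-check numerically. The text carries out such computations in MATLAB; as an illustrative sketch only, the short Python script below verifies Parseval's theorem for the Gaussian f(t) = e^(−t²), whose transform F(ω) = √π e^(−ω²/4) is known in closed form. The grid limits and step size are arbitrary choices, not values from the text.

```python
import numpy as np

# f(t) = exp(-t^2) and its known Fourier transform F(w) = sqrt(pi)*exp(-w^2/4)
t = np.linspace(-20.0, 20.0, 4001)
w = np.linspace(-20.0, 20.0, 4001)
f = np.exp(-t**2)
F = np.sqrt(np.pi) * np.exp(-w**2 / 4)

dt = t[1] - t[0]
dw = w[1] - w[0]

# Parseval's theorem: integral of |f(t)|^2 dt equals (1/(2*pi)) times
# the integral of |F(w)|^2 dw; both are approximated by Riemann sums.
energy_time = np.sum(np.abs(f)**2) * dt
energy_freq = np.sum(np.abs(F)**2) * dw / (2 * np.pi)

print(energy_time, energy_freq)  # both ≈ sqrt(pi/2) ≈ 1.2533
```

Because the Gaussian decays to essentially zero well inside the integration limits, the two sums agree to many decimal places, matching the exact value ∫ e^(−2t²) dt = √(π/2).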

Page 35

Chapter 8 The Fourier Transform
Figure 8.1. The Fourier transform of the delta function: f(t) = δ(t) in the time domain and F(ω) = 1 in the frequency domain.

8.4.2 The Constant Function Pair

    A ⇔ 2Aπδ(ω)    (8.62)

Proof:

    F⁻¹{2Aπδ(ω)} = (1/(2π)) ∫_−∞^∞ 2Aπδ(ω) e^(jωt) dω = A ∫_−∞^∞ δ(ω) e^(jωt) dω = A e^(jωt)|_(ω=0) = A

and (8.62) follows.

The correspondence f(t) ↔ F(ω) is also shown in Figure 8.2.

Figure 8.2. The Fourier transform of constant A: f(t) = A in the time domain and F(ω) = 2Aπδ(ω) in the frequency domain.

Also, by direct application of the Inverse Fourier transform, or the frequency shifting property and (8.62), we derive the transform

    e^(jω₀t) ⇔ 2πδ(ω − ω₀)    (8.63)

The transform pairs of (8.62) and (8.63) can also be derived from (8.60) and (8.61) by using the symmetry property F(t) ⇔ 2πf(−ω).

8.4.3 The Cosine Function Pair

    cos ω₀t = (1/2)(e^(jω₀t) + e^(−jω₀t)) ⇔ πδ(ω − ω₀) + πδ(ω + ω₀)    (8.64)

Proof:

This transform pair follows directly from (8.63). The correspondence f(t) ↔ F(ω) is also shown in Figure 8.3.
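The pair (8.64) can also be observed numerically: sampling cos ω₀t and taking its DFT concentrates the spectrum at ±ω₀, the discrete counterpart of the two impulses πδ(ω ∓ ω₀). The sketch below uses Python/NumPy rather than the book's MATLAB; the 5 Hz tone, sampling rate, and record length are arbitrary illustrative choices.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 10, 1 / fs)    # 10-second record -> 0.1 Hz bin spacing
f0 = 5.0                        # tone frequency in Hz, so w0 = 2*pi*f0 rad/s
x = np.cos(2 * np.pi * f0 * t)

X = np.fft.rfft(x)              # one-sided DFT of the real-valued signal
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The magnitude spectrum peaks at the bin nearest f0 (the +w0 impulse;
# the -w0 impulse is implicit in the one-sided transform of a real signal).
peak_hz = freqs[np.argmax(np.abs(X))]
print(peak_hz)  # ≈ 5 Hz
```

Because f0 falls exactly on a DFT bin here, there is no spectral leakage and all remaining bins are essentially zero.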

Page 67

matrix (cont'd)
  conformable for multiplication D-4
  conjugate of D-8
  definition of D-1
  determinant D-9
    minor of D-11
    non-singular D-19
    singular D-19
  diagonal D-1
  diagonal elements of D-1
  elements of D-1
  Hermitian D-8
  identity D-6
  inverse of D-220
  left division in MATLAB D-23
  multiplication in MATLAB A-17
  power series of 5-9
  scalar D-6
  size of D-7
  skew-Hermitian D-9
  skew-symmetric D-8
  square D-1
  symmetric D-8
  trace of D-2
  transpose of D-7
  triangular
    lower D-6
    upper D-7
  zero D-2
matrix left division in MATLAB - see matrix
matrix multiplication in MATLAB - see matrix
matrix power series - see matrix
maximally flat filter - see filter
mesh(x,y,z) MATLAB function A-15
meshgrid(x,y) MATLAB command A-16
m-file in MATLAB A-1, A-224
minor of determinant - see matrix
MINVERSE Excel function D-25
MMULT Excel function D-25
modulated signals 8-11
multiple eigenvalues - see eigenvalues
multiple poles - see poles
multiplication by a^n in discrete-time domain - see Z transform - properties of
multiplication by e^(-naT) in discrete-time domain - see Z transform - properties of
multiplication by n in discrete-time domain - see Z transform - properties of
multiplication by n^2 in discrete-time domain - see Z transform - properties of
multiplication of complex numbers C-2

N
NaN in MATLAB A-25
natural input-output FFT algorithm - see FFT algorithm
network transformation
  resistive 4-1
  capacitive 4-1
  inductive 4-1
non-recursive realization digital filter - see digital filter
non-singular determinant - see matrix
nonlinear system G-1
normalized cutoff frequency 11-14
notch filter - see filter
N-point DFT - see DFT - definition of
nth-order delta function - see delta function
numerical evaluation of Fourier coefficients - see Fourier series coefficients
Nyquist frequency 10-13

O
octave defined 11-11
odd functions 6-11, 7-333
odd symmetry - see Fourier series - symmetry
orthogonal functions 7-2
orthogonal vectors 5-19
orthonormal basis 5-19

P
parallel form realization - see digital filter
Parseval's theorem - see Fourier transform - properties of
partial fraction expansion 3-1
  alternate method of 3-14
  method of clearing the fractions 3-14
phase angle 11-2
phase shift filter - see filter
picket-fence effect 10-14
plot MATLAB command A-9
polar form of complex numbers C-5
polar plot in MATLAB A-23
polar(theta,r) MATLAB function A-22
poles 3-1
  complex 3-5
  distinct 3-2
  multiple (repeated) 3-7
poly MATLAB function A-4
polyder MATLAB function A-6
polynomial construction from known roots in MATLAB A-4
polyval MATLAB function A-5
pre-sampling filter 10-13
pre-warping 11-52
proper rational function - definition of 3-1
properties of the DFT - see DFT - common properties of
properties of the Fourier Transform - see Fourier transform - properties of
properties of the Laplace Transform - see Laplace transform - properties of
properties of the Z Transform - see Z transform - properties of

Q
quarter-wave symmetry - see Fourier series - symmetry
quit MATLAB command A-2

R
radius of absolute convergence 9-3
ramp function 1-9
randn MATLAB function 11-65
Random Source Simulink block 11-76
rationalization of the quotient C-4
RC high-pass filter - see filter
RC low-pass filter - see filter
real axis C-1
real number C-2
real(z) MATLAB function A-22
rectangular form C-5
rectangular pulse expressed in terms of the unit step function 1-4
recursive realization digital filter - see digital filter
region of
  convergence 9-3
  divergence 9-3
relationship between state equations and Laplace Transform 5-28
residue 3-3, 9-37
residue MATLAB function 3-3, 3-12
residue theorem 9-19
right shift in the discrete-time domain - see Z transform - properties of
RLC band-elimination filter - see filter
RLC band-pass filter - see filter
roots of polynomials in MATLAB A-3
roots(p) MATLAB function 3-5, A-3
round(n) MATLAB function A-22
row vector in MATLAB A-3
Runge-Kutta method 5-1
running Simulink B-7

S
sampling property of the delta function - see delta function
sampling theorem 10-13
sawtooth waveform - see Laplace transform of common waveforms
sawtooth waveform - Fourier series of - see Fourier series of common waveforms
scalar matrix - see matrix
scaling property of the Laplace transform - see Laplace transform - properties of
Scope block in Simulink B-12
script file in MATLAB A-2, A-24
second harmonic - see Fourier series - harmonics of
semicolons in MATLAB A-7
semilogx MATLAB command A-12
semilogy MATLAB command A-12
series form realization - see digital filter
Shannon's sampling theorem - see sampling theorem
shift of f[n] u0[n] in discrete-time domain - see Z transform - properties of
sifting property of the delta function - see delta function
signal flow graph 10-22
signals described in math form 1-1
signum function - see Fourier transform of common functions
simout To Workspace block in Simulink B-13
simple MATLAB symbolic function 3-6
Simulation drop menu in Simulink B-12
simulation start icon in Simulink B-12
Simulink icon B-7
Simulink Library Browser B-8
sine function - Fourier transform of - see Fourier transform of common functions
singular determinant - see matrix
Sinks library in Simulink B-18
sinω0t u0(t) - Fourier transform of - see Fourier transform of common functions
size of a matrix - see matrix
skew-Hermitian matrix - see matrix
skew-symmetric matrix - see matrix
special forms of the Fourier transform

IN-4

Page 68

  see Fourier transform
spectrum analyzer 7-35
square matrix - see matrix
square waveform with even symmetry - see Fourier series of common waveforms
square waveform with odd symmetry - see Fourier series of common waveforms
ss2tf MATLAB function 5-31
stability 11-13
start simulation in Simulink B-12
state equations
  for continuous-time systems 5-1
  for discrete-time systems 9-40
state transition matrix 5-8
state variables
  for continuous-time systems 5-1
  for discrete-time systems 9-40
State-Space block in Simulink B-13
state-space equations
  for continuous-time systems 5-1
  for discrete-time systems 9-40
step function - see unit step function
step invariant method - see transformation methods for mapping analog prototype filters to digital filters
stop-band filter - see filter
string in MATLAB A-15
subplots in MATLAB A-16
summation in the discrete-time domain - see Z transform - properties of
symmetric matrix - see matrix
symmetric rectangular pulse expressed as sum of unit step functions 1-5
symmetric triangular waveform expressed as sum of unit step functions 1-6
symmetry - see Fourier series - symmetry
symmetry property of the Fourier transform - see Fourier transform - properties of
system function - definition of 8-34

T
Taylor series 5-1
text MATLAB command A-13
tf2ss MATLAB function 5-33
theorems of the DFT 10-10
theorems of the Fourier Transform 8-9
theorems of the Laplace transform 2-2
theorems of the Z Transform 9-3
third harmonic - see Fourier series - harmonics of
time convolution in DFT - see DFT - common properties of
time integration property of the Fourier transform - see Fourier transform - properties of
time periodicity property of the Laplace transform - see Laplace transform - properties of
time scaling property of the Fourier transform - see Fourier transform - properties of
time shift in DFT - see DFT - common properties of
time shift property of the Fourier transform - see Fourier transform - properties of
time shift property of the Laplace transform - see Laplace transform - properties of
title MATLAB command A-12
trace of a matrix - see matrix
Transfer Fcn block in Simulink 4-17
Transfer Fcn Direct Form II Simulink block 11-68
transfer function of
  continuous-time systems 4-13
  discrete-time systems 9-35
transformation between s and z domains 9-20
transformation methods for mapping analog prototype filters to digital filters
  Impulse Invariant Method 11-50
  Step Invariant Method 11-50
  Bilinear transformation 11-50
transpose of a matrix - see matrix
Tree Pane in Simulink B-7
triangular waveform expressed in terms of the unit step function 1-6
triplet - see delta function
Tukey - see Cooley and Tukey

U
unit eigenvectors 5-18
unit impulse function (δ(t)) 1-8
unit ramp function (u1(t)) 1-9
unit step function (u0(t)) 1-2
upper triangular matrix - see matrix
using MATLAB for finding the Laplace transforms of time functions 2-26
using MATLAB for finding the Fourier transforms of time functions 8-31

V
Vandermonde matrix 10-18
Vector Scope Simulink block 11-78

W
warping 11-52
window functions
  Blackman E-10
  Fourier series method for approximating an FIR amplitude response E-15
  Hamming E-8, E-30
  Hanning E-6, E-26
  Kaiser E-12, E-33
  other used as MATLAB functions E-14
  rectangular E-2
  triangular E-4, E-22
Window Visualization Tool in MATLAB E-4

X
xlabel MATLAB command A-12

Y
ylabel MATLAB command A-12

Z
Z transform
  computation of with contour integration 9-17
  definition of 9-1
  Inverse of 9-1, 9-25
Z transform - properties of
  convolution in the discrete frequency domain 9-9
  convolution in the discrete time domain 9-8
  final value theorem 9-9
  initial value theorem 9-9
  left shift 9-5
  linearity 9-3
  multiplication by a^n 9-6
  multiplication by e^(-naT) 9-6
  multiplication by n 9-6
  multiplication by n^2 9-6
  right shift 9-4
  shift of f[n] u0[n] 9-3
  summation 9-7
Z Transform of discrete-time functions
  cosine function cos naT 9-15
  exponential sequence e^(-naT) u0[n] 9-15
  geometric sequence a^n 9-12
  sine function sin naT 9-15
  unit ramp function nu0[n] 9-16
  unit step function u0[n] 9-14
zero matrix - see matrix
zeros 3-1, 3-2
zp2tf MATLAB function 11-16

IN-5
