OLLSCOIL NA hÉIREANN, GAILLIMH

THE NATIONAL UNIVERSITY OF IRELAND, GALWAY



SUMMER EXAMINATIONS 2000

SECOND ARTS, ENGINEERING, AND SCIENCE


NUMERICAL ANALYSIS MM246


Dr D. L. Flannery
Professor J. Wiegold


Time allowed: two hours.
Answer only three questions.



1.
(a)
Calculate $\displaystyle{\int_{0}^{1} e^x \cos x \, dx}$ to five decimal places using
(i)
the Trapezoid Rule,
(ii)
Simpson's Rule,
with five equally spaced points. Also calculate the integral exactly, and compare the estimated error with the actual error for each method.
(b)
Find the number of equally spaced points required in each method to ensure an absolute error of less than $10^{-5}$.
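The hand computation in 1(a) can be checked numerically. The following Python sketch (an illustration, not part of the paper) applies the composite Trapezoid and Simpson rules from the formulae section with five equally spaced points ($n = 4$, $h = 0.25$), using the antiderivative $e^x(\cos x + \sin x)/2$ for the exact value.

```python
import math

# Composite Trapezoid and Simpson estimates of I = int_0^1 e^x cos x dx
# with five equally spaced points (n = 4, h = 0.25).
f = lambda x: math.exp(x) * math.cos(x)
a, b, n = 0.0, 1.0, 4
h = (b - a) / n
fs = [f(a + i * h) for i in range(n + 1)]

# Trapezoid Rule: T = (h/2)(f0 + 2f1 + 2f2 + 2f3 + f4)
T = (h / 2) * (fs[0] + 2 * sum(fs[1:-1]) + fs[-1])

# Simpson's Rule: S = (h/3)(f0 + 4f1 + 2f2 + 4f3 + f4)
S = (h / 3) * (fs[0] + 4 * fs[1] + 2 * fs[2] + 4 * fs[3] + fs[4])

# Exact value, from the antiderivative e^x (cos x + sin x)/2.
exact = (math.e * (math.cos(1) + math.sin(1)) - 1) / 2

print(round(T, 5), round(S, 5), round(exact, 5))
```

Since $f''(x) = -2e^x \sin x$ and $f^{(4)}(x) = -4e^x \cos x$ are of modest size on $[0,1]$, Simpson's estimate should agree with the exact value to about four decimal places, while the Trapezoid estimate is good to roughly two.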




2.
(a)
Describe the partial pivoting technique used for solving systems of linear equations by Gaussian elimination. Why is partial pivoting used?
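The technique asked for in (a) can be sketched in a few lines of Python (an illustration only; the exam expects a written description). At each elimination stage the row with the largest absolute entry in the pivot column is swapped into the pivot position, which avoids division by a zero (or very small) pivot and controls the growth of rounding error.

```python
# Gaussian elimination with partial pivoting, followed by back substitution.
def solve(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: bring up the row with the largest |entry| in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Without the row swap, elimination on this system would divide by zero.
print(solve([[0.0, 1.0], [1.0, 1.0]], [1.0, 2.0]))
```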
(b)
Consider the following system of linear equations:

\begin{displaymath}
\begin{array}{rclclcr}
6x_1 & - & 2x_2 & + & x_3 & = & 11 \\
\multicolumn{5}{c}{\cdots} & = & 5 \\
x_1 & + & 2x_2 & - & 5x_3 & = & -1 .
\end{array}\end{displaymath}

(i)
Write down the Jacobi and Gauss-Seidel iterative schemes for this system.
(ii)
Determine, with justification, whether each of these schemes converges, for the given system.
(iii)
Working to four decimal places, carry out three iterations of the Gauss-Seidel scheme to estimate the solution $(x_1,x_2,x_3)$ of the system. Take $(x_{1}^{(0)}, x_{2}^{(0)},
x_{3}^{(0)} ) = (0,0,0)$.
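The iteration in (b)(iii) is easy to script. In the sketch below the middle equation of the printed system is truncated in this copy, so the second row of `A` is a hypothetical diagonally dominant placeholder; the first and third rows are as printed. Each Gauss-Seidel sweep overwrites $x_i$ using the most recently updated values of the other unknowns.

```python
# Gauss-Seidel iteration for a 3x3 system. NOTE: the second row below is a
# placeholder (the paper's middle equation is illegible in this copy); rows
# one and three are as printed in Question 2(b).
A = [[6.0, -2.0, 1.0],
     [1.0,  4.0, 1.0],   # hypothetical row, NOT from the paper
     [1.0,  2.0, -5.0]]
b = [11.0, 5.0, -1.0]

def gauss_seidel(A, b, x0, iters):
    """One sweep updates x[i] from the newest available values (Gauss-Seidel);
    using only the previous iterate throughout would give Jacobi instead."""
    x = list(x0)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_seidel(A, b, (0.0, 0.0, 0.0), 3)
print([round(v, 4) for v in x])
```

Strict diagonal dominance of `A` (each $|a_{ii}|$ exceeds the sum of the other $|a_{ij}|$ in its row) is the standard sufficient condition guaranteeing that both the Jacobi and Gauss-Seidel schemes converge, which is the justification part (b)(ii) is after.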



p.t.o.

3.
(a)
(i)
Let $A$ be a symmetric real $n\times n$ matrix. Prove that the power method for estimating the dominant eigenvalue and eigenvector of $A$ is convergent, for a suitable choice of initial vector. (Use the fact that there is a basis of ${\Bbb R}^n$ consisting of eigenvectors of $A$.)
(ii)
Carry out three iterations of the power method, with initial vector $(0,0,1)^{\sf T}$, to evaluate the dominant eigenvalue of

\begin{displaymath}
A = \left( \begin{array}{ccc}
3 & 0 & 1 \\
0 & 3 & 2 \\
1 & 2 & 3
\end{array} \right)
\end{displaymath}

to four decimal places. Also estimate an eigenvector for this eigenvalue.
(b)
Briefly describe Jacobi's method for finding all eigenvalues and eigenvectors of a square symmetric matrix.
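The computation in (a)(ii) can be checked with a short script (an illustration, not part of the paper). Each iterate $x^{(k+1)} = Ax^{(k)}$ is rescaled by its largest-magnitude component; that scale factor tends to the dominant eigenvalue, and the rescaled vector tends to a corresponding eigenvector.

```python
# Power method for the dominant eigenvalue of the symmetric matrix in 3(a)(ii),
# starting from the given initial vector (0, 0, 1)^T.
A = [[3.0, 0.0, 1.0],
     [0.0, 3.0, 2.0],
     [1.0, 2.0, 3.0]]

def power_method(A, x, iters):
    lam = 0.0
    for _ in range(iters):
        y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]  # y = Ax
        lam = max(y, key=abs)      # largest-magnitude component ~ eigenvalue
        x = [v / lam for v in y]   # rescale so that component becomes 1
    return lam, x

lam, v = power_method(A, [0.0, 0.0, 1.0], 3)
print(round(lam, 4), [round(c, 4) for c in v])
```

For this matrix the eigenvalues are $3$ and $3 \pm \sqrt{5}$, so the iteration converges to $3 + \sqrt{5} \approx 5.2361$ at a rate governed by the ratio $3/(3+\sqrt{5})$.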




4.
(a)
With specific reference to Taylor's Theorem, explain what is meant by the order and local truncation error of a single-step method for solving a first order initial value problem.
(b)
Consider the following initial value problem:

\begin{displaymath}y' = x+y, \ \ \ \ \ \ y( 0) = 0.\end{displaymath}

By writing out sufficiently many terms of the Taylor series expansion of $y(x_i + h)$ at $x_i$, show that the improved Euler method for this problem is $O(h^2)$.
(c)
Use a 4-stage Runge-Kutta method to estimate $y(0.2)$ and $y(0.4)$, for the initial value problem in (b).
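The two steps asked for in (c) can be checked numerically. This sketch applies the 4-stage Runge-Kutta method from the formulae section with $h = 0.2$ to $y' = x + y$, $y(0) = 0$; the exact solution $y = e^x - x - 1$ is noted for comparison.

```python
import math

# One 4-stage Runge-Kutta step of size h for y' = f(x, y).
f = lambda x, y: x + y

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, x0, y0 = 0.2, 0.0, 0.0
y1 = rk4_step(f, x0, y0, h)         # estimate of y(0.2)
y2 = rk4_step(f, x0 + h, y1, h)     # estimate of y(0.4)
# Exact solution for comparison: y = e^x - x - 1.
print(round(y1, 6), round(y2, 6))
```

Since the method is $O(h^4)$, both estimates should agree with the exact values to about five decimal places even with this fairly large step.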




5.
(a)
Derive the modified Euler formula

\begin{displaymath}
y_{i+1} = y_i + hf(x_i + \frac{h}{2} , y_i + \frac{h}{2}
f(x_i , y_i )) \end{displaymath}

for solving the initial value problem $y' = f(x,y)$, $y(x_0) = y_0$. Draw an appropriate picture to illustrate your reasoning.
(b)
Consider the following second order initial value problem:


\begin{displaymath}
y''-3y'+2y + e^x=0, \ \ \ \ \ \ \ y(1) = e, \ \ y'(1) = 2e.
\end{displaymath}

(i)
Write this as a system of first order initial value problems.
(ii)
Using a modified Euler approach with $h = 0.1$, find an approximate value of $y(1.2)$.
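The reduction in (b)(i) and the two modified Euler steps in (b)(ii) can be sketched as follows (an illustration, not part of the paper). With $u = y$, $v = y'$ the equation becomes the first order system $u' = v$, $v' = 3v - 2u - e^x$, and the modified Euler formula is applied componentwise.

```python
import math

# Question 5(b): y'' - 3y' + 2y + e^x = 0, y(1) = e, y'(1) = 2e, rewritten as
# the system u' = v, v' = 3v - 2u - e^x with u = y, v = y'.
def F(x, u, v):
    return v, 3 * v - 2 * u - math.exp(x)

def modified_euler_step(x, u, v, h):
    du, dv = F(x, u, v)
    # evaluate the slope at the midpoint predicted by a half Euler step
    du_mid, dv_mid = F(x + h / 2, u + h * du / 2, v + h * dv / 2)
    return u + h * du_mid, v + h * dv_mid

h = 0.1
x, u, v = 1.0, math.e, 2 * math.e   # y(1) = e, y'(1) = 2e
for _ in range(2):                  # two steps: x = 1.0 -> 1.1 -> 1.2
    u, v = modified_euler_step(x, u, v, h)
    x += h
print(round(u, 4))                  # approximation to y(1.2)
```

For reference, $y = xe^x$ satisfies this initial value problem exactly, so the approximation can be compared against $y(1.2) = 1.2\,e^{1.2} \approx 3.9841$; an $O(h^2)$ method with $h = 0.1$ should land within a few units in the third decimal place.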








SOME FORMULAE



A.
Let $I = \displaystyle{
\int_{a}^{b} f(x) dx}$, with $x_i = a + ih$, $f_i =
f(x_i)$ for $i \geq 0$, and $h= (b-a)/n$.


Trapezoid Rule: $I = T + E_{T}$, where

\begin{displaymath}
\begin{array}{c}
T = \displaystyle{\frac{h}{2}}(f_0 + 2f_1 + \cdots + 2f_{n-1} + f_n) , \\[2mm]
E_{T} = -\displaystyle{\frac{h^2}{12}} (b-a) f''(c) , \ \mbox{some} \ c
\in [ a, b] .
\end{array}\end{displaymath}


Simpson's Rule ($n$ even): $I = S + E_{S}$, where

\begin{displaymath}
\begin{array}{c}
S = \displaystyle{\frac{h}{3}}
(f_0 + 2 ( f_2 + f_4 + \cdots + f_{n-2} ) + 4 ( f_1 + f_3 + \cdots + f_{n-1} ) + f_n ) , \\[2mm]
E_{S} = -\displaystyle{\frac{h^4}{180}} (b-a) f^{(4)}(c) , \ \mbox{some} \ c
\in [ a, b] .
\end{array}\end{displaymath}


B.
Taylor's Theorem: suppose $y$ and all of its derivatives are defined on $[a , x]$. Then

\begin{displaymath}
y(x) = y(a) + y'(a)(x-a) + \frac{y''(a)}{2!}(x-a)^2 +
\cdots +
\frac{y^{(k)}(a)}{k!}(x-a)^k + \cdots
\end{displaymath}


C.
Suppose $y(x)$ satisfies the first order initial value problem $y' = f(x,y)$, $y(x_0) = y_0$. Let $x_{i+1}
= x_{i} +h$ for $i \geq 0$.


Improved Euler method:

\begin{displaymath}y_{i+1} = y_i + \frac{h}{2}[f(x_i, y_i )
+ f( x_i + h, y_i + h f(x_i , y_i )) ] . \end{displaymath}


4-stage Runge-Kutta method:

\begin{displaymath}y_{i+1}= y_i + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)\end{displaymath}

where

\begin{displaymath}
\begin{array}{rcl}
k_1 & = & hf(x_i, y_i) \\
k_2 & = & hf(x_i + h/2 , y_i + k_1 / 2) \\
k_3 & = & hf(x_i + h/2 , y_i + k_2 / 2) \\
k_4 & = & hf(x_i + h, y_i + k_3 ) .
\end{array}\end{displaymath}



Niall Madden 2001-04-25