Exercises

Weekly exercises

This is a list of exercises which you are encouraged, but not required, to work on.

  • 23–27 August
    • In the first lecture I considered an example with the matrix \(A = \begin{pmatrix} 1 & 3&0\\2&7&2\\1&2&0\end{pmatrix}\). Find the LU factorization of A. That is, trace through the first part of Gaussian elimination, find the coefficients \(\mu_k\), and build the corresponding matrices \(L=(I-\mu_1E^{r_1s_1})\cdots(I-\mu_NE^{r_Ns_N})\) and U. (A code sketch for checking your answer follows this week's list.)
    • Prove Theorem 2.1.
    • The proof of Theorem 2.3 describes a technique for computing the pivoted LU factorization, but it's written in mathematical language. Study the proof and write an algorithm (in pseudocode) for computing P, L and U.
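      A minimal sketch for checking the LU exercise above, in Python (the language choice is an assumption; the course does not prescribe one). It records the elimination coefficients \(\mu\) directly into L; compare the result with the product formula from the lecture:

      ```python
      import numpy as np

      def lu_no_pivot(A):
          """Gaussian elimination without pivoting; returns unit lower
          triangular L and upper triangular U with A = L U."""
          n = A.shape[0]
          L, U = np.eye(n), A.astype(float)
          for k in range(n - 1):              # eliminate below the k-th pivot
              for r in range(k + 1, n):
                  mu = U[r, k] / U[k, k]      # the coefficient mu for entry (r, k)
                  U[r, k:] -= mu * U[k, k:]
                  L[r, k] = mu                # record mu in L
          return L, U

      A = np.array([[1, 3, 0], [2, 7, 2], [1, 2, 0]])
      L, U = lu_no_pivot(A)
      print(L, U, sep="\n")
      print(np.allclose(L @ U, A))            # sanity check: should print True
      ```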
  • 30 Aug–3 Sep 
    • Let \(A = \begin{pmatrix}1 & 1 \\ 0 & \varepsilon \end{pmatrix}, \; b=\begin{pmatrix}2 \\ 3\end{pmatrix}\), where \(\varepsilon\) is a small, positive number. Describe what can go wrong when solving the system Ax = b on a computer. Compute the condition number (with respect to the 2-norm) of A, and use it to quantify how a small error in b would propagate into the solution x. (A numerical sketch follows this week's list.)
    • Problem 2.7, 2.9, 2.14, 2.15
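      A numerical sketch for the \(\varepsilon\)-matrix above (Python assumed): it computes the 2-norm condition number and compares the relative perturbation of b with the resulting relative change in x:

      ```python
      import numpy as np

      eps = 1e-10
      A = np.array([[1.0, 1.0], [0.0, eps]])
      b = np.array([2.0, 3.0])

      print("cond_2(A) =", np.linalg.cond(A, 2))   # kappa = ||A|| ||A^{-1}||

      x = np.linalg.solve(A, b)
      db = 1e-8 * np.random.randn(2)               # small perturbation of b
      x_pert = np.linalg.solve(A, b + db)

      rel_in = np.linalg.norm(db) / np.linalg.norm(b)
      rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
      print(rel_in, rel_out)   # the amplification is bounded by cond_2(A)
      ```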
  • 6–10 Sep
    • The function g(x)=cos(x) has a fixed point near x ≈ 0.7. Find an interval \(D \subset \mathbb{R}\) such that you can apply Banach's fixed point theorem to g on D.
    • We want to find the solution of the equation \(2\sin(x) = 1-x\), that is, the zero of the function \(f(x)=2\sin(x)+x-1\).
      • Plot the functions \(2\sin(x)\) and \(1-x\), in order to get a feeling for the problem.
      • Let \(g(x)=x-f(x)\), so that the problem is transformed to finding a fixed point for g. Show that g is not a contraction.
      • Run 100 iterations of the fixed-point iteration for g and plot the resulting iterates \(x^{(0)}, x^{(1)}, x^{(2)}, \dots\). Does the iteration seem to converge?
      • Now define instead \(g(x)=x-\lambda f(x)\), for some \(\lambda\neq0\). Find a value of \(\lambda\) guaranteeing that g is a contraction. Run the iteration again on your computer and check that you converge to a solution of the problem. (A sketch of the iteration follows below.)
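      A minimal sketch of the damped iteration in the last item (Python assumed; the value of \(\lambda\) below is a placeholder, substitute the one you derived). Setting lam = 1 recovers the undamped iteration \(g(x)=x-f(x)\) from the earlier items.

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      def f(x):
          return 2 * np.sin(x) + x - 1

      lam = 0.3        # placeholder; replace with your provably contractive choice
      x = 0.0          # starting guess x^(0)
      iterates = [x]
      for _ in range(100):
          x = x - lam * f(x)    # fixed-point map g(x) = x - lambda * f(x)
          iterates.append(x)

      plt.plot(iterates, ".-")
      plt.xlabel("k"); plt.ylabel("$x^{(k)}$")
      plt.show()
      print("residual f(x) at the final iterate:", f(iterates[-1]))
      ```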
    • Let \(f(x)=\begin{pmatrix}(x_1+x_2)^2 \\ (x_1-2x_2)^2\end{pmatrix}\).
      • Find all solutions to f(x)=0.
      • Run ten steps of Newton's method for f starting at \(x^{(0)}=\begin{pmatrix}1\\1\end{pmatrix}\).
      • Do you observe quadratic convergence? Why/why not? Justify your answer using our convergence theorem.
    • Do the same for the function \(f(x)=\begin{pmatrix}(x_1+x_2)^2+x_1 \\ (x_1-2x_2)^2+x_2\end{pmatrix}\) (you can concentrate on "finding" the solution x=0). A Newton sketch covering both functions follows below.
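      A short Newton sketch for both exercises above (Python assumed), with the Jacobian of the first function worked out by hand; swap in the second f and its Jacobian to repeat the experiment. Note that this Jacobian is singular at the root x = 0, which is exactly what the convergence question probes.

      ```python
      import numpy as np

      def f(x):
          return np.array([(x[0] + x[1])**2, (x[0] - 2 * x[1])**2])

      def jac(x):
          # Jacobian of f; note that it is singular at the root x = 0
          return np.array([[2 * (x[0] + x[1]),      2 * (x[0] + x[1])],
                           [2 * (x[0] - 2 * x[1]), -4 * (x[0] - 2 * x[1])]])

      x = np.array([1.0, 1.0])   # x^(0)
      for k in range(10):
          x = x - np.linalg.solve(jac(x), f(x))   # Newton step
          print(k + 1, x, np.linalg.norm(f(x)))
      ```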
  • 13–17 Sep
    • Let \(f\) be a continuous function. Show that there is a unique polynomial \(p\in\mathbb{P}_2\) such that
      \(p(0)=f(0),\quad p'(1)=f'(1), \quad \int_0^1 p(x)\,dx = \int_0^1 f(x)\,dx.\)
      Show that, in general, there is either no polynomial or there are infinitely many polynomials \(p\in\mathbb{P}_2\) such that
      \(p(0)=f(0),\quad p'(\tfrac13)=f'(\tfrac13), \quad \int_0^1 p(x)\,dx = \int_0^1 f(x)\,dx.\)
      (The moral is: we have to be careful when deciding how to determine the degrees of freedom of the interpolant.)
      Hint: Write \(p(x)=a_0+a_1x+a_2x^2\) and determine \(a_0,a_1,a_2\); the resulting linear system is sketched below.
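      Following the hint, the first set of conditions turns into a linear system for \(a_0,a_1,a_2\); as a sketch of the setup (checking invertibility, and the second set of conditions, is left to you):
      \[
      p(0)=a_0, \qquad p'(1)=a_1+2a_2, \qquad \int_0^1 p(x)\,dx = a_0+\tfrac{a_1}{2}+\tfrac{a_2}{3},
      \]
      so the three conditions form a 3×3 linear system with coefficient matrix \(\begin{pmatrix}1&0&0\\0&1&2\\1&\tfrac12&\tfrac13\end{pmatrix}\), and the interpolant is unique exactly when this matrix is invertible. Replacing \(p'(1)\) by \(p'(\tfrac13)\) changes the middle row to \(\begin{pmatrix}0&1&\tfrac23\end{pmatrix}\).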
    • Problem 6.1–6.4. Try to generalize Problem 6.4 to an arbitrary interval [a, b].
    • For a point \(x\in\mathbb{R}\) and some \(h>0\), it is well known that \(f'(x) \approx \frac{f(x+h)-f(x-h)}{2h}\).
      • Use the interpolating polynomial through the points x-h and x+h to derive the above formula.
      • Use Theorem 6.5 to bound the error in the above approximation by \(hM_2\), where \(M_k=\max_y |f^{(k)}(y)|\).
      • Use Taylor expansions to show that the error can be bounded by \(\frac{h^2M_3}{6}\) (which is smaller than the first estimate, if h is small enough). Thus, the theorems in Chapter 6 do not always give sharp estimates. (A numerical comparison follows below.)
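      The two bounds can be compared numerically; a small sketch (Python assumed, using \(f=\sin\) as a test function) showing that the error in fact scales like \(h^2\):

      ```python
      import numpy as np

      f, df = np.sin, np.cos
      x = 0.8
      for h in [1e-1, 1e-2, 1e-3, 1e-4]:
          approx = (f(x + h) - f(x - h)) / (2 * h)   # central difference
          err = abs(approx - df(x))
          # err / h^2 should stay roughly constant (close to M_3 / 6 locally)
          print(f"h = {h:.0e}   error = {err:.3e}   error / h^2 = {err / h**2:.4f}")
      ```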
  • 20–24 Sep
    • Find the order of the trapezoidal rule (i.e. n = 1) and Simpson's rule (i.e. n = 2). Recall that the order of a quadrature rule \(I_n\) is the highest number N for which \(I_n(p)=I(p)\) for all \(p\in\mathcal{P}_{N-1}\). (A sketch for testing exactness follows this week's list.)
    • Exercise 7.1–7.3, 7.7.
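      A sketch for the order computation (Python assumed): test each rule for exactness on the monomials \(x^k\) over [0, 1] and read off where exactness fails.

      ```python
      def trapezoid(f, a, b):
          return (b - a) / 2 * (f(a) + f(b))

      def simpson(f, a, b):
          return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

      a, b = 0.0, 1.0
      for k in range(7):
          exact = 1 / (k + 1)             # integral of x^k over [0, 1]
          p = lambda x, k=k: x**k
          print(k,
                abs(trapezoid(p, a, b) - exact) < 1e-14,
                abs(simpson(p, a, b) - exact) < 1e-14)
      ```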
  • 27 Sep – 1 Oct
    • Plot the Bernstein approximations of order n = 1, 2, ..., 10 of the functions \(f(x) = \frac{1}{1+10^2(x-1/2)^2}\) (Runge's function, scaled to the interval [0, 1]) and \(f(x)=\sqrt{x}\), both for \(x\in[0,1]\). Compare with the corresponding interpolation polynomials over equispaced points. (A sketch follows this week's list.)
    • Exercise 8.2–8.5, 8.9, 8.10
    • If you would like to see a proof of the Weierstrass approximation theorem, see Exercise 8.12.
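      A minimal sketch for the Bernstein plots (Python assumed), shown for the scaled Runge function; replace runge by \(\sqrt{x}\) for the second part:

      ```python
      import numpy as np
      import matplotlib.pyplot as plt
      from math import comb

      def bernstein(f, n, x):
          # B_n f(x) = sum_k f(k/n) C(n, k) x^k (1 - x)^(n - k)
          return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                     for k in range(n + 1))

      runge = lambda x: 1 / (1 + 10**2 * (x - 0.5)**2)
      x = np.linspace(0, 1, 400)
      for n in range(1, 11):
          plt.plot(x, bernstein(runge, n, x), label=f"n = {n}")
      plt.plot(x, runge(x), "k--", label="f")
      plt.legend(fontsize=7)
      plt.show()
      ```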
  • 4–8 Oct
    • Find the minimax polynomial of order n for a general polynomial \(f\in\mathcal{P}_{n+1}\).
      Hint: Use Theorem 8.6.
    • Plot the interpolating polynomial of Runge's function over \(x\in[-1,1]\) with interpolation points given by the Chebyshev points (i.e., the zeros of \(T_{n+1}\)). Compare with the interpolation polynomial for equispaced interpolation points. (A sketch follows this week's list.)
    • Exercises 9.1–9.3, 9.6, 9.8
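      A sketch for the Chebyshev comparison (Python assumed; Runge's function is taken in its standard form \(1/(1+25x^2)\) on [-1, 1], which is an assumption about the notes' convention):

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      runge = lambda x: 1 / (1 + 25 * x**2)
      n = 12
      k = np.arange(n + 1)
      cheb = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # zeros of T_{n+1}
      equi = np.linspace(-1, 1, n + 1)

      x = np.linspace(-1, 1, 500)
      for nodes, label in [(cheb, "Chebyshev"), (equi, "equispaced")]:
          coeffs = np.polyfit(nodes, runge(nodes), n)      # degree-n interpolant
          plt.plot(x, np.polyval(coeffs, x), label=label)
      plt.plot(x, runge(x), "k--", label="f")
      plt.legend()
      plt.show()
      ```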
  • 11–29 Oct
    • Exercise 10.1, 10.5, 10.6
    • Do the exercise on p. 4 in the Monte Carlo note
    • For the function in oblig 2, estimate the variance of f and deduce an estimate of how many samples M are needed to push the mean square error below some tolerance \(\varepsilon\). Deduce a similar estimate for the composite midpoint method. (A sketch follows this week's list.)
    • Prove that the hat functions \(B_{-1}^1,\dots,B_{n-1}^1\) constitute a basis for \(S_1\), the space of first-order splines. In particular, show that any \(s\in S_1\) can be expressed as \(s(x)=\sum_{j=0}^n s(x_j) B_{j-1}^1(x),\; x\in[a,b]\).
    • For k = 2, show that it is possible to find functions \(B_i^2\in S_2\) which are strictly positive in \((x_i,x_{i+3})\), and zero elsewhere. You will need to find an expression for \(B_i^2\). (It will be easier if you set \(x_i=0\), and \(x_j=j-i\).)
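      For the Monte Carlo item above, a sketch of the sample-size estimate (Python assumed; the integrand is a placeholder, since the oblig 2 function is not reproduced here):

      ```python
      import numpy as np

      f = lambda x: np.exp(-x**2)     # placeholder: substitute the oblig 2 function

      # Pilot run to estimate Var(f) under uniform samples on [0, 1]:
      pilot = f(np.random.rand(10**5))
      var_f = pilot.var()

      # The mean square error of the Monte Carlo mean is Var(f) / M,
      # so MSE < eps requires roughly M > Var(f) / eps:
      eps = 1e-6
      M = int(np.ceil(var_f / eps))
      print(f"estimated Var(f) = {var_f:.4f}, need about M = {M} samples")
      ```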
  • 8–12 Nov
    • Prove that the truncation error for the explicit Euler method for the equation \(\dot{x}(t)=f(x(t))\) can be bounded by \(|\tau_k| \leq \tfrac12 MKh^2\) for all k. Here, M is the upper bound for f and K is the Lipschitz constant for f.
    • Consider the linear equation \(\dot{x}=\lambda x\) for some complex \(\lambda\). Write down the implicit Euler method for this problem and show that the stability function is \(R(z)=\tfrac{1}{1-z}\). In other words, show that the method can be written as \(x_{k+1}=R(h\lambda)x_k\).
    • Show that the implicit Euler method for the above linear problem is consistent.
    • Finish our derivation of the exact solution of the problem \(\dot{x} = Ax,\;x(0)=x_0\), where the n×n matrix A is diagonalizable: \(A=R\Lambda R^{-1}\) for a diagonal matrix \(\Lambda\) and an invertible matrix R. In particular, show that \(x(t)=\alpha_1e^{\lambda_1t}r_1+\cdots+\alpha_ne^{\lambda_nt}r_n\) (see the lecture notes for the notation). Conclude that the solution for an arbitrary \(x_0\) is bounded if and only if the eigenvalues of A have nonpositive real parts.
    • Implement the explicit and implicit Euler methods for the problem \(\dot{x}=\lambda x,\; x(0)=x_0\) when \(\lambda = -5,\;x_0=1\). Compute up to time T=2, first using N=4 points (so the step length is \(h=\tfrac{T}{N}=\tfrac12\)) and then using N=100. How large must N be to guarantee linear stability of both methods?
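      A sketch of this last item (Python assumed); for this scalar linear problem both methods reduce to multiplication by a fixed factor per step, so the stability question can be read off from the factor \(1+h\lambda\):

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      lam, x0, T = -5.0, 1.0, 2.0

      for N in (4, 100):
          h = T / N
          t = np.linspace(0, T, N + 1)
          k = np.arange(N + 1)
          xe = x0 * (1 + h * lam)**k       # explicit Euler: x_{k+1} = (1 + h lam) x_k
          xi = x0 * (1 - h * lam)**(-k)    # implicit Euler: x_{k+1} = x_k / (1 - h lam)
          plt.plot(t, xe, "o-", label=f"explicit, N = {N}")
          plt.plot(t, xi, "s-", label=f"implicit, N = {N}")

      tt = np.linspace(0, T, 200)
      plt.plot(tt, x0 * np.exp(lam * tt), "k--", label="exact")
      plt.legend()
      plt.show()
      ```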
  • For the remainder of the semester: (work in progress!)
    The following list of problems is particularly relevant for the exam; solving these will leave you well prepared.