octave-symband (1.0.10-1) documentation-sources.diff

Summary

 doc/Makefile       |   29 ++++
 doc/SymBandDoc.tex |  369 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 398 insertions(+)


Patch contents

Add the LaTeX source for the SymBand documentation.  This makes
the package DFSG-compliant.

 -- Rafael Laboissiere <rafael@debian.org>  Sat, 23 May 2009 12:01:04 +0200


--- /dev/null
+++ b/doc/Makefile
@@ -0,0 +1,29 @@
+sinclude ../../../Makeconf
+
+TEX = $(wildcard *.tex)
+PDF = $(patsubst %.tex,%.pdf,$(TEX))
+
+all : $(PDF) html/index.html
+
+%.pdf : %.tex
+	latex $< > /dev/null 2>&1
+	latex $< > /dev/null 2>&1
+	$(DVIPDF) $(@:.pdf=.dvi)
+
+# Note: -verbosity=0, besides making latex2html quieter, has the side effect
+# of not including a URL to the raw text, which it would get wrong
+html/index.html : $(TEX)
+	latex2html -verbosity=0 -local_icons $<
+	if [ -e "html" ]; then \
+	  rm -fr html; \
+	fi; \
+	mv -f $(patsubst %.tex,%,$<) html
+
+clean:
+	rm -fr $(patsubst %.tex,%,$(TEX)) html *.log
+	rm -f $(PDF) *~
+	rm -f $(patsubst %.tex,%.aux,$(TEX))
+	rm -f $(patsubst %.tex,%.out,$(TEX))
+	rm -f $(patsubst %.tex,%.dvi,$(TEX))
+	rm -f $(patsubst %.tex,%.toc,$(TEX))
+	rm -f $(patsubst %.tex,%.idx,$(TEX))
--- /dev/null
+++ b/doc/SymBandDoc.tex
@@ -0,0 +1,369 @@
+\documentclass[11pt]{article}
+\usepackage{verbatim}
+\usepackage{amsfonts}
+
+\setlength{\oddsidemargin}{-5mm}
+\setlength{\evensidemargin}{-5mm}
+\setlength{\headheight}{2em}
+\setlength{\headsep}{1cm}
+\setlength{\topmargin}{-15mm} %letter
+\setlength{\textheight}{235mm}
+\setlength{\textwidth}{173mm}
+
+%%\input macros
+\newcommand{\text}[1]{\ #1 \ }
+\newcommand{\id}{\mathbb{I}}
+\newcommand{\dfrac}{\frac}
+\newcommand{\Octave}{\textit{Octave}}
+\newcommand{\currdir}{./}
+\newcommand{\ID}[1]{\index{#1}}
+
+\title{Symmetric Banded Matrices}
+\author{Version 0.1, December 2001\\
+Andreas Stahel
+}
+
+\begin{document}
+
+\maketitle
+
+%%%%%%%%%%%%%%%%%%%%%%%% Document %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\def\currdir{./}
+\setlength{\marginparwidth}{25mm}
+
+\tableofcontents
+
+\section{Basic description}
+Many matrices used to solve PDEs (using FEM) are symmetric. If the
+nodes are numbered properly, then the matrix will show
+a band structure, i.e.\ all nonzero elements are located close to the main
+diagonal. The Cholesky algorithm or the $LDL^T$ factorization can take
+advantage of this structure, see~\cite{GoluVanLoan96}. For a symmetric
+matrix $A$ of size $n\times n$ with semi-bandwidth $b$ the approximate
+computational cost to solve one system of equations is
+given by
+\[ \mbox{Gauss}\approx \frac{1}{3}\;n^3 \text{and}
+ \mbox{Band Cholesky}\approx \frac{1}{2}\,n\,b^2\]
+Obviously for $b\ll n$ it is advantageous to use a banded solver: for
+$n=10^4$ and $b=10^2$ the band solver needs roughly $5\cdot 10^7$ operations
+instead of about $3\cdot 10^{11}$. A more detailed analysis and an
+implementation are given in~\cite{VarFem}.
+
+To take advantage of the symmetry and the band structure, the matrices will
+be stored in a modified format, as illustrated below.
+\[\left|
+  \begin{array}{ccccc}
+   10&2&3&0&0\\2&20&4&5&0\\3&4&30&6&7\\0&5&6&40&8\\0&0&7&8&50
+  \end{array} \right|
+\longrightarrow
+  \left| \begin{array}{ccc}
+   10&2&3\\20&4&5\\30&6&7\\40&8&0\\50&0&0
+  \end{array} \right| \]
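+
+As a quick check, the band form can be produced with the
+\texttt{FullToBand()} command described below; the sketch assumes it
+returns exactly the layout shown above.
+\begin{verbatim}
+A  = [10  2  3  0  0;
+       2 20  4  5  0;
+       3  4 30  6  7;
+       0  5  6 40  8;
+       0  0  7  8 50];
+Ab = FullToBand(A)   % expected: the 5x3 band form displayed above
+\end{verbatim}
+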
+A banded version of the $LDL^T$ factorization in~\cite{GoluVanLoan96}
+is implemented. If the matrix $A$ is strictly positive definite, then
+the algorithm is known to be stable. If $A$ is not positive definite, then
+problems might occur, since no pivoting is done. The matrix $A$ is
+positive definite if and only if the diagonal matrix $D$ is positive.
+
+For a given matrix some of its smallest eigenvalues can be computed with an
+algorithm based on inverse power iteration. Precise information on the
+numerical errors is provided. The code is capable of finding eigenvalues of
+medium-size matrices, where the standard command \texttt{eig()} is either very
+slow or will fail.
+
+%This author prefers the notation of a $R^TDR$ factorization, the
+%difference is in notation only, i.e. $R^T=L$. 
+%The presented code can be obtained form authors home
+%page\footnote{\texttt{http://www.hta-bi.bfh.ch/\~{}sha}}.
+
+\begin{table}[htbp]
+  \begin{center}
+\begin{tabular}{|l|l|}\hline
+\multicolumn{2}{|l|}{Operations for symmetric, banded matrices}\\\hline
+\texttt{SBSolve()} & solve a system of linear equations \\
+\texttt{SBFactor()} & find the $R^TDR$ factorization \\
+\texttt{SBBacksub()} & use back-substitution to solve system of equations\\
+\texttt{SBEig()} & find a few of the smallest eigenvalues and eigenvectors\\
+\texttt{SBProd()} & multiply symmetric banded matrix with full matrix\\
+\texttt{FullToBand()} & convert a symmetric matrix to a banded matrix\\
+\texttt{BandToFull()} & convert a banded matrix to a symmetric matrix\\
+\texttt{BandToSparse()} & convert a banded matrix to a sparse matrix\\
+\hline\end{tabular}
+    \caption{List of commands}
+    \label{tab:commands}
+  \end{center}
+\end{table}
+
+
+\section{Description of the commands}
+ 
+\subsection{\texttt{SBSolve}}\ID{SBSolve}
+The basic factorization algorithm is implemented in \texttt{SBSolve}. The
+function can return the solution of the system of linear equations, or the
+solution and the factorization of the original matrix.
+Multiple sets of equations can be solved.
+
+\begin{verbatim}
+[...] = SBSolve(...)
+  solve a system of linear equations with a symmetric banded matrix
+
+  X=SBSolve(A,B)
+  [R,X]=SBSolve(A,B)
+
+   solves A X = B
+
+   A is mxt where t-1 is number of non-zero super-diagonals
+   B is mxn
+   X is mxn
+   R is mxt
+
+  if A would be ! 11000 ! then A= ! 11 ! 
+                ! 14300 !         ! 43 ! 
+                ! 03520 !         ! 52 ! 
+                ! 00285 !         ! 85 ! 
+                ! 00059 !         ! 90 ! 
+
+  B is a full matrix
+
+  The code is based on a LDL' decomposition (use L=R'), without pivoting.
+  If A is positive definite, then it reduces to the Cholesky algorithm.
+
+  R is an upper right band matrix
+  The first column of R contains the entries of a diagonal matrix D. 
+  If the first column of R is filled by 1's, then we have R'*D*R = A
+\end{verbatim}
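+
+A minimal usage sketch with the $5\times5$ example from the help text above,
+in band form:
+\begin{verbatim}
+Ab = [1 1; 4 3; 5 2; 8 5; 9 0];  % band form of the matrix shown above
+b  = ones(5,1);
+x  = SBSolve(Ab, b);             % solution of A*x = b
+[R, x2] = SBSolve(Ab, b);        % also return the R'DR factorization
+\end{verbatim}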
+
+To determine the \ID{matrix, inverse}inverse matrix $\mathtt{A}^{-1}$ one can
+use the command \texttt{invA = SBSolve(A,eye(n));}.  Be aware that calculating
+the inverse matrix is rarely a wise thing to do. Most often the inverse of a
+banded matrix will lose the band structure.
+
+If the matrix \textbf{A} is strictly positive definite, then the algorithm is
+stable and one can expect the solution to be as accurate as the condition
+number of \textbf{A} permits. If \textbf{A} is semidefinite, then large errors
+might occur, since \textbf{no pivoting} is implemented in the code. The matrix
+is positive definite iff all eigenvalues are positive; this can be verified by
+inspecting the signs of the numbers in the first column of the factorization
+matrix \texttt{R} (use \texttt{SBFactor()}): \textbf{A} is positive definite
+exactly when that column contains only positive numbers. A description of the
+algorithm can be found in~\cite{GoluVanLoan96} or~\cite{VarFem}.
+
+\subsection{\texttt{SBFactor} and \texttt{SBBacksub}}
+\ID{SBFactor}\ID{SBBacksub}
+Instead of calling \texttt{X=SBSolve(A,B)} one can first call
+\texttt{R=SBFactor(A)} to determine the factorization $A=R^TDR$ and
+then \texttt{X=SBBacksub(R,B)} to solve the system(s) $A\cdot X=B$~.
+Since most of the computational effort is in the factorization, this can be
+useful if many systems of linear equations have to be solved sequentially.
+If multiple systems are to be solved simultaneously it is preferable to use
+\texttt{SBSolve(A,B)} with a matrix \texttt{B}~.
+
+\begin{verbatim}
+[...] = SBFactor(...)
+  find the R'DR factorization of a symmetric banded matrix
+
+  R=SBFactor(A)
+
+   A is mxt where t-1 is number of non-zero super diagonals
+   R is mxt
+
+  if A would be ! 11000 ! then A= ! 11 !
+                ! 14300 !         ! 43 !
+                ! 03520 !         ! 52 !
+                ! 00285 !         ! 85 !
+                ! 00059 !         ! 90 !
+
+
+  The code is based on a LDL' decomposition (use L=R'), without pivoting.
+  If A is positive definite, then it reduces to the Cholesky algorithm.
+
+  R is an upper right band matrix
+  The first column of R contains the entries of a diagonal matrix D.
+  If the first column of R is filled by 1's, then we have R'*D*R = A
+\end{verbatim}
+
+\begin{verbatim}
+[...] = SBBacksub(...)
+  using backsubstitution  to return the solution of a system of linear equations
+
+  X=SBBacksub(R,B)
+
+   B is mxn
+   X is mxn
+   R is mxt
+
+   R is produced by a call of [X,R] = SBSolve(A,B) or R = SBFactor(A)
+   It is an upper right band matrix
+   The first column of R contains the entries of a diagonal matrix D.
+   If the first column of R is filled by 1's, then we have R'*D*R = A
+\end{verbatim}
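+
+A minimal sketch, assuming a band matrix \texttt{Ab} and right-hand sides
+\texttt{b1}, \texttt{b2} that become available one after the other:
+\begin{verbatim}
+R  = SBFactor(Ab);      % factor once (most of the work)
+x1 = SBBacksub(R, b1);  % each subsequent solve is cheap
+x2 = SBBacksub(R, b2);
+\end{verbatim}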
+
+
+If there is interest in the classical Cholesky decomposition
+\ID{Cholesky decomposition}  of the matrix \texttt{A} 
+(i.e. $\mathtt{A}=\mathtt{R}^\prime\cdot \mathtt{R}$), then \texttt{R} can be
+computed by converting the band factor to full storage first:
+\begin{verbatim}
+rBand = SBFactor(A);
+d = sqrt(rBand(:,1));               % square roots of the entries of D
+rBand(:,1) = ones(rows(rBand),1);   % normalize the diagonal of R to one
+r = triu(diag(d)*BandToFull(rBand)) % upper triangular factor, r'*r = A
+\end{verbatim}
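+
+The result can be checked numerically; a sketch:
+\begin{verbatim}
+norm(r'*r - BandToFull(A))   % should be close to zero
+\end{verbatim}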
+
+The number of positive/negative numbers in the first column of \textbf{R}
+equals the number of positive/negative eigenvalues of \textbf{A}.
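+
+This gives a cheap way to count the signs of the eigenvalues without
+computing them; a sketch for a band matrix \texttt{Ab}:
+\begin{verbatim}
+R = SBFactor(Ab);
+nPos = sum(R(:,1) > 0)   % number of positive eigenvalues of A
+nNeg = sum(R(:,1) < 0)   % number of negative eigenvalues of A
+\end{verbatim}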
+
+
+\subsection{\texttt{SBEig}}\ID{SBEig}
+For given symmetric matrices \textbf{A} and \textbf{B} the standard
+(resp. generalized) eigenvalue problem will be solved, i.e.
+\[ \mathbf{A}\,\vec v=\lambda\,\vec v \text{resp.}
+   \mathbf{A}\,\vec v=\lambda\,\mathbf{B}\,\vec v  \]
+
+Using inverse power iteration a given number of the smallest (in absolute
+value) eigenvalues of a symmetric matrix \textbf{A} are computed. If needed
+the eigenvectors are also generated.  A set of initial vectors \textbf{V} has
+to be given. If those are already close to the eigenvectors, then the algorithm
+will converge rather quickly. For a precise description and analysis
+consult~\cite{GoluVanLoan96}.
+\begin{verbatim}
+[...] = SBEig(...)
+  find a few eigenvalues of the symmetric, banded matrix
+  inverse power iteration is used for the standard and generalized
+  eigenvalue problem
+
+  [Lambda,{Ev,err}] = SBEig(A,V,tol)     solve A*Ev = Ev*diag(Lambda)
+                    standard eigenvalue problem
+
+  [Lambda,{Ev,err}] = SBEig(A,B,V,tol)   solve A*Ev = B*Ev*diag(Lambda)
+                    generalized eigenvalue problem
+
+   A   is mxt, where t-1 is number of non-zero superdiagonals
+   B   is mxs, where s-1 is number of non-zero superdiagonals
+   V   is mxn, where n is the number of eigenvalues desired
+       contains the initial eigenvectors for the iteration
+   tol is the relative error, used as the stopping criterion
+
+   Lambda is a column vector with the eigenvalues
+   Ev  is a matrix whose columns represent normalized eigenvectors
+   err is a vector with the a posteriori error estimates for the eigenvalues
+\end{verbatim}
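+
+A minimal usage sketch, assuming a band matrix \texttt{Ab} and asking for
+the three eigenvalues closest to zero:
+\begin{verbatim}
+V0 = rand(rows(Ab), 3);                  % three independent start vectors
+[Lambda, Ev, err] = SBEig(Ab, V0, 1e-8);
+\end{verbatim}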
+
+The algorithm is based on inverse power iteration 
+\ID{iteration, inverse power}  with $n$ independent vectors.
+The iteration will proceed until the relative change of all eigenvalues is
+smaller than the given value of \texttt{tol}. This does not guarantee that the
+relative error is smaller than \texttt{tol}.  The initial guesses \textbf{V}
+for the eigenvectors have to be linearly independent. The closer the initial
+guesses are to the actual eigenvectors, the faster the algorithm will converge.
+The algorithm returns the $n$ eigenvalues closest to~0~.
+
+
+For the standard eigenvalue problem $\mathbf{A}\;\vec v_i = \lambda_i\, \vec
+v_i$ the eigenvectors $\vec v_i$ will be orthonormal with respect to the
+standard scalar product, i.e.\ $\langle \vec v_i\,,\,\vec v_j\rangle
+=\delta_{i,j}$. For the generalized eigenvalue problem 
+$\mathbf{A}\;\vec v_i = \lambda_i\, \mathbf{B}\,\vec v_i$ this translates to
+$\langle \vec v_i\,,\,\mathbf{B}\,\vec v_j\rangle=\delta_{i,j}$. The symmetric
+matrix \textbf{B} should be positive definite. The columns of \texttt{Ev} can
+be used to restart the algorithm if higher accuracy is required.
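+
+A sketch for the generalized problem, assuming band matrices \texttt{Ab} and
+\texttt{Bb} and start vectors \texttt{V0} as above:
+\begin{verbatim}
+[Lambda, Ev, err] = SBEig(Ab, Bb, V0, 1e-8);
+Ev'*SBProd(Bb, Ev)   % should be close to the identity matrix
+\end{verbatim}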
+
+The algorithm will return reliable estimates for the errors in the eigenvalues.
+The \`a posteriori\ID{a posteriori estimate} error estimate is based on the
+residual $\vec r = \mathbf{A}\,\vec v -\lambda\,\vec v$ and
+\[ \min_{\lambda_i\in\sigma(\mathbf{A})}|\lambda-\lambda_i|\leq 
+  \sqrt{\langle \vec r\,,\,\vec r\rangle}=\|\vec r\|\]
+where we use the normalization $\langle \vec v\,,\,\vec v\rangle =1$.
+If one of the eigenvalues has to be computed with high accuracy, the
+approximate value $\lambda$ may be subtracted from the diagonal of the matrix.
+Then the eigenvalue closest to zero of the modified matrix
+$\mathbf{A}-\lambda\id$ can be computed, using the already computed
+eigenvector. If the eigenvalue is isolated the algorithm will converge very
+quickly. This algorithm is similar to the Rayleigh 
+\ID{Rayleigh quotient iteration} quotient iteration. A good description is
+given in~\cite{GoluVanLoan96}.
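+
+Since the diagonal is stored in the first column of the band format, the
+shift is a one-line operation; a sketch, refining the $k$-th computed pair:
+\begin{verbatim}
+Ashift = Ab;
+Ashift(:,1) = Ashift(:,1) - Lambda(k);   % A - lambda*I in band form
+[mu, v] = SBEig(Ashift, Ev(:,k), 1e-12); % eigenvalue closest to zero
+lambdaRefined = Lambda(k) + mu;
+\end{verbatim}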
+\medskip
+
+If the eigenvalue closest to $\lambda$ is denoted by $\lambda_i$ we have
+the improved estimate
+\[ |\lambda-\lambda_i|\leq  \dfrac{\|\vec r\|^2}{\mbox{gap}}\text{where}
+ \mbox{gap}=\min\{|\lambda-\lambda_j|\;:\;
+  \lambda_j\in\sigma(\mathbf{A}),j\neq i\} \]
+It is very easy to implement this test in \Octave{}. If the estimate is
+based on approximate values of the eigenvalues, then the result is not as
+reliable as the previous one, since the value of \texttt{gap} will carry an
+approximation error. The situation is particularly bad if some eigenvalues
+are clustered. A code sample is given below.
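+
+A sketch of the test, using the computed approximations in place of the true
+spectrum (\texttt{Lambda}, \texttt{Ev} as returned by \texttt{SBEig} for a
+band matrix \texttt{Ab}, with at least two computed eigenvalues):
+\begin{verbatim}
+for k = 1:length(Lambda)
+  r      = SBProd(Ab, Ev(:,k)) - Lambda(k)*Ev(:,k);  % residual
+  others = Lambda([1:k-1, k+1:length(Lambda)]);
+  gap    = min(abs(others - Lambda(k)));
+  bound(k) = norm(r)^2/gap;   % improved a posteriori bound
+end
+\end{verbatim}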
+
+\bigskip
+
+For the generalized eigenvalue problem \ID{eigenvalue problem, generalized}
+we use the residual
+$\vec r = \mathbf{A}\,\vec v -\lambda\,\mathbf{B}\,\vec v$
+and the estimate
+\[ \min_{\lambda_i\in\sigma(\mathbf{A})}|\lambda-\lambda_i|\leq 
+  \sqrt{\langle \vec r\,,\,\mathbf{B}^{-1}\vec r\rangle}\text{and}
+|\lambda-\lambda_i|\leq  \dfrac{\|\vec r\|^2}{\mbox{gap}}\]
+where we use the normalization 
+$\langle \vec v\,,\,\mathbf{B}\,\vec v\rangle =1$.
+The precise algorithm and proof of the above estimate is given
+in~\cite{VarFem}.
+
+\subsection{\texttt{SBProd}}
+With this command a symmetric banded matrix can be multiplied with a full
+matrix.
+\begin{verbatim}
+[...] = SBProd(...)
+  multiplies a symmetric banded matrix with a matrix
+
+  X=SBProd(A,B)
+
+   A is mxt where t-1 is number of non-zero super diagonals
+   B is mxn
+   X is mxn
+
+  if A would be ! 11000 ! then A= ! 11 ! 
+                ! 14300 !         ! 43 ! 
+                ! 03520 !         ! 52 ! 
+                ! 00285 !         ! 85 ! 
+                ! 00059 !         ! 90 ! 
+
+  B is a full matrix, X = A*B
+\end{verbatim}
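+
+For example, with the band matrix from above:
+\begin{verbatim}
+Ab = [1 1; 4 3; 5 2; 8 5; 9 0];
+x  = SBProd(Ab, ones(5,1))   % the product A*[1;1;1;1;1]
+\end{verbatim}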
+
+\subsection{\texttt{BandToFull}, \texttt{FullToBand} and \texttt{BandToSparse}}
+With these commands conversion between full, symmetric matrices and banded
+symmetric matrices is possible. A conversion to a sparse format is also
+included.
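+
+A round-trip sketch, assuming a band matrix \texttt{Ab} and the calling
+conventions listed in Table~\ref{tab:commands}:
+\begin{verbatim}
+Afull = BandToFull(Ab);     % band storage -> full symmetric matrix
+Ab2   = FullToBand(Afull);  % back to band storage, Ab2 == Ab
+As    = BandToSparse(Ab);   % band storage -> sparse matrix
+\end{verbatim}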
+
+%%\addcontentsline{toc}{section}{Bibliography}
+
+\begin{thebibliography}{2}
+
+\bibitem{GoluVanLoan96}
+G. Golub and C. Van Loan,
+\newblock \textit{Matrix computations},
+Johns Hopkins University Press, third edition, 1996.
+
+\bibitem{VarFem}
+A. Stahel,
+\newblock \textit{Calculus of Variations and Finite Elements},
+Lecture notes used at HTA Biel, 2000.
+
+\end{thebibliography}
+
+%\clearpage
+
+%\printindex
+%\addcontentsline{toc}{section}{Index}
+
+\end{document}
+
+
+%%% Local Variables: 
+%%% mode: latex
+%%% TeX-master: t
+%%% End: