Center for Scientific Computation and Mathematical Modeling


Fast Multipole Method, Tree-Code and Related Approximate Algorithms.
Trading Exactness for Efficiency.


CSIC Building (#406), Seminar Room 4122.
Directions: home.cscamm.umd.edu/directions


Fast Multipole Methods: Fundamentals and Applications

Nail Gumerov and Ramani Duraiswami

University of Maryland

The course is directed at graduate-level participants who have taken an introductory numerical analysis course, have some exposure to partial differential equations, and have some knowledge of linear algebra. Participants are also expected to have some experience in programming numerical methods.

Lecture 1: An Introduction to the Fast Multipole Method

This course will introduce you to the fast multipole method, a numerical algorithm that is extremely promising for the fast solution of many applied problems in science, especially potential problems, but also various approximation and interpolation problems arising in computer vision and statistics. The fast multipole method has been called one of the ten most significant numerical algorithms discovered in the 20th century, and won its inventors, Vladimir Rokhlin and Leslie Greengard, the Steele Prize. The algorithm allows the product of particular dense matrices with a vector to be evaluated in O(N log N) operations, where direct multiplication requires O(N²) operations. Coupled with advances in iterative methods for the solution of linear systems, it allows O(N log N) time complexity and O(N) memory complexity for problems that hitherto required O(N³) time and O(N²) memory. We will discuss the general ideas of the method, and provide some specific examples of its application. [LECTURE SLIDES]
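
As a concrete point of comparison, the short sketch below (Python; the function and variable names such as direct_sum are illustrative, not taken from the lectures) evaluates a sum of the form phi_j = sum_i q_i / |x_j - y_i| by direct summation. It is exactly this kind of dense matrix-vector product, costing O(N²) when evaluated directly, that the FMM accelerates.

    import numpy as np

    # A minimal sketch of the dense matrix-vector product the FMM accelerates:
    # phi_j = sum_i q_i * K(x_j, y_i), here with the 3D kernel 1/|x - y|.
    # All names are illustrative, not from the course software.

    def direct_sum(targets, sources, charges):
        """Direct O(N*M) evaluation of the potential at each target point."""
        phi = np.zeros(len(targets))
        for j, x in enumerate(targets):
            r = np.linalg.norm(x - sources, axis=1)   # distances to all sources
            phi[j] = np.sum(charges / r)              # 1/r kernel contributions
        return phi

    rng = np.random.default_rng(0)
    N = 1000
    sources = rng.random((N, 3))
    targets = rng.random((N, 3)) + 2.0   # well-separated targets, so r is never 0
    charges = rng.random(N)
    print(direct_sum(targets, sources, charges)[:3])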

Lecture 2: Key Ideas in the Fast Multipole Method

The FMM relies on several ideas: (i) function representation and factorization, (ii) space partitioning, (iii) translation theory, (iv) error analysis and bounds, and (v) data structures. This lecture will introduce these ideas and show how they are used in the FMM. During the lecture we will visit several schemes that use these ideas for fast matrix-vector multiplication (or function summation), culminating in an introduction to the multilevel fast multipole method. [LECTURE SLIDES]
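
As a hedged illustration of idea (i), the sketch below (Python; the one-dimensional kernel 1/(x - y), the placement of the points, and the truncation order p are illustrative choices, not taken from the lectures) factors a well-separated sum through p moments, reducing the cost from O(NM) to O(p(N + M)).

    import numpy as np

    # Kernel factorization for well-separated points: if K(x, y) ~ sum_k u_k(x) v_k(y),
    # then sum_i q_i K(x_j, y_i) collapses to p "moments" m_k = sum_i q_i v_k(y_i).
    # Here 1/(x - y) is expanded geometrically about y = 0, valid for |y| < |x|.

    rng = np.random.default_rng(1)
    N, M, p = 2000, 2000, 12
    y = rng.uniform(-0.5, 0.5, N)      # sources clustered near the origin
    q = rng.random(N)
    x = rng.uniform(5.0, 6.0, M)       # well-separated targets

    # Factorized (fast) evaluation: phi(x) ~ sum_k m_k / x^(k+1)
    moments = np.array([np.sum(q * y**k) for k in range(p)])
    phi_fast = sum(moments[k] / x**(k + 1) for k in range(p))

    # Direct O(N*M) evaluation for comparison
    phi_direct = (q[None, :] / (x[:, None] - y[None, :])).sum(axis=1)
    print("max relative error:",
          np.max(np.abs(phi_fast - phi_direct) / np.abs(phi_direct)))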

Lecture 3: Representations and Translations of Functions in Fast Multipole Methods

One of the key steps in the FMM is the representation of functions in various forms, such as series. The function representations are related to different spatial domains and functional bases. Local and far-field representations are used in the FMM, and will be introduced first. Translation operators enable the conversion from one representation to another. Next we will introduce the multipole-to-multipole, multipole-to-local, and local-to-local translation operations and the associated operators. [LECTURE SLIDES]
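
As a minimal sketch of these notions, the code below (Python; the function names, centers, and truncation order are illustrative assumptions) forms the far-field (multipole) coefficients of the classical complex-variable representation of the 2D Laplace potential, phi(z) = sum_i q_i log(z - z_i), and applies one translation, a multipole-to-multipole shift to a new center, comparing both truncated expansions against direct evaluation at a far point.

    import numpy as np
    from math import comb

    rng = np.random.default_rng(2)
    N, p = 50, 20
    zi = (rng.random(N) - 0.5) + 1j * (rng.random(N) - 0.5)   # sources in a unit box
    q = rng.standard_normal(N)

    def multipole(zi, q, zc, p):
        """Coefficients of phi(z) ~ Q log(z - zc) + sum_k a_k / (z - zc)^k."""
        Q = np.sum(q)
        a = np.array([-np.sum(q * (zi - zc) ** k) / k for k in range(1, p + 1)])
        return Q, a

    def evaluate(Q, a, zc, z):
        """Evaluate a truncated multipole expansion at a far point z."""
        return Q * np.log(z - zc) + sum(a[k - 1] / (z - zc) ** k
                                        for k in range(1, len(a) + 1))

    def m2m(Q, a, zc_old, zc_new):
        """Shift a multipole expansion to a new (e.g. parent-box) center."""
        d = zc_old - zc_new
        b = np.array([-Q * d**l / l +
                      sum(a[k - 1] * d**(l - k) * comb(l - 1, k - 1)
                          for k in range(1, l + 1))
                      for l in range(1, len(a) + 1)])
        return Q, b

    zc, zp = 0.0 + 0.0j, 0.3 + 0.2j        # original and shifted expansion centers
    z = 6.0 + 4.0j                          # far evaluation point
    Q, a = multipole(zi, q, zc, p)
    Qp, b = m2m(Q, a, zc, zp)
    direct = np.sum(q * np.log(z - zi))
    print(abs(evaluate(Q, a, zc, z) - direct), abs(evaluate(Qp, b, zp, z) - direct))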

Lecture 4: Data Structures for the FMM

The FMM is meant for solving large-scale scientific computing problems in many variables. An important practical issue in implementing an FMM algorithm is how to organize this large amount of data, and how to choose data structures consistent with the time and memory complexity of the FMM. We introduce techniques of hierarchical spatial ordering based on bit interleaving and de-interleaving that allow fast grouping, neighbor search, and other operations required for the implementation of multilevel FMM software. [LECTURE SLIDES]
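
The sketch below (Python; function names and the fixed number of levels are illustrative assumptions) shows bit interleaving in two dimensions, producing Morton (Z-order) keys; the 3D case interleaves three coordinates in the same way. Sorting points by key groups them box by box, and dropping the trailing bits of a key gives the parent box at the next coarser level.

    # Hierarchical spatial ordering by bit interleaving (Morton / Z-order keys) in 2D.

    def interleave2(ix, iy, bits):
        """Interleave the bits of integer grid indices (ix, iy) into one Morton key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (2 * b)        # x bits go to even positions
            key |= ((iy >> b) & 1) << (2 * b + 1)    # y bits go to odd positions
        return key

    def deinterleave2(key, bits):
        """Recover (ix, iy) grid indices from a Morton key."""
        ix = iy = 0
        for b in range(bits):
            ix |= ((key >> (2 * b)) & 1) << b
            iy |= ((key >> (2 * b + 1)) & 1) << b
        return ix, iy

    def morton_key(x, y, bits=10):
        """Map a point in the unit square to its box key at the finest level."""
        n = 1 << bits                                # boxes per side at the finest level
        ix, iy = min(int(x * n), n - 1), min(int(y * n), n - 1)
        return interleave2(ix, iy, bits)

    # Dropping the last two bits of a 2D key gives the parent box's key.
    key = morton_key(0.3, 0.7)
    print(key, deinterleave2(key, 10), key >> 2)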

Lecture 5: Multilevel FMM Algorithm

In this lecture we go step by step through a general multilevel FMM algorithm that can be used for problems with different kernel functions and in different spatial dimensions. We will also provide some results from an implementation of this algorithm, and consider simple optimizations of the algorithm. [LECTURE SLIDES]
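
To fix ideas, here is a hedged, deliberately simplified one-dimensional sketch of the multilevel pass structure (P2M, M2M, M2L, L2L, then local evaluation plus a direct near field) for the kernel 1/(x - y) on [0, 1) with a uniform binary tree. The box layout, truncation order p, and all names are illustrative assumptions, not the course implementation.

    import numpy as np
    from math import comb

    rng = np.random.default_rng(3)
    N, levels, p = 400, 4, 24
    x, q = rng.random(N), rng.standard_normal(N)
    ks = np.arange(p)

    def center(b, lev):                            # center of box b at level lev
        return (b + 0.5) / (1 << lev)

    leaf = np.minimum((x * (1 << levels)).astype(int), (1 << levels) - 1)

    # Upward pass: leaf multipole moments (P2M), then shift to parent boxes (M2M).
    M = {lev: np.zeros((1 << lev, p)) for lev in range(2, levels + 1)}
    for i in range(N):
        M[levels][leaf[i]] += q[i] * (x[i] - center(leaf[i], levels)) ** ks
    for lev in range(levels, 2, -1):
        for b in range(1 << lev):
            d = center(b, lev) - center(b // 2, lev - 1)
            for l in range(p):
                M[lev - 1][b // 2, l] += sum(comb(l, k) * d ** (l - k) * M[lev][b, k]
                                             for k in range(l + 1))

    # Downward pass: inherit the parent's local expansion (L2L), then convert the
    # multipole expansions of interaction-list boxes into local expansions (M2L).
    Lo = {lev: np.zeros((1 << lev, p)) for lev in range(2, levels + 1)}
    for lev in range(2, levels + 1):
        for b in range(1 << lev):
            if lev > 2:                            # L2L from the parent box
                e = center(b, lev) - center(b // 2, lev - 1)
                for m in range(p):
                    Lo[lev][b, m] += sum(comb(l, m) * e ** (l - m) * Lo[lev - 1][b // 2, l]
                                         for l in range(m, p))
            for s in range(max(0, 2 * (b // 2) - 2), min(1 << lev, 2 * (b // 2) + 4)):
                if abs(s - b) > 1:                 # M2L from well-separated boxes
                    D = center(b, lev) - center(s, lev)
                    for l in range(p):
                        Lo[lev][b, l] += (-1) ** l * sum(M[lev][s, k] * comb(k + l, l)
                                                         / D ** (k + l + 1)
                                                         for k in range(p))

    # Final evaluation: local expansion for the far field, direct sums for neighbors.
    phi = np.zeros(N)
    for i in range(N):
        b = leaf[i]
        phi[i] = np.sum(Lo[levels][b] * (x[i] - center(b, levels)) ** ks)
        near = np.flatnonzero(np.abs(leaf - b) <= 1)
        near = near[near != i]
        phi[i] += np.sum(q[near] / (x[i] - x[near]))

    direct = np.array([np.sum(np.delete(q, i) / (x[i] - np.delete(x, i)))
                       for i in range(N)])
    print("max abs error:", np.max(np.abs(phi - direct)))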

Lecture 6: FMM for the Solution of Laplace’s Equation

The solution of problems involving the Laplace equation was the first area of application of the FMM. Since that time the speed of the algorithm has been substantially improved. We will discuss the original formulations and illustrate how they can be further improved. A key issue is fast translation operators, and we will discuss methods proposed to achieve fast translation. Some areas of application will also be discussed. [LECTURE SLIDES]

Lecture 7: FMM for the Solution of Helmholtz’ Equation

Problems involving the wave and Maxwell equations in the frequency domain can be reduced to the solution of the Helmholtz equation. These problems have some peculiarities compared to the Laplace equation, and the FMM must be substantially modified to achieve the expected O(N log N) complexity. We discuss these issues, and current work on fast translation methods, including translations based on matrix representations and those based on diagonal forms. [LECTURE SLIDES]