This is the old CSCAMM website made available for archival purposes. For the current CSCAMM website, please visit www.cscamm.umd.edu.


Spring 2005 Seminars


  • All talks are in the CSIC Bldg (#406), Room 4122, at 2:00 PM (unless otherwise stated)
  • Directions can be found at: home.cscamm.umd.edu/directions
  • Refreshments will be served after the talk
  • Contact Email:




  • January 26

    2:00 PM,
    4122 CSIC Bldg

    Professor Dany Leviatan, School of Mathematics, Tel-Aviv University

    Multivariate polynomial approximation in convex sets, and applications to image compression

    We shall begin with a discussion of two fundamental estimates in the approximation by multivariate polynomials in convex domains in R^d, the Bramble-Hilbert lemma and the Whitney inequality.

    The Bramble-Hilbert lemma is frequently applied in the analysis of Finite Element Methods (FEM) used for numerical solutions of PDEs. However, this classical estimate depends on the geometry of the domain and may ‘blow-up’ for simple examples such as a sequence of triangles of equivalent diameter that become thinner and thinner. Thus, in FEM applications one usually requires that the mesh has ‘quasi-uniform’ geometry. This assumption is too restrictive when one tries to obtain estimates of nonlinear approximation methods that use piecewise polynomials. We show that it is possible to obtain estimates where the constant is independent of the geometry of the domain. We will apply it first to obtain the Whitney inequality for convex domains (with constants that are independent of the geometry of the domain).

    Our results allow us to describe a nonlinear algorithm for image compression making use of a new family of what we call geometric wavelets. We will describe the algorithm, give some estimates on its rate of approximation to the image, and compare the compression of some familiar images by dyadic wavelets and by geometric wavelets.

    If time allows we will apply the above estimates to the characterization of nonlinear multivariate approximation by piecewise polynomials on families of nested triangulations of R^d into simplices. Again it is essential here that we do not have to pay attention to how slim the simplices might become.
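
    For reference, here is a schematic form of the two estimates in standard notation (assumed here, not quoted from the talk): for a convex domain \Omega \subset R^d and polynomials of total degree less than m, the Whitney inequality and the Bramble-Hilbert lemma read, respectively,

        E_{m-1}(f)_{L_p(\Omega)} = \inf_{P \in \Pi_{m-1}} \| f - P \|_{L_p(\Omega)}
            \le C \, \omega_m\big(f, \operatorname{diam}(\Omega)\big)_{L_p(\Omega)},

        \inf_{P \in \Pi_{m-1}} | f - P |_{W^k_p(\Omega)}
            \le C \, \big(\operatorname{diam}\Omega\big)^{m-k} \, | f |_{W^m_p(\Omega)}, \qquad 0 \le k \le m.

    The point emphasized in the abstract is that C can be taken independent of the eccentricity of the convex domain.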



    February 1

    3:30 PM,
    Math 3206


    Joint Math/CSCAMM Seminar

    TUESDAY, February 1st at 3:30 PM, Math Colloquium Room #3206


    Professor Donald Estep, Department of Mathematics, Colorado State University

    Fast and Reliable Methods for Determining the Evolution of Uncertain Parameters in Differential Equations

    A very common problem in science and engineering is the determination of the effects of uncertainty or variation in parameters and data on the output of a nonlinear operator. For example, such variations may describe the effect of experimental error or may arise as part of a sensitivity analysis of the model. The Monte-Carlo Method is a widely used tool for understanding such effects that employs random sampling of the input space in order to produce a pointwise representation of the output. It is a robust and easily implemented tool. Unfortunately, it generally requires sampling the operator very many times at a significant cost. Moreover, it provides no robust measure of the error of information computed from a particular representation. In this paper, we present an alternative approach for ascertaining the effects of variations and uncertainty in parameters in a differential equation that is based on techniques borrowed from a posteriori error analysis for finite element methods. The generalized Green's function is used to describe how variation propagates into the solution around localized points in the parameter space. This information can be used either to create a higher order method or produce an error estimate for information computed from a given representation. In the latter case, this provides the basis for adaptive sampling. Both the higher order method and the adaptive sampling methods are generally orders of magnitude faster than Monte-Carlo methods in a variety of situations.
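
    As a rough illustration of the contrast drawn above (a toy scalar model in Python, not the speaker's method or code), the sketch below compares plain Monte Carlo sampling of a quantity of interest with a cheap first-order surrogate built from local derivative information, which plays the role that the generalized Green's function plays for a differential equation:

        import numpy as np

        # Toy "model": quantity of interest q(lambda) of an uncertain parameter lambda.
        # Pretend each evaluation is expensive (e.g., a full ODE/PDE solve).
        def quantity_of_interest(lam):
            return np.sin(3.0 * lam) + 0.1 * lam**2

        rng = np.random.default_rng(0)
        samples = rng.uniform(0.0, 1.0, size=10_000)   # uncertain input lambda ~ U(0, 1)

        # Plain Monte Carlo: one expensive solve per sample.
        q_mc = np.array([quantity_of_interest(s) for s in samples])

        # Surrogate: a few "expensive" solves plus derivative information at anchor
        # points, then first-order extrapolation to every sample (adaptive sampling
        # would refine the anchors where the estimated error is largest).
        anchors = np.linspace(0.0, 1.0, 6)
        q_a = quantity_of_interest(anchors)
        dq_a = np.gradient(q_a, anchors)               # stand-in for adjoint sensitivities
        nearest = np.abs(samples[:, None] - anchors[None, :]).argmin(axis=1)
        q_fast = q_a[nearest] + dq_a[nearest] * (samples - anchors[nearest])

        print("Monte Carlo mean:", q_mc.mean())
        print("Surrogate mean:  ", q_fast.mean())
        print("Max pointwise surrogate error:", np.abs(q_mc - q_fast).max())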



    February 8

    3:30 PM,
    3206 Math


    Joint Numerical Analysis/CSCAMM/Applied Mathematics Seminar

    TUESDAY, February 8th at 3:30 PM, Math Colloquium Room #3206


    Professor Alex Mahalov, Arizona State University

    Global Regularity of the 3D Navier-Stokes Equations with Uniformly Large Initial Vorticity

    We prove existence on infinite time intervals of regular solutions to the classical incompressible 3D Navier-Stokes Equations for fully three-dimensional periodic and almost periodic initial data characterized by uniformly large vorticity in R^3 and in bounded cylindrical domains; smoothness assumptions for initial data are the same as in local existence theorems. There are no conditional assumptions on the properties of solutions at later times, nor are the global solutions close to any 2D manifold. The global existence is proven using techniques of fast singular oscillating limits and the Littlewood-Paley dyadic decomposition. The approach is based on the computation of singular limits of rapidly oscillating operators and cancellation of oscillations in the nonlinear interactions for the vorticity field. With nonlinear averaging methods in the context of almost periodic functions, we obtain fully 3D limit resonant Navier-Stokes equations. We establish the global regularity of the latter without any restriction on the size of 3D initial data. With strong convergence theorems, we bootstrap this into the global regularity of the 3D Navier-Stokes Equations with uniformly large initial vorticity. We review applications of our mathematical techniques to numerical analysis of highly oscillatory PDE's arising in geophysical fluid dynamics. Global regularity of the 3D Navier-Stokes Equations of Geophysics is proven for all domain aspect ratios and all small Froude and Rossby numbers.
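
    For orientation, in the standard notation of this line of work (assumed here, not quoted from the abstract), initial data with uniformly large vorticity have the form u(0) = u_0 + (\Omega/2) e_3 \times x with |\Omega| large, which recasts the problem as the 3D Navier-Stokes equations with a strong Coriolis-type term,

        \partial_t u + (u \cdot \nabla) u + \Omega \, e_3 \times u + \nabla p = \nu \Delta u,
        \qquad \nabla \cdot u = 0,

    and the fast singular oscillating limit \Omega \to \infty yields the limit resonant equations mentioned above.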



    February 10

    3:30 PM
    2400 CSS Bldg


    Joint Meteorology/CSCAMM Seminar

    THURSDAY, February 10th at 3:30 PM, Auditorium (Rm. 2400),
    2nd floor of the New Wing of the CSS Bldg


    Professor Alex Mahalov, Arizona State University

    Characterization and High Resolution Numerical Simulations of Stratospheric Clear Air and Optical Turbulence

    We present high resolution (1024 vertical levels) numerical simulations on massively parallel architectures of stratospheric optical and clear air turbulence (CAT). The stratospheric CAT for altitudes from 10 to 30 km is characterized by patchy high frequency fluctuations in the stratospheric wind fields and long-lived energetic vortex structures with several hundred meters scale. The main mechanism of formation of stratospheric anisotropic CAT is wave-induced wind shears in synergy with saturated inertio-gravity wave fields; lateral directional shear induced by gravity waves is a key instability mechanism in layers as thin as a few hundred meters. From the fundamental fluid dynamics perspective, this is related to 3D instabilities and turbulent dynamics of helical velocity profiles (U(z),V(z),0) embedded in a vertically variable background stratification N(z); the conventional Ri_g=0.25 criterion does not hold for such flows, for which lateral shear is the key instability mechanism. The structure of the turbulent velocity, temperature, and vorticity fields is analyzed and compared with existing observational and numerical studies of stably stratified shear flows in the atmosphere.
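
    For reference, the criterion mentioned above involves the gradient Richardson number, the ratio of the stratification to the squared vertical shear of the horizontal wind (standard definition, assumed here):

        Ri_g(z) = \frac{N^2(z)}{\left(\frac{dU}{dz}\right)^2 + \left(\frac{dV}{dz}\right)^2}.

    The classical Miles-Howard criterion allows linear instability of a stratified parallel shear flow only where Ri_g < 1/4; the point above is that flows destabilized by lateral directional shear are not governed by this criterion.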



    February 16

    2:00 PM,
    4122 CSIC Bldg

    Professor William Hase, Texas Tech University

    Scientific Computing in Chemistry and Materials Science:
    Algorithms for Direct Dynamics Simulations and the Non-Equilibrium Dynamics of Sliding Surfaces

    Scientific computing is an important approach for studying the atomistic dynamics of chemical reactions and of a broad range of problems in materials science. Atomic-level simulations require a potential energy surface for the system of interest, and recently it has become possible to obtain this surface and its gradient directly from an electronic structure theory calculation. Such simulations, referred to as direct dynamics, require substantial computational resources and there is a need to enhance numerical integration algorithms used in the simulations to make the application of this computational approach more practical. The details of these simulations, the enhancements which are needed, and an example chemical reaction application will be described. An important problem for nano-materials is the friction at the interface of sliding surfaces. An atomic-level simulation of the friction at the interface of sliding hydroxylated alumina surfaces will be discussed. Significant non-Boltzmann characteristics are found in the heat flow from the sliding interface.
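
    A minimal sketch of what "direct dynamics" means in practice (Python; the helper electronic_structure_gradient is a hypothetical stand-in for an on-the-fly quantum-chemistry call, not code from the talk): each velocity-Verlet step requests the potential gradient from an electronic structure calculation instead of an analytic force field, which is why the cost per integration step dominates and better integration algorithms pay off.

        import numpy as np

        def electronic_structure_gradient(positions):
            # Placeholder for an on-the-fly electronic structure call.
            # Here: a cheap harmonic surrogate so the sketch runs stand-alone.
            k = 0.5                        # arbitrary force constant (assumption)
            energy = 0.5 * k * np.sum(positions**2)
            gradient = k * positions       # dE/dx
            return energy, gradient

        def velocity_verlet(positions, velocities, masses, dt, n_steps):
            # Standard velocity-Verlet; the potential is re-evaluated "directly" every step.
            _, grad = electronic_structure_gradient(positions)
            for _ in range(n_steps):
                velocities -= 0.5 * dt * grad / masses[:, None]
                positions += dt * velocities
                _, grad = electronic_structure_gradient(positions)   # the expensive call
                velocities -= 0.5 * dt * grad / masses[:, None]
            return positions, velocities

        x = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]])   # two toy atoms
        v = np.zeros_like(x)
        m = np.ones(2)
        x, v = velocity_verlet(x, v, m, dt=0.01, n_steps=1000)
        print(x)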



    February 23

    2:00 PM,
    4122 CSIC Bldg

    Professor Eric Vanden-Eijnden, Courant Institute of Mathematical Sciences

    Metastability in complex systems

    The evolution of many complex systems can be represented as navigation over some energy landscape in the presence of small noise. The system stays confined for a long time within a metastable basin corresponding to a region of rather low energy, then suddenly hops over an energy barrier to another basin, and so on. This is the mechanism by which, for example, conformational changes in molecules, chemical reactions, or phase transitions arise. Direct numerical simulations fail in these situations because of the huge separation between the time-scale that needs to be resolved in the simulations and the time-scale over which the transitions occur. For systems with relatively smooth energy landscapes, the corresponding effective dynamics can be described within the framework of large deviation theory, which provides the most probable transition path between the metastable basins and the rates of the transitions. When the energy landscape is non-smooth and entropic effects are important, large deviation theory becomes inadequate. I will describe a generalization of the theory to these situations. I will also describe numerical techniques that can be developed within this framework to explicitly obtain the transition pathways, the free energy, and the rates. These techniques will be illustrated on examples arising from materials science, molecular dynamics, and biology.
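
    A minimal illustration of the time-scale separation described above (a one-dimensional double-well toy in Python, not one of the speaker's systems): overdamped Langevin dynamics in V(x) = (x^2 - 1)^2 with small noise stays near x = -1 or x = +1 for very long stretches and only rarely hops across the barrier at x = 0.

        import numpy as np

        def grad_V(x):
            # Double-well potential V(x) = (x**2 - 1)**2 with a barrier of height 1 at x = 0.
            return 4.0 * x * (x**2 - 1.0)

        rng = np.random.default_rng(1)
        dt, eps, n_steps = 2e-3, 0.2, 1_000_000   # eps = noise strength (small)
        x, hops, current_well = -1.0, 0, -1       # start in the left well
        for _ in range(n_steps):
            # Euler-Maruyama step for dX = -V'(X) dt + sqrt(2*eps) dW
            x += -grad_V(x) * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal()
            if current_well == -1 and x > 0.5:
                hops, current_well = hops + 1, 1
            elif current_well == 1 and x < -0.5:
                hops, current_well = hops + 1, -1
        print("transitions observed in", n_steps, "steps:", hops)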



    March 2

    2:00 PM,
    4122 CSIC Bldg

    Dr. Irina Popovici, US Naval Academy

    A new transform for improved lossy compression of color images (*)

    We introduce the eidochromatic transform as a tool for improved lossy coding of color images. Many current image-coding formats (such as JPEG 2000) utilize both a color-component transform and a wavelet or other spatial transform (relating values of a single image component at proximate, but different image locations). The eidochromatic transform further reduces redundancy by relating image values simultaneously across color components and in the two spatial dimensions. Our approach is to introduce an additional transform step following the color-component and spatial transforms. In tests, this step reduced the overall static entropy of the chrominance components of quantized transformed images by up to 40% or more. Combined with JPEG 2000's modeling and coding method, the eidochromatic transform was found to reduce the size of lossily coded color images by up to 27% overall.


    (*) Joint work with Prof. Wm. D. Withers
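
    The eidochromatic step itself is not spelled out in the abstract; purely as a hypothetical illustration of the idea of relating values across color components after the usual color and spatial transforms, the Python sketch below predicts each chrominance wavelet coefficient linearly from the co-located luminance coefficient and would code only the lower-variance residual (toy data and an assumed pipeline, not the authors' transform):

        import numpy as np

        rng = np.random.default_rng(2)

        # Stand-ins for one subband of wavelet coefficients after a color transform:
        # Y = luminance, Cb = one chrominance component (toy, correlated data).
        Y = rng.standard_normal((64, 64))
        Cb = 0.6 * Y + 0.2 * rng.standard_normal((64, 64))

        # Cross-component step: least-squares linear prediction of Cb from co-located Y;
        # the residual is what would be quantized and entropy-coded.
        a = np.sum(Y * Cb) / np.sum(Y * Y)
        residual = Cb - a * Y

        print("variance of Cb:      ", Cb.var())
        print("variance of residual:", residual.var())   # smaller => fewer bits after coding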

    March 9

    2:00 PM,
    4122 CSIC Bldg

    Professor Giovanni Russo, Department of Mathematics, University of Catania

    Computation of Strained Epitaxial Growth in Three Dimensions by Kinetic Monte Carlo

    A numerical method for computation of heteroepitaxial growth in the presence of strain is presented. The model used is based on a solid-on-solid model with a cubic lattice. Elastic effects are incorporated using a ball-and-spring type model. The growing film is evolved using Kinetic Monte Carlo (KMC), and it is assumed that the film is in mechanical equilibrium. The strain field in the substrate is computed by an exact solution which is efficiently evaluated using the fast Fourier transform. The strain field in the growing film is computed directly. The resulting coupled system is solved iteratively using the conjugate gradient method. Finally, we introduce various approximations in the implementation of KMC to improve the computation speed. Numerical results show that layer-by-layer growth is unstable if the misfit is large enough, resulting in the formation of three-dimensional islands. Further development using a multigrid approach will be addressed.
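
    A minimal sketch of a kinetic Monte Carlo step for a solid-on-solid film (Python; the bond-counting rate form and the strain term are generic assumptions, not the specific model of the talk):

        import numpy as np

        rng = np.random.default_rng(3)
        L = 32
        heights = np.zeros((L, L), dtype=int)   # solid-on-solid film: integer column heights

        def hop_rates(h, kT=0.3, E_bond=1.0, E_strain=None):
            # Arrhenius rates: rate = nu0 * exp(-(n_bonds*E_bond + E_strain)/kT).
            # In the model above, E_strain would come from the elastic solve
            # (FFT in the substrate, iterative CG in the film); here it is zero.
            nu0 = 1.0
            n_bonds = sum((np.roll(h, s, axis=a) >= h).astype(int)
                          for a in (0, 1) for s in (-1, 1))
            if E_strain is None:
                E_strain = np.zeros_like(h, dtype=float)
            return nu0 * np.exp(-(n_bonds * E_bond + E_strain) / kT)

        def kmc_step(h):
            # Pick one surface hop with probability proportional to its rate,
            # move the atom to a nearest-neighbor column, and advance the clock.
            r = hop_rates(h).ravel()
            total = r.sum()
            idx = rng.choice(r.size, p=r / total)
            i, j = divmod(idx, h.shape[1])
            h[i, j] -= 1
            di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
            h[(i + di) % h.shape[0], (j + dj) % h.shape[1]] += 1
            return -np.log(rng.random()) / total   # exponential waiting time

        t = sum(kmc_step(heights) for _ in range(200))
        print("simulated time:", t, " surface roughness:", heights.std())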



    March 16

    2:00 PM,
    4122 CSIC Bldg


    Dr. Vladimir Krasnopolsky, ESSIC, University of Maryland and NCEP/NOAA

    in collaboration with
    Michael Fox-Rabinovitz and Dmitry Chalikov, ESSIC, University of Maryland


    Application of Neural Network Techniques for Approximating Complex Multidimensional Mappings: NN Emulations of Time Consuming Components in Numerical Models

    Neural Network (NN) techniques provide an effective tool for fast and accurate approximations of complex multidimensional mappings. This approach has been applied to develop accurate and fast NN emulations of time consuming components in numerical climate and weather prediction models. Calculation of model physics is the “bottleneck” of any climate and weather prediction model. Model physics calculations take from 70% to 90+% of the total calculation time. Model physics parameterizations can be considered as continuous or almost continuous mappings and accurately emulated by NNs, which are 10^2 to 10^5 times faster than the original parameterizations of model physics.

    Efficient NN emulations are developed and tested/validated under the condition of preserving the quality and accuracy of the original parameterizations. In addition to fast and accurate emulation of the original parameterizations, the NN also provides the entire Jacobian at very little computational cost.

    The NN emulations for the National Center for Atmospheric Research (NCAR) Community Atmospheric Model (CAM) radiation parameterizations are presented and discussed as examples of the developed approach. The NN emulations demonstrate high accuracy and, compared with the original parameterization, greatly improved computational efficiency (they are about 80 times faster). The results of climate simulations obtained for parallel runs of NCAR CAM with the original radiation parameterizations and with their NN emulations are discussed. Both simulations show very close results. The major properties of the simulated climate are well preserved.

    Developing NN emulations for other more complex model physics components, like the nonlinear wave-wave interaction in the ocean wind-wave model, is discussed. The developed NN emulation is 10^5 times faster than the original parameterization in this case.

    These successful experiments show the potential of using adaptive machine learning techniques for emulating complex and time consuming components of numerical models.
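
    A schematic of the approach in miniature (a toy one-dimensional "parameterization" and a tiny tanh network in plain numpy; the NN emulations described above are of course far larger): once trained, the network is cheap to evaluate, and its Jacobian is available analytically at little extra cost.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy stand-in for an expensive physics parameterization y = f(x).
        def expensive_parameterization(x):
            return np.sin(np.pi * x) + 0.3 * x

        # Training data from the original scheme (in practice: sampled model states).
        X = rng.uniform(0.0, 1.0, (2000, 1))
        Y = expensive_parameterization(X)

        # One-hidden-layer tanh network trained by plain gradient descent.
        H, lr = 20, 0.2
        W1, b1 = rng.standard_normal((1, H)), np.zeros(H)
        W2, b2 = rng.standard_normal((H, 1)), np.zeros(1)
        for _ in range(10_000):
            A = np.tanh(X @ W1 + b1)            # hidden activations
            P = A @ W2 + b2                     # network output (the emulation)
            E = P - Y
            gW2, gb2 = A.T @ E / len(X), E.mean(0)
            dA = (E @ W2.T) * (1 - A**2)
            gW1, gb1 = X.T @ dA / len(X), dA.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        # Fast evaluation plus the analytic Jacobian dy/dx of the emulation at a point.
        x0 = np.array([[0.25]])
        a0 = np.tanh(x0 @ W1 + b1)
        jacobian = (W1 * (1 - a0**2)) @ W2      # chain rule through the tanh layer
        print("emulation:", (a0 @ W2 + b2).item(), " true:", expensive_parameterization(0.25).item())
        print("emulation Jacobian dy/dx:", jacobian.item())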



    March 23

    2:00 PM,
    4122 CSIC Bldg

    NO SEMINAR

    University of Maryland Spring Break


    March 30

    2:00 PM,
    4122 CSIC Bldg

    Dr. Vladislav Panferov, Department of Mathematics and Statistics, McMaster University

    Regular small data solutions of the Boltzmann equation in one space dimension

    In this talk I will present recent work on the propagation of L^∞ bounds for spatially one-dimensional (plane-wave) solutions of the nonlinear Boltzmann equation. This problem is relevant for the study of regularity of general weak solutions of the Boltzmann equation, as provided by the DiPerna-Lions theory. The result is based on a remarkable regularization property of the averaged "gain" term. Essentially, we find that, under certain truncations, propagation of L^p (p > 1) bounds of the averages implies the propagation of the L^∞ bounds of the solutions. It is curious that the logarithmic-type condition given by the bound on the Boltzmann entropy functional appears as a sort of critical case in this estimate. As a consequence of this approach we are able to show the global-in-time existence (and uniqueness) of small data regular solutions in a bounded interval for the Boltzmann equation with cut-off hard potentials, subject to a large velocity truncation in the collision term. The important case of large data prompts further investigation.
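
    For orientation, the setting is the spatially one-dimensional (plane-wave) Boltzmann equation, with the collision operator split into its gain and loss parts (standard notation, assumed here):

        \partial_t f + v_1 \, \partial_x f = Q(f,f) = Q^+(f,f) - f \, (L f),
        \qquad f = f(t,x,v), \quad x \in \mathbb{R}, \ v \in \mathbb{R}^3,

    where velocity averages of the gain term Q^+ are the quantities whose L^p bounds drive the L^∞ estimate described above.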



    April 6

    NO SEMINAR SCHEDULED

    April 13

    NO SEMINAR SCHEDULED


    CSCAMM WORKSHOP

    Sparse Data Representation: The Role of Redundancy in Data Processing. Oversampling and Coarse Quantization for Signals



    April 20
     
    NO SEMINAR SCHEDULED

    April 27

    1:00 PM,
    4122 CSIC Bldg

    Richard Baraniuk, Rice University

    The Multiscale Structure of Non-Differentiable Image Appearance Manifolds

    The images generated by varying the underlying articulation parameters of an object (pose, attitude, light source position, and so on) can be viewed as points on a low-dimensional "image appearance manifold" (IAM) in a high-dimensional ambient space. In this talk, we will expand on the observation that typical IAMs are not differentiable, in particular if the images contain sharp edges. However, all is not lost, since IAMs have an intrinsic multiscale geometric structure. In fact, each IAM has a family of approximate tangent spaces, each one good at a certain resolution. We will focus on the particular inverse problem of estimating, from a given image on or near an IAM, the underlying parameters that produced it. Putting the multiscale structural aspect to work, we develop a new algorithm for high-accuracy parameter estimation based on a coarse-to-fine Newton iteration through the family of approximate tangent spaces. This algorithm is reminiscent of recently proposed algorithms for multiscale image registration and super-resolution.

    This is joint work with Michael Wakin, Hyeokho Choi, David Donoho, and Jonathan Kaplan.
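
    A toy version of the coarse-to-fine idea (Python, one translation parameter, with Gaussian smoothing standing in for the family of approximate tangent spaces; not the authors' algorithm): a Gauss-Newton iteration on heavily smoothed images converges from a poor initial guess, and the estimate is then refined at successively finer scales.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        # 1-D "image" with a sharp (one-pixel) edge; its appearance manifold over the
        # shift parameter is essentially non-differentiable, which is why a direct
        # Newton iteration needs the multiscale (smoothed) tangent spaces.
        x = np.linspace(0.0, 1.0, 512)
        dx = x[1] - x[0]
        def image(shift):
            return np.clip((x - 0.3 - shift) / dx, 0.0, 1.0)

        true_shift = 0.137
        target = image(true_shift)

        theta = 0.0                                   # initial guess, far from the truth
        for sigma in (40.0, 20.0, 10.0, 5.0, 2.0):    # coarse-to-fine smoothing scales (pixels)
            for _ in range(10):                       # Gauss-Newton iterations at this scale
                r = gaussian_filter1d(image(theta) - target, sigma)
                h = 1e-3                              # finite-difference step in the parameter
                J = gaussian_filter1d(image(theta + h) - image(theta), sigma) / h
                denom = float(np.dot(J, J))
                if denom < 1e-12:
                    break
                theta -= float(np.dot(J, r)) / denom  # Gauss-Newton update on the shift
        print("estimated shift:", theta, " true shift:", true_shift)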



    May 4

    2:00 PM,
    4122 CSIC Bldg

    Professor Patrick Wolfe, Division of Engineering and Applied Sciences, Harvard University

    Stochastic Computation and Applications to Statistical Signal Processing

    Many problems arising in science and engineering are effectively ones of statistical inference, and in all but the simplest cases the associated models may not admit analytical solutions. In this talk I will describe simulation-based Monte Carlo methods for inference, in particular two important classes of algorithms for stochastic computation: a batch methodology known as Markov chain Monte Carlo and an on-line one termed sequential Monte Carlo. Many interpretations are possible, but I shall frame my discussion in terms of the Bayesian paradigm, whereby all inference stems from a description of the (posterior) probability distribution associated with a given model after having observed the data in question. I will illustrate these simulation methodologies with examples of my own research into statistical audio signal processing, in which case they are used to obtain point estimates of salient parameters. This application area is not only interesting and important in its own right, but also provides a convenient test bed for more generally applicable techniques of time series modeling.
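
    As a concrete instance of the batch methodology mentioned above (a generic random-walk Metropolis sampler for a toy one-dimensional posterior, written in Python; not code from the talk):

        import numpy as np

        rng = np.random.default_rng(5)

        def log_posterior(theta):
            # Toy target: an equal-weight mixture of two Gaussians standing in for a posterior.
            return np.logaddexp(-0.5 * (theta - 1.0) ** 2, -0.5 * (theta + 2.0) ** 2)

        theta, step, samples = 0.0, 1.0, []
        for _ in range(50_000):
            proposal = theta + step * rng.standard_normal()
            # Metropolis rule: accept with probability min(1, pi(proposal)/pi(theta)).
            if np.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
                theta = proposal
            samples.append(theta)

        samples = np.array(samples[5_000:])   # discard burn-in
        print("posterior mean estimate:", samples.mean())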



    May 11

    2:00 PM,
    4122 CSIC Bldg

    Professor M. Gregory Forest, Institute for Advanced Materials, NanoScience and Technology, University of North Carolina at Chapel Hill

    Nematic Nano-composites: Flowing Toward Performance Properties

    Nematic polymers are high aspect ratio macromolecules, either rods or platelets. They are utilized in high performance materials for a diversity of properties including mechanical, thermal, barrier, and electrical. Such macromolecule ensembles in solutions, and similar geometric colloidal suspensions, exhibit remarkable response to shear-dominated flow. Bulk phases undergo an isotropic-nematic first order phase transition. Weak flows such as shear drive this transition to create a myriad of responses, including steady and unsteady bulk modes. The result for these nano-composite films is a combination of anisotropy and heterogeneity in the performance features of the materials, which is somehow dictated by the morphology of the high aspect ratio macromolecular ensemble. This "map" between composition, flow conditions, and ultimate material properties is the basis of the lecture. Progress to date, and the challenges ahead for theory and computation, will be highlighted.



    CSCAMM WORKSHOP

    Sparse Data Representation: The Role of Redundancy in Data Processing. Sparse Representation in Redundant Systems

