Uintah and Related C-SAFE Publications

2012


P.K. Notz, R. Pawlowski, J. Sutherland. “Graph-Based Software Design for Managing Complexity and Enabling Concurrency in Multiphysics PDE Software,” In ACM Transactions on Mathematical Software, Vol. 39, No. 1, November, 2012.
DOI: 10.1145/2382585.2382586

ABSTRACT

Multiphysics simulation software is plagued by complexity stemming from nonlinearly coupled systems of Partial Differential Equations (PDEs). Such software typically supports many models, which may require different transport equations, constitutive laws, and equations of state. Strong coupling and a multiplicity of models lead to complex algorithms (i.e., the properly ordered sequence of steps to assemble a discretized set of coupled PDEs) and rigid software.

This work presents a design strategy that shifts focus away from high-level algorithmic concerns to low-level data dependencies. Mathematical expressions are represented as software objects that directly expose data dependencies. The entire system of expressions forms a directed acyclic graph and the high-level assembly algorithm is generated automatically through standard graph algorithms. This approach makes problems with complex dependencies entirely tractable, and removes virtually all logic from the algorithm itself. Changes are highly localized, allowing developers to implement models without detailed understanding of any algorithms (i.e., the overall assembly process). Furthermore, this approach complements existing MPI-based frameworks and can be implemented within them easily.

Finally, this approach enables algorithmic parallelization via threads. By exposing dependencies in the algorithm explicitly, thread-based parallelism is implemented through algorithm decomposition, providing a basis for exploiting parallelism independent of domain decomposition approaches.
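
To make the idea concrete, here is a minimal sketch of dependency-driven algorithm generation, using hypothetical field names and Python's standard-library topological sort; it illustrates the design strategy described in the abstract, not the paper's actual software:

    # Each "expression" declares only the fields it depends on; the assembly
    # order is recovered automatically by a topological sort of the implied DAG.
    # Hypothetical field names; illustration only.
    from graphlib import TopologicalSorter  # Python 3.9+

    dependencies = {
        "density":      [],
        "temperature":  [],
        "viscosity":    ["temperature"],             # constitutive law
        "pressure":     ["density", "temperature"],  # equation of state
        "momentum_rhs": ["pressure", "viscosity"],
    }

    order = list(TopologicalSorter(dependencies).static_order())
    # e.g. ['density', 'temperature', 'viscosity', 'pressure', 'momentum_rhs']

Because the graph also exposes which expressions are mutually independent, disjoint subgraphs can be evaluated on separate threads, which is the basis of the algorithmic parallelization described above.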



J.R. Peterson, J.C. Beckvermit, T. Harman, M. Berzins, C.A. Wight. “Multiscale Modeling of High Explosives for Transportation Accidents,” In Proceedings of the 1st Conference of the Extreme Science and Engineering Discovery Environment: Bridging from the eXtreme to the campus and beyond, 2012.
DOI: 10.1145/2335755.2335828

ABSTRACT

The development of a reaction model to simulate the accidental detonation of a large array of seismic boosters in a semi-truck subject to fire is considered. To test this model, large-scale simulations of explosions and detonations were performed by leveraging the massively parallel capabilities of the Uintah Computational Framework and the XSEDE computational resources. Computed stress profiles in bulk-scale explosive materials were validated using compaction simulations of hundred-micron-scale particles and found to compare favorably with experimental data. A validation study of reaction models for deflagration and detonation showed that computational grid cell sizes of up to 10 mm could be used without loss of fidelity. The Uintah Computational Framework shows linear scaling up to 180K cores, which, combined with the coarse resolution and validated models, will now enable simulations of semi-truck-scale transportation accidents for the first time.



J. Van Rij, T. Harman, T. Ameel. “Slip Flow Fluid-Structure-Interaction,” In International Journal of Thermal Sciences, Vol. 58, pp. 9--19. August, 2012.
DOI: 10.1016/j.ijthermalsci.2012.03.001

ABSTRACT

While many microscale systems are subject to both rarefaction and fluid-structure-interaction (FSI) effects, most commercial algorithms cannot model both, if either, of these for general applications. This study modifies the momentum and thermal energy exchange models of an existing continuum-based, multifield, compressible, unsteady, Eulerian-Lagrangian FSI algorithm such that the equivalent of first-order slip velocity and temperature jump boundary conditions is achieved at fluid-solid surfaces, which may move with time. Following the development and implementation of the slip flow momentum and energy exchange models, several basic configurations are considered and compared to established data to verify the resulting algorithm's capabilities.
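
The abstract does not spell out the boundary conditions themselves; for orientation, the standard first-order Maxwell slip and Smoluchowski temperature-jump conditions (presumably the forms referred to) can be written as

    \[
    u_s - u_w = \frac{2-\sigma_v}{\sigma_v}\,\lambda\,\left.\frac{\partial u}{\partial n}\right|_w,
    \qquad
    T_s - T_w = \frac{2-\sigma_T}{\sigma_T}\,\frac{2\gamma}{\gamma+1}\,
    \frac{\lambda}{\mathrm{Pr}}\,\left.\frac{\partial T}{\partial n}\right|_w,
    \]

where \lambda is the molecular mean free path, \sigma_v and \sigma_T are the momentum and thermal accommodation coefficients, \gamma is the specific heat ratio, and Pr is the Prandtl number.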



J. Schmidt, M. Berzins, J. Thornock, T. Saad, J. Sutherland. “Large Scale Parallel Solution of Incompressible Flow Problems using Uintah and hypre,” SCI Technical Report, No. UUSCI-2012-002, SCI Institute, University of Utah, 2012.

ABSTRACT

The Uintah Software framework was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids for large-scale, long-running, data-intensive problems. Uintah uses a combination of fluid-flow solvers and particle-based methods for solids together with a novel asynchronous task-based approach with fully automated load balancing. As Uintah is often used to solve compressible, low-Mach combustion applications, it is important to have a scalable linear solver. While there are many such solvers available, the scalability of those codes varies greatly. The hypre software offers a range of solvers and preconditioners for different types of grids. The weak scalability of Uintah and hypre is addressed for particular examples when applied to an incompressible flow problem relevant to combustion applications. After careful software engineering to reduce start-up costs, much better than expected weak scalability is seen for up to 100K cores on NSF's Kraken architecture and up to 200K+ cores on DOE's new Titan machine.

Keywords: uintah, csafe



J. Sutherland. “Graph-Based Parallel Task Scheduling and Algorithm Generation for Multiphysics PDE Software,” In Proceedings of the 2012 SIAM Parallel Processing Conference, Savannah, GA, 2012.



L.T. Tran. “Numerical Study and Improvement of the Methods in Uintah Framework: The Material Point Method and the Implicit Continuous-Fluid Eulerian Method,” Note: Advisor: Martin Berzins, School of Computing, University of Utah, December, 2012.

ABSTRACT

The Material Point Method (MPM) and the Implicit Continuous-fluid Eulerian method (ICE) have been used to simulate and solve many challenging problems in engineering applications, especially those involving large deformations in materials and multimaterial interactions. These methods were implemented within the Uintah Computational Framework (UCF) to simulate explosions, fires, and other fluid and fluid-structure interaction problems. To determine whether the simulations represent the solutions of the actual mathematical models, it is important to fully understand the accuracy of these methods. At the time this research was initiated, hardly any error analysis had been done on these two methods, though the range of their applications was impressive. This dissertation undertakes an analysis of the errors in computational properties of MPM and ICE in the context of model problems from compressible gas dynamics which are governed by the one-dimensional Euler system. The analysis for MPM includes the errors introduced when information is projected from particles onto the grid and when particles cross grid cells. The analysis for ICE includes the spatial and temporal errors in the method, which can then be used to improve the method's accuracy in both space and time. The implementation of ICE in UCF, referred to as Production ICE, does not perform as well as many current methods for compressible flow problems governed by the one-dimensional Euler equations, as the obtained numerical solutions exhibit unphysical oscillations and discrepancies in the shock speeds. By examining different choices in the implementation of ICE, this dissertation proposes a method to eliminate the discrepancies and suppress the nonphysical oscillations in the numerical solutions of Production ICE; this improved Production ICE method (IMPICE) is extended to solve the multidimensional Euler equations. The discussion of the IMPICE method for multidimensional compressible flow problems includes the method's detailed implementation and embedded boundary implementation. Finally, a discrete adjoint-based approach is proposed to estimate the spatial and temporal errors in the numerical solutions obtained from IMPICE.
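
The particle-to-grid projection whose error is analyzed here has a simple one-dimensional prototype: particle mass and momentum are transferred to grid nodes through the grid shape functions. A generic sketch with linear hat functions (not the UCF implementation):

    import numpy as np

    # 1D particle-to-grid projection with linear (hat) shape functions.
    # Generic MPM sketch, not the Uintah implementation.
    h = 0.1                                   # grid spacing
    x_nodes = np.arange(0.0, 1.0 + h, h)      # grid node positions
    x_p = np.array([0.23, 0.47, 0.52])        # particle positions
    m_p = np.array([1.0, 1.0, 2.0])           # particle masses
    v_p = np.array([0.5, -0.1, 0.3])          # particle velocities

    mass = np.zeros_like(x_nodes)
    mom = np.zeros_like(x_nodes)
    for xp, mp, vp in zip(x_p, m_p, v_p):
        i = int(xp // h)                      # nearest node to the left
        w_right = (xp - x_nodes[i]) / h       # linear weights sum to 1
        for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
            mass[node] += w * mp
            mom[node] += w * mp * vp

    v_nodes = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)

The projection error the dissertation studies arises because this transfer, and its inverse at grid-crossing events, does not exactly preserve the particle-level fields.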


2011


C. Brownlee, V. Pegoraro, S. Shankar, P.S. McCormick, C.D. Hansen. “Physically-Based Interactive Flow Visualization Based on Schlieren and Interferometry Experimental Techniques,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 17, No. 11, pp. 1574--1586. 2011.

ABSTRACT

Understanding fluid flow is a difficult problem and of increasing importance as computational fluid dynamics (CFD) produces an abundance of simulation data. Experimental flow analysis has employed techniques such as shadowgraph, interferometry, and schlieren imaging for centuries, which allow empirical observation of inhomogeneous flows. Shadowgraphs provide an intuitive way of looking at small changes in flow dynamics through caustic effects, while schlieren cutoffs introduce an intensity gradation for observing large-scale directional changes in the flow. Interferometry tracks changes in phase shift, which appear as bands. The combination of these shading effects provides an informative global analysis of overall fluid flow. Computational solutions for these methods have until recently proven too complex, owing to the fundamental physical interaction of light refracting through the flow field. In this paper, we introduce a novel method to simulate the refraction of light to generate synthetic shadowgraph, schlieren, and interferometry images of time-varying scalar fields derived from computational fluid dynamics data. Our method computes physically accurate schlieren and shadowgraph images at interactive rates by utilizing a combination of GPGPU programming, acceleration methods, and data-dependent probabilistic schlieren cutoffs. Applications of our method to multifield data and custom application-dependent color filter creation are explored. Finally, results comparing this method to previous schlieren approximations are presented.

Keywords: uintah, c-safe
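
As a rough, gradient-based illustration of the optics involved (the paper itself traces refracted light through the field on the GPU, which this sketch does not do), a synthetic schlieren-like image can be derived from a 2D density slice via the Gladstone-Dale relation:

    import numpy as np

    # Crude synthetic schlieren from a 2D density field; illustration only.
    K = 2.3e-4          # Gladstone-Dale constant for air (m^3/kg), approximate
    ny, nx = 256, 256
    y, x = np.mgrid[0:ny, 0:nx] / 256.0
    rho = 1.2 + 0.1 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.01)  # density blob

    n = 1.0 + K * rho                  # refractive index field
    dn_dy, dn_dx = np.gradient(n)      # light is deflected along grad(n)
    schlieren = dn_dx                  # vertical knife edge -> show x-gradient
    image = 0.5 + schlieren / (np.abs(schlieren).max() + 1e-30)  # map to [0, 1]

A shadowgraph responds instead to the second derivative (Laplacian) of the index field, and interferometry to the integrated phase shift, which is why the three techniques highlight different flow features.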



I. Hunsaker, T. Harman, J. Thornock, P.J. Smith. “Efficient Parallelization of RMCRT for Large Scale LES Combustion Simulations,” In Proceedings of the AIAA 20th Computational Fluids Dynamics Conference, 2011.
DOI: 10.2514/6.2011-3770



K. Kamojjala, R.M. Brannon. “Verification Of Frame Indifference For Complicated Numerical Constitutive Models,” In Proceedings of the ASME Early Career Technical Conference, 2011.

ABSTRACT

The principle of material frame indifference requires spatial stresses to rotate with the material, whereas reference stresses must be insensitive to rotation. Testing of a classical uniaxial strain problem with superimposed rotation reveals that a very common approach to strong incremental objectivity taken in finite element codes to satisfy frame indifference (namely, working in an approximate unrotated frame) fails this simple test. A more complicated verification example is constructed based on the method of manufactured solutions (MMS), which involves the same character of loading at all points, providing a means to test any nonlinear-elastic, arbitrarily anisotropic constitutive model.
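
The first verification test can be sketched directly: run the same deformation with and without a superposed rigid rotation R and check that the Cauchy stress transforms as sigma* = R sigma R^T. The sketch below uses a generic compressible neo-Hookean model as a stand-in for the paper's constitutive models:

    import numpy as np

    def cauchy_neo_hookean(F, mu=1.0, lam=1.0):
        # Simple compressible neo-Hookean model (stand-in for any objective model).
        J = np.linalg.det(F)
        B = F @ F.T                               # left Cauchy-Green tensor
        return (mu * (B - np.eye(3)) + lam * np.log(J) * np.eye(3)) / J

    F = np.array([[1.2, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.9]])               # some deformation gradient
    theta = 0.7                                   # superposed rigid rotation about z
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])

    sigma = cauchy_neo_hookean(F)
    sigma_rot = cauchy_neo_hookean(R @ F)
    assert np.allclose(sigma_rot, R @ sigma @ R.T)  # frame indifference holds

The paper's point is that incremental (rate-form) integrators in finite element codes can fail this check even when the underlying model is objective, which is what the superimposed-rotation test exposes.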



J.P. Luitjens. “The Scalability of Parallel Adaptive Mesh Refinement Within Uintah,” Note: Advisor: Martin Berzins, School of Computing, University of Utah, 2011.

ABSTRACT

Solutions to Partial Differential Equations (PDEs) are often computed by discretizing the domain into a collection of computational elements referred to as a mesh. This solution is an approximation with an error that decreases as the mesh spacing decreases. However, decreasing the mesh spacing also increases the computational requirements. Adaptive mesh refinement (AMR) attempts to reduce the error while limiting the increase in computational requirements by refining the mesh locally in regions of the domain that have large error while maintaining a coarse mesh in other portions of the domain. This approach often provides a solution that is as accurate as that obtained from a much larger fixed mesh simulation, thus saving on both computational time and memory. However, historically, these AMR operations often limit the overall scalability of the application.

Adapting the mesh at runtime necessitates scalable regridding and load balancing algorithms. This dissertation analyzes the performance bottlenecks for a widely used regridding algorithm and presents two new algorithms which exhibit ideal scalability. In addition, a scalable space-filling curve generation algorithm for dynamic load balancing is also presented. The performance of these algorithms is analyzed by determining their theoretical complexity, deriving performance models, and comparing the observed performance to those performance models. The models are then used to predict performance on larger numbers of processors. This analysis demonstrates the necessity of these algorithms at larger numbers of processors. This dissertation also investigates methods to more accurately predict workloads based on measurements taken at runtime. While the methods used are not new, the application of these methods to the load balancing process is. These methods are shown to be highly accurate and able to predict the workload within 3% error. By improving the accuracy of these estimations, the load imbalance of the simulation can be reduced, thereby increasing the overall performance.
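
As background for the load-balancing component, a generic sketch of curve-based partitioning (Morton ordering here; the dissertation's curve generation algorithm differs in detail): patches are ordered along a space-filling curve, then the curve is cut into contiguous pieces of roughly equal predicted work.

    # Generic sketch of space-filling-curve load balancing; illustration only.
    def morton2d(ix, iy, bits=16):
        # Interleave the bits of the integer patch coordinates.
        code = 0
        for b in range(bits):
            code |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
        return code

    def partition(patches, weights, nprocs):
        # patches: list of (ix, iy); weights: predicted cost per patch.
        order = sorted(range(len(patches)), key=lambda k: morton2d(*patches[k]))
        target = sum(weights) / nprocs
        assignment, rank, acc = {}, 0, 0.0
        for k in order:                 # walk the curve, cutting at ~target work
            assignment[k] = rank
            acc += weights[k]
            if acc >= target * (rank + 1) and rank < nprocs - 1:
                rank += 1
        return assignment

Because nearby points on the curve are nearby in space, each contiguous piece is spatially compact, which keeps communication volume low; the quality of the cut then depends on the workload predictions whose accuracy the dissertation improves.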



J. Luitjens, M. Berzins. “Scalable parallel regridding algorithms for block-structured adaptive mesh refinement,” In Concurrency and Computation: Practice and Experience, Vol. 23, No. 13, pp. 1522--1537. September, 2011.
DOI: 10.1002/cpe.1719

ABSTRACT

Block-structured adaptive mesh refinement (BSAMR) is widely used within simulation software because it improves the utilization of computing resources by refining the mesh only where necessary. For BSAMR to scale onto existing petascale and eventually exascale computers, all portions of the simulation need to weak-scale ideally. Any portions of the simulation that do not will become a bottleneck at larger numbers of cores. The challenge is to design algorithms that make it possible to avoid these bottlenecks on exascale computers. One step of existing BSAMR algorithms involves determining where to create new patches of refinement; the Berger–Rigoutsos algorithm is commonly used to perform this task. This paper provides a detailed analysis of the performance of two existing parallel implementations of the Berger–Rigoutsos algorithm and develops a new parallel implementation of the Berger–Rigoutsos algorithm and a tiled algorithm that exhibits ideal scalability. The analysis and computational results up to 98,304 cores are used to design performance models, which are then used to predict how these algorithms will perform on 100M cores.
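
The tiled algorithm admits a particularly compact description: cover the index space with fixed-size tiles and emit a patch for every tile that contains at least one refinement flag; tiles are examined independently, so the step needs no communication. A generic sketch of this idea (ours, not the paper's code):

    import numpy as np

    def tiled_regrid(flags, tile=(8, 8)):
        # flags: 2D boolean array of cells marked for refinement.
        # Returns index-space boxes of the new patches, one per flagged tile.
        patches = []
        for i0 in range(0, flags.shape[0], tile[0]):
            for j0 in range(0, flags.shape[1], tile[1]):
                block = flags[i0:i0 + tile[0], j0:j0 + tile[1]]
                if block.any():        # tile holds at least one flagged cell
                    patches.append(((i0, j0),
                                    (i0 + block.shape[0], j0 + block.shape[1])))
        return patches

Unlike Berger–Rigoutsos, which recursively splits bounding boxes based on global flag histograms, each tile decision here depends only on local data, which is what makes the approach scale ideally.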



Q. Meng, M. Berzins, J. Schmidt. “Using Hybrid Parallelism to improve memory use in Uintah,” In Proceedings of the TeraGrid 2011 Conference, Salt Lake City, Utah, ACM, July, 2011.
DOI: 10.1145/2016741.2016767

ABSTRACT

The Uintah Software framework was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids for large-scale, long-running, data-intensive problems. Uintah uses a combination of fluid-flow solvers and particle-based methods for solids together with a novel asynchronous task-based approach with fully automated load balancing. Uintah's memory use associated with ghost cells and global meta-data has become a barrier to scalability beyond O(100K) cores. A hybrid memory approach that addresses this issue is described and evaluated. The new approach, based on a combination of Pthreads and MPI, is shown to greatly reduce memory usage as predicted by a simple theoretical model, with comparable CPU performance.

Keywords: Uintah, C-SAFE, parallel computing
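
A back-of-the-envelope model of the saving (illustrative numbers, not the paper's measurements): with one MPI process per core, global meta-data and duplicated ghost copies are replicated on every core, while with one multithreaded MPI process per node they are stored once per node.

    # Illustrative per-node memory model; made-up numbers, not measured data.
    cores_per_node = 12
    meta_mb = 300        # global meta-data per MPI process
    ghost_mb = 150       # duplicated ghost-cell copies per MPI process
    local_mb = 500       # per-core simulation data, same in both modes

    mpi_only = cores_per_node * (meta_mb + ghost_mb + local_mb)   # 11400 MB
    hybrid = (meta_mb + ghost_mb) + cores_per_node * local_mb     #  6450 MB
    print(f"per-node memory: MPI-only {mpi_only} MB, hybrid {hybrid} MB")

The replicated terms grow with both the core count per node and the global problem size, which is why the hybrid Pthreads/MPI scheme matters most at O(100K) cores and beyond.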



A. Sadeghirad, R.M. Brannon, J. Burghardt. “A Convected Particle Domain Interpolation Technique To Extend Applicability of the Material Point Method for Problems Involving Massive Deformations,” In International Journal for Numerical Methods in Engineering, Vol. 86, No. 12, pp. 1435--1456. 2011.
DOI: 10.1002/nme.3110

ABSTRACT

A new algorithm is developed to improve the accuracy and efficiency of the material point method for problems involving extremely large tensile deformations and rotations. In the proposed procedure, particle domains are convected with the material motion more accurately than in the generalized interpolation material point method. This feature is crucial to eliminate instability in extension, which is a common shortcoming of most particle methods. Also, a novel alternative set of grid basis functions is proposed for efficiently calculating nodal force and consistent mass integrals on the grid. Specifically, by taking advantage of initially parallelogram-shaped particle domains, and treating the deformation gradient as constant over the particle domain, the convected particle domain is a reshaped parallelogram in the deformed configuration. Accordingly, an alternative grid basis function over the particle domain is constructed by a standard 4-node finite element interpolation on the parallelogram. Effectiveness of the proposed modifications is demonstrated using several large deformation solid mechanics problems.
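
The geometric core of the method fits in a few lines: with the deformation gradient F taken constant over a particle, the initial parallelogram edge vectors convect as F r, so the deformed domain remains a parallelogram whose corners are trivial to locate. A generic sketch with made-up values (not the paper's implementation):

    import numpy as np

    # Convect a parallelogram particle domain by a constant deformation gradient.
    # Generic CPDI-style sketch; values are made up.
    F = np.array([[1.5, 0.3],
                  [0.0, 0.8]])          # deformation gradient at the particle
    xp = np.array([0.0, 0.0])           # particle (domain centre) position
    r1_0 = np.array([0.05, 0.0])        # initial domain half-edge vectors
    r2_0 = np.array([0.0, 0.05])

    r1, r2 = F @ r1_0, F @ r2_0         # convected half-edge vectors
    corners = [xp + s1 * r1 + s2 * r2 for s1 in (-1, 1) for s2 in (-1, 1)]
    # Grid weights then follow from standard 4-node finite element
    # interpolation over this parallelogram, evaluated at the four corners.

Tracking the full domain shape, rather than a fixed box around the particle, is what eliminates the tensile instability and keeps the nodal integrals cheap to evaluate.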



A. Sadeghirad, R.M. Brannon, J. Guilkey. “Enriched Convected Particle Domain Interpolation (CPDI) Method for Analyzing Weak Discontinuities,” In Particles, 2011.



P.J. Smith, M. Hradisky, J. Thornock, J. Spinti, D. Nguyen. “Large eddy simulation of a turbulent buoyant helium plume,” In Proceedings of Supercomputing 2011 Companion, pp. 135--136. 2011.
DOI: 10.1145/2148600.2148671

ABSTRACT

At the Institute for Clean and Secure Energy at the University of Utah, we are focused on education through interdisciplinary research on high-temperature fuel-utilization processes for energy generation and the associated health, environmental, policy, and performance issues. We also work closely with government agencies and private industry to promote rapid deployment of new technologies through the use of high-performance computational tools.

Buoyant flows are encountered in many situations of engineering and environmental importance, including fires, subsea and atmospheric exhaust phenomena, gas releases, and geothermal events. Buoyancy-driven flows also play a key role in such physical processes as the spread of smoke or toxic gases from fires. As such, buoyant flow experiments are an important step in developing and validating numerical techniques such as Large Eddy Simulation (LES) for predictive simulation of complex systems. Large Eddy Simulation is a turbulence modeling approach that resolves a much greater range of physical scales than the more common Reynolds-Averaged Navier-Stokes models. The validation activity requires increasing levels of complexity to sequentially quantify the effects of coupling additional physics, and to explore the effects of scale on the objectives of the simulation.

In this project we are using buoyant flows to examine the validity and accuracy of numerical techniques. By using the non-reacting buoyant helium plume flow we can study the generation of turbulence due to buoyancy, uncoupled from the complexities of combustion chemistry.

We are performing Large Eddy Simulation of a one-meter diameter buoyancy-driven helium plume using two software simulation tools -- ARCHES and Star-CCM+. ARCHES is a finite-volume Large Eddy Simulation code built within the Uintah framework, which is a set of software components and libraries that facilitate the solution of partial differential equations on structured adaptive mesh refinement grids using thousands of processors. Uintah is the product of a ten-year partnership with the Department of Energy's Advanced Simulation and Computing (ASC) program through the University of Utah's Center for Simulation of Accidental Fires and Explosions (C-SAFE). The ARCHES component was initially designed for predicting the heat-flux from large buoyant pool fires with potential hazards immersed in or near a pool fire of transportation fuel. Since then, this component has been extended to solve many industrially relevant problems such as industrial flares, oxy-coal combustion processes, and fuel gasification.

The second simulation tool, Star-CCM+, is a commercial, integrated software environment developed by CD-adapco that covers the entire engineering simulation process. The process can begin with CAD preparation, meshing, and model setup, and continue through running simulations, post-processing, and visualizing the results. This allows for faster development and design turnaround, especially for industrial applications. Star-CCM+ was built from the ground up to provide scalable parallel performance. Furthermore, it is supported not only on industry-standard Linux HPC platforms but also on Windows HPC, allowing us to explore computational demands on both Linux and Windows-based HPC clusters.



P.J. Smith, J. Thornock, D. Hinckley, M. Hradisky. “Large Eddy Simulation Of Industrial Flares,” In Proceedings of Supercomputing 2011 Companion, pp. 137--138. 2011.
DOI: 10.1145/2148600.2148672

ABSTRACT

At the Institute for Clean and Secure Energy at the University of Utah, we are focused on education through interdisciplinary research on high-temperature fuel-utilization processes for energy generation and the associated health, environmental, policy, and performance issues. We also work closely with government agencies and private industry to promote rapid deployment of new technologies through the use of high-performance computational tools.

Industrial flare simulation can provide important information on combustion efficiency, pollutant emissions, and operational parameter sensitivities for design or operation that cannot be measured. These simulations provide information that may help design or operate flares so as to reduce or eliminate harmful pollutants and increase combustion efficiency.

Fires and flares have been particularly difficult to simulate with traditional computational fluid dynamics (CFD) simulation tools that are based on Reynolds-Averaged Navier-Stokes (RANS) approaches. The large-scale mixing due to vortical coherent structures in these flames is not readily reduced to steady-state CFD calculations with RANS.

Simulation of combustion using Large Eddy Simulation (LES) has made it possible to more accurately simulate the complex combustion seen in these flares. Resolution of all length and time scales is not possible even on the largest supercomputers. LES is a numerical technique that resolves the large length and time scales while using models for the more homogeneous smaller scales. By using LES, industrial flare simulations capture the combustion dynamics of the puffing created by buoyancy.
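
Formally, LES applies a spatial low-pass filter to the governing equations: each field is decomposed as u = \bar{u} + u', the filtered equations are solved for the resolved field \bar{u}, and the unclosed subgrid-scale stress

    \[
    \tau_{ij} = \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j
    \]

carries the effect of the modeled small scales. (This is standard LES background, not material specific to this paper.)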

All of our simulations were performed using either the University of Utah's ARCHES simulation tool or the commercially available Star-CCM+ software. ARCHES is a finite-volume Large Eddy Simulation code built within the Uintah framework, which is a set of software components and libraries that facilitate the solution of partial differential equations on structured adaptive mesh refinement grids using thousands of processors. Uintah is the product of a ten-year partnership with the Department of Energy's Advanced Simulation and Computing (ASC) program through the University of Utah's Center for Simulation of Accidental Fires and Explosions (C-SAFE). The ARCHES component was initially designed for predicting the heat-flux from large buoyant pool fires with potential hazards immersed in or near a pool fire of transportation fuel. Since then, this component has been extended to solve many industrially relevant problems such as industrial flares, oxy-coal combustion processes, and fuel gasification.

In this report we showcase selected results that help us visualize and understand the physical processes occurring in the simulated systems.

Most of the simulations were completed on the University of Utah's Updraft and Ember high-performance computing clusters, which are managed by the Center for High Performance Computing. High-performance computational tools are essential to successfully addressing all aspects of our research, and we promote their use beyond the research environment by working directly with our industry partners.



J. Sutherland, T. Saad. “The Discrete Operator Approach to the Numerical Solution of Partial Differential Equations,” In Proceedings of the 20th AIAA Computational Fluid Dynamics Conference, Honolulu, Hawaii, AIAA-2011-3377, 2011.
DOI: 10.2514/6.2011-3377



J. Sutherland, T. Saad. “A Novel Computational Framework for Reactive Flow and Multiphysics Simulations,” Note: AIChE Annual Meeting, 2011.



L.T. Tran, M. Berzins. “IMPICE Method for Compressible Flow Problems in Uintah,” In International Journal for Numerical Methods in Fluids, Note: Published online 20 July 2011.


2010


M. Berzins, J. Luitjens, Q. Meng, T. Harman, C.A. Wight, J.R. Peterson. “Uintah: A Scalable Framework for Hazard Analysis,” In Proceedings of the TeraGrid 2010 Conference, TG '10, published online, July, 2010. Note: Awarded Best Paper in the Science Track.
ISBN: 978-1-60558-818-6
DOI: 10.1145/1838574.1838577

ABSTRACT

The Uintah Software system was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids for large-scale, long-running, data-intensive problems. Uintah uses a novel asynchronous task-based approach with fully automated load balancing. The application of Uintah to a petascale problem in hazard analysis arising from "sympathetic" explosions, in which the collective interaction of a large ensemble of explosives results in dramatically increased explosion violence, is considered. The advances in scalability and combustion modeling needed to begin to solve this problem are discussed and illustrated by prototypical computational results.

Keywords: Uintah, csafe