Modernizing SW4 for massively parallel time-domain simulations of earthquake ground motions in 3D earth models increases resolution and provides ground motion estimates for critical infrastructure risk evaluations. Simulations of ground motions from large (M ≥ 7.0) earthquakes require domains on the order of 100 to 500 km and spatial granularity on the order of 1 to 5 m, resulting in hundreds of billions of grid points. Surface-focused structured mesh refinement (SMR) keeps the number of grid points per wavelength more nearly constant in typical Earth models, where wavespeeds increase with depth. In fact, SMR allows simulations to double the frequency content relative to a fixed-grid calculation on a given resource. The authors report improvements to the SW4 algorithm developed while porting the code to the Cori Phase 2 (Intel Xeon Phi) systems at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Investigations of the performance of the innermost loop of the calculations found that reorganizing the order of operations can improve performance for massive problems.
Containers are an emerging technology that holds promise for improving productivity and code portability in scientific computing. The authors examine Linux container technology for the distribution of a nontrivial scientific computing software stack and its execution on a spectrum of platforms from laptop computers through high-performance computing systems. For Python code run on large parallel computers, the runtime is reduced inside a container due to faster library imports. The software distribution approach and data that the authors present will help developers and users decide whether container technology is appropriate for them. The article also offers guidance to vendors of HPC systems that rely on proprietary libraries for performance on how to make containers work seamlessly and without performance penalty.
Columnist Charles Day examines the 1970s and some of the popular findings related to computational physics.
The goal of the CREATE program is to develop and deploy physics-based computational engineering tools that can be used to develop virtual prototypes of ships, air vehicles, ground vehicles, and radio frequency antennas to accurately predict their performance in support of the US Department of Defense acquisition process, DoD 5000. The purpose of this article is to describe the approach taken to address the verification and validation of the CREATE software products. The approach is based on the adoption of a set of practices aligned with the recommendations of the National Academy of Sciences to promote a test-driven development culture.
CREATE: Acceptance and Adoption of Virtual Prototyping across the Defense R&D and Acquisition Communities
With an annual budget of nearly US$600 billion, the US Department of Defense (DoD) is tasked with protecting the US and its allies and interests abroad against potential adversaries. It must accomplish these tasks in a globalized and highly interconnected world where the pace of technological change is faster than the current time-consuming DoD acquisition system can keep up with. Moving forward, the DoD must change these time-consuming paradigms and accelerate the pace of innovation. One way to do this is to shift to a virtual prototyping environment where the benefits of high-performance supercomputing and complex physics-based engineering software can expand the decision space and enable the DoD to field better systems faster, cheaper, and at less risk than previous methods. This is the goal of the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program. This issue of CiSE provides the third installment of articles relating to the history, formation, and ongoing work of the High Performance Computing Modernization Program (HPCMP) CREATE program.
Silicon Valley is home to many of the most innovative high-technology industries that have ever existed. Their level of innovation gives them a competitive economic advantage that sustains a significant portion of the US economy. While there have been dozens of attempts in the US and abroad to replicate this success, few have been very successful. A relatively new approach to product development is emerging: computational engineering. It is based on the use of computing, especially high-performance computing, to design, construct, and analyze virtual prototypes of new products.
One of the problems with which researchers in different domains, such as chemistry and fluid dynamics, are concerned is the optimization of coal combustion processes to increase the efficiency, safety, and cleanliness of such systems. The coal combustion process is reproduced by using complex simulations that normally produce highly complex data comprising many characteristics. Such datasets are employed by scientists to validate their hypotheses or to formulate new ones, yet the data analysis is mostly restricted to time-consuming workflows capable of exploring only a portion of the data's full spectrum. To support the experts, interactive visualization and analysis tools have been developed by different suppliers to manage and understand multivariate data.
Teaching Scenario-Based Programming: An Additional Paradigm for the High School Computer Science Curriculum, Part 2
This is the second part of a two-part series that describes a pilot programming course in which high school students majoring in computer science were introduced to the visual, scenario-based programming language of live sequence charts. The main rationale for the course was that computer science students should be exposed to at least two very different programming paradigms and that LSCs, with their unique characteristics, can be a good vehicle for that. Part 1 (see the previous issue) focused on the pedagogic rationale of the pilot, on introducing LSC, and on the structure of the course. Part 2 centers on the evaluation of the pilot’s results.
The need for faster, more efficient algorithms is an important aspect of scientific computing. Generally, scientists are only exposed to computational issues that arise in their field. Thus, collaboration between a numerical analyst and a scientist is becoming necessary for scientific computing. The purpose of this case study is to expose computer scientists to processes that an astronomer would use to obtain useful results from raw data.
Michael Jay Schillaci reviews “Computational Modeling and Visualization of Physical Systems with Python” by Jay Wang, declaring it deserving of becoming a standard in undergraduate and graduate curricula where scientific computing plays a role.
A Scalable and Extensible Computational Fluid Dynamics Software Framework for Ship Hydrodynamics Applications: NavyFOAM
The main challenge facing simulation-based hydrodynamic design of naval ships comes from the complexity of the salient physics involved around ships, which is further compounded by the multidisciplinary nature of ship applications. Simulation of the flow physics using "first principles" is computationally very expensive and time-consuming. Other challenges largely pertain to software engineering, ranging from software architecture to verification and validation (V&V) and quality assurance. This article presents a computational fluid dynamics (CFD) framework called NavyFOAM that has been built around OpenFOAM, an open source CFD library written in C++ that heavily draws upon object-oriented programming. In the article, the design philosophy, features, and capabilities of the software framework, and the computational approaches underlying NavyFOAM, are described, followed by a description of the V&V effort and application examples selected from the Navy's recent R&D and acquisition programs.
The development of high-fidelity, physics-based software for analyzing ground vehicle concept designs and the mobility performance of wheeled and tracked ground vehicles is increasingly important. The CREATE-GV toolset's three modules are integrated to provide a complete performance evaluation of vehicle concept designs.
Changing from physical prototype–based product design to computational (virtual) prototype–based product design requires more than leading edge computational engineering codes, joint R&D efforts with a Department of Energy Laboratory, brilliant and dedicated researchers, and meticulous verification, validation, and uncertainty quantification. For the most difficult engineering design problems, most of these are necessary but not sufficient. In every case, the designers, engineers, scientists, and their management must also believe that the benefits in time savings and more creative new products justify the cost and risk of conversion. In the extreme case, an impending crisis may force conversion even in the face of strong resistance. The Goodyear Tire & Rubber Company’s experience illustrates a successful, crisis-driven transition to virtual prototype–based product design.
This article presents recent progress in understanding solar wind–Mars interaction using a sophisticated global magnetohydrodynamic (MHD) model. Mars has localized crustal magnetic fields, so the solar wind plasma flow interacts directly with the Mars atmosphere/ionosphere system. Such an interaction generates an induced current in the ionosphere, modifies the magnetic field environment around Mars, and more importantly, causes the erosion of the Mars atmosphere. The nonuniformly distributed crustal magnetic field also plays an important role in the interaction process, which is modulated by planetary rotation. Recent advances in computing power allow the inclusion of the continuous crustal field rotation in the simulation with a time-dependent MHD model. Model results have been validated with observations from previous and ongoing Mars missions. The validated time-dependent MHD model is useful in quantifying the variation of ion loss rates with planet rotation and the internal response time scale of the Martian ionosphere.
IEEE VIS 2016 brought together researchers and practitioners to discuss the latest developments in visualization and visual analytics research and their applications. The authors describe the highlights of the 2016 event.
Semisupervised dimension reduction (DR) techniques using virtual label regression have attracted considerable attention, but they suffer from two restrictions: the number of discriminant directions available is constrained to the number of classes, and the directions are nonorthogonal. Traditional methods easily address these problems. However, an interesting problem is how to address these restrictions in label regression models. To do this, the authors developed Recursive Orthogonal Label Regression (ROLR), a regression framework for semisupervised dimension reduction that uses label propagation and label regression in a recursive procedure. Here, they illustrate the formulation of ROLR using semisupervised regression encoding. ROLR provides a unified view for understanding and explaining a large family of label regression techniques. Experimental results show the approach's feasibility and effectiveness.
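The abstract does not spell out ROLR's equations, but the recursive idea it describes — alternately fitting a label regression direction and restricting the search to the orthogonal complement of directions already found — can be sketched as follows. This is an illustrative NumPy sketch under assumed details (ridge-regularized least squares for the regression step, deflation plus Gram-Schmidt for orthogonality), not the authors' actual formulation:

```python
import numpy as np

def recursive_orthogonal_regression(X, Y, n_dirs, reg=1e-3):
    """Illustrative sketch of recursive orthogonal label regression.

    X : (n, d) data matrix; Y : (n, c) (virtual) label indicator matrix.
    Extracts n_dirs mutually orthogonal directions, which may exceed the
    number of classes c because the data is deflated at each recursion.
    """
    n, d = X.shape
    W = np.zeros((d, n_dirs))
    Xr = X.copy()
    for k in range(n_dirs):
        # Regression step: ridge-regularized fit of labels on deflated data
        # (cycling over label columns once k exceeds the class count).
        A = Xr.T @ Xr + reg * np.eye(d)
        w = np.linalg.solve(A, Xr.T @ Y[:, k % Y.shape[1]])
        # Explicitly orthogonalize against earlier directions (Gram-Schmidt).
        for j in range(k):
            w -= (W[:, j] @ w) * W[:, j]
        w /= np.linalg.norm(w)
        W[:, k] = w
        # Deflation step: remove the data's component along w, so the next
        # direction is sought in the orthogonal complement.
        Xr = Xr - np.outer(Xr @ w, w)
    return W
```

The deflation step is what lifts the number-of-classes restriction mentioned in the abstract: each recursion reuses the label information on data stripped of the directions already extracted, so the returned directions are orthonormal by construction.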
Columnist Charles Day describes a new dating app called Blur and how it stacks up to the human brain in matching potential mates.
The articles in this special issue span the development of proxy applications to rapidly explore algorithmic and programming-model changes; the co-design of hardware and software features between vendors and application teams well in advance of delivery; and early access to hardware and software stacks through dedicated hands-on activities with vendors. Together, they provide others with a starting point for their own application modernization roadmaps.
Associate EIC Barry Schneider looks at the history of computational physics and chemistry.
Blockchain is a new technology, based on cryptographic hashing, that is at the foundation of platforms for trading cryptocurrencies and executing smart contracts. This article reviews the basic ideas of the technology and provides a sample minimalist implementation in Python.
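The article's own implementation is not reproduced in this abstract; the following is a minimal sketch of the core idea it describes — blocks chained together by each block recording the hash of its predecessor, so that tampering with any block invalidates the rest of the chain. All function names here are illustrative, not the article's:

```python
import hashlib
import json
import time

def hash_block(block):
    """SHA-256 digest of a block's canonical (sorted-key) JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that records the hash of the current chain tip."""
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(),
                  "data": data,
                  "prev_hash": prev_hash})

def is_valid(chain):
    """A chain is valid if every block's prev_hash matches its predecessor."""
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1])
               for i in range(1, len(chain)))
```

Because each `prev_hash` depends on the full contents of the preceding block, altering any earlier block's data changes its hash and breaks every link after it — the tamper-evidence property that cryptocurrency platforms build on.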
Physics, medicine, astronomy — these and other hard sciences share a common need for efficient algorithms, system software, and computer architecture to address large computational problems. And yet, useful advances in computational techniques that could benefit many researchers are rarely shared. To meet that need, Computing in Science & Engineering (CiSE) presents scientific and computational contributions in a clear and accessible format.