What We've Learned about Assessing Hands-On Science.
ERIC Educational Resources Information Center
Shavelson, Richard J.; Baxter, Gail P.
1992-01-01
A recent study compared hands-on scientific inquiry assessment to assessments involving lab notebooks, computer simulations, short-answer paper-and-pencil problems, and multiple-choice questions. Creating high quality performance assessments is a costly, time-consuming process requiring considerable scientific and technological know-how. Improved…
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostadin, Damevski
A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offer functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability into a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
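As a rough illustration of what such automatic virtual-cluster creation can look like, here is a generic boto3 sketch, not the authors' SCC toolset; the AMI ID, instance type, and key name are placeholders:

```python
# Hypothetical sketch of provisioning a small virtual cluster on EC2,
# loosely in the spirit of the SCC toolset described above. The AMI ID,
# instance type, and key name are placeholders, not values from the paper.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_virtual_cluster(n_workers, ami="ami-0123456789abcdef0",
                           instance_type="c5.xlarge", key_name="scc-key"):
    """Start one head node and n_workers compute nodes."""
    resp = ec2.run_instances(
        ImageId=ami, InstanceType=instance_type, KeyName=key_name,
        MinCount=n_workers + 1, MaxCount=n_workers + 1,
    )
    ids = [inst["InstanceId"] for inst in resp["Instances"]]
    # Block until all nodes are running before building an MPI hostfile.
    ec2.get_waiter("instance_running").wait(InstanceIds=ids)
    desc = ec2.describe_instances(InstanceIds=ids)
    hosts = [i["PrivateIpAddress"]
             for r in desc["Reservations"] for i in r["Instances"]]
    return ids[0], ids[1:], hosts  # head node, workers, hostfile entries
```

A real toolset would additionally configure networking, shared storage, and the MPI hostfile from the returned addresses.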
Component Technology for High-Performance Scientific Simulation Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epperly, T; Kohn, S; Kumfert, G
2000-11-09
We are developing scientific software component technology to manage the complexity of modern, parallel simulation software and increase the interoperability and re-use of scientific software packages. In this paper, we describe a language interoperability tool named Babel that enables the creation and distribution of language-independent software libraries using interface definition language (IDL) techniques. We have created a scientific IDL that focuses on the unique interface description needs of scientific codes, such as complex numbers, dense multidimensional arrays, complicated data types, and parallelism. Preliminary results indicate that in addition to language interoperability, this approach provides useful tools for thinking about the design of modern object-oriented scientific software libraries. Finally, we also describe a web-based component repository called Alexandria that facilitates the distribution, documentation, and re-use of scientific components and libraries.
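The scientific interface concerns Babel addresses (complex numbers, dense multidimensional arrays) can be illustrated loosely in Python; note this is a conceptual analogue only, not Babel's actual SIDL syntax or its generated bindings:

```python
# A loose Python analogue of the kind of scientific interface Babel's IDL
# targets (complex numbers, dense multidimensional arrays). This
# illustrates the concept of a language-neutral contract; it is not SIDL.
from abc import ABC, abstractmethod
import numpy as np

class LinearSolver(ABC):
    """Contract a Fortran, C++, or Python implementation could satisfy."""

    @abstractmethod
    def solve(self, matrix: np.ndarray, rhs: np.ndarray) -> np.ndarray:
        """Solve A x = b for dense, possibly complex-valued A."""

class NumpySolver(LinearSolver):
    def solve(self, matrix, rhs):
        return np.linalg.solve(matrix, rhs)

x = NumpySolver().solve(np.eye(3, dtype=complex), np.ones(3, dtype=complex))
```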
Scientific Visualization in High Speed Network Environments
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kutler, Paul (Technical Monitor)
1997-01-01
In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is presented. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
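A discrete event simulation of a batch queue of this kind fits in a few dozen lines; the sketch below (using the SimPy library, with invented job sizes, runtimes, and arrival rates rather than NCCS workload data) shows the basic structure such a tool builds on:

```python
# Minimal discrete event model of a batch queue, in the spirit of the
# NCCS simulation tool described above. Job mix, runtimes, and arrival
# rates are invented for illustration. Requires the simpy package.
import random
import simpy

NUM_CPUS = 1024

def job(env, name, cpus, runtime, pool, log):
    t_submit = env.now
    yield pool.get(cpus)                     # wait until enough CPUs free
    log.append((name, env.now - t_submit))   # record queue wait
    yield env.timeout(runtime)               # run
    yield pool.put(cpus)                     # release CPUs

def workload(env, pool, log):
    for i in range(200):
        cpus = random.choice([64, 128, 256, 512])
        runtime = random.expovariate(1 / 24.0)          # mean 24 h
        env.process(job(env, f"job{i}", cpus, runtime, pool, log))
        yield env.timeout(random.expovariate(1 / 2.0))  # mean 2 h arrivals

env = simpy.Environment()
pool = simpy.Container(env, capacity=NUM_CPUS, init=NUM_CPUS)
log = []
env.process(workload(env, pool, log))
env.run()
print("mean queue wait (h):", sum(w for _, w in log) / len(log))
```

Alternative queue structures or allocation policies would be modeled by changing how jobs draw from the CPU pool.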
NASA Astrophysics Data System (ADS)
Carmack, Gay Lynn Dickinson
2000-10-01
This two-part quasi-experimental repeated measures study examined whether computer-simulated experiments (CSE) have an effect on the problem solving skills of high school biology students in a school-within-a-school magnet program. Specifically, the study identified episodes in a simulation sequence where problem solving skills improved. In the Fall academic semester, experimental group students (n = 30) were exposed to two simulations: CaseIt! and EVOLVE!. Control group students participated in an internet research project and a paper Hardy-Weinberg activity. In the Spring academic semester, experimental group students were exposed to three simulations: Genetics Construction Kit, CaseIt!, and EVOLVE!. Spring control group students participated in a Drosophila lab, an internet research project, and Advanced Placement lab 8. Results indicate that the Fall and Spring experimental groups experienced significant gains in scientific problem solving after the second simulation in the sequence. These gains were independent of the simulation sequence and of the amount of time spent on the simulations, and they were significantly greater than control group scores in the Fall. The Spring control group significantly outscored all other study groups on both pretest measures. Even so, the Spring experimental group's problem solving performance caught up to the Spring control group's performance after the third simulation. There were no significant differences between control and experimental groups on content achievement. Results indicate that CSE is as effective as traditional laboratories in promoting scientific problem solving and that CSE is a useful tool for improving students' scientific problem solving skills. Moreover, retention of problem solving skills is enhanced by utilizing more than one simulation.
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
Tools for 3D scientific visualization in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a Cray-2, and the high-speed channel is a HyperChannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively while it runs. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as other hardware for digital video and film recording.
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC
NASA Astrophysics Data System (ADS)
Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan
2016-04-01
The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support for the wider geoscientific community; and (iv) the industry and public sectors, via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan, and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
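The first two pipeline stages, per-timestep feature detection followed by cross-timestep integration, can be sketched as follows; the array names, threshold, and three-step persistence criterion are hypothetical, not the authors' algorithms:

```python
# Schematic of a per-timestep-then-temporal analysis pipeline: select
# high-momentum particles in each timestep, then link selections across
# timesteps by particle identifier. Data layout and thresholds invented.
import numpy as np

def find_bunch(px, momentum_threshold):
    """Indices of particles above a longitudinal momentum threshold."""
    return np.nonzero(px > momentum_threshold)[0]

def trace_beam(timesteps, momentum_threshold=1e9):
    """timesteps: list of dicts with per-particle arrays 'id', 'px', 'x'."""
    history = {}
    for t, step in enumerate(timesteps):
        for idx in find_bunch(step["px"], momentum_threshold):
            pid = step["id"][idx]
            # Integrate features across timesteps via particle identity.
            history.setdefault(pid, []).append(
                (t, step["x"][idx], step["px"][idx]))
    # Keep only particles that stay accelerated over several steps.
    return {pid: traj for pid, traj in history.items() if len(traj) >= 3}
```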
Department of Defense In-House RDT and E Activities: Management Analysis Report for Fiscal Year 1993
1994-11-01
A worldwide unique lab because it houses a high-speed modeling and simulation system, a prototype... E Division, San Diego, CA: High Performance Computing Laboratory providing a wide range of advanced computer systems for the scientific investigation... Machines CM-200 and a 256-node Thinking Machines CM-5. The CM-5 is in a very large memory, high-performance (32 Gbytes, >40 GFlop) configuration,
On the energy footprint of I/O management in Exascale HPC systems
Dorier, Matthieu; Yildiz, Orcun; Ibrahim, Shadi; ...
2016-03-21
The advent of unprecedentedly scalable yet energy-hungry Exascale supercomputers poses a major challenge in sustaining a high performance-per-watt ratio. With I/O management acquiring a crucial role in supporting scientific simulations, various I/O management approaches have been proposed to achieve high performance and scalability. However, the details of how these approaches affect energy consumption have not yet been studied. Therefore, this paper aims to explore how much energy a supercomputer consumes while running scientific simulations when adopting various I/O management approaches. In particular, we closely examine three radically different I/O schemes: time partitioning, dedicated cores, and dedicated nodes. To accomplish this, we implement the three approaches within the Damaris I/O middleware and perform extensive experiments with one of the target HPC applications of the Blue Waters sustained-petaflop supercomputer project: the CM1 atmospheric model. Our experimental results obtained on the French Grid'5000 platform highlight the differences among these three approaches and illustrate in which way various configurations of the application and of the system can impact performance and energy consumption. Moreover, we propose and validate a mathematical model that estimates the energy consumption of an HPC simulation under different I/O approaches. This proposed model gives hints to pre-select the most energy-efficient I/O approach for a particular simulation on a particular HPC system and therefore provides a step towards energy-efficient HPC simulations in Exascale systems. To the best of our knowledge, our work provides the first in-depth look into the energy-performance tradeoffs of I/O management approaches.
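To make the idea of such an energy model concrete, here is a deliberately crude sketch of its general shape; this is not the authors' actual model, and every number below is invented:

```python
# Toy energy model for a simulation with periodic I/O: power drawn in
# compute and write phases integrated over the run. Not the Damaris
# paper's model; all parameters are illustrative placeholders.
def energy_joules(n_iters, t_compute, t_write, p_compute_w, p_write_w,
                  overlap=0.0):
    """overlap: fraction of write time hidden behind computation
    (dedicated-core/node schemes overlap I/O; time partitioning does not).
    """
    visible_write = t_write * (1.0 - overlap)
    wall_time = n_iters * (t_compute + visible_write)
    # The node burns write-phase power even when writes are overlapped.
    energy = n_iters * (t_compute * p_compute_w + t_write * p_write_w)
    return energy, wall_time

e, t = energy_joules(n_iters=100, t_compute=30.0, t_write=5.0,
                     p_compute_w=250.0, p_write_w=180.0, overlap=0.8)
print(f"estimated energy {e/1e6:.2f} MJ over {t/3600:.2f} h")
```

The overlap parameter is where the three I/O schemes would differ: dedicated cores and dedicated nodes hide most write time behind computation, while time partitioning leaves it exposed in the wall time.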
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, and metamaterials. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework allows us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested with numerical results assessing the accuracy and efficiency of benchmark problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
Protein Simulation Data in the Relational Model.
Simms, Andrew M; Daggett, Valerie
2012-10-01
High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
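A star-schema design of this kind can be miniaturized for illustration; the sketch below uses SQLite rather than SQL Server, and the table and column names are invented, not those of the authors' warehouse:

```python
# Miniature star schema for trajectory data, in the spirit of the
# dimensional design described above. Uses the stdlib sqlite3 module;
# the real warehouse targets SQL Server and is far richer.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_simulation (
    sim_id      INTEGER PRIMARY KEY,
    protein     TEXT NOT NULL,
    temperature REAL NOT NULL        -- Kelvin
);
CREATE TABLE fact_frame (
    sim_id   INTEGER REFERENCES dim_simulation(sim_id),
    frame    INTEGER NOT NULL,       -- time step index
    time_ns  REAL NOT NULL,
    rmsd     REAL,                   -- vs. starting structure, Angstroms
    rg       REAL,                   -- radius of gyration
    PRIMARY KEY (sim_id, frame)
);
""")
con.execute("INSERT INTO dim_simulation VALUES (1, '1ENH', 298.0)")
con.executemany("INSERT INTO fact_frame VALUES (1, ?, ?, ?, ?)",
                [(i, i * 0.02, 1.0 + 0.01 * i, 11.0) for i in range(5)])
# Typical analytic query: average RMSD per simulation.
print(con.execute("""SELECT s.protein, AVG(f.rmsd)
                     FROM fact_frame f
                     JOIN dim_simulation s USING (sim_id)
                     GROUP BY s.protein""").fetchall())
```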
NASA Astrophysics Data System (ADS)
Kaplinger, Brian Douglas
For the past few decades, both the scientific community and the general public have become increasingly aware that the Earth sits in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows flexible, target-dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.
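The core SPH operation such a package parallelizes, kernel-weighted summation over neighboring particles, looks like this in reference form (toy data; a GPU version replaces the O(N^2) pairwise evaluation with per-particle threads and neighbor lists):

```python
# Minimal SPH density estimate via kernel summation, the operation a
# GPU implementation parallelizes per particle. The cubic spline kernel
# is the standard Monaghan form; the particle data are toy values.
import numpy as np

def w_cubic_spline(r, h):
    """Standard 3D cubic spline kernel with smoothing length h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def density(positions, masses, h):
    """O(N^2) reference version; a GPU code assigns one thread per i."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * w_cubic_spline(r, h)).sum(axis=1)

pos = np.random.rand(256, 3)
rho = density(pos, np.full(256, 1.0 / 256), h=0.1)
```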
Combining high performance simulation, data acquisition, and graphics display computers
NASA Technical Reports Server (NTRS)
Hickman, Robert J.
1989-01-01
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high-speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high-performance FORTRAN program processor to support the complex by generating, from programs coded in FORTRAN, the numerous large files required for the real-time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 is also performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
NASA Astrophysics Data System (ADS)
Donà, G.; Faletra, M.
2015-09-01
This paper presents the TT&C performance simulator toolkit developed internally at Thales Alenia Space Italia (TAS-I) to support the design of TT&C subsystems for space exploration and scientific satellites. The simulator has a modular architecture and has been designed with a model-based approach using standard engineering tools such as MATLAB/Simulink and mission analysis tools (e.g. STK). The simulator is easily reconfigurable to fit different types of satellites, different mission requirements, and different scenario parameters. This paper provides a brief description of the simulator architecture together with two examples of applications used to demonstrate some of the simulator's capabilities.
NASA Astrophysics Data System (ADS)
Schulthess, Thomas C.
2013-03-01
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
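The underlying idea, keeping multiple physical layouts so that different access patterns each find a friendly organization, can be shown in miniature; this mimics the concept only and is neither PARLO's algorithm nor the ADIOS API:

```python
# Toy illustration of layout optimization for heterogeneous access
# patterns: one 3D field stored in two physical orders, one favoring
# whole-volume reads, one favoring spatial subvolume queries.
import numpy as np

field = np.random.rand(64, 64, 64)        # simulation output (z, y, x)

# Layout A: contiguous row-major blob, ideal for full-volume restarts.
layout_a = np.ascontiguousarray(field)

# Layout B: 16^3 chunks, so a subvolume query touches few chunks.
def to_chunks(a, c=16):
    z, y, x = a.shape
    return {(i, j, k): np.ascontiguousarray(a[i:i+c, j:j+c, k:k+c])
            for i in range(0, z, c) for j in range(0, y, c)
            for k in range(0, x, c)}

chunks = to_chunks(field)
# A spatio-temporal constraint query reads only the overlapping chunks.
subvol = chunks[(0, 0, 0)][:8, :8, :8]
```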
Van Dongen, Hans P A; Caldwell, John A; Caldwell, J Lynn
2006-05-01
Laboratory research has revealed considerable systematic variability in the degree to which individuals' alertness and performance are affected by sleep deprivation. However, little is known about whether or not different populations exhibit similar levels of individual variability. In the present study, we examined individual variability in performance impairment due to sleep loss in a highly select population of military jet pilots. Ten active-duty F-117 pilots were deprived of sleep for 38 h and studied repeatedly in a high-fidelity flight simulator. Data were analyzed with a mixed-model ANOVA to quantify individual variability. Statistically significant, systematic individual differences in the effects of sleep deprivation were observed, even when baseline differences were accounted for. The findings suggest that highly select populations may exhibit individual differences in vulnerability to performance impairment from sleep loss just as the general population does. Thus, the scientific and operational communities' reliance on group data as opposed to individual data may entail substantial misestimation of the impact of job-related stressors on safety and performance.
NASA Astrophysics Data System (ADS)
Bird, Robert; Nystrom, David; Albright, Brian
2017-10-01
The ability of scientific simulations to deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
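The vectorization transformation at issue can be illustrated in miniature: a per-particle loop expressed as whole-array operations, the same data-parallel structure a compiler's auto-vectorizer extracts from the equivalent C loop (toy fields and constants, not VPIC's actual particle push):

```python
# Sketch of the data-parallel structure of a PIC particle push: the
# scalar per-particle loop rewritten as whole-array operations. The
# field, time step, and charge-to-mass ratio are toy values.
import numpy as np

def push(x, v, e_at, dt, qm):
    """Leapfrog update for all particles at once (electric force only)."""
    v += qm * e_at(x) * dt      # each array op maps to one SIMD stream
    x += v * dt
    return x, v

n = 1_000_000
x = np.random.rand(n)
v = np.zeros(n)
x, v = push(x, v, lambda xs: np.sin(2 * np.pi * xs), dt=1e-3, qm=-1.0)
```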
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei
Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes, and now, with the rise of multimodal acquisition systems and the associated processing capability, the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).
Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems
Wadhwa, Bharti; Byna, Suren; Butt, Ali R.
2018-04-17
Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieves up to 7X I/O performance improvement for scientific data.
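A stripped-down sketch of such an object abstraction with tier-aware placement follows; the tier paths and the hot/cold policy are invented for illustration and do not reflect the paper's implementation:

```python
# Toy object abstraction over a two-tier storage hierarchy (fast
# burst-buffer path, slow parallel-file-system path). Paths and the
# placement policy are invented for the example.
import os
import pickle

TIERS = {"burst": "/tmp/burst_buffer", "pfs": "/tmp/parallel_fs"}

class DataObject:
    def __init__(self, name, payload):
        self.name, self.payload = name, payload

    def put(self, hot=False):
        """Hot (frequently re-read) objects land on the fast tier."""
        tier = "burst" if hot else "pfs"
        os.makedirs(TIERS[tier], exist_ok=True)
        with open(os.path.join(TIERS[tier], self.name), "wb") as f:
            pickle.dump(self.payload, f)
        return tier

obj = DataObject("vpic_fields_step_0042", {"ex": [0.0] * 1024})
print(obj.put(hot=True))   # -> 'burst'
```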
PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations
NASA Astrophysics Data System (ADS)
Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.
2017-12-01
Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use of ultra-low-power mixed-signal unconventional computational elements developed by Johns Hopkins University (JHU) for high-performance scientific computing, and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.
Modeling and analysis of hybrid pixel detector deficiencies for scientific applications
NASA Astrophysics Data System (ADS)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman
2015-08-01
Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump-bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with the detector equivalent capacitor, is often used; even then, spice-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations, such as transient, noise, Monte Carlo, inter-pixel effects, etc., of the entire array need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language; complex mathematical functions or detailed data can be saved to text files for further top-level digital simulations. Parasitically aware digital timing is extracted in standard delay format (SDF) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, and jitter due to clock distribution can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
gadfly: A pandas-based Framework for Analyzing GADGET Simulation Data
NASA Astrophysics Data System (ADS)
Hummel, Jacob A.
2016-11-01
We present the first public release (v0.1) of the open-source gadget Dataframe Library: gadfly. The aim of this package is to leverage the capabilities of the broader python scientific computing ecosystem by providing tools for analyzing simulation data from the astrophysical simulation codes gadget and gizmo using pandas, a thoroughly documented, open-source library providing high-performance, easy-to-use data structures that is quickly becoming the standard for data analysis in python. Gadfly is a framework for analyzing particle-based simulation data stored in the HDF5 format using pandas DataFrames. The package enables efficient memory management, includes utilities for unit handling, coordinate transformations, and parallel batch processing, and provides highly optimized routines for visualizing smoothed-particle hydrodynamics data sets.
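The basic pattern gadfly packages up, reading GADGET-format HDF5 particle data into a pandas DataFrame, can be sketched directly with h5py; this assumes the conventional PartType0 group layout, and gadfly's own API differs, adding unit handling, coordinate transforms, and batch processing:

```python
# Reading a GADGET-format HDF5 snapshot into a pandas DataFrame, the
# pattern gadfly builds on. Uses h5py directly; dataset names follow
# the standard GADGET HDF5 snapshot layout.
import h5py
import pandas as pd

def load_gas(snapshot_path):
    with h5py.File(snapshot_path, "r") as f:
        gas = f["PartType0"]
        df = pd.DataFrame(gas["Coordinates"][...], columns=["x", "y", "z"])
        df["density"] = gas["Density"][...]
        df["u"] = gas["InternalEnergy"][...]
    return df

df = load_gas("snapshot_000.hdf5")
print(df.nlargest(10, "density"))   # ten densest SPH particles
```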
Magnetic field simulation and shimming analysis of 3.0T superconducting MRI system
NASA Astrophysics Data System (ADS)
Yue, Z. K.; Liu, Z. Z.; Tang, G. S.; Zhang, X. C.; Duan, L. J.; Liu, W. C.
2018-04-01
The 3.0T superconducting magnetic resonance imaging (MRI) system has become the mainstream of modern clinical MRI because of its high field intensity and high degree of uniformity and stability. It has broad prospects in scientific research and other fields. We analyze the principles of magnet design in this paper. We also perform the magnetic field simulation and shimming analysis of the first 3.0T/850 superconducting MRI system in the world using the Ansoft Maxwell simulation software. We guide the production and optimization of the prototype based on the results of the simulation analysis; thus the magnetic field strength, uniformity, and stability of the prototype are guided toward the expected targets.
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. The three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
Automating NEURON Simulation Deployment in Cloud Resources.
Stockton, David B; Santamaria, Fidel
2017-01-01
Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
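The "simple common interface" idea reduces to a dispatch abstraction like the following; the class and method names are invented for illustration (NeuroManager itself is MATLAB-based and its real interface differs):

```python
# One submit() call routed to interchangeable backends, in the spirit
# of recruiting local servers, HPC, and clouds through one interface.
# Names are hypothetical, not NeuroManager's actual API.
from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def submit(self, sim_script: str) -> str: ...

class LocalServer(Backend):
    def submit(self, sim_script):
        return f"local: ran {sim_script}"

class CloudInstance(Backend):
    def __init__(self, provider):
        self.provider = provider   # e.g. "ec2", "rackspace", "chameleon"
    def submit(self, sim_script):
        return f"{self.provider}: provisioned VM, ran {sim_script}"

# The user recruits heterogeneous resources through one loop.
pool = [LocalServer(), CloudInstance("ec2"), CloudInstance("chameleon")]
for backend, sim in zip(pool, ["sim_a.hoc", "sim_b.hoc", "sim_c.hoc"]):
    print(backend.submit(sim))
```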
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized, fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
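The central trick, replacing each compute-bound stage by a passage of simulated time drawn from a cost model, fits in a few lines of a discrete event framework; the stage names and costs below are invented, not TAD's actual profile:

```python
# Skeleton of an application simulator: each compute stage becomes a
# timed delay from a cost model, so parameter scans run in seconds.
# Stage names and costs are illustrative, not TADSim's real model.
import random
import simpy

STAGE_COST = {"md_run": 5.0, "saddle_search": 12.0, "accept_check": 0.5}

def tad_step(env):
    """One TAD-like iteration: each stage is just a passage of time."""
    for stage in ("md_run", "saddle_search", "accept_check"):
        yield env.timeout(random.expovariate(1 / STAGE_COST[stage]))

def run(n_steps):
    env = simpy.Environment()
    def driver():
        for _ in range(n_steps):
            yield env.process(tad_step(env))
    env.process(driver())
    env.run()
    return env.now   # predicted wall time for this parameter choice

print(f"predicted runtime: {run(100):.1f} time units")
```

Modeling an extension such as speculative spawning would amount to overlapping the saddle_search delay with the next md_run instead of serializing them.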
Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.
2014-12-01
The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake study, executed on Blue Waters. We will compare the performance of CPU and GPU versions of our large-scale parallel wave propagation code, AWP-ODC-SGT. Finally, we will discuss how these enhancements have enabled SCEC to move forward with plans to increase the CyberShake simulation frequency to 1.0 Hz.
A web portal for hydrodynamical, cosmological simulations
NASA Astrophysics Data System (ADS)
Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.
2017-07-01
This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster, and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborated queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on a chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.
WUVS simulator: detectability of spectral lines with the WSO-UV spectrographs
NASA Astrophysics Data System (ADS)
Marcos-Arenal, Pablo; de Castro, Ana I. Gómez; Abarca, Belén Perea; Sachkov, Mikhail
2017-04-01
The World Space Observatory Ultraviolet telescope is equipped with high-dispersion (resolving power 55,000) spectrographs working in the 1150 to 3100 Å spectral range. To evaluate the impact of the design on the scientific objectives of the mission, a simulation software tool has been developed. This simulator builds on the development made for the PLATO space mission and is designed to generate synthetic time-series of images by including models of all important noise sources. We describe its design and performance. Moreover, its application to the detectability of important spectral features for star formation and exoplanetary research is addressed.
Mechanism change in a simulation of peer review: from junk support to elitism.
Paolucci, Mario; Grimaldo, Francisco
2014-01-01
Peer review works as the hinge of the scientific process, mediating between research and the awareness/acceptance of its results. While it might seem obvious that science would regulate itself scientifically, the consensus on peer review is eroding; a deeper understanding of its workings and potential alternatives is sorely needed. Employing a theoretical approach supported by agent-based simulation, we examined computational models of peer review, performing what we propose to call redesign, that is, the replication of simulations using different mechanisms. Here, we show that we are able to reproduce the high sensitivity to rational cheating that is reported in the literature. In addition, we show how this result appears to be fragile against small variations in mechanisms. Therefore, we argue that exploration of the parameter space is not enough if we want to support theoretical statements with simulation, and that exploration at the level of mechanisms is needed. These findings also support prudence in the application of simulation results based on single mechanisms, and endorse the use of complex agent platforms that encourage experimentation with diverse mechanisms.
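For readers unfamiliar with agent-based models of peer review, the toy sketch below illustrates one possible mechanism in which some fraction of referees "rationally cheat" by returning low-effort random scores. The acceptance rule, noise levels, and cheater behavior are assumptions made purely for illustration and are not the authors' model.

```python
import random

random.seed(1)

def run_round(n_authors=100, cheat_fraction=0.2):
    # Each author submits a paper with a latent quality in [0, 1].
    papers = [random.random() for _ in range(n_authors)]
    accepted = []
    for quality in papers:
        scores = []
        for _ in range(2):  # two referees per paper
            if random.random() < cheat_fraction:
                scores.append(random.random())              # rational cheating: no effort
            else:
                scores.append(quality + random.gauss(0, 0.05))  # honest, slightly noisy review
        if sum(scores) / 2 > 0.5:                            # simple acceptance rule
            accepted.append(quality)
    return sum(accepted) / len(accepted) if accepted else 0.0

for frac in (0.0, 0.2, 0.5):
    print(f"cheater fraction {frac:.1f}: mean accepted quality {run_round(cheat_fraction=frac):.3f}")
```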
Surveys with Athena: results from detailed SIXTE simulations
NASA Astrophysics Data System (ADS)
Lanzuisi, G.; Comastri, A.; Aird, J.; Brusa, M.; Cappelluti, N.; Gilli, R.; Matute, I.
2017-10-01
"Formation and early growth of BH' and "Accretion by supermassive BH through cosmic time' are two of the scientific objectives of the Athena mission. To these and other topics (i.e. first galaxy groups, cold and warm obscuration and feedback signatures in AGN at high z), a large fraction (20-25%) of the Athena Mock Observing Plan is devoted, in the form of a multi-tiered (deep-medium-wide) survey with the WFI. We used the flexible SIXTE simulator to study the impact of different instrumental configurations, in terms of WFI FOV, mirror psf, background levels, on the performance in the three layers of the WFI survey. We mainly focus on the scientific objective that drives the survey configuration: the detection of at least 10 AGN at z=6-8 with Log(LX)=43-43.5 erg/s and 10 at z=8.10 with Log(LX)=44-44.5 erg/s. Implications for other scientific objectives involved in the survey are also discussed.
GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection
NASA Astrophysics Data System (ADS)
Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.
2010-12-01
Although we have entered the age of petascale computing, many factors are still prohibiting high-performance computing (HPC) from infiltrating all suitable scientific disciplines. For this reason and others, application of GPU to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to GPU, there is a strong message of corporate support for GPU in HPC. However many doubts linger concerning the practicality of using GPU for scientific computing. In particular, GPU has a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPU to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Benard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500x500x250 evenly spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of O(0.1) µs per timestep per gridpoint at this resolution. This performance has allowed us to study high Rayleigh number simulations, on the order of 2x10^7, on a single GPU.
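The quoted throughput can be put in perspective: at roughly 500x500x250 = 62.5 million gridpoints, O(0.1) µs per timestep per gridpoint corresponds to a few seconds of wallclock time per step. The NumPy sketch below shows the kind of second-order central-difference temperature update such a code performs; it is a serial, simplified illustration (diffusion plus an assumed vertical advection term only), not the CUDA/CUBLAS implementation.

```python
import numpy as np

def temperature_step(T, w, dt, dx, kappa):
    """One explicit step of dT/dt = kappa * Laplacian(T) - w * dT/dz using
    second-order central differences on an evenly spaced grid (interior only)."""
    lap = (
        (T[2:, 1:-1, 1:-1] - 2 * T[1:-1, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1]) +
        (T[1:-1, 2:, 1:-1] - 2 * T[1:-1, 1:-1, 1:-1] + T[1:-1, :-2, 1:-1]) +
        (T[1:-1, 1:-1, 2:] - 2 * T[1:-1, 1:-1, 1:-1] + T[1:-1, 1:-1, :-2])
    ) / dx**2
    dTdz = (T[1:-1, 1:-1, 2:] - T[1:-1, 1:-1, :-2]) / (2 * dx)  # vertical advection
    T_new = T.copy()
    T_new[1:-1, 1:-1, 1:-1] += dt * (kappa * lap - w[1:-1, 1:-1, 1:-1] * dTdz)
    return T_new

# Small grid so the example runs quickly; the paper's runs are ~500x500x250.
nx, ny, nz = 64, 64, 32
T = np.random.rand(nx, ny, nz)
w = np.zeros((nx, ny, nz))
T = temperature_step(T, w, dt=1e-4, dx=1.0 / nx, kappa=1.0)
print(T.shape)
```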
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.
NASA Astrophysics Data System (ADS)
Corona, Thomas
The Karlsruhe Tritium Neutrino (KATRIN) experiment is a tritium beta decay experiment designed to make a direct, model independent measurement of the electron neutrino mass. The experimental apparatus employs strong (O(T)) magnetostatic and (O(10^5 V/m)) electrostatic fields in regions of ultra high (O(10^-11 mbar)) vacuum in order to obtain precise measurements of the electron energy spectrum near the endpoint of tritium beta-decay. The electrostatic fields in KATRIN are formed by multiscale electrode geometries, necessitating the development of high performance field simulation software. To this end, we present a Boundary Element Method (BEM) with analytic boundary integral terms in conjunction with the Robin Hood linear algebraic solver, a nonstationary successive subspace correction (SSC) method. We describe an implementation of these techniques for high performance computing environments in the software KEMField, along with the geometry modeling and discretization software KGeoBag. We detail the application of KEMField and KGeoBag to KATRIN's spectrometer and detector sections, and demonstrate its use in furthering several of KATRIN's scientific goals. Finally, we present the results of a measurement designed to probe the electrostatic profile of KATRIN's main spectrometer in comparison to simulated results.
Simulation of Martian EVA at the Mars Society Arctic Research Station
NASA Astrophysics Data System (ADS)
Pletser, V.; Zubrin, R.; Quinn, K.
The Mars Society has established a Mars Arctic Research Station (M.A.R.S.) on Devon Island, in the Canadian Arctic, in the middle of the Haughton crater formed by the impact of a large meteorite millions of years ago. The site was selected for its similarities with the surface of the planet Mars. During the summer of 2001, the MARS Flashline Research Station supported an extended international simulation campaign of human Mars exploration operations. Six rotations of six-person crews spent up to ten days each at the MARS Flashline Research Station. International crews, of mixed gender and professional qualifications, conducted various tasks as a Martian crew would and performed scientific experiments in several fields (Geophysics, Biology, Psychology). One of the goals of this simulation campaign was to assess the operational and technical feasibility of sustaining a crew in an autonomous habitat, conducting a field scientific research program. Operations were conducted as they would be during a Martian mission, including Extra-Vehicular Activities (EVA) with specially designed unpressurized suits. The second rotation crew conducted seven simulated EVAs for a total of 17 hours, including motorized EVAs with All Terrain Vehicles, to perform field scientific experiments in Biology and Geophysics. Some EVAs were highly successful. For some others, several problems were encountered, related to hardware technical failures and to bad weather conditions. The paper will present the experiment programme conducted at the Mars Flashline Research Station, the problems encountered, and the lessons learned from an EVA operational point of view. Suggestions to improve foreseen Martian EVA operations will be discussed.
A Collaborative Extensible User Environment for Simulation and Knowledge Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.
2015-06-01
In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework that provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo, and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described and demonstrates its usage for creation and execution of a 3D subsurface simulation.
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
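The 8 TB figure follows from the stated sizes assuming double precision: 512^3 gridpoints x 64 variables x 128 time steps x 8 bytes is roughly 8.8x10^12 bytes. The sketch below computes a truncated Tucker decomposition of a small five-way tensor via the higher-order SVD; it is a serial NumPy illustration of the decomposition itself, not the distributed-memory algorithm or data layout described in the paper.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along `mode` (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    """Truncated higher-order SVD: a factor matrix per mode, then the core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Contract mode `mode` of the core with U^T.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy five-way tensor standing in for (x, y, z, variable, time) simulation output.
T = np.random.rand(16, 16, 16, 8, 10)
core, factors = tucker_hosvd(T, ranks=(4, 4, 4, 4, 5))
compression = T.size / (core.size + sum(U.size for U in factors))
print(f"compression ratio ~ {compression:.1f}x")
```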
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
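As a minimal illustration of the hybrid functional Petri net idea, the sketch below advances continuous place markings by firing transitions at marking-dependent rates. The species names, stoichiometries, and rates are hypothetical, and a GPU implementation would evaluate all transitions of a large model in parallel at each time step rather than in a Python loop.

```python
# Continuous places hold concentrations; each transition fires at a rate
# computed from its input places (a forward-Euler update of the marking).
marking = {"Delta": 1.0, "Notch": 1.0, "NotchActive": 0.0}

def transitions(m):
    # Each transition: (inputs, outputs, rate as a function of the marking).
    return [
        ({"Delta": 1, "Notch": 1}, {"NotchActive": 1}, 0.5 * m["Delta"] * m["Notch"]),
        ({"NotchActive": 1}, {}, 0.1 * m["NotchActive"]),   # decay of the active form
    ]

dt = 0.01
for step in range(1000):
    for inputs, outputs, rate in transitions(marking):
        flux = rate * dt
        for place, stoich in inputs.items():
            marking[place] -= stoich * flux
        for place, stoich in outputs.items():
            marking[place] += stoich * flux

print({k: round(v, 3) for k, v in marking.items()})
```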
ORNL Cray X1 evaluation status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, P.K.; Alexander, R.A.; Apra, E.
2004-05-01
On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. "This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership," said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, software environment and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.
NASA Astrophysics Data System (ADS)
Okaya, D.; Deelman, E.; Maechling, P.; Wong-Barnum, M.; Jordan, T. H.; Meyers, D.
2007-12-01
Large scientific collaborations, such as the SCEC Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA) Project, involve interactions between many scientists who exchange ideas and research results. These groups must organize, manage, and make accessible their community materials of observational data, derivative (research) results, computational products, and community software. The integration of scientific workflows as a paradigm to solve complex computations provides advantages of efficiency, reliability, repeatability, choices, and ease of use. The underlying resource needed for a scientific workflow to function and create discoverable and exchangeable products is the construction, tracking, and preservation of metadata. In the scientific workflow environment there is a two-tier structure of metadata. Workflow-level metadata and provenance describe operational steps, identity of resources, execution status, and product locations and names. Domain-level metadata essentially define the scientific meaning of data, codes and products. To a large degree the metadata at these two levels are separate. However, between these two levels is a subset of metadata produced at one level but is needed by the other. This crossover metadata suggests that some commonality in metadata handling is needed. SCEC researchers are collaborating with computer scientists at SDSC, the USC Information Sciences Institute, and Carnegie Mellon Univ. in order to perform earthquake science using high-performance computational resources. A primary objective of the "PetaSHA" collaboration is to perform physics-based estimations of strong ground motion associated with real and hypothetical earthquakes located within Southern California. Construction of 3D earth models, earthquake representations, and numerical simulation of seismic waves are key components of these estimations. Scientific workflows are used to orchestrate the sequences of scientific tasks and to access distributed computational facilities such as the NSF TeraGrid. Different types of metadata are produced and captured within the scientific workflows. One workflow within PetaSHA ("Earthworks") performs a linear sequence of tasks with workflow and seismological metadata preserved. Downstream scientific codes ingest these metadata produced by upstream codes. The seismological metadata uses attribute-value pairing in plain text; an identified need is to use more advanced handling methods. Another workflow system within PetaSHA ("Cybershake") involves several complex workflows in order to perform statistical analysis of ground shaking due to thousands of hypothetical but plausible earthquakes. Metadata management has been challenging due to its construction around a number of legacy scientific codes. We describe difficulties arising in the scientific workflow due to the lack of this metadata and suggest corrective steps, which in some cases include the cultural shift of domain science programmers coding for metadata.
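A minimal sketch of the plain-text attribute-value metadata handling mentioned above: an upstream code writes key = value lines, and a downstream code parses them and checks for the attributes it needs before running. The key names below are hypothetical, not the actual PetaSHA metadata schema.

```python
sample = """\
source_model = ucerf2
magnitude    = 7.8
origin_time  = 2007-11-13T10:00:00
grid_spacing = 200.0
"""

def parse_metadata(text):
    """Parse simple 'key = value' lines into a dictionary, skipping comments."""
    meta = {}
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, value = line.split("=", 1)
            meta[key.strip()] = value.strip()
    return meta

meta = parse_metadata(sample)
required = {"source_model", "magnitude", "grid_spacing"}
missing = required - meta.keys()
if missing:
    raise ValueError(f"downstream code cannot run, missing metadata: {missing}")
print(meta["magnitude"])
```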
SCEC Earthquake System Science Using High Performance Computing
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.
2008-12-01
The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, re-usable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes were run on NSF TeraGrid sites including simulations that use the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid-computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher of the 10Hz ShakeOut 1.2 scenario simulation data were used by USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high resolution PSHA hazard maps that contained more than 1.6 million hazard curves.
Prediction and characterization of application power use in a high-performance computing environment
Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...
2017-02-27
Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
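The sketch below illustrates the general idea rather than the paper's models: per-job power is predicted from simple a priori features, and a toy scheduler greedily admits queued jobs while staying under a facility power cap. The feature names, coefficients, and scheduling policy are assumptions made for illustration.

```python
jobs = [
    {"name": "cfd_run",  "nodes": 64,  "gpu_fraction": 0.8},
    {"name": "climate",  "nodes": 128, "gpu_fraction": 0.0},
    {"name": "md_small", "nodes": 16,  "gpu_fraction": 0.5},
]

def predict_power_kw(job):
    # Toy a priori model: base per-node power plus a GPU surcharge per node.
    return job["nodes"] * (0.35 + 0.25 * job["gpu_fraction"])

def schedule(jobs, cap_kw):
    """Greedily admit jobs, lowest predicted power first, under a power cap."""
    running, load = [], 0.0
    for job in sorted(jobs, key=predict_power_kw):
        p = predict_power_kw(job)
        if load + p <= cap_kw:
            running.append(job["name"])
            load += p
    return running, load

running, load = schedule(jobs, cap_kw=60.0)
print(running, f"{load:.1f} kW")
```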
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputing needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on a XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single node performance of our solution using KVM on a Cray is very efficient with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
ERIC Educational Resources Information Center
Rodrigues, João P. G. L. M.; Melquiond, Adrien S. J.; Bonvin, Alexandre M. J. J.
2016-01-01
Molecular modelling and simulations are nowadays an integral part of research in areas ranging from physics to chemistry to structural biology, as well as pharmaceutical drug design. This popularity is due to the development of high-performance hardware and of accurate and efficient molecular mechanics algorithms by the scientific community. These…
Modern Scientific Visualization is more than Just Pretty Pictures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E Wes; Rubel, Oliver; Wu, Kesheng
2008-12-05
While the primary product of scientific visualization is images and movies, its primary objective is really scientific insight. Too often, the focus of visualization research is on the product, not the mission. This paper presents two case studies, both of which appeared in previous publications, that focus on using visualization technology to produce insight. The first applies "Query-Driven Visualization" concepts to laser wakefield simulation data to help identify and analyze the process of beam formation. The second uses topological analysis to provide a quantitative basis for (i) understanding the mixing process in hydrodynamic simulations, and (ii) performing comparative analysis of data from two different types of simulations that model hydrodynamic instability.
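The query-driven idea can be summarized in a few lines: instead of rendering or analyzing an entire dataset, restrict attention to the subset of records satisfying a compound range query. The sketch below applies such a query to synthetic particle data; the field names and thresholds are hypothetical and are not taken from the laser wakefield study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
particles = {
    "px": rng.normal(0.0, 1.0, n),    # longitudinal momentum (arbitrary units)
    "x":  rng.uniform(0.0, 1.0, n),   # position along the propagation axis
}

# "Beam-like" particles: high momentum within a spatial window.
mask = (particles["px"] > 2.5) & (particles["x"] > 0.4) & (particles["x"] < 0.6)
beam = {k: v[mask] for k, v in particles.items()}
print(f"selected {mask.sum()} of {n} particles for further analysis")
```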
National Laboratory for Advanced Scientific Visualization at UNAM - Mexico
NASA Astrophysics Data System (ADS)
Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo
2016-04-01
In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, physics, and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the fully immersive 3D display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large 3.6 m wide room with images projected on the front, left, right, and floor walls. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head, and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization of geophysical, meteorological, climate, and ecology data. The HPCC ADA is a 1000+ computing core system, which offers parallel computing resources to applications that require large amounts of memory as well as large and fast parallel storage systems. The entire system temperature is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.
Effects of Students' Prior Knowledge on Scientific Reasoning in Density.
ERIC Educational Resources Information Center
Yang, Il-Ho; Kwon, Yong-Ju; Kim, Young-Shin; Jang, Myoung-Duk; Jeong, Jin-Woo; Park, Kuk-Tae
2002-01-01
Investigates the effects of students' prior knowledge on the scientific reasoning processes of performing the task of controlling variables with computer simulation and identifies a number of problems that students encounter in scientific discovery. Involves (n=27) 5th grade students and (n=33) 7th grade students. Indicates that students' prior…
Numerical simulation of turbulent combustion: Scientific challenges
NASA Astrophysics Data System (ADS)
Ren, ZhuYin; Lu, Zhen; Hou, LingYun; Lu, LiuYan
2014-08-01
Predictive simulation of engine combustion is key to understanding the underlying complicated physicochemical processes, improving engine performance, and reducing pollutant emissions. Critical issues such as turbulence modeling, turbulence-chemistry interaction, and accommodation of detailed chemical kinetics in complex flows remain challenging and essential for high-fidelity combustion simulation. This paper reviews the current status of the state-of-the-art large eddy simulation (LES)/probability density function (PDF)/detailed chemistry approach that can address these three challenging modelling issues. PDF as a subgrid model for LES is formulated and the hybrid mesh-particle method for LES/PDF simulations is described. The development needs in micro-mixing models for PDF simulations of turbulent premixed combustion are then identified. Finally, the different acceleration methods for detailed chemistry are reviewed and a combined strategy is proposed for further development.
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
ERIC Educational Resources Information Center
Pallant, Amy; Lee, Hee-Sun
2015-01-01
Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…
Beowulf Distributed Processing and the United States Geological Survey
Maddox, Brian G.
2002-01-01
In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing technology. It will describe the benefits of the technology. Real data about a distributed application will be presented as an example of the benefits that this technology can bring to USGS scientific programs. Finally, some of the issues with distributed processing that relate to USGS work will be discussed.
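A minimal sketch of the pattern such distributed processing exploits, shown here with Python worker processes on one machine rather than Beowulf cluster nodes: a large raster is split into independent tiles and a processor-intensive calculation is applied to each tile in parallel. The tile operation and sizes are hypothetical stand-ins for the USGS calibration workloads.

```python
import numpy as np
from multiprocessing import Pool

def process_tile(tile):
    # Stand-in for a processor-intensive per-tile calculation.
    return float(np.sqrt(tile.astype(np.float64) ** 2 + 1.0).sum())

def split_tiles(raster, tile_size=256):
    """Yield non-overlapping tiles covering the raster."""
    for i in range(0, raster.shape[0], tile_size):
        for j in range(0, raster.shape[1], tile_size):
            yield raster[i:i + tile_size, j:j + tile_size]

if __name__ == "__main__":
    raster = np.random.randint(0, 255, size=(2048, 2048), dtype=np.uint8)
    with Pool(processes=4) as pool:
        results = pool.map(process_tile, list(split_tiles(raster)))
    print(f"processed {len(results)} tiles, checksum {sum(results):.3e}")
```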
Computer Series, 52: Scientific Exploration with a Microcomputer: Simulations for Nonscientists.
ERIC Educational Resources Information Center
Whisnant, David M.
1984-01-01
Describes two simulations, written for Apple II microcomputers, focusing on scientific methodology. The first is based on the tendency of colloidal iron in high concentrations to stick to fish gills and cause breathing difficulties. The second, modeled after the dioxin controversy, examines a hypothetical chemical thought to cause cancer. (JN)
Intelligence in Scientific Computing.
1993-12-31
simulation) a high-performance controller for a magnetic levitation system - the German Transrapid system. The new control system can stabilize maglev ...techniques. A paper by Feng Zhao and Richard Thornton about the maglev controller designed by his program was presented at the 31st IEEE conference on... Massachusetts Institute of Technology, 1991. Also available as MIT AITR 1385. Zhao, F. and Thornton, R. "Automatic Design of a Maglev Controller in
Scientific Assistant Virtual Laboratory (SAVL)
NASA Astrophysics Data System (ADS)
Alaghband, Gita; Fardi, Hamid; Gnabasik, David
2007-03-01
The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory, for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school student interest, insight and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL: * Engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations. * Teaches integrated concepts by the hands-on exploration and focused visualization of classic physics experiments within software. * Systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network. We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.
Supporting observation campaigns with high resolution modeling
NASA Astrophysics Data System (ADS)
Klocke, Daniel; Brueck, Matthias; Voigt, Aiko
2017-04-01
High resolution simulation in support of measurement campaigns offers a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the research communities from modeling and observations and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large-eddy-resolving simulations with the same model for selected days in the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results that show the benefit for the scientific communities doing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook will be given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.
DoSSiER: Database of scientific simulation and experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Hans; Yarba, Julia; Genser, Krzystof
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
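The abstract notes that records can be retrieved programmatically in JSON or XML. A client along the following lines could consume such a service; the base URL, query parameters, and record fields are placeholders, not DoSSiER's actual endpoint or schema, so the request itself is left commented out.

```python
import json
from urllib.request import urlopen

BASE_URL = "https://example.org/dossier/api/records"   # placeholder endpoint, not the real service

def fetch_records(test_name):
    """Request validation records for a named test and decode the JSON response."""
    with urlopen(f"{BASE_URL}?test={test_name}&format=json") as resp:
        return json.loads(resp.read().decode("utf-8"))

# records = fetch_records("pion_absorption")           # would issue the HTTP request
# for rec in records:
#     print(rec.get("beam_energy"), rec.get("observable"), rec.get("source"))
```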
NASA/ESACV-990 spacelab simulation. Appendix B: Experiment development and performance
NASA Technical Reports Server (NTRS)
Reller, J. O., Jr.; Neel, C. B.; Haughney, L. C.
1976-01-01
Eight experiments flown on the CV-990 airborne laboratory during the NASA/ESA joint Spacelab simulation mission are described in terms of their physical arrangement in the aircraft, their scientific objectives, developmental considerations dictated by mission requirements, checkout, integration into the aircraft, and the inflight operation and performance of the experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude, and in some cases greater, than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, Keith D; Ulmer, Craig D.; Thompson, David
Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.
Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with the detector equivalent capacitor, is often used; even then, spice-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations, such as transient, noise, Monte Carlo, inter-pixel effects, etc., of the entire array need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real number modeling language as complex mathematical functions, or detailed data can be saved to text files for further top-level digital simulations. Parasitically aware digital timing is extracted in a standard delay format (SDF) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, jitter due to clock distribution, etc. can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, S; Rotman, D; Schwegler, E
The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflect the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.
Final Report. Institute for Ultralscale Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois
The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to make advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received the opportunities to work on some of the most challenging science applications and gain access to the most powerful high-performance computing facilities in the world. They were readily trained and prepared for facing the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and the broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC3 program.
Warp-X: A new exascale computing platform for beam–plasma simulations
Vay, J. -L.; Almgren, A.; Bell, J.; ...
2018-01-31
Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code, in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.
Motion Simulation Analysis of Rail Weld CNC Fine Milling Machine
NASA Astrophysics Data System (ADS)
Mao, Huajie; Shu, Min; Li, Chao; Zhang, Baojun
The CNC fine milling machine is a new, advanced piece of equipment for rail weld precision machining, with high precision, high efficiency, low environmental pollution, and other technical advantages. The motion performance of this machine directly affects its machining accuracy and stability, which makes it an important consideration in its design. Based on the design drawings, this article completed 3D modeling of the 60 kg/m rail weld CNC fine milling machine using SolidWorks. After that, the geometry was imported into Adams to finish the motion simulation analysis. The displacement, velocity, angular velocity, and other kinematic parameter curves of the main components were obtained in post-processing; these provide a scientific basis for the design and development of this machine.
NASA Astrophysics Data System (ADS)
Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.
2017-12-01
As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.
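PFLOTRAN delegates its linear and nonlinear solves to PETSc, so much of the manycore optimization work lands in PETSc's Mat, Vec, and KSP layers. The petsc4py sketch below assembles a 1D Laplacian and solves it with conjugate gradients and Jacobi preconditioning; it only illustrates the PETSc solver interface from Python (PFLOTRAN itself is Fortran) and assumes petsc4py is installed.

```python
from petsc4py import PETSc

n = 1000
A = PETSc.Mat().createAIJ([n, n])   # sparse matrix in AIJ (CSR) format
A.setUp()
for i in range(n):                  # simple 1D Laplacian stencil
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecRight()
b.set(1.0)
x = A.createVecLeft()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType("cg")                   # Krylov method
ksp.getPC().setType("jacobi")       # simple preconditioner
ksp.setFromOptions()                # honor runtime options such as -ksp_monitor
ksp.solve(b, x)
print("iterations:", ksp.getIterationNumber())
```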
Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Yamada, Masako
The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry
NASA Astrophysics Data System (ADS)
Venkattraman, Ayyaswamy; Verma, Abhishek Kumar
2016-09-01
As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, with each having its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.
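As a toy illustration of the fluid-plasma transport equations that such a finite-volume toolbox discretizes, the NumPy sketch below advances a 1D electron-density drift-diffusion equation with an upwind face flux; the geometry, transport coefficients and boundary treatment are invented placeholders and bear no relation to the actual plasmaFoam solvers.

    import numpy as np

    nx, L = 200, 1e-2                 # cells, domain length [m] (illustrative)
    dx = L / nx
    mu, D = 0.03, 0.1                 # mobility [m^2/(V s)], diffusivity [m^2/s] (illustrative)
    E = -1e4                          # uniform electric field [V/m]
    v = -mu * E                       # electron drift velocity (drifts against E)
    dt = 0.4 * min(dx / abs(v), dx * dx / (2 * D))   # stable explicit time step

    grid = np.linspace(0.0, L, nx)
    n = np.exp(-((grid - L / 2) ** 2) / (1e-3) ** 2)  # initial density pulse
    for _ in range(500):
        flux_adv = v * n[:-1]                         # upwind advective flux (v > 0)
        flux_dif = -D * (n[1:] - n[:-1]) / dx         # diffusive flux at faces
        flux = flux_adv + flux_dif
        n[1:-1] -= dt / dx * (flux[1:] - flux[:-1])   # interior conservative update

    print('peak density after transport:', n.max())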
Self-Consistent Monte Carlo Study of the Coulomb Interaction under Nano-Scale Device Structures
NASA Astrophysics Data System (ADS)
Sano, Nobuyuki
2011-03-01
It has been pointed out that the Coulomb interaction between electrons is expected to be of crucial importance for predicting reliable device characteristics. In particular, device performance is greatly degraded by plasmon excitation, represented by dynamical potential fluctuations that the channel electrons induce in the highly doped source and drain regions. We employ self-consistent 3D Monte Carlo (MC) simulations, which reproduce both the correct mobility under various electron concentrations and the collective plasma waves, to study the physical impact of dynamical potential fluctuations on device performance in double-gate MOSFETs. The average force experienced by an electron due to the Coulomb interaction inside the device is evaluated by performing self-consistent MC simulations and fixed-potential MC simulations without the Coulomb interaction. The band-tailing associated with local potential fluctuations in the highly doped source region is also quantitatively evaluated, and it is found that the band-tailing becomes strongly dependent on position in real space even inside the uniform source region. This work was partially supported by Grants-in-Aid for Scientific Research B (No. 2160160) from the Ministry of Education, Culture, Sports, Science and Technology in Japan.
The end-to-end simulator for the E-ELT HIRES high resolution spectrograph
NASA Astrophysics Data System (ADS)
Genoni, M.; Landoni, M.; Riva, M.; Pariani, G.; Mason, E.; Di Marcantonio, P.; Disseau, K.; Di Varano, I.; Gonzalez, O.; Huke, P.; Korhonen, H.; Li Causi, Gianluca
2017-06-01
We present the design, architecture and results of the End-to-End simulator model of the high resolution spectrograph HIRES for the European Extremely Large Telescope (E-ELT). This system can be used by both engineers and scientists as a tool to characterize the spectrograph. The model simulates the behavior of photons from the scientific object (modeled bearing in mind the main science drivers) to the detector, also considering calibration light sources, and allows evaluation of the different parameters of the spectrograph design. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by the modularity and flexibility that will be crucial in next-generation astronomical observation projects like the E-ELT, given their high complexity and long design and development times. Finally, we present synthetic images obtained with the current version of the End-to-End simulator based on the E-ELT HIRES requirements (especially high radial velocity accuracy). Once ingested into the Data Reduction Software (DRS), they will allow verification that the instrument design can achieve the radial velocity accuracy needed by the HIRES science cases.
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing the implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. This generally allows us to perform coupled multi-physics (THC) simulations on high resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
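For readers who want to quantify the "almost linear speedup" reported here, a minimal helper computing the usual strong-scaling metrics is sketched below; the timing numbers in the example are invented and simply stand in for measured wall-clock times.

    def strong_scaling(times_by_procs):
        """Compute speedup and parallel efficiency relative to the smallest run.

        times_by_procs: dict mapping processor count -> wall-clock time [s].
        """
        base_p = min(times_by_procs)
        base_t = times_by_procs[base_p]
        report = {}
        for p, t in sorted(times_by_procs.items()):
            speedup = base_t / t
            efficiency = speedup * base_p / p
            report[p] = (speedup, efficiency)
        return report

    # Hypothetical timings for one multi-million-cell model (illustrative only)
    timings = {128: 5120.0, 1024: 680.0, 10240: 75.0}
    for p, (s, e) in strong_scaling(timings).items():
        print(f'{p:6d} cores: speedup {s:7.1f}, efficiency {e:5.2f}')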
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results show unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time of the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique, coupling the detector simulations with the visualization package VisIt to enable exploration of the live data as it is produced by the simulation.
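The computational-steering idea can be sketched generically as a simulation loop that periodically emits its state and re-reads a small parameter file that a user (or a visualization front end such as VisIt) may have edited in the meantime; the file name and parameter set below are hypothetical and are not the interface used by the authors.

    import json, os

    PARAMS = 'steer.json'          # hypothetical control file watched by the run

    def load_params(current):
        """Reload user-editable parameters if the control file exists."""
        if os.path.exists(PARAMS):
            with open(PARAMS) as fh:
                current.update(json.load(fh))
        return current

    params = {'voltage': 3000.0, 'gain': 1.0e4, 'dump_every': 100}
    state = 0.0
    for step in range(1, 1001):
        state += params['voltage'] * 1e-6          # stand-in for the real physics update
        if step % params['dump_every'] == 0:
            params = load_params(params)            # pick up steering changes mid-run
            print(f'step {step}: state={state:.3f}, voltage={params["voltage"]}')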
An Array Library for Microsoft SQL Server with Astrophysical Applications
NASA Astrophysics Data System (ADS)
Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.
2012-09-01
Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. The library is also designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.
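The core trick such an array extension performs, packing a typed n-dimensional array into a binary column value and restoring it inside the server, can be approximated in a few lines of NumPy; this is a generic sketch of the serialization step only, not the Array Library's actual SQL/CLR interface.

    import numpy as np

    def pack(arr):
        """Serialize dtype, shape and raw data into a single binary blob."""
        header = f'{arr.dtype.str};{",".join(map(str, arr.shape))}'.encode()
        return len(header).to_bytes(4, 'little') + header + arr.tobytes()

    def unpack(blob):
        """Reverse of pack(): recover the typed n-dimensional array."""
        hlen = int.from_bytes(blob[:4], 'little')
        dtype_str, shape_str = blob[4:4 + hlen].decode().split(';')
        shape = tuple(int(s) for s in shape_str.split(','))
        return np.frombuffer(blob[4 + hlen:], dtype=dtype_str).reshape(shape)

    a = np.arange(12, dtype=np.float64).reshape(3, 4)
    assert np.array_equal(a, unpack(pack(a)))     # round trip preserves the array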
Immersive visualization of rail simulation data.
DOT National Transportation Integrated Search
2016-01-01
The prime objective of this project was to create scientific, immersive visualizations of a Rail-simulation. This project is a part of a larger initiative that consists of three distinct parts. The first step consists of performing a finite element a...
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
Science Classroom Inquiry (SCI) Simulations: A Novel Method to Scaffold Science Learning
Peffer, Melanie E.; Beckler, Matthew L.; Schunn, Christian; Renken, Maggie; Revak, Amanda
2015-01-01
Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students’ self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study. PMID:25786245
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis; Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Remote control system for high-performance computer simulation of crystal growth by the PFC method
NASA Astrophysics Data System (ADS)
Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei
2017-04-01
Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. Solving problems with this method requires high-performance computing clusters, data storage systems and other complex, often expensive computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings across different computing clusters sometimes prevents researchers from using a unified program code; the code must be adapted to each configuration of the computing complex. The practical experience of the authors has shown that a special control system for computations, with the possibility of remote use, can greatly simplify the execution of simulations and increase the productivity of scientific research. In the current paper we present the principal idea of such a system and justify its efficiency.
NASA Technical Reports Server (NTRS)
Hughes, David W.; Hedgeland, Randy J.
1994-01-01
A mechanical simulator of the Hubble Space Telescope (HST) Aft Shroud was built to perform verification testing of the Servicing Mission Scientific Instruments (SI's) and to provide a facility for astronaut training. All assembly, integration, and test activities occurred under the guidance of a contamination control plan, and all work was reviewed by a contamination engineer prior to implementation. An integrated approach was followed in which materials selection, manufacturing, assembly, subsystem integration, and end product use were considered and controlled to ensure that the use of the High Fidelity Mechanical Simulator (HFMS) as a verification tool would not contaminate mission critical hardware. Surfaces were cleaned throughout manufacturing, assembly, and integration, and reverification was performed following major activities. Direct surface sampling was the preferred method of verification, but access and material constraints led to the use of indirect methods as well. Although surface geometries and coatings often made contamination verification difficult, final contamination sampling and monitoring demonstrated the ability to maintain a class M5.5 environment with surface levels less than 400B inside the HFMS.
GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed
NASA Astrophysics Data System (ADS)
Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.
2008-12-01
GPUs are intrinsically highly parallel systems that provide more than an order of magnitude greater computing speed than CPU-based systems, at less cost than a high-end workstation. Recent advancements in GPU technology allow for full IEEE floating-point compliance with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This provides a cheap alternative to standard supercomputing methods and should shorten the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated speedups of nearly a factor of 20 over equivalent CPU versions of the codes. Such speedups enable new applications, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.
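As a flavor of the data-parallel arithmetic that maps so well to GPUs, here is a vectorized (NumPy, CPU-side) Boris push for an ensemble of charged particles in static E and B fields; the field values, charge-to-mass ratio and time step are arbitrary, and a production GPU code would express the same per-particle update as a CUDA kernel rather than NumPy array operations.

    import numpy as np

    def boris_push(x, v, E, B, qm, dt):
        """Advance positions x and velocities v (arrays of shape (N, 3)) by one step."""
        v_minus = v + 0.5 * qm * dt * E
        t = 0.5 * qm * dt * B
        s = 2.0 * t / (1.0 + np.sum(t * t, axis=-1, keepdims=True))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        v_new = v_plus + 0.5 * qm * dt * E
        return x + dt * v_new, v_new

    rng = np.random.default_rng(0)
    N = 100_000
    x = rng.normal(size=(N, 3))
    v = rng.normal(size=(N, 3))
    E = np.array([0.0, 0.0, 1.0e-2])      # uniform fields (illustrative units)
    B = np.array([0.0, 0.0, 1.0])
    for _ in range(100):
        x, v = boris_push(x, v, E, B, qm=1.0, dt=0.05)
    print('mean speed:', np.linalg.norm(v, axis=1).mean())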
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
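To make the discrete ordinates machinery concrete, the following NumPy sketch performs source iteration for a one-group, 1D slab problem with an upwind (step) sweep and vacuum boundaries; the cross sections, source strength and quadrature order are placeholders, and the 3D Cartesian sweeps of Sweep3D are of course far more involved.

    import numpy as np

    nx, dx = 200, 0.05                       # slab cells and cell width [cm] (illustrative)
    sig_t, sig_s, q = 1.0, 0.5, 1.0          # total, scattering, fixed source
    mus, wts = np.polynomial.legendre.leggauss(8)   # S8 angular quadrature on [-1, 1]

    phi = np.zeros(nx)
    for _ in range(200):                     # source iteration
        phi_new = np.zeros(nx)
        src = 0.5 * (sig_s * phi + q)        # isotropic emission density per unit mu
        for mu, w in zip(mus, wts):
            psi = np.zeros(nx)
            if mu > 0:                       # sweep left -> right, vacuum inflow
                inflow = 0.0
                for i in range(nx):
                    psi[i] = (mu / dx * inflow + src[i]) / (sig_t + mu / dx)
                    inflow = psi[i]
            else:                            # sweep right -> left, vacuum inflow
                inflow = 0.0
                for i in range(nx - 1, -1, -1):
                    psi[i] = (abs(mu) / dx * inflow + src[i]) / (sig_t + abs(mu) / dx)
                    inflow = psi[i]
            phi_new += w * psi
        converged = np.max(np.abs(phi_new - phi)) < 1e-6
        phi = phi_new
        if converged:
            break
    print('midplane scalar flux:', phi[nx // 2])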
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
Expected Navigation Flight Performance for the Magnetospheric Multiscale (MMS) Mission
NASA Technical Reports Server (NTRS)
Olson, Corwin; Wright, Cinnamon; Long, Anne
2012-01-01
The Magnetospheric Multiscale (MMS) mission consists of four formation-flying spacecraft placed in highly eccentric elliptical orbits about the Earth. The primary scientific mission objective is to study magnetic reconnection within the Earth's magnetosphere. The baseline navigation concept is the independent estimation of each spacecraft state using GPS pseudorange measurements (referenced to an onboard Ultra Stable Oscillator) and accelerometer measurements during maneuvers. State estimation for the MMS spacecraft is performed onboard each vehicle using the Goddard Enhanced Onboard Navigation System, which is embedded in the Navigator GPS receiver. This paper describes the latest efforts to characterize expected navigation flight performance using upgraded simulation models derived from recent analyses.
The Roland Maze Project school-based extensive air shower network
NASA Astrophysics Data System (ADS)
Feder, J.; Jȩdrzejczak, K.; Karczmarczyk, J.; Lewandowski, R.; Swarzyński, J.; Szabelska, B.; Szabelski, J.; Wibig, T.
2006-01-01
We plan to construct a large-area network of extensive air shower detectors placed on the roofs of high school buildings in the city of Łódź. Detection points will be connected by the Internet to the central server and their operation will be synchronized by GPS. The main scientific goal of the project is the study of ultra-high-energy cosmic rays. Using existing town infrastructure (Internet, power supply, etc.) will significantly reduce the cost of the experiment. Engaging high school students in the research program should significantly increase their knowledge of science and modern technologies, and can be a very efficient way of popularising science. We performed simulations of the projected network's capabilities for registering extensive air showers and reconstructing the energies of primary particles. Results of the simulations and the current status of project realisation will be presented.
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger domains and at higher resolutions. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
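For intuition about the numerical kernel at the heart of such wave-propagation codes, here is a minimal second-order finite-difference solver for the 1D scalar wave equation with a point source; the grid spacing, velocity and source parameters are invented, and the real TS-AWP solves the 3D anelastic (attenuating) wave system on tens of thousands of processors.

    import numpy as np

    nx, dx, c = 1000, 100.0, 3000.0          # grid points, spacing [m], wave speed [m/s]
    dt = 0.5 * dx / c                        # satisfies the CFL stability limit
    u_prev = np.zeros(nx)                    # displacement at t - dt
    u = np.zeros(nx)                         # displacement at t
    src_i = nx // 2                          # source location (middle of the grid)
    f0, t0 = 5.0, 0.2                        # source peak frequency [Hz] and delay [s]

    for n in range(1, 801):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
        t = n * dt                            # inject a Ricker-like source wavelet
        arg = (np.pi * f0 * (t - t0)) ** 2
        u_next[src_i] += (1 - 2 * arg) * np.exp(-arg) * dt ** 2
        u_prev, u = u, u_next

    print('peak displacement on the grid:', np.abs(u).max())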
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J
2008-01-01
The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance like live virtual machine migration. Given these attractive benefits to virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.
Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.
2002-01-01
As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low latency, high bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to approximately 100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.
Are Cloud Environments Ready for Scientific Applications?
NASA Astrophysics Data System (ADS)
Mehrotra, P.; Shackleford, K.
2011-12-01
Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to multiple cloud environments including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk we will present the results of our study focusing on the efficacy of using clouds for NASA's scientific applications.
NASA Astrophysics Data System (ADS)
Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.; Xia, J.
2016-02-01
Integrated water system modeling is a feasible approach to understanding severe water crises in the world and promoting the implementation of integrated river basin management. In this study, a classic hydrological model (the time variant gain model: TVGM) was extended to an integrated water system model by coupling multiple water-related processes in hydrology, biogeochemistry, water quality, and ecology, and considering the interference of human activities. A parameter analysis tool, which included sensitivity analysis, autocalibration and model performance evaluation, was developed to improve modeling efficiency. To demonstrate the model's performance, the Shaying River catchment, which is the largest highly regulated and heavily polluted tributary of the Huai River basin in China, was selected as the case study area. The model performance was evaluated on the key water-related components including runoff, water quality, diffuse pollution load (or nonpoint sources) and crop yield. Results showed that our proposed model simulated most components reasonably well. The simulated daily runoff at most regulated and less-regulated stations matched well with the observations. The average correlation coefficient and Nash-Sutcliffe efficiency were 0.85 and 0.70, respectively. Both the simulated low and high flows at most stations were improved when the dam regulation was considered. The daily ammonium-nitrogen (NH4-N) concentration was also well captured, with an average correlation coefficient of 0.67. Furthermore, the diffuse source load of NH4-N and the corn yield were reasonably simulated at the administrative region scale. This integrated water system model is expected to improve simulation performance as more model functionalities are added, and to provide a scientific basis for implementation in integrated river basin management.
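Since the evaluation relies on the correlation coefficient and the Nash-Sutcliffe efficiency, a small helper computing both metrics from paired observed/simulated series is sketched below; the sample arrays are invented and only illustrate the calculation.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def correlation(obs, sim):
        """Pearson correlation coefficient between observed and simulated series."""
        return np.corrcoef(obs, sim)[0, 1]

    obs = np.array([12.0, 35.0, 80.0, 64.0, 20.0, 15.0])   # e.g. daily runoff [m^3/s]
    sim = np.array([10.0, 40.0, 75.0, 60.0, 25.0, 14.0])
    print('NSE =', round(nash_sutcliffe(obs, sim), 3),
          ' r =', round(correlation(obs, sim), 3))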
Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.
2016-01-01
Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers’ feedback; changes that may otherwise have been missed if explicit model sharing and simulation reproducibility analysis were not conducted in the review process. Increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, were noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrzanowski, P; Walter, K
For the Laboratory and staff, 2006 was a year of outstanding achievements. As our many accomplishments in this annual report illustrate, the Laboratory's focus on important problems that affect our nation's security and our researchers' breakthroughs in science and technology have led to major successes. As a national laboratory that is part of the Department of Energy's National Nuclear Security Administration (DOE/NNSA), Livermore is a key contributor to the Stockpile Stewardship Program for maintaining the safety, security, and reliability of the nation's nuclear weapons stockpile. The program has been highly successful, and our annual report features some of the Laboratory's significant stockpile stewardship accomplishments in 2006. A notable example is a long-term study with Los Alamos National Laboratory, which found that weapon pit performance will not sharply degrade from the aging effects on plutonium. The conclusion was based on a wide range of nonnuclear experiments, detailed simulations, theoretical advances, and thorough analyses of the results of past nuclear tests. The study was a superb scientific effort. The continuing success of stockpile stewardship enabled NNSA in 2006 to lay out Complex 2030, a vision for a transformed nuclear weapons complex that is more responsive, cost efficient, and highly secure. One of the ways our Laboratory will help lead this transformation is through the design and development of reliable replacement warheads (RRWs). Compared to current designs, these warheads would have enhanced performance margins and security features and would be less costly to manufacture and maintain in a smaller, modernized production complex. In early 2007, NNSA selected Lawrence Livermore and Sandia National Laboratories-California to develop ''RRW-1'' for the U.S. Navy. Design efforts for the RRW, the plutonium aging work, and many other stockpile stewardship accomplishments rely on computer simulations performed on NNSA's Advanced Simulation and Computing (ASC) Program supercomputers at Livermore. ASC Purple and BlueGene/L, the world's fastest computer, together provide nearly a half petaflop (500 trillion operations per second) of computer power for use by the three NNSA national laboratories. Livermore-led teams were awarded the Gordon Bell Prize for Peak Performance in both 2005 and 2006. The winning simulations, run on BlueGene/L, investigated the properties of materials at the length and time scales of atomic interactions. The computing power that makes possible such detailed simulations provides unprecedented opportunities for scientific discovery. Laboratory scientists are meeting the extraordinary challenge of creating experimental capabilities to match the resolution of supercomputer simulations. Working with a wide range of collaborators, we are developing experimental tools that gather better data at the nanometer and subnanosecond scales. Applications range from imaging biomolecules to studying matter at extreme conditions of pressure and temperature. The premier high-energy-density experimental physics facility in the world will be the National Ignition Facility (NIF) when construction is completed in 2009. We are leading the national effort to perform the first fusion ignition experiments using NIF's 192-beam laser and prepare to explore some of the remaining important issues in weapons physics.
With scientific colleagues from throughout the nation, we are also designing revolutionary experiments on NIF to advance the fields of astrophysics, planetary physics, and materials science. Mission-directed, multidisciplinary science and technology at Livermore is also focused on reducing the threat posed by the proliferation of weapons of mass destruction as well as their acquisition and use by terrorists. The Laboratory helps this important national effort by providing its unique expertise, integration analyses, and operational support to the Department of Homeland Security. For this vital facet of the Laboratory's national security mission, we are developing advanced technologies, such as a pocket-size explosives detector and an airborne persistent surveillance system, both of which earned R&D 100 Awards. Altogether, Livermore won seven R&D 100 Awards in 2006, the most for any organization. Emerging threats to national and global security go beyond defense and homeland security. Livermore pursues major scientific and technical advances to meet the need for a clean environment; clean, abundant energy; better water management; and improved human health. Our annual report highlights the link between human activities and the warming of tropical oceans, as well as techniques for imaging biological molecules and detecting bone cancer in its earliest stages. In addition, we showcase many scientific discoveries: distant planets, the composition of comets, a new superheavy element.
Scientific Discovery through Advanced Computing in Plasma Science
NASA Astrophysics Data System (ADS)
Tang, William
2005-03-01
Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
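Run-time autotuning of the kind described here can be caricatured as timing a kernel over a set of candidate configurations and keeping the fastest; the sketch below does this for a blocked NumPy matrix multiply over hypothetical block-size candidates and is not the CUSH interface itself.

    import time
    import numpy as np

    def blocked_matmul(A, B, bs):
        """Multiply A @ B using square blocks of size bs (illustrative kernel)."""
        n = A.shape[0]
        C = np.zeros_like(A)
        for i in range(0, n, bs):
            for k in range(0, n, bs):
                for j in range(0, n, bs):
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C

    def time_once(kernel, args, cand):
        start = time.perf_counter()
        kernel(*args, cand)
        return time.perf_counter() - start

    def autotune(kernel, args, candidates, repeats=3):
        """Return the candidate parameter with the smallest measured runtime."""
        best, best_t = None, float('inf')
        for cand in candidates:
            t = min(time_once(kernel, args, cand) for _ in range(repeats))
            if t < best_t:
                best, best_t = cand, t
        return best, best_t

    n = 512
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    best_bs, t = autotune(blocked_matmul, (A, B), candidates=[32, 64, 128, 256])
    print(f'best block size {best_bs} ({t * 1e3:.1f} ms)')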
Modeling a Wireless Network for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Yaprak, Ece; Lamouri, Saad
2000-01-01
This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computers and remote instruments with the high-performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. This data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt
Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data. Such a simulation run produces data for an aggregate data size up to several TB per run. Motivated by the need to address data I/O and access challenges, we have implemented H5Part, an open source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high-performance, cross-platform data storage and retrieval that runs on all contemporary architectures, from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles, structured and, in the future, unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes and provide data showing performance on modern supercomputer architectures like the IBM POWER 5.
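Although H5Part itself is a C API, its one-group-per-time-step layout is easy to mimic with the h5py Python bindings for HDF5, as sketched below; the group naming and particle fields here are illustrative and are not guaranteed to match H5Part's exact conventions.

    import h5py
    import numpy as np

    n_particles, n_steps = 1000, 3
    with h5py.File('particles.h5', 'w') as f:
        for step in range(n_steps):
            grp = f.create_group(f'Step#{step}')          # one group per time step
            for name in ('x', 'y', 'z', 'px', 'py', 'pz'):
                grp.create_dataset(name, data=np.random.rand(n_particles))

    with h5py.File('particles.h5', 'r') as f:
        x = f['Step#2/x'][...]                             # read one field back
        print('step 2: mean x =', x.mean(), ', particles =', x.size)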
NAS (Numerical Aerodynamic Simulation Program) technical summaries, March 1989 - February 1990
NASA Technical Reports Server (NTRS)
1990-01-01
Given here are selected scientific results from the Numerical Aerodynamic Simulation (NAS) Program's third year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP supercomputer. Topics covered include flow field analysis of fighter wing configurations, large-scale ocean modeling, the Space Shuttle flow field, advanced computational fluid dynamics (CFD) codes for rotary-wing airloads and performance prediction, turbulence modeling of separated flows, airloads and acoustics of rotorcraft, vortex-induced nonlinearities on submarines, and standing oblique detonation waves.
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
Multi-threaded ATLAS simulation on Intel Knights Landing processors
NASA Astrophysics Data System (ADS)
Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration
2017-10-01
The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.
Nuclear Power Plant Simulation Game.
ERIC Educational Resources Information Center
Weiss, Fran
1979-01-01
Presents a nuclear power plant simulation game which is designed to involve a class of 30 junior or senior high school students. Scientific, ecological, and social issues covered in the game are also presented. (HM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework, MATLAB, and the visualization have been integrated, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges, this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
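The query-driven selection at the core of this workflow ("find the high-energy particles in this time step") can be imitated with a plain NumPy boolean mask, as below; FastBit builds bitmap indexes to answer such range queries without scanning the full dataset, and the threshold and fields here are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000_000                                 # particles in one simulated time step
    px = rng.normal(0.0, 1.0, n)                  # longitudinal momentum (arbitrary units)
    x = rng.uniform(0.0, 100.0, n)                # longitudinal position

    # Range query: particles whose momentum exceeds a beam threshold
    beam = px > 4.0
    print('selected', beam.sum(), 'of', n, 'particles')
    if beam.any():
        print('mean position of selected particles:', x[beam].mean())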
2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation
Warren, Michael S.
2014-01-01
We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69 billion (4096^3) particle cosmological simulations, accounting for 4×10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.
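To make the hashed oct-tree idea concrete, here is a minimal Python sketch of Morton (Z-order) key generation, the bit-interleaved spatial hashing that tree codes in this family use to order particles for domain decomposition. It is an illustrative toy under assumed parameters, not the 2HOT implementation.

```python
import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of integer cell coordinates into a single Morton
    (Z-order) key, so that sorting by key groups spatially nearby cells."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

# Quantize unit-box particle positions onto a 2^10 grid and sort by key;
# contiguous key ranges then correspond to compact spatial regions.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(1000, 3))
cells = np.minimum((pos * (1 << 10)).astype(int), (1 << 10) - 1)
keys = np.array([morton_key(*c) for c in cells])
order = np.argsort(keys)
```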
On-demand Simulation of Atmospheric Transport Processes on the AlpEnDAC Cloud
NASA Astrophysics Data System (ADS)
Hachinger, S.; Harsch, C.; Meyer-Arnek, J.; Frank, A.; Heller, H.; Giemsa, E.
2016-12-01
The "Alpine Environmental Data Analysis Centre" (AlpEnDAC) develops a data-analysis platform for high-altitude research facilities within the "Virtual Alpine Observatory" project (VAO). This platform, with its web portal, will support use cases going much beyond data management: On user request, the data are augmented with "on-demand" simulation results, such as air-parcel trajectories for tracing down the source of pollutants when they appear in high concentration. The respective back-end mechanism uses the Compute Cloud of the Leibniz Supercomputing Centre (LRZ) to transparently calculate results requested by the user, as far as they have not yet been stored in AlpEnDAC. The queuing-system operation model common in supercomputing is replaced by a model in which Virtual Machines (VMs) on the cloud are automatically created/destroyed, providing the necessary computing power immediately on demand. From a security point of view, this allows to perform simulations in a sandbox defined by the VM configuration, without direct access to a computing cluster. Within few minutes, the user receives conveniently visualized results. The AlpEnDAC infrastructure is distributed among two participating institutes [front-end at German Aerospace Centre (DLR), simulation back-end at LRZ], requiring an efficient mechanism for synchronization of measured and augmented data. We discuss our iRODS-based solution for these data-management tasks as well as the general AlpEnDAC framework. Our cloud-based offerings aim at making scientific computing for our users much more convenient and flexible than it has been, and to allow scientists without a broad background in scientific computing to benefit from complex numerical simulations.
ISCR Annual Report: Fiscal Year 2004
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, J R
2005-03-03
Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that, ''high performance computing is the backbone of the nation's science and technology enterprise''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''feet and hands'' that carry those advances into the Laboratory and incorporates them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.
Design of FastQuery: How to Generalize Indexing and Querying System for Scientific Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jerry; Wu, Kesheng
2011-04-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies such as FastBit are critical for facilitating interactive exploration of large datasets. These technologies rely on adding auxiliary information to existing datasets to accelerate query processing. To use these indices, we need to match the relational data model used by the indexing systems with the array data model used by most scientific data, and to provide an efficient input and output layer for reading and writing the indices. In this work, we present a flexible design that can be easily applied to most scientific data formats. We demonstrate this flexibility by applying it to two of the most commonly used scientific data formats, HDF5 and NetCDF. We present two case studies using simulation data from the particle accelerator and climate simulation communities. To demonstrate the effectiveness of the new design, we also present a detailed performance study using both synthetic and real scientific workloads.
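The sketch below, assuming h5py and NumPy are available, mimics the basic idea in pure Python: store an array-model variable in HDF5, build a coarse block min/max summary as auxiliary index data, and use it to answer a range query while reading only candidate blocks. It is a simplified stand-in for FastQuery/FastBit, not their actual interfaces.

```python
import numpy as np
import h5py

# Write a 1-D "array model" variable to HDF5, as a climate or accelerator
# code might produce.
with h5py.File("demo.h5", "w") as f:
    data = np.random.default_rng(2).gamma(2.0, 1.0, 1_000_000)
    f.create_dataset("energy", data=data)

BLOCK, THRESH = 4096, 10.0
with h5py.File("demo.h5", "r") as f:
    dset = f["energy"]
    n = dset.shape[0]
    # Index-construction pass: one max value per block (a crude stand-in
    # for the bitmap indices a system like FastBit would attach).
    block_max = np.array([dset[s:s + BLOCK].max() for s in range(0, n, BLOCK)])
    # Query pass: touch only blocks that can contain hits.
    hits = []
    for i, s in enumerate(range(0, n, BLOCK)):
        if block_max[i] > THRESH:
            chunk = dset[s:s + BLOCK]
            hits.append(s + np.nonzero(chunk > THRESH)[0])
    hits = np.concatenate(hits) if hits else np.array([], dtype=np.int64)
print(f"{hits.size} values exceed {THRESH}")
```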
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
NASA Astrophysics Data System (ADS)
Pallant, Amy; Lee, Hee-Sun
2015-04-01
Modeling and argumentation are two important scientific practices students need to develop throughout their school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation tasks with three increasingly complex dynamic climate models. Each scientific argumentation task consisted of four parts: a multiple-choice claim, an open-ended explanation, a five-point Likert-scale uncertainty rating, and an open-ended uncertainty rationale. We coded 1,294 scientific arguments in terms of each claim's consistency with current scientific consensus and whether explanations were model based or knowledge based, and we categorized the sources of uncertainty (personal vs. scientific). We used chi-square and ANOVA tests to identify significant patterns. Results indicate that (1) a majority of students incorporated models as evidence to support their claims, (2) most students used model output results shown on graphs to confirm their claim rather than to explain simulated molecular processes, (3) students' dependence on model results and their uncertainty rating diminished as the dynamic climate models became more and more complex, (4) some students' misconceptions interfered with observing and interpreting model results or simulated processes, and (5) students' uncertainty sources reflected their assessment of personal knowledge or abilities related to the tasks more frequently than their critical examination of scientific evidence resulting from models. These findings have implications for teaching and research related to the integration of scientific argumentation and modeling practices to address complex Earth systems.
Crochet, Patrice; Aggarwal, Rajesh; Knight, Sophie; Berdah, Stéphane; Boubli, Léon; Agostini, Aubert
2017-06-01
Substantial evidence in the scientific literature supports the use of simulation for surgical education. However, curricula are lacking for complex laparoscopic procedures in gynecology. The objective was to evaluate the validity of a program that reproduces key specific components of a laparoscopic hysterectomy (LH) procedure up to colpotomy on a virtual reality (VR) simulator and to develop an evidence-based and stepwise training curriculum. This prospective cohort study was conducted in a Marseille teaching hospital. Forty participants were enrolled and divided into experienced (senior surgeons who had performed more than 100 LH; n = 8), intermediate (surgical trainees who had performed 2-10 LH; n = 8) and inexperienced (n = 24) groups. Baselines were assessed on a validated basic task. Participants were tested on the LH procedure on a high-fidelity VR simulator. Validity evidence was proposed as the ability to differentiate between the three levels of experience. Inexperienced subjects performed ten repetitions for learning curve analysis. Proficiency measures were based on experienced surgeons' performances. Outcome measures were simulator-derived metrics and Objective Structured Assessment of Technical Skills (OSATS) scores. Quantitative analysis found significant inter-group differences between the experienced, intermediate and inexperienced groups for time (1369, 2385 and 3370 s; p < 0.001), number of movements (2033, 3195 and 4056; p = 0.001), path length (3390, 4526 and 5749 cm; p = 0.002), idle time (357, 654 and 747 s; p = 0.001), respect for tissue (24, 40 and 84; p = 0.01) and number of bladder injuries (0.13, 0 and 4.27; p < 0.001). Learning curves plateaued at the 2nd to 6th repetition. Further qualitative analysis found significant inter-group OSATS score differences at the first repetition (22, 15 and 8, respectively; p < 0.001) and the second repetition (25.5, 19.5 and 14; p < 0.001). The VR program for LH accrued validity evidence and allowed the development of a training curriculum using a structured scientific methodology.
Ecological prediction with nonlinear multivariate time-frequency functional data models
Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.
2013-01-01
Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.
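As a hedged illustration of the time-frequency preprocessing such models build on, the following Python sketch (using SciPy's spectrogram) converts a synthetic high-frequency signal into time-frequency features. The sampling rate, signal, and window lengths are assumptions, and the statistical model itself (variable selection and model averaging) is not reproduced here.

```python
import numpy as np
from scipy import signal

fs = 500.0                                 # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
# Synthetic stand-in for a high-frequency ecological signal: a steady tone,
# a slow chirp, and noise.
sig = (np.sin(2 * np.pi * 5 * t)
       + 0.5 * np.sin(2 * np.pi * 40 * t * (t / 60))
       + 0.3 * np.random.default_rng(3).standard_normal(t.size))

freqs, times, Sxx = signal.spectrogram(sig, fs=fs, nperseg=1024, noverlap=512)
# Sxx[i, j] is the power near frequency freqs[i] in the window centred at
# times[j]; each column is one functional "observation" over frequency.
print(Sxx.shape)
```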
Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael; Boldyrev, Stanislav; Chang, Choong-Seock
This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015, and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
NASA/ESA CV-990 Spacelab simulation. Appendix A: The experiment operator
NASA Technical Reports Server (NTRS)
Reller, J. O., Jr.; Neel, C. B.; Haughney, L. C.
1976-01-01
A joint NASA/ESA endeavor was established to conduct an extensive spacelab simulation using the NASA CV-990 airborne laboratory. The scientific payload was selected to perform studies in upper atmospheric physics and infrared astronomy with principal investigators from France, the Netherlands, England, and several groups from the United States. Two experiment operators from Europe and two from the U.S. were selected to live aboard the aircraft along with a mission manager for a six-day period and to operate the experiments on behalf of the principal scientists. This appendix discusses the experiment operators and their relationship to the joint mission under the following general headings: selection criteria, training programs, and performance. The performance of the proxy operators was assessed in terms of adequacy of training, amount of scientific data obtained, quality of data obtained, and reactions to problems that arose in experiment operation.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.
DECISION MAKING, *GROUP DYNAMICS, NAVAL TRAINING, TRANSFER OF TRAINING, SCIENTIFIC RESEARCH, CLASSIFICATION, PROBLEM SOLVING, MATHEMATICAL MODELS, SUBMARINES, SIMULATORS, PERFORMANCE (HUMAN), UNDERSEA WARFARE.
Optimized technical and scientific design approach for high performance anticoincidence shields
NASA Astrophysics Data System (ADS)
Graue, Roland; Stuffler, Timo; Monzani, Franco; Bastia, Paolo; Gryksa, Werner; Pahl, Germit
2018-04-01
This paper, "Optimized technical and scientific design approach for high performance anticoincidence shields," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2007-01-09
The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
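A toy analogue of the adaptive refinement idea, in Python: recursively bisect an interval until a local interpolation test meets a tolerance, so resolution concentrates where the function varies quickly. This is a 1-D illustration under assumed parameters, not MADNESS's multiwavelet machinery or its parallel runtime.

```python
import numpy as np

def adapt(f, a, b, tol, depth=0, min_depth=4, max_depth=20):
    """Recursively bisect [a, b]: always refine to min_depth, then keep
    refining wherever a linear interpolant misses the midpoint value by
    more than tol. Returns the leaf intervals."""
    mid = 0.5 * (a + b)
    err = abs(f(mid) - 0.5 * (f(a) + f(b)))
    if depth >= max_depth or (depth >= min_depth and err < tol):
        return [(a, b)]
    return (adapt(f, a, mid, tol, depth + 1, min_depth, max_depth)
            + adapt(f, mid, b, tol, depth + 1, min_depth, max_depth))

# A sharply peaked function ends up refined only where it varies quickly.
leaves = adapt(lambda x: np.exp(-500.0 * (x - 0.3) ** 2), 0.0, 1.0, tol=1e-4)
print(f"{len(leaves)} adaptively chosen intervals; "
      f"smallest width {min(b - a for a, b in leaves):.2e}")
```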
Developing DIII-D To Prepare For ITER And The Path To Fusion Energy
NASA Astrophysics Data System (ADS)
Buttery, Richard; Hill, David; Solomon, Wayne; Guo, Houyang; DIII-D Team
2017-10-01
DIII-D pursues the advancement of fusion energy through scientific understanding and the discovery of solutions. Research targets two key goals. First, to prepare for ITER we must resolve how to use its flexible control tools to rapidly reach Q = 10, and develop the scientific basis to interpret results from ITER for fusion projection. Second, we must determine how to sustain a high performance fusion core in steady state conditions, with minimal actuators and a plasma exhaust solution. DIII-D will target these missions with: (i) increased electron heating and balanced-torque neutral beams to simulate burning plasma conditions, (ii) new 3D coil arrays to resolve control of transients, (iii) off-axis current drive to study physics in steady state regimes, (iv) divertor configurations to promote detachment with low upstream density, and (v) a reactor-relevant wall to qualify materials and resolve physics in reactor-like conditions. With new diagnostics and leading-edge simulation, this will position the US for success in ITER and provide unique knowledge to accelerate the approach to fusion energy. Supported by the US DOE under DE-FC02-04ER54698.
Baseline Design and Performance Analysis of Laser Altimeter for Korean Lunar Orbiter
NASA Astrophysics Data System (ADS)
Lim, Hyung-Chul; Neumann, Gregory A.; Choi, Myeong-Hwan; Yu, Sung-Yeol; Bang, Seong-Cheol; Ka, Neung-Hyun; Park, Jong-Uk; Choi, Man-Soo; Park, Eunseo
2016-09-01
Korea's lunar exploration project includes the launching of an orbiter, a lander (including a rover), and an experimental orbiter (referred to as a lunar pathfinder). Laser altimeters have played an important scientific role in lunar, planetary, and asteroid exploration missions since their first use in 1971 onboard the Apollo 15 mission to the Moon. In this study, a laser altimeter was proposed as a scientific instrument for the Korean lunar orbiter, which will be launched by 2020, to study the global topography of the surface of the Moon and its gravitational field and to support other payloads such as a terrain mapping camera or spectral imager. This study presents the baseline design and performance model for the proposed laser altimeter. Additionally, the study discusses the expected performance based on numerical simulation results. The simulation results indicate that the design of system parameters satisfies performance requirements with respect to detection probability and range error even under unfavorable conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spong, D.A.
The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood.
Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Yier
As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project aim to address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.
Mathematical modeling of heat transfer problems in the permafrost
NASA Astrophysics Data System (ADS)
Gornov, V. F.; Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.
2014-11-01
In this work we present results of numerical simulation of three-dimensional temperature fields in soils for various applied problems: a railway line under permafrost conditions with different geometries, a horizontal-tunnel underground storage facility, and greenhouses of various designs in the Far North. The mathematical model of the process is described by a nonstationary heat equation with phase transitions of pore water. The numerical realization of the problem is based on the finite element method using the scientific computing library FEniCS. For the numerical calculations we use high-performance computing systems.
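For orientation, the following Python/NumPy sketch solves a much-simplified version of the problem: a 1-D nonstationary heat equation with an explicit finite-difference scheme and no phase change, with assumed material and boundary values. The paper's actual model is a 3-D finite element formulation with pore-water phase transitions solved with FEniCS.

```python
import numpy as np

# Minimal 1-D explicit finite-difference sketch of dT/dt = alpha * d2T/dx2
# for a soil column; all values below are illustrative assumptions.
alpha = 1.0e-6                       # thermal diffusivity, m^2/s
L, nx = 10.0, 101                    # 10 m column, 101 grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha             # within the explicit stability limit
T = np.full(nx, -2.0)                # initial ground temperature, deg C
T[0] = 15.0                          # warm surface boundary condition

for _ in range(20000):
    # FTCS update of the interior points.
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                    # zero-gradient lower boundary

print(f"temperature at 1 m depth after {20000 * dt / 86400:.1f} days: {T[10]:.2f} C")
```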
Modeling Primary Atomization of Liquid Fuels using a Multiphase DNS/LES Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arienti, Marco; Oefelein, Joe; Doisneau, Francois
2016-08-01
As part of a Laboratory Directed Research and Development project, we are developing a modeling-and-simulation capability to study fuel direct injection in automotive engines. Predicting mixing and combustion at realistic conditions remains a challenging objective of energy science and a research priority in Sandia's mission-critical area of energy security; it is also relevant to many flows in defense and climate research. High-performance computing applied to this non-linear multi-scale problem is key to engine calculations with increased scientific reliability.
Cazzaniga, Paolo; Nobile, Marco S.; Besozzi, Daniela; Bellini, Matteo; Mauri, Giancarlo
2014-01-01
The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need to perform large numbers of in silico analyses to study the behavior of biological systems in different conditions, which requires computing power that usually exceeds the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and bi-dimensional parameter sweep analysis, and show that GPU-accelerated parallel simulations of this model can increase the computational performances up to a 181× speedup compared to the corresponding sequential simulations. PMID:25025072
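A minimal CPU-only analogue of such a parameter sweep, assuming SciPy is available: a toy two-variable activation cascade (not the actual BCC model) integrated with the LSODA method for many values of one rate constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, k1, k2):
    """Toy activation cascade: A decays into B, B degrades."""
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

k1_values = np.linspace(0.1, 5.0, 50)      # one-dimensional parameter sweep
peaks = []
for k1 in k1_values:
    sol = solve_ivp(lambda t, y: cascade(t, y, k1, 0.5),
                    (0.0, 20.0), [1.0, 0.0],
                    method="LSODA", dense_output=True)
    t = np.linspace(0.0, 20.0, 400)
    peaks.append(sol.sol(t)[1].max())      # peak of the downstream species

print(f"largest peak over the sweep: {max(peaks):.3f}")
```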
Visualization and Analysis of Climate Simulation Performance Data
NASA Astrophysics Data System (ADS)
Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg
2015-04-01
Visualization is the key process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses etc., have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service, in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and solutions that greatly aided our understanding. The software employed is based on Avizo Green, ParaView and SimVis, as well as our own software extensions.
Simulating Scenes In Outer Space
NASA Technical Reports Server (NTRS)
Callahan, John D.
1989-01-01
Multimission Interactive Picture Planner, MIP, computer program for scientifically accurate and fast, three-dimensional animation of scenes in deep space. Versatile, reasonably comprehensive, and portable, and runs on microcomputers. New techniques were developed to rapidly perform the calculations and transformations necessary to animate scenes in scientifically accurate three-dimensional space. Written in FORTRAN 77 code. Primarily designed to handle Voyager, Galileo, and Space Telescope. Adapted to handle other missions.
NASA Astrophysics Data System (ADS)
Lin, S. J.
2015-12-01
The NOAA Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable-resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and category-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The foundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review), known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexibility and computational efficiency, the FV3 is one of the final candidates for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of the stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible using today's high-performance computing systems. One of our main scientific goals is to enable simulations of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, something previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only super-cell thunderstorms, but also the subsequent genesis of tornadoes, using a global model that was originally designed for century-long climate simulations. As a unified weather-climate modeling system, we evaluated the performance of the model at horizontal resolutions ranging from 1 km to as low as 200 km. In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global variable-resolution system is well simulated while, at the same time, the small scales can be accurately captured within the targeted high-resolution region.
Scientific Design of the New Neutron Radiography Facility (SANRAD) at SAFARI-1 for South Africa
NASA Astrophysics Data System (ADS)
de Beer, F. C.; Gruenauer, F.; Radebe, J. M.; Modise, T.; Schillinger, B.
The final scientific design for an upgraded neutron radiography/tomography facility at beam port no. 2 of the SAFARI-1 nuclear research reactor has been performed with expert advice from Physics Consulting, FRM II in Germany, and IPEN, Brazil. The need to upgrade the facility became apparent after various deficiencies of the current SANRAD facility were identified during an IAEA-sponsored expert mission of international scientists to Necsa, South Africa. Among these deficiencies are a lack of adequate shielding, which results in a high neutron background on the beam port floor; a mismatch of the collimator aperture to the core, which results in a high gradient in neutron flux on the imaging plane; and a relatively low L/D ratio, which yields poor-quality radiographs. The new design, based on results of Monte Carlo (MCNP-X) simulations of neutron and gamma transport from the reactor core and through the new facility, is outlined. The scientific design philosophy, neutron optics, and imaging capabilities, which include the utilization of fission neutrons, thermal neutrons, and gamma-rays emerging from the core of SAFARI-1, are discussed.
A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
Probabilities and Predictions: Modeling the Development of Scientific Problem-Solving Skills
ERIC Educational Resources Information Center
Stevens, Ron; Johnson, David F.; Soller, Amy
2005-01-01
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations, the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative…
Extreme Scale Computing to Secure the Nation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D L; McGraw, J R; Johnson, J R
2009-11-10
Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT), together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding, together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article not only discusses the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapon's design from post-detonation evidence (nuclear counterterrorism).
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, the Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and the UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and the Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Goulet, C.; Silva, F.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2015-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100Hz) ground motions for earthquakes at regional scales. The BBP scientific software modules implement kinematic rupture generation, low and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, seismogram ground motion amplitude calculations, and goodness of fit measurements. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground motion seismograms, using multiple alternative ground motion simulation methods, and software utilities that can generate plots, charts, and maps. The BBP has been developed over the last five years in a collaborative scientific, engineering, and software development project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The SCEC BBP software released in 2015 can be compiled and run on recent Linux systems with GNU compilers. It includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, updated ground motion simulation methods, and a simplified command line user interface.
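As a hedged illustration of what a goodness-of-fit measurement between simulated and observed seismograms can look like, the Python sketch below computes the mean log ratio of Fourier amplitude spectra in a few frequency bands on synthetic traces. The BBP's own goodness-of-fit modules use their own, more elaborate metrics; the traces, bands, and sampling interval here are all assumptions.

```python
import numpy as np

def band_log_bias(obs, sim, dt, bands=((0.1, 0.5), (0.5, 2.0), (2.0, 10.0))):
    """Mean log ratio of Fourier amplitude spectra per frequency band;
    values near zero mean the simulation matches the observation there."""
    freqs = np.fft.rfftfreq(obs.size, dt)
    o = np.abs(np.fft.rfft(obs))
    s = np.abs(np.fft.rfft(sim))
    bias = []
    for lo, hi in bands:
        sel = (freqs >= lo) & (freqs < hi)
        bias.append(np.mean(np.log(s[sel] / o[sel])))
    return bias

# Synthetic "observed" and "simulated" traces sampled at 100 Hz.
rng = np.random.default_rng(4)
dt = 0.01
obs = rng.standard_normal(8192)
sim = 1.2 * obs + 0.1 * rng.standard_normal(8192)
print([f"{b:+.2f}" for b in band_log_bias(obs, sim, dt)])
```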
Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok
Computational scientists must understand results from experimental, observational, and computational-simulation-generated data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing productivity of both scientists and the systems. Our approaches include (1) designing the interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; (2) enhancing MPI-IO runtime systems to incorporate the functionality developed as part of the runtime system design; (3) developing parallel data mining programs as part of the runtime library for the server-side PVFS file system; and (4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.
High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign
NASA Astrophysics Data System (ADS)
Mizielinski, M. S.; Roberts, M. J.; Vidale, P. L.; Schiemann, R.; Demory, M.-E.; Strachan, J.; Edwards, T.; Stephens, A.; Lawrence, B. N.; Pritchard, M.; Chiu, P.; Iwi, A.; Churchill, J.; del Cano Novales, C.; Kettleborough, J.; Roseblade, W.; Selwood, P.; Foster, M.; Glover, M.; Malcolm, A.
2014-08-01
The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985-2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km) as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present climate simulations a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144 million core hour, single year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.
Discovery & Interaction in Astro 101 Laboratory Experiments
NASA Astrophysics Data System (ADS)
Maloney, Frank Patrick; Maurone, Philip; DeWarf, Laurence E.
2016-01-01
The availability of low-cost, high-performance computing hardware and software has transformed the manner by which astronomical concepts can be re-discovered and explored in a laboratory that accompanies an astronomy course for arts students. We report on a strategy, begun in 1992, for allowing each student to understand fundamental scientific principles by interactively confronting astronomical and physical phenomena, through direct observation and by computer simulation. These experiments have evolved as: (a) the quality and speed of the hardware has greatly increased; (b) the corresponding hardware costs have decreased; (c) the students have become computer and Internet literate; and (d) the importance of computationally and scientifically literate arts graduates in the workplace has increased. We present the current suite of laboratory experiments, and describe the nature, procedures, and goals in this two-semester laboratory for liberal arts majors at the Astro 101 university level.
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
High-stability Shuttle pointing system
NASA Technical Reports Server (NTRS)
Van Riper, R.
1981-01-01
It was recognized that precision pointing provided by the Orbiter's attitude control system would not be good enough for Shuttle payload scientific experiments or certain Defense department payloads. The Annular Suspension Pointing System (ASPS) is being developed to satisfy these more exacting pointing requirements. The ASPS is a modular pointing system which consists of two principal parts, including an ASPS Gimbal System (AGS) which provides three conventional ball-bearing gimbals and an ASPS Vernier System (AVS) which magnetically isolates the payload. AGS performance requirements are discussed and an AGS system description is given. The overall AGS system consists of the mechanical hardware, sensors, electronics, and software. Attention is also given to system simulation and performance prediction, and support facilities.
Effects of Background Pressure on Relativistic Laser-Plasma Interaction Ion Acceleration
NASA Astrophysics Data System (ADS)
Peterson, Andrew; Orban, C.; Feister, S.; Ngirmang, G.; Smith, J. T.; Klim, A.; Frische, K.; Morrison, J.; Chowdhury, E.; Roquemore, W. M.
2016-10-01
Typically, ultra-intense laser-accelerated ion experiments are carried out under high-vacuum conditions and with a repetition rate of up to several shots per day. Looking to the future, there is a need to perform these experiments at a much higher repetition rate. A continuously flowing liquid target is more suitable than a solid target for this purpose. However, liquids vaporize below their vapor pressure, so the experiment cannot be performed under high-vacuum conditions. The effect of this non-negligible chamber pressure on the acceleration of charged particles is not yet well understood. We investigate this phenomenon using Particle-in-Cell simulations, exploring the effect of the background pressure on the accelerated ion spectrum. Experiments in this regime are being performed at the Air Force Research Laboratory at Wright-Patterson Air Force Base. This research was sponsored by the Quantum and Non-Equilibrium Processes Division of the Air Force Office of Scientific Research, under the management of Dr. Enrique Parra, Program Manager, with significant support from the DOD HPCMP Internship Program.
Figure of Merit for Asteroid Regolith Simulants
NASA Astrophysics Data System (ADS)
Metzger, P.; Britt, D.; Covey, S.; Lewis, J. S.
2017-09-01
High fidelity asteroid simulant has been developed, closely matching the mineral and elemental abundances of reference meteorites representing the target asteroid classes. The first simulant is a CI class based upon the Orgueil meteorite, and several other simulants are being developed. They will enable asteroid mining and water extraction tests, helping mature the technologies for space resource utilization for both commercial and scientific/exploration activities in space.
Performance model-directed data sieving for high-performance I/O
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yong; Lu, Yin; Amritkar, Prathamesh
2014-09-10
Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique for improving the performance of noncontiguous I/O accesses by combining small, noncontiguous requests into a large, contiguous request. It has proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving in short. It improves the existing data sieving approach in two respects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing memory pressure in a large-scale system, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
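To make the basic data-sieving idea concrete, the sketch below serves several small, noncontiguous read requests with one contiguous read and then extracts the requested pieces. It illustrates only the general technique, not the authors' performance model; the helper function and the toy in-memory "file" are assumptions introduced for illustration.

```python
# Minimal sketch of data sieving: small, noncontiguous requests are served by
# one large contiguous read that spans the holes, from which the requested
# pieces are extracted. The toy buffer below stands in for a real file.
import io

def sieved_read(f, requests):
    """requests: list of (offset, length) tuples, sorted by offset."""
    start = requests[0][0]
    end = max(off + length for off, length in requests)
    f.seek(start)
    buf = f.read(end - start)              # one contiguous read covering all holes
    return [buf[off - start: off - start + length] for off, length in requests]

data = io.BytesIO(bytes(range(256)) * 16)  # 4 KiB toy "file"
print(sieved_read(data, [(0, 4), (64, 4), (1000, 8)]))
```

The trade-off the abstract mentions is visible here: the single read fetches everything between the first and last requested byte, touching more data than demanded in exchange for far fewer I/O operations.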
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project
NASA Astrophysics Data System (ADS)
Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo
2017-04-01
The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al, 2014), which was released as a Free Software Open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as a starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al, 2016, WRR), provides the baseline code and a number of reference results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity carried out on the package. To keep track of every single change, the package is published in its own GitHub repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism by means of Travis-CI has been enabled on the GitHub repository for the master and main development branches. The use of the CMake configuration tool and of the test suite (easily manageable by means of ctest tools) greatly reduces the burden of installation and allows us to enhance portability across different compilers and operating system platforms. The package is also complemented by several software tools which provide web-based visualization of results based on R plugins, in particular the "shiny" (Chang et al, 2016), "geotopbricks" and "geotopOptim2" (Cordano et al, 2016) packages, which allow rapid and efficient scientific validation of new examples and tests. The software re-engineering activities are still under development. However, our first results are promising enough to eventually reach a robust and stable software project that manages in a flexible way a complex state-of-the-art hydrological model like GEOtop and integrates it into wider workflows.
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; ...
2016-01-01
We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions. It uses adaptive, fast harmonic analysis methods with guaranteed precision, based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
ERIC Educational Resources Information Center
Donnelly, Dermot; O'Reilly, John; McGarr, Oliver
2013-01-01
Practical work is often noted as a core reason many students take on science in secondary schools (high schools). However, there are inherent difficulties associated with classroom practical work that militate against scientific inquiry, an approach espoused by many science educators. The use of interactive simulations to facilitate student…
Challenges of the Cassini Test Bed Simulating the Saturnian Environment
NASA Technical Reports Server (NTRS)
Hernandez, Juan C.; Badaruddin, Kareem S.
2007-01-01
The Cassini-Huygens mission is a joint NASA and European Space Agency (ESA) mission to collect scientific data of the Saturnian system and is managed by the Jet Propulsion Laboratory (JPL). After having arrived in Saturn orbit and releasing the ESA's Huygens probe for a highly successful descent and landing mission on Saturn's moon Titan, the Cassini orbiter continues on its tour of Saturn, its satellites, and the Saturnian environment. JPL's Cassini Integrated Test laboratory (ITL) is a dedicated high fidelity test bed that verifies and validates command sequences and flight software before upload to the Cassini spacecraft. The ITL provides artificial stimuli that allow a highly accurate hardware-in-the-loop test bed model that tests the operation of the Cassini spacecraft on the ground. This enables accurate prediction and recreation of mission events and flight software and hardware behavior. As we discovered more about the Saturnian environment, a combination of creative test methods and simulation changes were necessary to simulate the harmful effect that the optical and physical environment has on the pointing performance of Cassini. This paper presents the challenges experienced and overcome in that endeavor to simulate and test the post Saturn Orbit Insertion (SOI) and Probe Relay tour phase of the Cassini mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geveci, Berk; Maynard, Robert
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
A Five-Tier System for Improving the Categorization of Transplant Program Performance.
Wey, Andrew; Salkowski, Nicholas; Kasiske, Bertram L; Israni, Ajay K; Snyder, Jon J
2018-06-01
To better inform health care consumers by better identifying differences in transplant program performance. Adult kidney transplants performed in the United States, January 1, 2012-June 30, 2014. In December 2016, the Scientific Registry of Transplant Recipients instituted a five-tier system for reporting transplant program performance. We compare the differentiation of program performance and the simulated misclassification rate of the five-tier system with the previous three-tier system based on the 95 percent credible interval. Scientific Registry of Transplant Recipients database. The five-tier system improved differentiation and maintained a low misclassification rate of less than 22 percent for programs differing by two tiers. The five-tier system will better inform health care consumers of transplant program performance. © Health Research and Educational Trust.
Use of DES Modeling for Determining Launch Availability for SLS
NASA Technical Reports Server (NTRS)
Watson, Michael; Staton, Eric; Cates, Grant; Finn, Ronald; Altino, Karen M.; Burns, K. Lee
2014-01-01
(1) NASA is developing a new heavy lift launch system for human and scientific exploration beyond Earth orbit, comprising the Space Launch System (SLS), Orion Multi-Purpose Crew Vehicle (MPCV), and Ground Systems Development and Operations (GSDO); (2) the intent is to ensure a high confidence of successfully launching the exploration missions, especially those that require multiple launches, have a narrow Earth departure window, and involve high investment costs; and (3) this presentation discusses the process used by a Cross-Program team to develop the Exploration Systems Development (ESD) Launch Availability (LA) Technical Performance Measure (TPM) and allocate it to each of the Programs through the use of Discrete Event Simulations (DES).
A systematic review of phacoemulsification cataract surgery in virtual reality simulators.
Lam, Chee Kiang; Sundaraj, Kenneth; Sulaiman, Mohd Nazri
2013-01-01
The aim of this study was to review the capability of virtual reality simulators in the application of phacoemulsification cataract surgery training. Our review included the scientific publications on cataract surgery simulators that had been developed by different groups of researchers along with commercialized surgical training products, such as EYESI® and PhacoVision®. The review covers the simulation of the main cataract surgery procedures, i.e., corneal incision, capsulorrhexis, phacosculpting, and intraocular lens implantation in various virtual reality surgery simulators. Haptics realism and visual realism of the procedures are the main elements in imitating the actual surgical environment. The involvement of ophthalmology in research on virtual reality since the early 1990s has made a great impact on the development of surgical simulators. Most of the latest cataract surgery training systems are able to offer high fidelity in visual feedback and haptics feedback, but visual realism, such as the rotational movements of an eyeball with response to the force applied by surgical instruments, is still lacking in some of them. The assessment of the surgical tasks carried out on the simulators showed a significant difference in the performance before and after the training.
Alam, Fahad; LeBlanc, Vicki R; Baxter, Alan; Tarshis, Jordan; Piquette, Dominique; Gu, Yuqi; Filipkowska, Caroline; Krywenky, Ashley; Kester-Greene, Nicole; Cardinal, Pierre; Au, Shelly; Lam, Sandy; Boet, Sylvain; Clinical Trials Group, Perioperative Anesthesia
2018-04-21
The proportion of older acute care physicians (ACPs) has been steadily increasing. Ageing is associated with physiological changes, and prospective research investigating how such age-related physiological changes affect clinical performance, including crisis resource management (CRM) skills, is lacking. There is a gap in the literature on whether a physician's age influences baseline CRM performance and also learning from simulation. We aim to investigate whether ageing is associated with baseline CRM skills of ACPs (emergency, critical care and anaesthesia) using simulated crisis scenarios and to assess whether ageing influences learning from simulation-based education. This is a prospective cohort multicentre study recruiting ACPs from the Universities of Toronto and Ottawa, Canada. Each participant will manage an advanced cardiovascular life support crisis-simulated scenario (pretest) and then be debriefed on their CRM skills. They will then manage another simulated crisis scenario (immediate post-test). Three months later, participants will return to manage a third simulated crisis scenario (retention post-test). The relationship between biological age and chronological age will be assessed by measuring the participants' CRM skills and their ability to learn from high-fidelity simulation. This protocol was approved by Sunnybrook Health Sciences Centre Research Ethics Board (REB Number 140-2015) and the Ottawa Health Science Network Research Ethics Board (#20150173-01H). The results will be disseminated in a peer-reviewed journal and at scientific meetings. NCT02683447; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Procedural virtual reality simulation in minimally invasive surgery.
Våpenstad, Cecilie; Buzink, Sonja N
2013-02-01
Simulation of procedural tasks has the potential to bridge the gap between basic skills training outside the operating room (OR) and performance of complex surgical tasks in the OR. This paper provides an overview of procedural virtual reality (VR) simulation currently available on the market and presented in the scientific literature for laparoscopy (LS), flexible gastrointestinal endoscopy (FGE), and endovascular surgery (EVS). An online survey was sent to companies and research groups selling or developing procedural VR simulators, and a systematic search was done for scientific publications presenting or applying VR simulators to train or assess procedural skills in the PUBMED and SCOPUS databases. The results of five simulator companies were included in the survey. In the literature review, 116 articles were analyzed (45 on LS, 43 on FGE, 28 on EVS), presenting a total of 23 simulator systems. The companies stated that they altogether offer 78 procedural tasks (33 for LS, 12 for FGE, 33 for EVS), of which 17 were also found in the literature review. Although study type and outcomes used vary between the three different fields, approximately 90% of the studies presented in the retrieved publications for LS found convincing evidence to confirm the validity or added value of procedural VR simulation. This was the case in approximately 75% for FGE and EVS. Procedural training using VR simulators has been found to improve clinical performance. There is nevertheless a large number of simulated procedural tasks that have not been validated. Future research should focus on the optimal use of procedural simulators in the most effective training setups and further investigate the benefits of procedural VR simulation to improve clinical outcome.
Connectivity: Performance Portable Algorithms for graph connectivity v. 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slota, George; Rajamanickam, Sivasankaran; Madduri, Kamesh
Graphs occur in many real-world settings, from road networks and social networks to scientific simulations. Connectivity is graph analysis software for computing graph connectivity on modern architectures such as multicore CPUs, Xeon Phi, and GPUs.
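As a plain illustration of what a connectivity computation produces, the sketch below labels the connected components of a toy graph with a serial breadth-first search; it shows only the concept, not the released package's parallel implementation for CPUs, Xeon Phi, or GPUs.

```python
# Serial connected-components labelling on a toy undirected graph.
# Each vertex receives the label of the component it belongs to.
from collections import deque

def connected_components(adj):
    """adj: dict mapping vertex -> iterable of neighbours."""
    label, comp = {}, 0
    for s in adj:
        if s in label:
            continue
        comp += 1
        label[s] = comp
        queue = deque([s])
        while queue:                       # breadth-first flood fill
            u = queue.popleft()
            for v in adj[u]:
                if v not in label:
                    label[v] = comp
                    queue.append(v)
    return label

print(connected_components({0: [1], 1: [0], 2: [3], 3: [2], 4: []}))
# {0: 1, 1: 1, 2: 2, 3: 2, 4: 3}
```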
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Suganuma, Masahiro; Niwa, Yoshito; Sako, Nobutada; Hatsutori, Yoichi; Tanaka, Takashi
2006-06-01
We describe the simulation tools of the JASMINE project (the JASMINE simulator). The JASMINE project is at the stage where its basic design will be determined within a few years, so it is very important to simulate the data stream generated by the astrometric fields of JASMINE in order to support investigations into error budgets, sampling strategy, data compression, data analysis, scientific performance, etc. Component simulations are needed, of course, but total simulations which include all components from observation target to satellite system are also very important. We find that new software technologies, such as Object-Oriented (OO) methodologies, are ideal tools for the simulation system of JASMINE (the JASMINE simulator). In this article, we explain the framework of the JASMINE simulator.
Disease management research using event graphs.
Allore, H G; Schruben, L W
2000-08-01
Event Graphs, conditional representations of stochastic relationships between discrete events, simulate disease dynamics. In this paper, we demonstrate how Event Graphs, at an appropriate abstraction level, also extend and organize scientific knowledge about diseases. They can identify promising treatment strategies and directions for further research and provide enough detail for testing combinations of new medicines and interventions. Event Graphs can be enriched to incorporate and validate data and test new theories to reflect an expanding dynamic scientific knowledge base and establish performance criteria for the economic viability of new treatments. To illustrate, an Event Graph is developed for mastitis, a costly dairy cattle disease, for which extensive scientific literature exists. With only a modest amount of imagination, the methodology presented here can be seen to apply modeling to any disease, human, plant, or animal. The Event Graph simulation presented here is currently being used in research and in a new veterinary epidemiology course. Copyright 2000 Academic Press.
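A rough picture of the event-list style of simulation described here is sketched below: an "infection" event conditionally schedules a later "recovery" event, in the spirit of an Event Graph edge. The herd size and the rate parameters are made-up illustrative values, not the mastitis model from the paper.

```python
# Toy event-list simulation: infection events conditionally schedule recovery
# events. Herd size and exponential rates are invented for illustration only.
import heapq, random

random.seed(1)
events = [(random.expovariate(0.5), "infect", cow) for cow in range(10)]
heapq.heapify(events)
infected, horizon = set(), 30.0            # simulate 30 days

while events:
    t, kind, cow = heapq.heappop(events)
    if t > horizon:
        break
    if kind == "infect" and cow not in infected:
        infected.add(cow)
        # the state change schedules the next event, as an Event Graph edge would
        heapq.heappush(events, (t + random.expovariate(0.2), "recover", cow))
    elif kind == "recover":
        infected.discard(cow)
    print(f"t={t:5.1f}  {kind:7s} cow {cow}  prevalence={len(infected)}")
```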
Scientific work environments in the next decade
NASA Technical Reports Server (NTRS)
Gomez, Julian E.
1989-01-01
The applications of contemporary computer graphics to scientific visualization are described, with emphasis on the nonintuitive problems. A radically different approach is proposed which centers on the idea of the scientist being in the simulation display space rather than observing it on a screen. Interaction is performed with nonstandard input devices to preserve the feeling of being immersed in the three-dimensional display space. Construction of such a system could begin now with currently available technology.
NASA Technical Reports Server (NTRS)
Parsons, C. L. (Editor)
1989-01-01
The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.
Big Data Processing for a Central Texas Groundwater Case Study
NASA Astrophysics Data System (ADS)
Cantu, A.; Rivera, O.; Martínez, A.; Lewis, D. H.; Gentle, J. N., Jr.; Fuentes, G.; Pierce, S. A.
2016-12-01
As computational methods improve, scientists are able to expand the level and scale of experimental simulation and testing that is completed for case studies. This study presents a comparative analysis of multiple models for the Barton Springs segment of the Edwards aquifer. Several numerical simulations using state-mandated MODFLOW models run on Stampede, a High Performance Computing system housed at the Texas Advanced Computing Center, were performed for multiple scenario testing. One goal of this multidisciplinary project is to visualize and compare the output data of the groundwater model using the statistical programming language R to find revealing data patterns produced by different pumping scenarios. Presenting data in a friendly post-processing format is covered in this paper. Visualization of the data and creating workflows applicable to the management of the data are tasks performed after data extraction. Resulting analyses provide an example of how supercomputing can be used to accelerate evaluation of scientific uncertainty and geological knowledge in relation to policy and management decisions. Understanding the aquifer behavior helps policy makers avoid negative impact on the endangered species and environmental services and aids in maximizing the aquifer yield.
Serrano, Antonio; Liebner, Jeffrey; Hines, Justin K
2016-01-01
Despite significant efforts to reform undergraduate science education, students often perform worse on assessments of perceptions of science after introductory courses, demonstrating a need for new educational interventions to reverse this trend. To address this need, we created An Inexplicable Disease, an engaging, active-learning case study that is unusual because it aims to simulate scientific inquiry by allowing students to iteratively investigate the Kuru epidemic of 1957 in a choose-your-own-experiment format in large lectures. The case emphasizes the importance of specialization and communication in science and is broadly applicable to courses of any size and sub-discipline of the life sciences.
Non-negative Tensor Factorization for Robust Exploratory Big-Data Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian; Vesselinov, Velimir Valentinov; Djidjev, Hristo Nikolov
Currently, large multidimensional datasets are being accumulated in almost every field. Data are: (1) collected by distributed sensor networks in real-time all over the globe, (2) produced by large-scale experimental measurements or engineering activities, (3) generated by high-performance simulations, and (4) gathered by electronic communications and social-network activities, etc. Simultaneous analysis of these ultra-large heterogeneous multidimensional datasets is often critical for scientific discoveries, decision-making, emergency response, and national and global security. The importance of such analyses mandates the development of the next generation of robust machine learning (ML) methods and tools for big-data exploratory analysis.
Toward a Climate OSSE for NASA Earth Sciences
NASA Astrophysics Data System (ADS)
Leroy, S. S.; Collins, W. D.; Feldman, D.; Field, R. D.; Ming, Y.; Pawson, S.; Sanderson, B.; Schmidt, G. A.
2016-12-01
In the Continuity Study, the National Academy of Sciences advised that future space missions be rated according to five categories: the importance of a well-defined scientific objective, the utility of the observation in addressing the scientific objective, the quality with which the observation can be made, the probability of the mission's success, and the mission's affordability. The importance, probability, and affordability are evaluated subjectively by scientific consensus, by engineering review panels, and by cost models; however, the utility and quality can be evaluated objectively by a climate observation system simulation experiment (COSSE). A discussion of the philosophical underpinnings of a COSSE for NASA Earth Sciences will be presented. A COSSE is built upon a perturbed physics ensemble of a sophisticated climate model that can simulate a mission's prospective observations and its well-defined quantitative scientific objective and that can capture the uncertainty associated with each. A strong correlation between observation and scientific objective after consideration of physical uncertainty leads to a high quality. Persistence of a high correlation after inclusion of the proposed measurement error leads to a high utility. There are five criteria that govern the nature of a particular COSSE: (1) whether the mission's scientific objective is one of hypothesis testing or climate prediction, (2) whether the mission is empirical or inferential, (3) whether the core climate model captures essential physical uncertainties, (4) the level of detail of the simulated observations, and (5) whether complementarity or redundancy of information is to be valued. Computation of the quality and utility is done using Bayesian statistics, as has been done previously for multi-decadal climate prediction conditioned on existing data. We advocate for a new program within NASA Earth Sciences to establish a COSSE capability. Creation of a COSSE program within NASA Earth Sciences will require answers from the climate research community to basic questions, such as whether a COSSE capability should be centralized or de-centralized. Most importantly, the quantified scientific objective of a proposed mission must be defined with extreme specificity for a COSSE to be applied.
Bypassing the Kohn-Sham equations with machine learning.
Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert
2017-10-11
Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.
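The kind of learned map discussed above can be pictured with a toy regression. The sketch below fits a kernel ridge model from a sampled one-dimensional "potential" to a scalar "energy" on synthetic data; the potentials and the energy label are invented stand-ins and have nothing to do with the paper's actual training sets or functional.

```python
# Toy illustration of learning a potential -> energy map with kernel ridge
# regression. The 1-D potentials and the "energy" label below are synthetic
# stand-ins, not the datasets or the machine-learned functional of the paper.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)

def random_potential():
    a, b = rng.uniform(0.5, 2.0), rng.uniform(-0.5, 0.5)
    return a * (x - b) ** 2                 # harmonic-like well

V = np.array([random_potential() for _ in range(200)])   # each row: sampled v(x)
E = V.min(axis=1) + 0.1 * V.mean(axis=1)                  # made-up "energy" label

model = KernelRidge(alpha=1e-3, kernel="rbf", gamma=0.1).fit(V[:150], E[:150])
print("test RMSE:", np.sqrt(np.mean((model.predict(V[150:]) - E[150:]) ** 2)))
```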
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
Preparing for in situ processing on upcoming leading-edge supercomputers
Kress, James; Churchill, Randy Michael; Klasky, Scott; ...
2016-10-01
High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large-scale fusion simulation code to succeed on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.
Using Java for distributed computing in the Gaia satellite data processing
NASA Astrophysics Data System (ADS)
O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose
2011-10-01
In recent years Java has matured to a stable easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999 they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. This has been successfully running since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.
In situ visualization for large-scale combustion simulations.
Yu, Hongfeng; Wang, Chaoli; Grout, Ray W; Chen, Jacqueline H; Ma, Kwan-Liu
2010-01-01
As scientific supercomputing moves toward petascale and exascale levels, in situ visualization stands out as a scalable way for scientists to view the data their simulations generate. This full picture is crucial particularly for capturing and understanding highly intermittent transient phenomena, such as ignition and extinction events in turbulent combustion.
Cost efficient CFD simulations: Proper selection of domain partitioning strategies
NASA Astrophysics Data System (ADS)
Haddadi, Bahram; Jordan, Christian; Harasek, Michael
2017-10-01
Computational Fluid Dynamics (CFD) is one of the most powerful simulation methods, used for temporally and spatially resolved solutions of fluid flow, heat transfer, mass transfer, etc. One of the challenges of Computational Fluid Dynamics is its extreme hardware demand. Nowadays, supercomputers (e.g. High Performance Computing, HPC) featuring multiple CPU cores are applied for solving: the simulation domain is split into partitions, one for each core. Some of the different methods for partitioning are investigated in this paper. As a practical example, a new open source based solver was utilized for simulating packed bed adsorption, a common separation method within the field of thermal process engineering. Adsorption can, for example, be applied for removal of trace gases from a gas stream or for the production of pure gases such as hydrogen. For comparing the performance of the partitioning methods, a 60 million cell mesh for a packed bed of spherical adsorbents was created; one second of the adsorption process was simulated. Different partitioning methods available in OpenFOAM® (Scotch, Simple, and Hierarchical) have been used with different numbers of sub-domains. The effect of the different methods and number of processor cores on the simulation speedup and also on energy consumption was investigated for two different hardware infrastructures (Vienna Scientific Clusters VSC 2 and VSC 3). As a general recommendation, an optimum number of cells per processor core was calculated. Optimized simulation speed, lower energy consumption and consequently the cost effects are reported here.
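To give a rough picture of what a geometric partitioning does, the sketch below slices a list of cell centres into equal slabs along one coordinate axis, one slab per processor core. It only illustrates the concept behind a "simple"-style decomposition; it is not OpenFOAM's decomposePar implementation, and the random toy mesh is an assumption for illustration.

```python
# Toy geometric domain decomposition: cell centres are sorted along x and
# split into equal-sized slabs, one per processor core. This mimics the idea
# of a simple axis-aligned method, not OpenFOAM's actual implementation.
import numpy as np

rng = np.random.default_rng(42)
centres = rng.random((10_000, 3))           # toy mesh: 10k random cell centres
n_subdomains = 8

order = np.argsort(centres[:, 0])           # slice along the x axis
partitions = np.array_split(order, n_subdomains)

for rank, cells in enumerate(partitions):
    print(f"sub-domain {rank}: {len(cells)} cells, "
          f"x in [{centres[cells, 0].min():.2f}, {centres[cells, 0].max():.2f}]")
```

Graph-based partitioners such as Scotch instead balance cell counts while minimising the faces cut between sub-domains, which is why the choice of method affects communication cost and hence speedup.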
NASA Technical Reports Server (NTRS)
Huntington, J. L.; Schwartz, D. E.; Marshall, J. R.
1991-01-01
The Gas-Grain Simulation Facility (GGSF) will provide a microgravity environment where undesirable environmental effects are reduced, and thus, experiments involving interactions between small particles and grains can be more suitably performed. Slated for flight aboard the Shuttle in 1992, the ESA glovebox will serve as a scientific and technological testbed for GGSF exobiology experiments as well as generating some basic scientific data. Initial glovebox experiments will test a method of generating a stable, mono-dispersed cloud of fine particles using a vibrating sprinkler system. In the absence of gravity and atmospheric turbulence, it will be possible to determine the influence of interparticle forces in controlling the rate and mode of aggregation. The experimental chamber can be purged of suspended matter to enable multiple repetitions of the experiments. Of particular interest will be the number of particles per unit volume of the chamber, because it is suspected that aggregation will occur extremely rapidly if the number exceeds a critical value. All aggregation events will be recorded on high-resolution video film. Changes in the experimental procedure as a result of surprise events will be accompanied by real-time interaction with the mission specialist during the Shuttle flight.
Paradigms and strategies for scientific computing on distributed memory concurrent computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.T.; Walker, D.W.
1994-06-01
In this work we examine recent advances in parallel languages and abstractions that have the potential for improving the programmability and maintainability of large-scale, parallel, scientific applications running on high performance architectures and networks. This paper focuses on Fortran M, a set of extensions to Fortran 77 that supports the modular design of message-passing programs. We describe the Fortran M implementation of a particle-in-cell (PIC) plasma simulation application, and discuss issues in the optimization of the code. The use of two other methodologies for parallelizing the PIC application is considered. The first is based on the shared object abstraction as embodied in the Orca language. The second approach is the Split-C language. In Fortran M, Orca, and Split-C, the ability of the programmer to control the granularity of communication is important in designing an efficient implementation.
Dynamic file-access characteristics of a production parallel scientific workload
NASA Technical Reports Server (NTRS)
Kotz, David; Nieuwejaar, Nils
1994-01-01
Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.
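To make the idea of a trace-driven caching simulation concrete, the sketch below replays a made-up sequence of block accesses through a small LRU cache and reports the hit rate; the paper's actual simulator and the traced workload are of course far richer than this.

```python
# Minimal trace-driven cache simulation: replay a sequence of block IDs
# through a fixed-size LRU cache and report the hit rate. The trace is a
# made-up example, not the traced multiprocessor workload from the paper.
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict the least recently used block
            cache[block] = True
    return hits / len(trace)

trace = [0, 1, 2, 0, 1, 3, 0, 4, 1, 2, 0, 1]
print(f"hit rate with 3-block cache: {lru_hit_rate(trace, 3):.2f}")
```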
GOCE gravity field simulation based on actual mission scenario
NASA Astrophysics Data System (ADS)
Pail, R.; Goiginger, H.; Mayrhofer, R.; Höck, E.; Schuh, W.-D.; Brockmann, J. M.; Krasbutter, I.; Fecher, T.; Gruber, T.
2009-04-01
In the framework of the ESA-funded project "GOCE High-level Processing Facility", an operational hardware and software system for the scientific processing (Level 1B to Level 2) of GOCE data has been set up by the European GOCE Gravity Consortium EGG-C. One key component of this software system is the processing of a spherical harmonic Earth gravity field model and the corresponding full variance-covariance matrix from the precise GOCE orbit and calibrated and corrected satellite gravity gradiometry (SGG) data. In the framework of the time-wise approach, a combination of several processing strategies for the optimum exploitation of the information content of the GOCE data has been set up: the Quick-Look Gravity Field Analysis is applied to derive a fast diagnosis of the GOCE system performance and to monitor the quality of the input data, while in the Core Solver processing a rigorous high-precision solution of the very large normal equation systems is derived by applying parallel processing techniques on a PC cluster. Before real GOCE data become available, the expected GOCE gravity field performance is evaluated by means of a realistic numerical case study based on the actual GOCE orbit and mission scenario and on simulation data stemming from the most recent ESA end-to-end simulation. Results from this simulation as well as recently developed features of the software system are presented. Additionally, some aspects of data combination with complementary data sources are addressed.
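For context, the spherical harmonic gravity field model referred to above has the standard form (written here with fully normalised coefficients; the truncation degree N_max is whatever the GOCE solution adopts):

\[
V(r,\theta,\lambda)=\frac{GM}{r}\sum_{n=0}^{N_{\max}}\left(\frac{R}{r}\right)^{n}\sum_{m=0}^{n}\bar{P}_{nm}(\cos\theta)\,\bigl[\bar{C}_{nm}\cos m\lambda+\bar{S}_{nm}\sin m\lambda\bigr],
\]

where GM is the geocentric gravitational constant, R a reference radius, \(\bar{P}_{nm}\) the fully normalised associated Legendre functions, and \(\bar{C}_{nm}\), \(\bar{S}_{nm}\) the coefficients estimated from the GOCE orbit and gradiometry observations; the full variance-covariance matrix mentioned above refers to these estimated coefficients.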
The X-IFU end-to-end simulations performed for the TES array optimization exercise
NASA Astrophysics Data System (ADS)
Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.
2015-09-01
The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes as the baseline an array of ~4000 single size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could however be considered, combining TES of different properties (e.g. size). In attempting to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternative TES array configurations to the baseline have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated for the results to be scientifically assessed and compared. In this contribution, we will describe the simulation set-up for the various array configurations, and highlight some of the results of the test cases simulated.
Java Performance for Scientific Applications on LLNL Computer Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapfer, C; Wissink, A
2002-05-10
Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
NASA Astrophysics Data System (ADS)
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
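A bare-bones picture of the task-parallel (MPI) pattern mentioned above is sketched below: each rank takes a disjoint share of a file list and processes it independently. The file names and the per-file "analysis" step are placeholders, and mpi4py is assumed to be available; this is not the CASCADE tooling itself.

```python
# Bare-bones task-parallel pattern with MPI: rank i processes every size-th
# file from a shared list. File names and the per-file analysis are
# placeholders; the real CASCADE workflow tools are far more elaborate.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

files = [f"simulation_chunk_{i:03d}.nc" for i in range(100)]   # hypothetical inputs

def analyze(path):
    # placeholder for the real per-file statistical analysis
    return len(path)

local_results = [analyze(f) for f in files[rank::size]]   # round-robin task split
all_results = comm.gather(local_results, root=0)

if rank == 0:
    total = sum(len(r) for r in all_results)
    print(f"analyzed {total} files across {size} ranks")
```

Run, for example, with `mpirun -n 8 python analyze.py`; because each file is independent, the pattern scales simply with the number of ranks until I/O becomes the bottleneck.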
Subjective Quality Assessment of Underwater Video for Scientific Applications
Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo
2015-01-01
Underwater video services could be a key application for improving scientific knowledge of the vast oceanic resources of our planet. However, limitations in the capacity of currently available technology for underwater networks (UWSNs) raise the question of the feasibility of these services. When transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, the service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs using objective methods and addresses the topic of quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects used in the quality assessment test were ocean scientists. Video sequences were recorded in actual exploration expeditions and were processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that videos are considered useful for scientific purposes even at very low bitrates. PMID:26694400
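The Mean Opinion Score used in this kind of subjective assessment is simply the average of the subjects' ratings per test condition. A minimal sketch of the arithmetic follows; the ratings and bitrate labels are invented for illustration, not the paper's experimental data.

```python
# Minimal sketch: Mean Opinion Score (MOS) with a rough 95% confidence interval
# for each bitrate condition. Ratings are illustrative values on the usual 1-5 scale.
import numpy as np

ratings = {  # condition -> scores from individual subjects
    "100 kbps": [4, 4, 5, 3, 4, 4],
    "50 kbps":  [3, 4, 3, 3, 4, 3],
    "25 kbps":  [3, 2, 3, 3, 2, 3],
}

for condition, scores in ratings.items():
    scores = np.asarray(scores, dtype=float)
    mos = scores.mean()
    # normal-approximation interval; coarse for samples this small, fine for a sketch
    ci = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    print(f"{condition}: MOS = {mos:.2f} +/- {ci:.2f}")
```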
Development of a virtual reality training curriculum for phacoemulsification surgery.
Spiteri, A V; Aggarwal, R; Kersey, T L; Sira, M; Benjamin, L; Darzi, A W; Bloom, P A
2014-01-01
Training within a proficiency-based virtual reality (VR) curriculum may reduce errors during real surgical procedures. This study used a scientific methodology to develop a VR training curriculum for phacoemulsification surgery (PS). Ten novice-(n) (performed <10 cataract operations), 10 intermediate-(i) (50-200), and 10 experienced-(e) (>500) surgeons were recruited. Construct validity was defined as the ability to differentiate between the three levels of experience, based on the simulator-derived metrics for two abstract modules (four tasks) and three procedural modules (five tasks) on a high-fidelity VR simulator. Proficiency measures were based on the performance of experienced surgeons. Abstract modules demonstrated a 'ceiling effect' with construct validity established between groups (n) and (i) but not between groups (i) and (e)-Forceps 1 (46, 87, and 95; P<0.001). Increasing difficulty of task showed significantly reduced performance in (n) but minimal difference for (i) and (e)-Anti-tremor 4 (0, 51, and 59; P<0.001), Forceps 4 (11, 73, and 94; P<0.001). Procedural modules were found to be construct valid between groups (n) and (i) and between groups (i) and (e)-Lens-cracking (0, 22, and 51; P<0.05) and Phaco-quadrants (16, 53, and 87; P<0.05). This was also the case with Capsulorhexis (0, 19, and 63; P<0.05) with the performance decreasing in the (n) and (i) group but improving in the (e) group (0, 55, and 73; P<0.05) and (0, 48, and 76; P<0.05) as task difficulty increased. Experienced/intermediate benchmark skill levels are defined allowing the development of a proficiency-based VR training curriculum for PS for novices using a structured scientific methodology.
NASA Astrophysics Data System (ADS)
Buccheri, Grazia; Abt Gürber, Nadja; Brühwiler, Christian
2011-01-01
Many countries belonging to the Organisation for Economic Co-operation and Development (OECD) note a shortage of highly qualified scientific-technical personnel, whereas demand for such employees is growing. Therefore, how to motivate (female) high performers in science or mathematics to pursue scientific careers is of special interest. The sample for this study is taken from the Programme for International Student Assessment (PISA) 2006. It comprises 7,819 high performers either in sciences or mathematics from representative countries of four different education systems which generally performed well or around the OECD average in PISA 2006: Switzerland, Finland, Australia, and Korea. The results give evidence that gender specificity and gender inequity in science education are a cross-national problem. Interests in specific science disciplines only partly support vocational choices in scientific-technical fields. Instead, gender and gender stereotypes play a significant role. Enhancing the utility of a scientific vocational choice is expected to soften the gender impact.
ERIC Educational Resources Information Center
Kraemer, Sara; Thorn, Christopher A.
2010-01-01
The purpose of this exploratory study was to identify and describe some of the dimensions of scientific collaborations using high throughput computing (HTC) through the lens of a virtual team performance framework. A secondary purpose was to assess the viability of using a virtual team performance framework to study scientific collaborations using…
The WASCAL high-resolution climate projection ensemble for West Africa
NASA Astrophysics Data System (ADS)
Kunstmann, Harald; Heinzeller, Dominikus; Dieng, Diarra; Smiatek, Gerhard; Bliefernicht, Jan; Hamann, Ilse; Salack, Seyni
2017-04-01
With climate change being one of the most severe challenges to rural Africa in the 21st century, West Africa is facing an urgent need to develop effective adaptation and mitigation measures to protect its constantly growing population. We perform ensemble-based regional climate simulations at a high resolution of 12 km for West Africa to allow a scientifically sound derivation of climate change adaptation measures. Based on the RCP4.5 scenario, our ensemble consists of three simulation experiments with the Weather Research and Forecasting (WRF) model and one additional experiment with the Consortium for Small-scale Modelling model in Climate Mode (COSMO-CLM). We discuss the model performance over the validation period 1980-2010, including a novel, station-based precipitation database for West Africa obtained within the WASCAL (West African Science Service Centre for Climate Change and Adapted Land Use) program. Particular attention is paid to the representation of the dynamics of the West African Summer Monsoon and to the added value of our high-resolution models over existing data sets. We further present results on the climate change signal obtained for the two future periods 2020-2050 and 2070-2100 and compare them to current state-of-the-art projections from the CORDEX-Africa project. While the temperature change signal is similar to that obtained within CORDEX-Africa, our simulations predict a wetter future for the Coast of Guinea and the southern Soudano area and a slight drying in the northernmost part of the Sahel.
Cosmological N-body Simulation
NASA Astrophysics Data System (ADS)
Lake, George
1994-05-01
The "N" in N-body calculations has doubled every year for the last two decades. To continue this trend, the UW N-body group is working on algorithms for the fast evaluation of gravitational forces on parallel computers and establishing rigorous standards for the computations. In these algorithms, the computational cost per time step is ~10^3 pairwise forces per particle. A new adaptive time integrator enables us to perform high quality integrations that are fully temporally and spatially adaptive. SPH--smoothed particle hydrodynamics--will be added to simulate the effects of dissipating gas and magnetic fields. The importance of these calculations is two-fold. First, they determine the nonlinear consequences of theories for the structure of the Universe. Second, they are essential for the interpretation of observations. Every galaxy has six coordinates of velocity and position. Observations determine two sky coordinates and a line of sight velocity that bundles universal expansion (distance) together with a random velocity created by the mass distribution. Simulations are needed to determine the underlying structure and masses. The importance of simulations has moved from ex post facto explanation to an integral part of planning large observational programs. I will show why high quality simulations with "large N" are essential to accomplish our scientific goals. This year, our simulations have N >~ 10^7. This is sufficient to tackle some niche problems, but well short of our 5 year goal--simulating The Sloan Digital Sky Survey using a few Billion particles (a Teraflop-year simulation). Extrapolating past trends, we would have to "wait" 7 years for this hundred-fold improvement. Like past gains, significant changes in the computational methods are required for these advances. I will describe new algorithms, algorithmic hacks and a dedicated computer to perform Billion particle simulations. Finally, I will describe research that can be enabled by Petaflop computers. This research is supported by the NASA HPCC/ESS program.
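The quoted ~10^3 pairwise forces per particle refer to tree-accelerated force evaluation; the brute-force O(N^2) kernel such codes avoid, plus one leapfrog step, can be sketched as follows. Units, softening, and particle counts are arbitrary illustration choices, not the UW group's code.

```python
# Direct-summation gravitational accelerations and one kick-drift-kick leapfrog
# step. Illustrative only: production N-body codes replace the O(N^2) loop with
# tree or grid methods so the per-particle cost stays near ~10^3 interactions.
import numpy as np

def accelerations(pos, mass, softening=1e-2, G=1.0):
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                          # vectors to all other particles
        r2 = (d * d).sum(axis=1) + softening**2
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                           # no self-force
        acc[i] = G * (mass[:, None] * d * inv_r3[:, None]).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
n = 256
pos = rng.normal(size=(n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

dt = 1e-3
acc = accelerations(pos, mass)
vel += 0.5 * dt * acc                             # kick
pos += dt * vel                                   # drift
vel += 0.5 * dt * accelerations(pos, mass)        # kick
print("kinetic energy:", 0.5 * (mass[:, None] * vel**2).sum())
```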
Nanosheet Supported Single-Metal Atom Bifunctional Catalyst for Overall Water Splitting.
Ling, Chongyi; Shi, Li; Ouyang, Yixin; Zeng, Xiao Cheng; Wang, Jinlan
2017-08-09
Nanosheet supported single-atom catalysts (SACs) can make full use of metal atoms and yet entail high selectivity and activity, and bifunctional catalysts can enable higher performance while lowering the cost compared to two separate unifunctional catalysts. Supported single-atom bifunctional catalysts are therefore of great economic interest and scientific importance. Here, on the basis of first-principles computations, we report a design of the first single-atom bifunctional electrocatalyst, namely, an isolated nickel atom supported on a β12 boron monolayer (Ni1/β12-BM), to achieve overall water splitting. This nanosheet supported SAC exhibits remarkable electrocatalytic performance, with the computed overpotential for the oxygen/hydrogen evolution reaction being just 0.40/0.06 V. Ab initio molecular dynamics simulations show that the SAC can survive elevated temperatures up to 800 K, while a high energy barrier of 1.68 eV prevents isolated Ni atoms from clustering. A viable experimental route for the synthesis of the Ni1/β12-BM SAC is demonstrated from computer simulation. The desired nanosheet supported single-atom bifunctional catalysts not only show great potential for achieving overall water splitting but also offer cost-effective opportunities for advancing clean energy technology.
The Caltech Concurrent Computation Program - Project description
NASA Technical Reports Server (NTRS)
Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.
1985-01-01
The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work in which novel concurrent hardware, the necessary systems software to use it, and twenty significant scientific implementations running on the initial 32-, 64-, and 128-node hypercube machines were constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms, including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024 nodes, over a gigabyte of memory, and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high energy physics and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.
The computational challenges of Earth-system science.
O'Neill, Alan; Steenman-Clark, Lois
2002-06-15
The Earth system--comprising atmosphere, ocean, land, cryosphere and biosphere--is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.
PetIGA: A framework for high-performance isogeometric analysis
Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...
2016-05-25
We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.
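PetIGA's own assembly runs through PETSc with B-spline/NURBS bases; the underlying pattern, looping over elements and accumulating a Galerkin weak form into a sparse system, can be illustrated with a deliberately simple 1D Poisson problem and linear elements. This is not PetIGA's API, just the assembly idea it builds on.

```python
# Illustrative Galerkin assembly for -u'' = f on [0,1], u(0)=u(1)=0, with
# piecewise-linear elements. PetIGA performs the analogous element loop with
# NURBS bases and PETSc matrices; this sketch only shows the pattern.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n_el = 32
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = np.diff(nodes)

A = lil_matrix((n_el + 1, n_el + 1))
b = np.zeros(n_el + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)        # manufactured right-hand side

for e in range(n_el):                                         # element loop
    ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # local stiffness of -u''
    xm = 0.5 * (nodes[e] + nodes[e + 1])
    fe = f(xm) * h[e] / 2.0 * np.array([1.0, 1.0])            # midpoint-rule load vector
    dofs = (e, e + 1)
    for i_loc in range(2):
        b[dofs[i_loc]] += fe[i_loc]
        for j_loc in range(2):
            A[dofs[i_loc], dofs[j_loc]] += ke[i_loc, j_loc]

interior = np.arange(1, n_el)                     # apply Dirichlet BCs, then solve
u = np.zeros(n_el + 1)
u[interior] = spsolve(A.tocsr()[interior][:, interior], b[interior])
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * nodes)).max())
```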
A Look at the Impact of High-End Computing Technologies on NASA Missions
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart
2012-01-01
From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to design safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.
Tackling some of the most intricate geophysical challenges via high-performance computing
NASA Astrophysics Data System (ADS)
Khosronejad, A.
2016-12-01
Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in the coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present the simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos, (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructures (A. Khosronejad, et al., (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).
NASA Technical Reports Server (NTRS)
Miller, Matthew J.; Lim, Darlene S. S.; Brady, Allyson; Cardman, Zena; Bell, Ernest; Garry, Brent; Reid, Donnie; Chappell, Steve; Abercromby, Andrew F. J.
2016-01-01
The Pavilion Lake Research Project (PLRP) is a unique platform where the combination of scientific research and human space exploration concepts can be tested in an underwater spaceflight analog environment. The 2015 PLRP field season was performed at Pavilion Lake, Canada, where science-driven exploration techniques focusing on microbialite characterization and acquisition were evaluated within the context of crew and robotic extravehicular activity (EVA) operations. The primary objectives of this analog study were to detail the capabilities, decision-making process, and operational concepts required to meet non-simulated scientific objectives during 5-minute one-way communication latency utilizing crew and robotic assets. Furthermore, this field study served as an opportunity to build upon previous tests at PLRP, NASA Desert Research and Technology Studies (DRATS), and NASA Extreme Environment Mission Operations (NEEMO) to characterize the functional roles and responsibilities of the personnel involved in the distributed flight control team and to identify operational constraints imposed by science-driven EVA operations. The relationship and interaction between ground and flight crew was found to be dependent on the specific scientific activities being addressed. Furthermore, the addition of a second intravehicular operator was found to be highly enabling when conducting science-driven EVAs. Future human spaceflight activities will need to cope with the added complexity of dynamic and rapid execution of scientific priorities both during and between EVAs to ensure scientific objectives are achieved.
NASA Astrophysics Data System (ADS)
Simon, Nicole A.
Virtual laboratory experiments using interactive computer simulations are not yet employed at sufficient rates within higher education as viable alternatives to the laboratory science curriculum. Rote traditional lab experiments are currently the norm and are not addressing inquiry, Critical Thinking, and cognition throughout the laboratory experience, nor linking with educational technologies (Pyatt & Sims, 2007; 2011; Trundle & Bell, 2010). A causal-comparative quantitative study was conducted with 150 learners enrolled at a two-year community college to determine the effects of simulation laboratory experiments on Higher-Order Learning, Critical Thinking Skills, and Cognitive Load. The treatment population used simulated experiments, while the non-treatment sections performed traditional expository experiments. A comparison was made using the Revised Two-Factor Study Process survey, the Motivated Strategies for Learning Questionnaire, and the Scientific Attitude Inventory survey, using a Repeated Measures ANOVA test for treatment or non-treatment. A main effect of simulated laboratory experiments was found for both Higher-Order Learning [F(1, 148) = 30.32, p = 0.00, η² = 0.12] and Critical Thinking Skills [F(1, 148) = 14.64, p = 0.00, η² = 0.17], such that simulations showed greater increases than traditional experiments. Post-lab treatment group self-reports indicated increased marginal means (+4.86) in Higher-Order Learning and Critical Thinking Skills, compared to the non-treatment group (+4.71). Simulations also improved the scientific skills and mastery of basic scientific subject matter. It is recommended that additional research recognize that learners' Critical Thinking Skills change due to different instructional methodologies that occur throughout a semester.
Rodríguez-Navarro, Alonso
2011-01-01
Background: Conventional scientometric predictors of research performance such as the number of papers, citations, and papers in the top 1% of highly cited papers cannot be validated in terms of the number of Nobel Prize achievements across countries and institutions. The purpose of this paper is to find a bibliometric indicator that correlates with the number of Nobel Prize achievements. Methodology/Principal Findings: This study assumes that the high-citation tail of citation distribution holds most of the information about high scientific performance. Here I propose the x-index, which is calculated from the number of national articles in the top 1% and 0.1% of highly cited papers and has a subtractive term to discount highly cited papers that are not scientific breakthroughs. The x-index, the number of Nobel Prize achievements, and the number of national articles in Nature or Science are highly correlated. The high correlations among these independent parameters demonstrate that they are good measures of high scientific performance because scientific excellence is their only common characteristic. However, the x-index has superior features as compared to the other two parameters. Nobel Prize achievements are low frequency events and their number is an imprecise indicator, which in addition is zero in most institutions; the evaluation of research making use of the number of publications in prestigious journals is not advised. Conclusion: The x-index is a simple and precise indicator for high research performance. PMID:21647383
Web-based system for surgical planning and simulation
NASA Astrophysics Data System (ADS)
Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.
1998-10-01
The growing scientific knowledge and rapid progress in medical imaging techniques have led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach makes use of a client-server architecture based on new internet technology, where clients use an ordinary web browser to view, send, receive and manipulate patients' medical records, while the server uses the supercomputer facility to generate online semi-automatic segmentation, 3D visualization, surgical simulation/planning, and navigation of neuroendoscopic procedures. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. This system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.
NASA Astrophysics Data System (ADS)
Cheng, T.; Xu, Z.; Hong, S.
2017-12-01
Flood disasters have frequently struck the urban area of Jinan City during past years, and the city faces severe road flooding that greatly threatens pedestrians' safety. Therefore, it is of great significance to investigate the risk to pedestrians during floods under the specific topographic conditions. In this study, a model coupling hydrological and hydrodynamic processes is developed for the study area to simulate the flood routing process on the roads for the "7.18" rainstorm, and it is validated with post-disaster damage survey information. The risk to pedestrians is estimated with a flood risk assessment model. The results show that the coupled model performs well for the rainstorm flood process. On the basis of the simulation results, areas with extreme risk, medium risk, and mild risk are identified. Regions with high risk are generally located near the mountain-front area with steep slopes. This study will provide scientific support for flood control and disaster reduction in Jinan City.
Piltdown Man: Combining the Instruction of Scientific Ethics and Qualitative Analysis
NASA Astrophysics Data System (ADS)
Vincent, John B.
1999-11-01
In combination with lectures on scientific method and the problems of scientific misconduct in a freshman chemistry course at The University of Alabama, a laboratory experiment was developed to allow students to feel some of the sense of scientific discovery associated with the exposure of the Piltdown Man fraud. This is accomplished by modifying a commonly performed freshman chemistry laboratory experiment, qualitative analysis of group III metal ions. Pieces of chalk are treated with chromium, manganese, and iron to simulate the treatment used to forge the Piltdown "fossils"; students can use techniques in qualitative analysis schemes for the group III ions to determine whether the samples are "forgeries" and if so which metal ion(s) were used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugam, Kamesh
Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though it should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation and to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution which improve the performance of the application at a future time step based on the observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver a good aggregate performance. We used these optimization techniques and the anticipation strategy to design a cache-aware, memory efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our approach in using the anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
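The core anticipation idea, observing the irregular access pattern at one time step and using it to lay data out for the next, can be sketched without any of the machine-learning machinery. The particle-binning "simulation" below is purely hypothetical and stands in for the beam-dynamics workload.

```python
# Toy illustration of the anticipation strategy: record which cells each
# particle touched at step t, assume the same structure at step t+1, and
# reorder the particle array so that particles processed together are
# contiguous in memory. The drift dynamics are a stand-in, not beam dynamics.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_cells = 10_000, 64
pos = rng.random(n_particles)

for step in range(5):
    cells = np.minimum((pos * n_cells).astype(int), n_cells - 1)

    # "Forecast" for the next step: assume the observed occupancy persists, so
    # sort particles by their current cell before the next irregular sweep.
    order = np.argsort(cells, kind="stable")
    pos, cells = pos[order], cells[order]

    # Irregular, data-dependent work: the cell-wise reduction now touches memory
    # in long contiguous runs instead of scattered gathers.
    density = np.bincount(cells, minlength=n_cells)

    # Stand-in dynamics: small random drift moves particles between cells.
    pos = np.clip(pos + rng.normal(scale=0.01, size=n_particles), 0.0, 1.0)

print("occupancy of first 8 cells at the last step:", density[:8])
```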
NASA Astrophysics Data System (ADS)
Turinsky, Paul J.; Martin, William R.
2017-04-01
In this special issue of the Journal of Computational Physics, the research and development completed at the time of manuscript submission by the Consortium for Advanced Simulation of Light Water Reactors (CASL) is presented. CASL is the first of several Energy Innovation Hubs that have been created by the Department of Energy. The Hubs are modeled after the strong scientific management characteristics of the Manhattan Project and AT&T Bell Laboratories, and function as integrated research centers that combine basic and applied research with engineering to accelerate scientific discovery that addresses critical energy issues. Lifetime of a Hub is expected to be five or ten years depending upon performance, with CASL being granted a ten year lifetime.
NASA Astrophysics Data System (ADS)
Khodachenko, Maxim; Miller, Steven; Stoeckler, Robert; Topf, Florian
2010-05-01
Computational modeling and observational data analysis are two major aspects of modern scientific research. Both are nowadays under extensive development and application. Many of the scientific goals of planetary space missions require robust models of planetary objects and environments as well as efficient data analysis algorithms, to predict conditions for mission planning and to interpret the experimental data. Europe has great strength in these areas, but it is insufficiently coordinated; individual groups, models, techniques and algorithms need to be coupled and integrated. The existing level of scientific cooperation and the technical capabilities for operative communication allow considerable progress in the development of a distributed international Research Infrastructure (RI) which is based on the computational modelling and data analysis centers existing in Europe, providing the scientific community with dedicated services in the fields of their computational and data analysis expertise. These services will appear as a product of the collaborative communication and joint research efforts of the numerical and data analysis experts together with planetary scientists. The major goal of the EUROPLANET-RI / EMDAF is to make computational models and data analysis algorithms associated with particular national RIs and teams, as well as their outputs, more readily available to their potential user community and more tailored to scientific user requirements, without compromising front-line specialized research on model and data analysis algorithm development and software implementation. This objective will be met through four key subdivisions/tasks of EMDAF: 1) an Interactive Catalogue of Planetary Models; 2) a Distributed Planetary Modelling Laboratory; 3) a Distributed Data Analysis Laboratory; and 4) enabling Models and Routines for High Performance Computing Grids. Using the advantages of coordinated operation and efficient communication between the involved computational modelling, research and data analysis expert teams and their related research infrastructures, EMDAF will provide a 1) flexible, 2) scientific-user-oriented, and 3) continuously developing and rapidly upgrading computational and data analysis service to support and intensify European planetary scientific research. At the beginning, EMDAF will create a set of demonstrators and operational tests of this service in key areas of European planetary science. This work will aim at the following objectives: (a) Development and implementation of tools for distant interactive communication between planetary scientists and computing experts (including related RIs); (b) Development of standard routine packages and user-friendly interfaces for operation of the existing numerical codes and data analysis algorithms by specialized planetary scientists; (c) Development of a prototype of numerical modelling services "on demand" for space missions and planetary researchers; (d) Development of a prototype of data analysis services "on demand" for space missions and planetary researchers; (e) Development of a prototype of coordinated interconnected simulations of planetary phenomena and objects (global multi-model simulators); (f) Providing demonstrators of the coordinated use of high performance computing facilities (supercomputer networks), in cooperation with the European HPC Grid DEISA.
Evaluation of a portable evidential breath alcohol analyzer.
Razatos, Gerasimos; Luthi, Ruth; Kerrigan, Sarah
2005-10-04
The Scientific Laboratory Division (SLD) of the Department of Health acts by mandate as the regulatory agency for the Implied Consent Program for the State of New Mexico. The Laboratory is responsible for all blood and breath alcohol testing activities for law enforcement statewide. The geographical size and the nature of the state, characterized by a highly rural population, demand portable breath alcohol testing equipment. Moreover, future expansion and success of the breath-testing program have focused on instrument portability and data management as critical issues amongst law enforcement agencies and the courts. Thus, the Implied Consent Section of the SLD evaluated the performance of the Intoxilyzer 8000, a portable instrument, against the Intoxilyzer 5000, a stationary instrument, which is currently approved for use. Instrument performance was evaluated at various ethanol concentrations, ranging from 0.04 to 0.55 g/100 mL in blood or g/210 L in breath. Special attention was placed on instrument performance at the per se and aggravated DWI levels of 0.08 g/100 mL and 0.16 g/100 mL, respectively, due to their legal significance. Precision and accuracy were evaluated using in-house ethanol controls in a wet bath simulator. Coefficients of variation using the Intoxilyzer 8000 ranged from 0.30 to 1.3% (n=102), while CVs for the Intoxilyzer 5000 were 0.7-2.1% (n=102). Calibration stability was assessed in addition to the distribution of data at concentrations between 0.04 and 0.55 g/210 L. Accuracy was 100-102% for the Intoxilyzer 5000 and 99-101% using the Intoxilyzer 8000. Linear regression analysis of more than 700 comparative measurements revealed an R^2 of 1.000 (y=1.005x-0.001), where the Intoxilyzer 5000 and the Intoxilyzer 8000 were plotted on the x- and y-axes, respectively. Instrument response to mouth alcohol and volatile interferences was also investigated. Potential interferences were evaluated alone or in combination with ethanol using a wet bath simulator at 34.0 degrees C. The effects of extreme temperature and altitude were also examined using wet bath simulators and dry gas calibrant. Accuracy and precision were evaluated at high and low temperatures. High altitude performance was evaluated at 3534 m above sea level at a local ski resort. In addition to the scientific study, field evaluations were also conducted by law enforcement personnel. Based upon the results of the study, the Intoxilyzer 8000 was approved as an evidential breath alcohol analyzer in the State of New Mexico.
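The precision and comparability figures quoted above (coefficients of variation for repeated controls, and a regression of one instrument against the other) come from standard calculations; the sketch below shows the arithmetic with made-up paired readings, not the study's data.

```python
# Sketch of the reported statistics: coefficient of variation for repeated
# control measurements and a linear regression of paired instrument readings
# (Intoxilyzer 5000 on x, 8000 on y, as in the paper). All values are invented.
import numpy as np
from scipy import stats

controls_8000 = np.array([0.0801, 0.0799, 0.0803, 0.0800, 0.0798])  # g/210 L
cv = 100.0 * controls_8000.std(ddof=1) / controls_8000.mean()
print(f"CV at the 0.08 level: {cv:.2f}%")

paired_5000 = np.array([0.041, 0.080, 0.160, 0.250, 0.400, 0.550])
paired_8000 = np.array([0.040, 0.079, 0.161, 0.251, 0.402, 0.552])
fit = stats.linregress(paired_5000, paired_8000)
print(f"y = {fit.slope:.3f}x {fit.intercept:+.3f}, R^2 = {fit.rvalue**2:.4f}")
```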
A virtual data language and system for scientific workflow management in data grid environments
NASA Astrophysics Data System (ADS)
Zhao, Yong
With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high performance, distributed data analysis infrastructure for data- intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy, cognitive neuroscience.
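The separation described here, logically typed datasets and transformations kept distinct from physical file layout, can be hinted at with a tiny derivation graph. The dataset types, procedure names, and tracing routine below are invented for illustration and are not the virtual data language's actual syntax.

```python
# Minimal sketch of a typed workflow graph in the spirit of a virtual data
# language: nodes are typed datasets, edges are named transformations, and the
# derivation history of any product can be traced back through the graph.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dataset:
    name: str
    dtype: str                       # logical type, independent of file format
    produced_by: "Transformation | None" = None

@dataclass(frozen=True)
class Transformation:
    procedure: str
    inputs: tuple                    # tuple of Dataset

def derive(procedure, inputs, name, dtype):
    return Dataset(name, dtype, Transformation(procedure, tuple(inputs)))

def provenance(ds, depth=0):
    print("  " * depth + f"{ds.name}: {ds.dtype}")
    if ds.produced_by:
        print("  " * depth + f"  <- {ds.produced_by.procedure}")
        for parent in ds.produced_by.inputs:
            provenance(parent, depth + 2)

raw = Dataset("sky_survey_tile_042", "Image")                      # hypothetical names
calibrated = derive("calibrate", [raw], "tile_042_cal", "Image")
catalog = derive("extract_sources", [calibrated], "tile_042_sources", "Catalog")
provenance(catalog)
```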
NASA's Software Bank (Cassegrain Feed System)
NASA Technical Reports Server (NTRS)
1991-01-01
When Scientific-Atlanta had to design a new Cassegrain antenna, they found that the COSMIC program, "Machine Design of Cassegrain Feed System" allowed for computer simulation of the antenna's performance enabling pre-construction changes to be made. Significant cost savings were effected by the program.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that will also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, as well as sensitivity analysis while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows the users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum. Otherwise, we employ more advanced DAKOTA methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters within 2% accuracy of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using just Dakota.
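Independently of Dakota, the basic calibration loop, minimizing the misfit between simulated and observed values over conductivity-like parameters, looks like the sketch below. The exponential-decay "model" and the synthetic observations are placeholders, not the permafrost heat-flow code or the authors' interface.

```python
# Sketch of calibrating two conductivity-like parameters by minimizing the
# misfit between a toy model and synthetic observations. The model stands in
# for the permafrost heat-flow simulation; it is not the real code.
import numpy as np
from scipy.optimize import minimize

depths = np.linspace(0.0, 10.0, 25)              # m

def toy_model(params, z):
    k1, k2 = params                               # two conductivity-like coefficients
    return -5.0 * np.exp(-z / k1) + 0.2 * k2 * z  # temperature profile, degrees C

true_params = np.array([2.5, 1.1])
observed = toy_model(true_params, depths) + np.random.default_rng(0).normal(0, 0.05, depths.size)

def objective(params):
    # sum-of-squares error between simulated and "observed" temperatures
    return np.sum((toy_model(params, depths) - observed) ** 2)

result = minimize(objective, x0=[1.0, 0.5], method="Nelder-Mead")
print("recovered parameters:", result.x, " misfit:", result.fun)
```

A derivative-free method is used here because, as the abstract notes, the misfit surface is not always unimodal; gradient-based optimizers are the faster choice only when a single minimum is expected.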
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency of up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with element counts ranging from 30,269 for the Barth5 mesh to 11,451 for the Barth4 mesh. Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the document, entails an integration of finite element and fluid dynamics simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom. This results from the complexity of the various components of the airfoils, which requires fine-grain meshing for accuracy. Additional information is contained in the original.
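The heterogeneity-aware objective PART optimizes can be caricatured with a small serial simulated-annealing loop: elements are assigned to processors, and the cost penalizes both load imbalance (weighted by processor speed) and edges cut across slow wide-area links. The chain "mesh", speeds, link costs, and weights below are synthetic illustrations, not PART's actual cost model.

```python
# Toy heterogeneity-aware partitioning by simulated annealing, in the spirit of
# PART: balance per-processor load against edges cut across expensive links.
import math
import random

random.seed(0)
n_elem, n_proc = 200, 4
speeds = [1.0, 1.0, 0.5, 0.5]                      # relative processor speeds
link_cost = [[0, 1, 5, 5],                         # wide-area links are expensive
             [1, 0, 5, 5],
             [5, 5, 0, 1],
             [5, 5, 1, 0]]
edges = [(i, i + 1) for i in range(n_elem - 1)]    # a 1D chain stands in for the mesh

def cost(assign):
    load = [0.0] * n_proc
    for p in assign:
        load[p] += 1.0 / speeds[p]                 # slower processor -> larger effective load
    comm = sum(link_cost[assign[a]][assign[b]] for a, b in edges if assign[a] != assign[b])
    return (max(load) - min(load)) + 0.1 * comm

assign = [random.randrange(n_proc) for _ in range(n_elem)]
current, temp = cost(assign), 5.0
for _ in range(5000):
    e = random.randrange(n_elem)
    old = assign[e]
    assign[e] = random.randrange(n_proc)
    candidate = cost(assign)
    if candidate <= current or random.random() < math.exp(-(candidate - current) / temp):
        current = candidate                        # accept downhill, sometimes uphill
    else:
        assign[e] = old                            # reject and restore
    temp *= 0.999                                  # geometric cooling schedule
print("final partition cost:", round(current, 2))
```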
SiMon: Simulation Monitor for Computational Astrophysics
NASA Astrophysics Data System (ADS)
Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming
2017-09-01
Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage. In those cases, processes tend to be interrupted due to unexpected events in the software or the hardware, and the scientist handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is lightweight: it fully automates the entire workflow management, operates concurrently across multiple platforms, and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical aspects of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.
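The "crop farming" behavior, periodically inspecting each simulation process and restarting any that died before reaching its termination condition, reduces to a small supervision loop. The sketch below is not SiMon's actual interface; the launch commands are harmless stand-ins and the completion check is a deliberately simplified assumption.

```python
# Minimal sketch of a simulation supervisor: poll each run, let finished ones
# go, and restart any that exited before reaching its termination condition.
import subprocess
import sys
import time

runs = {  # name -> launch command (placeholders for real simulation launches)
    "run_A": [sys.executable, "-c", "import time; time.sleep(2)"],
    "run_B": [sys.executable, "-c", "import time; time.sleep(3)"],
}
procs = {name: subprocess.Popen(cmd) for name, cmd in runs.items()}

def reached_termination(proc):
    # A real monitor would inspect checkpoints or output files; here a clean
    # exit code stands in for "reached the pre-determined termination condition".
    return proc.returncode == 0

while procs:
    time.sleep(1)                                  # polling interval
    for name in list(procs):
        proc = procs[name]
        if proc.poll() is None:
            continue                               # still running
        if reached_termination(proc):
            print(f"{name} completed")
            del procs[name]
        else:
            print(f"{name} interrupted (exit {proc.returncode}); restarting")
            procs[name] = subprocess.Popen(runs[name])
```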
NASA Astrophysics Data System (ADS)
Akpan, Joseph Paul; Andre, Thomas
1999-06-01
Science teachers, school administrators, educators, and the scientific community are faced with ethical controversies over animal dissection in classrooms. Simulation has been proposed as a way of dealing with this issue. One intriguing previous finding was that use of an interactive videodisc dissection facilitated performance on a subsequent actual dissection. This study examined the prior use of simulation of frog dissection in improving students' actual dissection performance and learning of frog anatomy and morphology. There were three experimental conditions: simulation before dissection (SBD); dissection before simulation (DBS); or dissection-only (DO). Results of the study indicated that students receiving SBD performed significantly better than students receiving DBS or DO on both actual dissection and knowledge of the anatomy and morphology. Students' attitudes toward the use of animals for dissection did not change significantly from pretest to posttest and did not interact with treatment. The genders did not differ in achievement, but males were more favorable towards dissection and computers than were females.
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
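At its core, a discrete event simulator of the kind TADSim is built on is an event queue ordered by timestamp, with each handler possibly scheduling future events. The serial skeleton below illustrates only that mechanism; the stage names, durations, and transition probability are invented stand-ins for TAD's algorithm phases, not the TADSim model.

```python
# Skeleton of a discrete event simulation loop: events carry timestamps, are
# processed in time order, and handlers schedule follow-on events.
import heapq
import itertools
import random

random.seed(0)
queue, counter = [], itertools.count()            # counter breaks timestamp ties

def schedule(time, handler):
    heapq.heappush(queue, (time, next(counter), handler))

def run_block(now):
    duration = random.expovariate(1.0)            # proxy for a compute-bound stage's cost
    print(f"t={now:7.2f}  run high-temperature block ({duration:.2f})")
    schedule(now + duration, detect_transition)

def detect_transition(now):
    if random.random() < 0.3:
        print(f"t={now:7.2f}  transition detected; hand off to the next stage")
    else:
        schedule(now, run_block)                  # keep searching

schedule(0.0, run_block)
end_time = 20.0
while queue:
    time_, _, handler = heapq.heappop(queue)
    if time_ > end_time:
        break
    handler(time_)
```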
On the Efficacy of Source Code Optimizations for Cache-Based Systems
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.
1998-01-01
Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
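The "maximize locality, minimize stride" rule of thumb the paper tests is easy to demonstrate even from Python: summing the same number of elements with unit stride versus a stride of 8 through a large array usually differs measurably, although, as the authors stress, the size of the gap is machine-dependent, which is exactly the portability caveat.

```python
# Quick illustration of memory-stride effects. The measured difference varies
# by processor and memory system; that variability is the paper's point.
import time
import numpy as np

x = np.random.default_rng(0).random(20_000_000)

def best_time(fn, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

n = len(x) // 8
unit_stride = best_time(lambda: x[:n].sum())   # contiguous: one cache line feeds 8 values
stride_8 = best_time(lambda: x[::8].sum())     # strided: each value drags in a new cache line
print(f"unit stride {unit_stride*1e3:.1f} ms   stride-8 {stride_8*1e3:.1f} ms")
```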
Papantoniou, Panagiotis
2018-04-03
The present research has two main objectives. The first is to investigate whether latent variable analysis through a structural equation model can be implemented on driving simulator data in order to define an unobserved driving performance variable. The second is to investigate and quantify the effect of several risk factors, including distraction sources, driver characteristics, and the road and traffic environment, on overall driving performance rather than on independent driving performance measures. For the scope of the present research, 95 participants from all age groups were asked to drive under different types of distraction (conversation with passenger, cell phone use) in urban and rural road environments with low and high traffic volume in a driving simulator experiment. Then, in the framework of the statistical analysis, a correlation table is presented, investigating statistical relationships between driving simulator measures, and a structural equation model is developed in which overall driving performance is estimated as a latent variable based on several individual driving simulator measures. Results confirm the suitability of the structural equation model and indicate that the selection of the specific performance measures that define overall performance should be guided by a rule of representativeness between the selected variables. Moreover, results indicate that conversation with the passenger was not found to have a statistically significant effect, indicating that drivers do not change their performance while conversing with a passenger compared to undistracted driving. On the other hand, results support the hypothesis that cell phone use has a negative effect on driving performance. Furthermore, regarding driver characteristics, age, gender, and experience all have a significant effect on driving performance, indicating that driver-related characteristics play the most crucial role in overall driving performance. The findings of this study allow a new approach to the investigation of driving behavior in driving simulator experiments and in general. By the successful implementation of the structural equation model, driving behavior can be assessed in terms of overall performance and not through individual performance measures, which allows an important scientific step forward from piecemeal analyses to a sound combined analysis of the interrelationship between several risk factors and overall driving performance.
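Outside a full structural equation model, the core step, collapsing several correlated simulator measures into one latent "overall performance" factor, can be approximated with a single-factor model. The sketch below uses scikit-learn's FactorAnalysis on synthetic data; it is not the authors' SEM specification, and the measure names are assumptions for illustration.

```python
# Rough analogue of the latent-variable step: extract one common factor from
# several correlated driving-simulator measures. A real SEM additionally models
# the regressions of the latent factor on the distraction and driver covariates.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 95
latent = rng.normal(size=n)                          # unobserved "overall performance"
measures = np.column_stack([
    0.9 * latent + rng.normal(scale=0.4, size=n),    # e.g. speed deviation (assumed)
    0.7 * latent + rng.normal(scale=0.6, size=n),    # e.g. lateral position variance (assumed)
    0.8 * latent + rng.normal(scale=0.5, size=n),    # e.g. standardized reaction time (assumed)
])

fa = FactorAnalysis(n_components=1, random_state=0).fit(measures)
scores = fa.transform(measures)[:, 0]                # estimated latent score per driver
print("loadings:", fa.components_.round(2))
print("correlation with true latent:", np.corrcoef(scores, latent)[0, 1].round(2))
```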
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.
Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow-based system for performing scientific visualization (AVS), extensions to the system to support VR input devices and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation and some viewing parameters using VR input devices, and to view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification determines whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
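The flavor of such a verification test can be sketched as follows: a numerical solution of one-dimensional transient heat conduction is compared against the closed-form erfc solution for a semi-infinite domain and required to stay within a tolerance. This is a generic, hedged illustration of benchmark-versus-analytical-solution testing, not an actual PFLOTRAN QA test; the grid, material properties, and tolerance are arbitrary.

    import numpy as np
    from scipy.special import erfc

    def analytical(x, t, alpha, T0, Tb):
        """Semi-infinite slab whose boundary is raised to Tb at t = 0."""
        return T0 + (Tb - T0) * erfc(x / (2.0 * np.sqrt(alpha * t)))

    def numerical(nx, dx, dt, nsteps, alpha, T0, Tb):
        """Explicit finite-difference solution with a fixed-temperature boundary."""
        T = np.full(nx, T0)
        T[0] = Tb
        r = alpha * dt / dx**2          # must stay <= 0.5 for stability
        for _ in range(nsteps):
            T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
            T[0], T[-1] = Tb, T0        # far boundary held at the initial value
        return T

    alpha, T0, Tb = 1e-6, 20.0, 80.0    # diffusivity [m^2/s], temperatures [C]
    nx, dx, dt, nsteps = 201, 0.005, 10.0, 500
    x = np.arange(nx) * dx
    T_num = numerical(nx, dx, dt, nsteps, alpha, T0, Tb)
    T_ref = analytical(x, nsteps * dt, alpha, T0, Tb)
    err = np.max(np.abs(T_num - T_ref))
    assert err < 1.0, f"verification failed: max error {err:.3f} C"
    print(f"max absolute error: {err:.4f} C")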
A Mars Rover Mission Simulation on Kilauea Volcano
NASA Technical Reports Server (NTRS)
Stoker, Carol; Cuzzi, Jeffery N. (Technical Monitor)
1995-01-01
A field experiment to simulate a rover mission on Mars was performed using the Russian Marsokhod rover deployed on Kilauea Volcano, HI, in February 1995. A Russian Marsokhod rover chassis was equipped with American avionics equipment, stereo cameras on a pan-and-tilt platform, a digital high-resolution body-mounted camera, and a manipulator arm on which was mounted a camera with a close-up lens. The six-wheeled rover is 2 meters long and has a mass of 120 kg. The imaging system was designed to simulate that used on the planned "Mars Together" mission. The rover was deployed on Kilauea Volcano, HI, and operated from NASA Ames by a team of planetary geologists and exobiologists. Two modes of mission operations were simulated for three days each: (1) long time delay, low data bandwidth (simulating a Mars mission), and (2) live video, wide-bandwidth data (allowing active control simulating a Lunar rover mission or a Mars rover mission controlled from on or near the Martian surface). Simulated descent images (aerial photographs) were used to plan traverses to address a detailed set of science questions. The actual route taken was determined by the science team and the traverse path was frequently changed in response to the data acquired and to unforeseen operational issues. Traverses were thereby optimized to efficiently answer scientific questions. During the Mars simulation, the rover traversed a distance of 800 m. Based on the time delay between Earth and Mars, we estimate that the same operation would have taken 30 days to perform on Mars. This paper will describe the mission simulation and make recommendations about incorporating rovers into the Mars Surveyor program.
Attracting Students to Space Science Fields: Mission to Mars
NASA Astrophysics Data System (ADS)
Congdon, Donald R.; Lovegrove, William P.; Samec, Ronald G.
Attracting high school students to space science is one of the main goals of Bob Jones University's annual Mission to Mars (MTM). MTM develops interest in space exploration through a highly realistic simulated trip to Mars. Students study and learn to appreciate the challenges of space travel, including propulsion, life support, medicine, planetary astronomy, psychology, robotics, and communication. Broken into teams (Management, Spacecraft Design, Communications, Life Support, Navigation, Robotics, and Science), they address the problems specific to each aspect of the mission. Teams also learn to interact and recognize that a successful mission requires cooperation. Coordinated by the Management Team, the students build a spacecraft and associated apparatus, connect computers and communications equipment, train astronauts on the mission simulator, and program a Pathfinder-type robot. On the big day, the astronauts enter the spacecraft as Mission Control gets ready to support them through the expected and unexpected of their mission. Aided by teamwork, the astronauts must land on Mars, perform their scientific mission on a simulated surface of Mars, and return home. We see the success of MTM not only in successful missions but in the students who come back year after year for another MTM.
Concept Verification Test - Evaluation of Spacelab/Payload operation concepts
NASA Technical Reports Server (NTRS)
Mcbrayer, R. O.; Watters, H. H.
1977-01-01
The Concept Verification Test (CVT) procedure is used to study Spacelab operational concepts by conducting mission simulations in a General Purpose Laboratory (GPL) which represents a possible design of Spacelab. In conjunction with the laboratory, a Mission Development Simulator, a Data Management System Simulator, a Spacelab Simulator, and a Shuttle Interface Simulator have been designed. (The Spacelab Simulator is more functionally and physically representative of the Spacelab than the GPL.) Four simulations of Spacelab mission experimentation were performed, two involving several scientific disciplines, one involving life sciences, and the last involving material sciences. The purpose of the CVT project is to support the pre-design and development of payload carriers and payloads, and to coordinate hardware, software, and operational concepts of different developers and users.
High-Fidelity Simulation-Driven Model Development for Coarse-Grained Computational Fluid Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.
Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to make the transient computation parallel, thus making the number of time steps required in high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of the deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement to the prediction of steady state temperature distribution through the fluid layer.
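A heavily simplified sketch of the statistical-surrogate idea is given below: a regression model is trained on hypothetical paired coarse-grid/fine-grid samples to predict a temperature correction that could close the coarse-grid energy equation. The feature names, data file, and choice of a random-forest regressor are assumptions made for illustration; they are not the model or data used in the work described.

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical pairing of coarse-grid (CG-CFD) and fine-grid (DNS/LES) samples.
    data = pd.read_csv("paired_cgcfd_dns_samples.csv")
    features = data[["coarse_T", "coarse_grad_T", "cell_size", "rayleigh"]]
    target = data["fine_T"] - data["coarse_T"]          # sub-grid temperature correction

    X_tr, X_te, y_tr, y_te = train_test_split(features, target, test_size=0.2, random_state=0)
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(X_tr, y_tr)
    print("held-out R^2:", surrogate.score(X_te, y_te))

    # At run time, the coarse-grid temperature (or its source term) would be
    # corrected with the surrogate's prediction.
    corrected_T = data["coarse_T"] + surrogate.predict(features)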
WFIRST: Data/Instrument Simulation Support at IPAC
NASA Astrophysics Data System (ADS)
Laine, Seppo; Akeson, Rachel; Armus, Lee; Bennett, Lee; Colbert, James; Helou, George; Kirkpatrick, J. Davy; Meshkat, Tiffany; Paladini, Roberta; Ramirez, Solange; Wang, Yun; Xie, Joan; Yan, Lin
2018-01-01
As part of WFIRST Science Center preparations, the IPAC Science Operations Center (ISOC) maintains a repository of 1) WFIRST data and instrument simulations, 2) tools to facilitate scientific performance and feasibility studies using WFIRST, and 3) parameters summarizing the current design and predicted performance of the WFIRST telescope and instruments. The simulation repository provides access for the science community to simulation code, tools, and resulting analyses. Examples of simulation code with ISOC-built web-based interfaces include EXOSIMS (for estimating exoplanet yields in CGI surveys) and the Galaxy Survey Exposure Time Calculator. In the future, the repository will provide an interface for users to run custom simulations of a wide range of coronagraph instrument (CGI) observations and sophisticated tools for designing microlensing experiments. We encourage those who are generating simulations or writing tools for exoplanet observations with WFIRST to contact the ISOC team so we can work with you to bring these to the attention of the broader astronomical community as we prepare for the exciting science that will be enabled by WFIRST.
NASA Astrophysics Data System (ADS)
Agaesse, Tristan; Lamibrac, Adrien; Büchi, Felix N.; Pauchet, Joel; Prat, Marc
2016-11-01
Understanding and modeling two-phase flows in the gas diffusion layer (GDL) of proton exchange membrane fuel cells are important in order to improve fuel cell performance. They are scientifically challenging because of the peculiarities of GDL microstructures. In the present work, simulations on a pore network model are compared to X-ray tomographic images of water distributions during an ex-situ water invasion experiment. A method based on watershed segmentation was developed to extract a pore network from the 3D segmented image of the dry GDL. Pore network modeling and a full morphology model were then used to perform two-phase simulations, and the results were compared to the experimental data. The results show good agreement between experimental and simulated microscopic water distributions. Pore network extraction parameters were also benchmarked using the experimental data and results from full morphology simulations.
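A minimal sketch of the watershed idea mentioned above, assuming a binary 3D image in which pore voxels are marked True, is shown below; it partitions the pore space into individual pores via a watershed on the distance transform. The file name and parameter values are hypothetical, and the authors' full network-extraction pipeline (throats, connectivity, geometric properties) is not reproduced.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    pore = np.load("dry_gdl_segmented.npy").astype(bool)   # hypothetical segmented image

    # Distance to the nearest solid voxel; local maxima mark pore centres.
    distance = ndi.distance_transform_edt(pore)
    coords = peak_local_max(distance, labels=pore, min_distance=5)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    # Watershed on the inverted distance map splits the pore space into pores.
    labels = watershed(-distance, markers, mask=pore)
    print("number of pores:", labels.max())
    print("porosity:", pore.mean())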
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
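A minimal sketch of the idea, under the simplifying assumption that the quantity being estimated has a known reference value: a Monte Carlo estimate of pi is tested against the exact value with a two-sided z-test, so that deviations larger than statistical noise are flagged. This is illustrative only and is not the authors' testing framework.

    import numpy as np
    from scipy import stats

    def mc_estimate_pi(n, rng):
        """Estimate pi by sampling points uniformly in the unit square."""
        xy = rng.random((n, 2))
        hits = (xy ** 2).sum(axis=1) < 1.0
        p = hits.mean()
        stderr = np.sqrt(p * (1 - p) / n)
        return 4 * p, 4 * stderr

    rng = np.random.default_rng(42)
    estimate, stderr = mc_estimate_pi(1_000_000, rng)
    z = (estimate - np.pi) / stderr
    p_value = 2 * stats.norm.sf(abs(z))
    print(f"estimate={estimate:.5f}  z={z:.2f}  p={p_value:.3f}")
    assert p_value > 0.01, "estimator inconsistent with the reference value"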
Post-coronagraphic tip-tilt sensing for vortex phase masks: The QACITS technique
NASA Astrophysics Data System (ADS)
Huby, E.; Baudoz, P.; Mawet, D.; Absil, O.
2015-12-01
Context. Small inner working angle coronagraphs, such as the vortex phase mask, are essential to exploit the full potential of ground-based telescopes in the context of exoplanet detection and characterization. However, the drawback of this attractive feature is a high sensitivity to pointing errors, which degrades the performance of the coronagraph. Aims: We propose a tip-tilt retrieval technique based on the analysis of the final coronagraphic image, hereafter called Quadrant Analysis of Coronagraphic Images for Tip-tilt Sensing (QACITS). Methods: Under the assumption of small phase aberrations, we show that the behavior of the vortex phase mask can be simply described from the entrance pupil to the Lyot stop plane with Zernike polynomials. This convenient formalism is used to establish the theoretical basis of the QACITS technique. We performed simulations to demonstrate the validity and limits of the technique, including the case of a centrally obstructed pupil. Results: The QACITS technique principle is validated with experimental results in the case of an unobstructed circular aperture, as well as simulations in the presence of a central obstruction. The typical configuration of the Keck telescope (24% central obstruction) has been simulated with additional high order aberrations. In these conditions, our simulations show that the QACITS technique is still adapted to centrally obstructed pupils and performs tip-tilt retrieval with a precision of 5 × 10⁻² λ/D when wavefront errors amount to λ/14 rms and 10⁻² λ/D for λ/70 rms errors (with λ the wavelength and D the pupil diameter). Conclusions: We have developed and demonstrated a tip-tilt sensing technique for vortex coronagraphs. The implementation of the QACITS technique is based on the analysis of the scientific image and does not require any modification of the original setup. Current facilities equipped with a vortex phase mask can thus directly benefit from this technique to improve the contrast performance close to the axis.
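The general quadrant-analysis idea can be sketched as follows: flux asymmetries between halves of the post-coronagraphic image serve as an error signal that, for small offsets, scales with tip-tilt. The linear gain and the input file are placeholders; the actual QACITS estimator and its calibration are derived in the paper and are not reproduced here.

    import numpy as np

    def quadrant_signals(img):
        """Normalized differential intensities along the x and y directions."""
        ny, nx = img.shape
        total = img.sum()
        left, right = img[:, :nx // 2].sum(), img[:, nx // 2:].sum()
        bottom, top = img[:ny // 2, :].sum(), img[ny // 2:, :].sum()
        return (right - left) / total, (top - bottom) / total

    def estimate_tip_tilt(img, gain=1.0):
        """Convert differential signals to a tip-tilt estimate (placeholder gain)."""
        dx, dy = quadrant_signals(img)
        return gain * dx, gain * dy

    img = np.load("coronagraphic_frame.npy")   # hypothetical post-coronagraph image
    print("estimated tip-tilt (lambda/D):", estimate_tip_tilt(img))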
ERIC Educational Resources Information Center
Buccheri, Grazia; Gurber, Nadja Abt; Bruhwiler, Christian
2011-01-01
Many countries belonging to the Organisation for Economic Co-operation and Development (OECD) note a shortage of highly qualified scientific-technical personnel, whereas demand for such employees is growing. Therefore, how to motivate (female) high performers in science or mathematics to pursue scientific careers is of special interest. The sample…
The Planetary and Space Simulation Facilities at DLR Cologne
NASA Astrophysics Data System (ADS)
Rabbow, Elke; Parpart, André; Reitz, Günther
2016-06-01
Astrobiology strives to increase our knowledge of the origin, evolution and distribution of life, on Earth and beyond. In the past centuries, life has been found on Earth in environments with extreme conditions that were expected to be uninhabitable. Scientific investigations of the underlying metabolic mechanisms and strategies that lead to the high adaptability of these extremophile organisms increase our understanding of evolution and distribution of life on Earth. Life as we know it depends on the availability of liquid water. Exposure of organisms to defined and complex extreme environmental conditions, in particular those that limit the water availability, allows the investigation of the survival mechanisms as well as an estimation of the possibility of the distribution to and survivability on other celestial bodies of selected organisms. Space missions in low Earth orbit (LEO) provide access for experiments to complex environmental conditions not available on Earth, but studies on the molecular and cellular mechanisms of adaptation to these hostile conditions and on the limits of life cannot be performed exclusively in space experiments. Experimental space is limited and allows only the investigation of selected endpoints. An additional intensive ground-based program is required, with easy-to-access facilities capable of simulating space and planetary environments, in particular with focus on temperature, pressure, atmospheric composition and short wavelength solar ultraviolet radiation (UV). DLR Cologne operates a number of Planetary and Space Simulation facilities (PSI) where microorganisms from extreme terrestrial environments or known for their high adaptability are exposed for mechanistic studies. Space or planetary parameters are simulated individually or in combination in temperature-controlled vacuum facilities equipped with a variety of defined and calibrated irradiation sources. The PSI support basic research and have been used repeatedly for pre-flight test programs for several astrobiological space missions. Parallel experiments on the ground provided essential complementary data supporting the scientific interpretation of the data received from the space missions.
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2016-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground-motion seismograms, using multiple alternative ground motion simulation methods, and software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (aka GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.
Real science at the petascale.
Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V
2009-06-28
We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.
Convergence in France facing Big Data era and Exascale challenges for Climate Sciences
NASA Astrophysics Data System (ADS)
Denvil, Sébastien; Dufresne, Jean-Louis; Salas, David; Meurdesoif, Yann; Valcke, Sophie; Caubel, Arnaud; Foujols, Marie-Alice; Servonnat, Jérôme; Sénési, Stéphane; Derouillat, Julien; Voury, Pascal
2014-05-01
The presentation will introduce a French national project, CONVERGENCE, which has been funded for four years. This project will tackle big data and computational challenges faced by the climate modeling community in an HPC context. Model simulations are central to the study of complex mechanisms and feedbacks in the climate system and to provide estimates of future and past climate changes. Recent trends in climate modelling are to add more physical components to the modelled system, to increase the resolution of each individual component, and to make more systematic use of large suites of simulations to address many scientific questions. Climate simulations may therefore differ in their initial state, parameter values, representation of physical processes, spatial resolution, model complexity, and degree of realism or degree of idealisation. In addition, there is a strong need for evaluating, improving and monitoring the performance of climate models using a large ensemble of diagnostics and better integration of model outputs and observational data. High performance computing is currently reaching the exascale and has the potential to produce this exponential increase in the size and number of simulations. However, post-processing, analysis, and exploration of the generated data have stalled and there is a strong need for new tools to cope with the growing size and complexity of the underlying simulations and datasets. Exascale simulations require new scalable software tools to generate, manage and mine those simulations and data to extract the relevant information and to take the correct decisions. The primary purpose of this project is to develop a platform capable of running large ensembles of simulations with a suite of models, to handle the complex and voluminous datasets generated, to facilitate the evaluation and validation of the models and the use of higher resolution models. We propose to gather interdisciplinary skills to design, using a component-based approach, a specific programming environment for scalable scientific simulations and analytics, integrating new and efficient ways of deploying and analysing the applications on High Performance Computing (HPC) systems. CONVERGENCE, gathering HPC and informatics expertise that cuts across the individual partners and the broader HPC community, will allow the national climate community to leverage information technology (IT) innovations to address its specific needs. Our methodology consists of developing an ensemble of generic elements needed to run the French climate models with different grids and different resolutions, ensuring efficient and reliable execution of these models, managing large volumes and numbers of data and allowing analysis of the results and precise evaluation of the models. These elements include data structure definition and input-output (IO), code coupling and interpolation, as well as runtime and pre/post-processing environments. A common data and metadata structure will allow transferring consistent information between the various elements. All these generic elements will be open source and publicly available. The IPSL-CM and CNRM-CM climate models will make use of these elements that will constitute a national platform for climate modelling. This platform will be used, in its entirety, to optimise and tune the next version of the IPSL-CM model and to develop a global coupled climate model with a regional grid refinement.
It will also be used, at least partially, to run ensembles of the CNRM-CM model at relatively high resolution and to run a very-high-resolution prototype of this model. The climate models we have developed are already involved in many international projects. For instance, we participate in the Coupled Model Intercomparison Project (CMIP), which is very demanding but has high visibility: its results are widely used and are in particular synthesised in the IPCC (Intergovernmental Panel on Climate Change) assessment reports. The CONVERGENCE project will constitute an invaluable step for the French climate community to prepare and better contribute to the next phase of the CMIP project.
Simulation Data as Data Streams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdulla, G; Arrighi, W; Critchlow, T
2003-11-18
Computational or scientific simulations are increasingly being applied to solve a variety of scientific problems. Domains such as astrophysics, engineering, chemistry, biology, and environmental studies are benefiting from this important capability. Simulations, however, produce enormous amounts of data that need to be analyzed and understood. In this overview paper, we describe scientific simulation data, its characteristics, and the way scientists generate and use the data. We then compare and contrast simulation data to data streams. Finally, we describe our approach to analyzing simulation data, present the AQSim (Ad-hoc Queries for Simulation data) system, and discuss some of the challenges that result from handling this kind of data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Read, Michael; Ives, Robert Lawrence; Marsden, David
The Phase II program developed an internal RF coupler that transforms the whispering gallery RF mode produced in gyrotron cavities to an HE11 waveguide mode propagating in corrugated waveguide. This power is extracted from the vacuum using a broadband, chemical vapor deposited (CVD) diamond, Brewster angle window capable of transmitting more than 1.5 MW CW of RF power over a broad range of frequencies. This coupling system eliminates the Mirror Optical Units now required to externally couple Gaussian output power into corrugated waveguide, significantly reducing system cost and increasing efficiency. The program simulated the performance using a broad range of advanced computer codes to optimize the design. Both a direct coupler and Brewster angle window were built and tested at low and high power. Test results confirmed the performance of both devices and demonstrated they are capable of achieving the required performance for scientific, defense, industrial, and medical applications.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Simulation of wave interactions with MHD
NASA Astrophysics Data System (ADS)
Batchelor, D.; Alba, C.; Bateman, G.; Bernholdt, D.; Berry, L.; Bonoli, P.; Bramley, R.; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, W.; Fu, G.; Harvey, R.; Jaeger, E.; Jardin, S.; Jenkins, T.; Keyes, D.; Klasky, S.; Kruger, S.; Ku, L.; Lynch, V.; McCune, D.; Ramos, J.; Schissel, D.; Schnack, D.; Wright, J.
2008-07-01
The broad scientific objectives of the SWIM (Simulation of Wave Interactions with MHD) project are twofold: (1) improve our understanding of interactions that both radio frequency (RF) wave and particle sources have on extended-MHD phenomena, and to substantially improve our capability for predicting and optimizing the performance of burning plasmas in devices such as ITER; and (2) develop an integrated computational system for treating multiphysics phenomena with the required flexibility and extensibility to serve as a prototype for the Fusion Simulation Project. The Integrated Plasma Simulator (IPS) has been implemented. Presented here are initial physics results on RF effects on MHD instabilities in tokamaks as well as simulation results for tokamak discharge evolution using the IPS.
Vortices in high-performance high-temperature superconductors
Kwok, Wai-Kwong; Welp, Ulrich; Glatz, Andreas; ...
2016-09-21
The behavior of vortex matter in high-temperature superconductors (HTS) controls the entire electromagnetic response of the material, including its current carrying capacity. In this paper, we review the basic concepts of vortex pinning and its application to a complex mixed pinning landscape to enhance the critical current and to reduce its anisotropy. We focus on recent scientific advances that have resulted in large enhancements of the in-field critical current in state-of-the-art second generation (2G) YBCO coated conductors and on the prospect of an isotropic, high-critical current superconductor in the iron-based superconductors. Finally, we discuss an emerging new paradigm of critical current by design, a drive to achieve a quantitative correlation between the observed critical current density and mesoscale mixed pinning landscapes by using realistic input parameters in an innovative and powerful large-scale time dependent Ginzburg–Landau approach to simulating vortex dynamics.
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.
2013-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
NASA Astrophysics Data System (ADS)
Schruff, T.; Liang, R.; Rüde, U.; Schüttrumpf, H.; Frings, R. M.
2018-01-01
The knowledge of structural properties of granular materials such as porosity is highly important in many application-oriented and scientific fields. In this paper we present new results of computer-based packing simulations where we use the non-smooth granular dynamics (NSGD) method to simulate gravitational random dense packing of spherical particles with various particle size distributions and two types of depositional conditions. A bin packing scenario was used to compare simulation results to laboratory porosity measurements and to quantify the sensitivity of the NSGD regarding critical simulation parameters such as time step size. The results of the bin packing simulations agree well with laboratory measurements across all particle size distributions with all absolute errors below 1%. A large-scale packing scenario with periodic side walls was used to simulate the packing of up to 855,600 spherical particles with various particle size distributions (PSD). Simulation outcomes are used to quantify the effect of particle-domain-size ratio on the packing compaction. A simple correction model, based on the coordination number, is employed to compensate for this effect on the porosity and to determine the relationship between PSD and porosity. Promising accuracy and stability results paired with excellent computational performance recommend the application of NSGD for large-scale packing simulations, e.g. to further enhance the generation of representative granular deposits.
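As a simple illustration of the porosity measure central to the study, the sketch below computes the solid fraction of a box filled with non-overlapping spheres from their centers and radii. The input files and box dimensions are hypothetical, and boundary effects, periodic walls, and the coordination-number correction discussed above are ignored.

    import numpy as np

    centers = np.load("sphere_centers.npy")   # hypothetical (N, 3) positions
    radii = np.load("sphere_radii.npy")       # hypothetical (N,) radii
    box = np.array([0.1, 0.1, 0.05])          # illustrative domain size in metres

    solid_volume = (4.0 / 3.0) * np.pi * np.sum(radii ** 3)
    porosity = 1.0 - solid_volume / np.prod(box)
    print(f"porosity = {porosity:.3f}")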
Analytics-Driven Lossless Data Compression for Rapid In-situ Indexing, Storing, and Querying
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, John; Arkatkar, Isha; Lakshminarasimhan, Sriram
2013-01-01
The analysis of scientific simulations is highly data-intensive and is becoming an increasingly important challenge. Peta-scale data sets require the use of light-weight query-driven analysis methods, as opposed to heavy-weight schemes that optimize for speed at the expense of size. This paper is an attempt in the direction of query processing over losslessly compressed scientific data. We propose a co-designed double-precision compression and indexing methodology for range queries by performing unique-value-based binning on the most significant bytes of double precision data (sign, exponent, and most significant mantissa bits), and inverting the resulting metadata to produce an inverted index over a reduced data representation. Without the inverted index, our method matches or improves compression ratios over both general-purpose and floating-point compression utilities. The inverted index is light-weight, and the overall storage requirement for both reduced column and index is less than 135%, whereas existing DBMS technologies can require 200-400%. As a proof-of-concept, we evaluate univariate range queries that additionally return column values, a critical component of data analytics, against state-of-the-art bitmap indexing technology, showing multi-fold query performance improvements.
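A hedged sketch of the general approach follows: double-precision values are binned by their most significant bits (sign, exponent, high mantissa bits) and an inverted index maps each bin to the positions holding it, so that a range query only inspects candidate bins. The bit split, the in-memory index, and the absence of any compression backend are simplifications; the paper's actual co-designed compression and index layout are not reproduced.

    from collections import defaultdict
    import numpy as np

    def top_bits(x, high_bits=16):
        """Most significant bits (sign, exponent, high mantissa) of a float64."""
        return int(np.array(x, dtype=np.float64).view(np.uint64)) >> (64 - high_bits)

    def build_index(values, high_bits=16):
        """Inverted index: bin id (top bits) -> positions of records in that bin."""
        bins = values.view(np.uint64) >> np.uint64(64 - high_bits)
        index = defaultdict(list)
        for pos, b in enumerate(bins):
            index[int(b)].append(pos)
        return index

    def range_query(values, index, lo, hi, high_bits=16):
        """Positions with lo <= value <= hi, pruning bins via the index.
        The bit ordering is monotone for non-negative doubles; signed data
        would need an extra mapping."""
        lo_b, hi_b = top_bits(lo, high_bits), top_bits(hi, high_bits)
        candidates = (p for b, ps in index.items() if lo_b <= b <= hi_b for p in ps)
        return [p for p in candidates if lo <= values[p] <= hi]

    data = np.random.rand(100_000)            # stand-in for simulation output
    index = build_index(data)
    hits = range_query(data, index, 0.25, 0.26)
    print(len(hits), "values in [0.25, 0.26]")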
ERIC Educational Resources Information Center
Moli, Lemuel; Delserieys, Alice Pedregosa; Impedovo, Maria Antonietta; Castera, Jeremy
2017-01-01
This paper presents a study on discovery learning of scientific concepts with the support of computer simulation. In particular, the paper will focus on the effect of the levels of guidance on students with a low degree of experience in informatics and educational technology. The first stage of this study was to identify the common misconceptions…
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful to the exploration of various "what-if" scenarios regarding the cache performance impact for alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
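To illustrate the kind of cache modeling involved, the following sketch counts hits and misses for a direct-mapped cache over a synthetic address trace, contrasting a unit-stride sweep with a large-stride sweep. The cache geometry and the traces are illustrative only and are not the methodology's actual simulation machinery.

    def simulate_direct_mapped(addresses, cache_size=32 * 1024, line_size=64):
        """Return (hits, misses) for a direct-mapped cache over a byte-address trace."""
        n_lines = cache_size // line_size
        tags = [None] * n_lines
        hits = misses = 0
        for addr in addresses:
            block = addr // line_size
            idx = block % n_lines
            if tags[idx] == block:
                hits += 1
            else:
                misses += 1
                tags[idx] = block
        return hits, misses

    # Synthetic traces: unit-stride versus 4 KiB-stride sweeps over ~1 MB of doubles.
    unit_stride = [8 * i for i in range(131072)]
    big_stride = [4096 * (i % 256) + 8 * (i // 256) for i in range(131072)]
    for name, trace in [("unit stride", unit_stride), ("4 KiB stride", big_stride)]:
        h, m = simulate_direct_mapped(trace)
        print(f"{name}: miss rate = {m / (h + m):.2%}")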
Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Matzen, M. Keith
2014-09-16
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.
Virtual Observatory and Distributed Data Mining
NASA Astrophysics Data System (ADS)
Borne, Kirk D.
2012-03-01
New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. 
This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.
Evaluating lossy data compression on climate simulation data within a large ensemble
Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...
2016-12-07
High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.
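A toy version of the kind of check described can be sketched as follows: a field is degraded with a simple lossy scheme (mantissa-bit truncation) and the reconstructed values are compared with the originals, here through a maximum-error report and a two-sample Kolmogorov-Smirnov test. The compressor, the synthetic field, and the statistical test are placeholders; they are not the compression algorithm or the metrics used in the CESM-LE experiment.

    import numpy as np
    from scipy import stats

    def truncate_mantissa(x, keep_bits=12):
        """Zero out low-order mantissa bits of float64 values (a simple lossy scheme)."""
        raw = x.astype(np.float64).view(np.uint64)
        mask = np.uint64(~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF)
        return (raw & mask).view(np.float64)

    original = np.random.normal(288.0, 5.0, size=100_000)   # stand-in temperature field [K]
    reconstructed = truncate_mantissa(original, keep_bits=12)

    print("max abs error:", np.max(np.abs(original - reconstructed)))
    # Two-sample KS test: can the two distributions be told apart?
    stat, p = stats.ks_2samp(original, reconstructed)
    print(f"KS statistic={stat:.4f}, p-value={p:.3f}")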
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combine kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational© Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), a MHD code (MACH), and a x-ray transport code (ITS) can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.
Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters
NASA Astrophysics Data System (ADS)
Esler, Kenneth
2011-03-01
Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.
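QMCPACK's GPU kernels are far more involved than can be shown here. As a toy analogue of the mixed-precision strategy described above (fast lower-precision arithmetic whose result is checked against the double-precision answer within the statistical error bar), the sketch below evaluates the same Monte Carlo estimate in single and double precision. It is purely illustrative and unrelated to the QMCPACK code itself.

```python
import numpy as np

def mc_estimate(n_samples, dtype, seed=42):
    """Monte Carlo estimate of E[f(x)] with x ~ N(0, 1) and f(x) = exp(-x**2)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples).astype(dtype)
    f = np.exp(-x * x)
    mean = f.mean(dtype=np.float64)                 # accumulate reductions in double
    err = f.std(dtype=np.float64) / np.sqrt(n_samples)
    return mean, err

m64, e64 = mc_estimate(1_000_000, np.float64)
m32, e32 = mc_estimate(1_000_000, np.float32)

print(f"double precision: {m64:.6f} +/- {e64:.6f}")
print(f"single precision: {m32:.6f} +/- {e32:.6f}")
# Exact value is 1/sqrt(3) ~ 0.57735; the two estimates should agree within the
# quoted statistical error, mirroring the mixed-precision consistency check.
print("agreement within error bars:", abs(m64 - m32) < e64 + e32)
```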
Harnessing the power of emerging petascale platforms
NASA Astrophysics Data System (ADS)
Mellor-Crummey, John
2007-07-01
As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratories. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50³ domain.
NASA Astrophysics Data System (ADS)
Yin, Xunqiang; Shi, Junqiang; Qiao, Fangli
2018-05-01
Because ocean observation systems are costly, the scientific design of an observing network is very important. The current network of the high-frequency radar system in the Gulf of Thailand has been studied using a three-dimensional coastal ocean model. First, observations from the existing radars were assimilated into this coastal model, and the forecast results improved owing to the data assimilation; the results also showed, however, that further optimization of the observing network is necessary. A series of experiments was then carried out to assess the performance of the existing high-frequency ground-wave radar surface-current observation system. Simulated surface-current data in three regions were assimilated sequentially using an efficient ensemble Kalman filter data assimilation scheme. The experiments showed that the coastal surface-current observation system plays a positive role in improving the numerical simulation of the currents. Compared with the control experiment without assimilation, the simulation precision of the surface and subsurface currents improved after assimilating the surface currents observed by the current network. However, the improvement differed markedly among the three observing regions, indicating that the current observing network in the Gulf of Thailand is not effective and that further optimization is required. Based on these evaluations, a manual scheme was designed by discarding redundant and inefficient locations and adding new stations where the performance after data assimilation remained low. For comparison, an objective scheme based on the idea of data assimilation was also obtained. Results show that both observing-network schemes perform better than the original network, and that the data-assimilation-based optimal scheme is much superior to the manual scheme based on the evaluation of the original observing network in the Gulf of Thailand. The distribution of the optimal radar network could provide useful guidance for the future design of observing systems in this region.
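The assimilation system used in the study is not reproduced here. As a small illustration of the kind of update step an ensemble Kalman filter performs when ingesting radar surface-current observations, the sketch below implements a textbook stochastic EnKF analysis for a tiny state vector with a linear observation operator. State dimensions, ensemble size, and error levels are made up for the example.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_err_std, H):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble    : (n_members, n_state) forecast ensemble
    obs         : (n_obs,) observation vector
    obs_err_std : observation error standard deviation
    H           : (n_obs, n_state) linear observation operator
    """
    n_members = ensemble.shape[0]
    rng = np.random.default_rng(1)

    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # predicted-observation anomalies
    R = (obs_err_std ** 2) * np.eye(len(obs))
    Pyy = Y.T @ Y / (n_members - 1) + R
    Pxy = X.T @ Y / (n_members - 1)
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain

    # Perturb observations (stochastic EnKF) and update every member
    perturbed = obs + obs_err_std * rng.standard_normal((n_members, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

# Tiny example: 3-variable state, surface current observed at one location
truth = np.array([0.5, -0.2, 0.1])
H = np.array([[1.0, 0.0, 0.0]])
forecast = truth + 0.3 * np.random.default_rng(2).standard_normal((50, 3))
analysis = enkf_analysis(forecast, obs=np.array([0.48]), obs_err_std=0.05, H=H)
print("forecast mean:", forecast.mean(axis=0))
print("analysis mean:", analysis.mean(axis=0))
```

The analysis mean is pulled toward the observed value at the observed location, and the correlations in the ensemble spread the correction to the unobserved variables, which is the mechanism that makes observing-network design via assimilation experiments possible.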
Accelerating scientific discovery : 2007 annual report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, P.; Dave, P.; Drugan, C.
2008-11-14
As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and Applications Performance Engineering and Data Analytics (APEDA) teams support the users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide guidance for applications that are transitioning to petascale as well as to produce software that facilitates their development, such as the MPICH library, which provides a portable and efficient implementation of the MPI standard--the prevalent programming model for large-scale scientific applications--and the PETSc toolkit that provides a programming paradigm that eases the development of many scientific applications on high-end computers.
ERIC Educational Resources Information Center
Fogarty, Ian; Geelan, David
2013-01-01
Students in 4 Canadian high school physics classes completed instructional sequences in two key physics topics related to motion--Straight Line Motion and Newton's First Law. Different sequences of laboratory investigation, teacher explanation (lecture) and the use of computer-based scientific visualizations (animations and simulations) were…
Hadoop for High-Performance Climate Analytics: Use Cases and Lessons Learned
NASA Technical Reports Server (NTRS)
Tamkin, Glenn
2013-01-01
Scientific data services are a critical aspect of the NASA Center for Climate Simulation (NCCS) mission. Hadoop, via MapReduce, provides an approach to high-performance analytics that is proving to be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. The NCCS is particularly interested in the potential of Hadoop to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we prototyped a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. The initial focus was on averaging operations over arbitrary spatial and temporal extents within Modern Era Retrospective-Analysis for Research and Applications (MERRA) data. After preliminary results suggested that this approach improves efficiencies within data intensive analytic workflows, we invested in building a cyber infrastructure resource for developing a new generation of climate data analysis capabilities using Hadoop. This resource is focused on reducing the time spent in the preparation of reanalysis data used in data-model inter-comparison, a long sought goal of the climate community. This paper summarizes the related use cases and lessons learned.
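The NCCS prototype ran over Hadoop-sequenced MERRA data; as a minimal stand-in for the canonical averaging operation it describes, the sketch below shows Hadoop-Streaming-style mapper and reducer functions that compute a time mean per grid cell from (time, lat, lon, value) records. The record format and values are invented for illustration.

```python
from collections import defaultdict

def mapper(records):
    """Emit (key, value) pairs: key = grid cell, value = (partial sum, count)."""
    for time_step, lat, lon, value in records:
        yield (lat, lon), (value, 1)

def reducer(pairs):
    """Combine partial sums per key and emit the temporal mean for each grid cell."""
    totals = defaultdict(lambda: [0.0, 0])
    for key, (s, c) in pairs:
        totals[key][0] += s
        totals[key][1] += c
    for key, (s, c) in totals.items():
        yield key, s / c

# Toy "MERRA-like" records: (time step, lat index, lon index, temperature in K)
records = [
    (0, 10, 20, 280.0), (1, 10, 20, 282.0), (2, 10, 20, 281.0),
    (0, 11, 20, 275.0), (1, 11, 20, 277.0),
]
for cell, mean in reducer(mapper(records)):
    print(f"cell {cell}: time-mean = {mean:.2f}")
```

In a real Hadoop deployment the mapper and reducer would run as separate distributed tasks over blocks of the dataset; the point here is only the shape of the map and reduce stages for an averaging operation.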
NASA Technical Reports Server (NTRS)
Nguyen, Daniel H.; Skladany, Lynn M.; Prats, Benito D.; Griffin, Thomas J. (Technical Monitor)
2001-01-01
The Hubble Space Telescope (HST) is one of NASA's most productive astronomical observatories. Launched in 1990, the HST continues to gather scientific data to help scientists around the world discover amazing wonders of the universe. To maintain HST at the forefront of scientific discoveries, NASA has routinely conducted servicing missions to refurbish older equipment as well as to replace existing scientific instruments with better, more powerful instruments. In early 2002, NASA will conduct its fourth servicing mission to the HST. This servicing mission is named Servicing Mission 3B (SM3B). During SM3B, one of the major refurbishment efforts will be to install new rigid-panel solar arrays as a replacement for the existing flexible-foil solar arrays. This is necessary in order to increase electrical power availability for the new scientific instruments. Prior to installing the new solar arrays on HST, the HST project must be certain that the new solar arrays will not cause any performance degradations to the observatory. One of the major concerns is any disturbance that can cause pointing Loss of Lock (LOL) for the telescope. While in orbit, the solar-array temperature transitions quickly from sun to shadow. The resulting thermal expansion and contraction can cause a "mechanical disturbance" which may result in LOL. To better characterize this behavior, a test was conducted at the European Space Research and Technology Centre (ESTEC) in the Large Space Simulator (LSS) thermal-vacuum chamber. In this test, the Sun simulator was used to simulate on-orbit effects on the solar arrays. This paper summarizes the thermal performance of the Solar Array-3 (SA3) during the Disturbance Verification Test (DVT). The test was conducted between 26 October 2000 and 30 October 2000. Included in this paper are: (1) a brief description of the SA3's components and its thermal design; (2) a summary of the on-orbit temperature predictions; (3) pretest thermal preparations; (4) a description of the chamber and thermal monitoring sensors; and (5) a presentation of test thermal data results versus flight predictions.
Simulation and Experimentation in an Astronomy Laboratory, Part II
NASA Astrophysics Data System (ADS)
Maloney, F. P.; Maurone, P. A.; Hones, M.
1995-12-01
The availability of low-cost, high-performance computing hardware and software has transformed the manner by which astronomical concepts can be re-discovered and explored in a laboratory that accompanies an astronomy course for non-scientist students. We report on a strategy for allowing each student to understand fundamental scientific principles by interactively confronting astronomical and physical phenomena, through direct observation and by computer simulation. Direct observation of physical phenomena, such as Hooke's Law, begins by using a computer and hardware interface as a data-collection and presentation tool. In this way, the student is encouraged to explore the physical conditions of the experiment and re-discover the fundamentals involved. The hardware frees the student from the tedium of manual data collection and presentation, and permits experimental design which utilizes data that would otherwise be too fleeting, too imprecise, or too voluminous. Computer simulation of astronomical phenomena allows the student to travel in time and space, freed from the vagaries of weather, to re-discover such phenomena as the daily and yearly cycles, the reason for the seasons, the saros, and Kepler's Laws. By integrating the knowledge gained by experimentation and simulation, the student can understand both the scientific concepts and the methods by which they are discovered and explored. Further, students are encouraged to place these discoveries in an historical context, by discovering, for example, the night sky as seen by the survivors of the sinking Titanic, or Halley's comet as depicted on the Bayeux tapestry. We report on the continuing development of these laboratory experiments. Further details and the text for the experiments are available at the following site: http://astro4.ast.vill.edu/ This work is supported by a grant from The Pew Charitable Trusts.
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through orchestration of semi-structured analysis pipelines which involves execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transition through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks including data management and transfer of the workflow model becomes more complex. From the implementation perspective, this workflow model was initially developed on a set of scripts using static data input and output references. In the course of application usage, as more scripts or modifications were introduced to meet user requirements, debugging and validating the results became more cumbersome. To address these problems, we identified a need to have a high-level scientific workflow tool through which all the above mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources) that allows seamless and automated access to different heterogeneous and distributed resources which is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of high-level scientific workflow middleware makes reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
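The actual workflow submits Elmer/Ice and HiDEM jobs to HPC resources through the UNICORE middleware. The sketch below shows only the shape of the iterative coupling loop, with placeholder Python functions standing in for job submission and the format-conversion step; none of the function names correspond to real Elmer/Ice, HiDEM, or UNICORE interfaces.

```python
def run_ice_flow(front_geometry):
    """Placeholder for submitting a continuum ice-flow (Elmer/Ice-like) job."""
    # In the real workflow this would be an HPC job submitted via the middleware.
    return {"velocity_field": "...", "front_geometry": front_geometry}

def convert_for_calving(ice_flow_output):
    """Placeholder for the data-format conversion step between the two models."""
    return {"particles": ice_flow_output["front_geometry"]}

def run_calving(particle_input):
    """Placeholder for submitting a discrete-element calving (HiDEM-like) job."""
    return {"new_front_geometry": particle_input["particles"] + "-retreated"}

def coupled_workflow(initial_geometry, n_cycles):
    """Iteratively couple ice flow and calving, as in the glacier use case."""
    geometry = initial_geometry
    for cycle in range(n_cycles):
        flow_out = run_ice_flow(geometry)
        calving_in = convert_for_calving(flow_out)
        calving_out = run_calving(calving_in)
        geometry = calving_out["new_front_geometry"]
        print(f"cycle {cycle}: front geometry -> {geometry}")
    return geometry

coupled_workflow("front-0", n_cycles=3)
```

A workflow engine adds exactly what this toy loop lacks: job submission to remote resources, data staging between steps, monitoring, and provenance, which is the motivation given in the abstract for moving from ad hoc scripts to a workflow middleware.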
Study on Earthquake Emergency Evacuation Drill Trainer Development
NASA Astrophysics Data System (ADS)
ChangJiang, L.
2016-12-01
With the advance of China's urbanization, ensuring that people survive an earthquake requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms, and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we built simulation software for earthquake emergency evacuation drills. The software simulates a drill by building a spatial structural model and selecting people's location information based on the actual conditions of the buildings. Based on the simulation data, drills can then be run in the same building. RFID technology could be used for drill data collection, reading personal information and sending it to the evacuation simulation software via WiFi. The simulation software would then compare the simulated data with information from the actual evacuation process, such as evacuation time, evacuation paths, congestion nodes, and so on. Finally, it would produce a comparative analysis report giving assessment results and an optimization proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable, and scientific, improving the coping capacity of cities for earthquake hazards.
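The drill model combines cellular automata, shortest-path routing, and collision avoidance; only the shortest-path piece is easy to show briefly. The sketch below is a generic breadth-first search on a grid floor plan with walls, one common way to compute an evacuation route to an exit. The floor-plan layout is invented for the example and is not from the described software.

```python
from collections import deque

def shortest_evacuation_path(grid, start, exit_cell):
    """Breadth-first search on a grid floor plan: 0 = free cell, 1 = wall."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parent = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == exit_cell:                      # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                    # no route to the exit

floor = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(shortest_evacuation_path(floor, start=(0, 0), exit_cell=(2, 3)))
```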
NASA Technical Reports Server (NTRS)
Fogleman, Guy (Editor); Huntington, Judith L. (Editor); Schwartz, Deborah E. (Editor); Fonda, Mark L. (Editor)
1989-01-01
An overview of the Gas-Grain Simulation Facility (GGSF) project and its current status is provided. The proceedings of the Gas-Grain Simulation Facility Experiments Workshop are recorded. The goal of the workshop was to define experiments for the GGSF--a small particle microgravity research facility. The workshop addressed the opportunity for performing, in Earth orbit, a wide variety of experiments that involve single small particles (grains) or clouds of particles. The first volume includes the executive summary, overview, scientific justification, history, and planned development of the Facility.
Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob
2003-01-01
The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
Impacts of a STSE High School Biology Course on the Scientific Literacy of Hong Kong Students
ERIC Educational Resources Information Center
Lau, Kwok-chi
2013-01-01
The PISA performance of Hong Kong has prompted this study to investigate if scientific literacy (SL) of Hong Kong students can be improved further through a high school biology course employing the STSE approach. A STSE course was developed in accordance to the contexts of Hong Kong and a framework for the assessment of scientific literacy was…
Virtual Reality Simulation for the Operating Room
Gallagher, Anthony G.; Ritter, E Matt; Champion, Howard; Higgins, Gerald; Fried, Marvin P.; Moses, Gerald; Smith, C Daniel; Satava, Richard M.
2005-01-01
Summary Background Data: To inform surgeons about the practical issues to be considered for successful integration of virtual reality simulation into a surgical training program. The learning and practice of minimally invasive surgery (MIS) makes unique demands on surgical training programs. A decade ago, Satava proposed virtual reality (VR) surgical simulation as a solution for this problem. Only recently have robust scientific studies supported that vision. Methods: A review of the surgical education, human-factors, and psychology literature to identify important factors which will impinge on the successful integration of VR training into a surgical training program. Results: VR is more likely to be successful if it is systematically integrated into a well-thought-out education and training program which objectively assesses technical skills improvement proximate to the learning experience. Validated performance metrics should be relevant to the surgical task being trained but in general will require trainees to reach an objectively determined proficiency criterion, based on tightly defined metrics and perform at this level consistently. VR training is more likely to be successful if the training schedule takes place on an interval basis rather than massed into a short period of extensive practice. High-fidelity VR simulations will confer the greatest skills transfer to the in vivo surgical situation, but less expensive VR trainers will also lead to considerably improved skills generalizations. Conclusions: VR for improved performance of MIS is now a reality. However, VR is only a training tool that must be thoughtfully introduced into a surgical training curriculum for it to successfully improve surgical technical skills. PMID:15650649
Tangible Landscape: Cognitively Grasping the Flow of Water
NASA Astrophysics Data System (ADS)
Harmon, B. A.; Petrasova, A.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2016-06-01
Complex spatial forms like topography can be challenging to understand, much less intentionally shape, given the heavy cognitive load of visualizing and manipulating 3D form. Spatiotemporal processes like the flow of water over a landscape are even more challenging to understand and intentionally direct as they are dependent upon their context and require the simulation of forces like gravity and momentum. This cognitive work can be offloaded onto computers through 3D geospatial modeling, analysis, and simulation. Interacting with computers, however, can also be challenging, often requiring training and highly abstract thinking. Tangible computing - an emerging paradigm of human-computer interaction in which data is physically manifested so that users can feel it and directly manipulate it - aims to offload this added cognitive work onto the body. We have designed Tangible Landscape, a tangible interface powered by an open source geographic information system (GRASS GIS), so that users can naturally shape topography and interact with simulated processes with their hands in order to make observations, generate and test hypotheses, and make inferences about scientific phenomena in a rapid, iterative process. Conceptually Tangible Landscape couples a malleable physical model with a digital model of a landscape through a continuous cycle of 3D scanning, geospatial modeling, and projection. We ran a flow modeling experiment to test whether tangible interfaces like this can effectively enhance spatial performance by offloading cognitive processes onto computers and our bodies. We used hydrological simulations and statistics to quantitatively assess spatial performance. We found that Tangible Landscape enhanced 3D spatial performance and helped users understand water flow.
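Tangible Landscape drives full GRASS GIS hydrological simulations on the scanned model. As a much simpler stand-in for the question "where does the water go" on a shaped surface, the sketch below computes steepest-descent (D8-style) flow directions and flow accumulation on a small synthetic elevation grid. It is purely illustrative and is not the GRASS GIS algorithm used by the system.

```python
import numpy as np

def flow_accumulation(elev):
    """Steepest-descent (D8-style) flow accumulation on an elevation array."""
    rows, cols = elev.shape
    acc = np.ones_like(elev, dtype=float)        # each cell contributes itself
    order = np.argsort(elev, axis=None)[::-1]    # process from highest to lowest
    for idx in order:
        r, c = divmod(idx, cols)
        best, target = elev[r, c], None
        for dr in (-1, 0, 1):                    # find the lowest of the 8 neighbors
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols \
                        and elev[nr, nc] < best:
                    best, target = elev[nr, nc], (nr, nc)
        if target is not None:                   # pass accumulated flow downslope
            acc[target] += acc[r, c]
    return acc

# Synthetic tilted surface with a shallow central channel
y, x = np.mgrid[0:6, 0:6]
elev = x + 0.2 * (y - 2.5) ** 2
print(flow_accumulation(elev))
```

Cells along the channel collect flow from the surrounding slope, which is the kind of feedback the tangible interface projects back onto the physical model after each scan.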
A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data
Fan, Ya Ju; Kamath, Chandrika
2016-09-01
The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.
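The paper compares several sparse-recovery algorithms that are not reproduced here. The sketch below is a generic compressed-sensing baseline: a sparse signal is measured with a random Gaussian matrix and reconstructed by iterative soft-thresholding (ISTA), a standard L1-regularized solver. Problem sizes and the regularization weight are illustrative, not the authors' settings.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 80, 8                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # compressed measurements

x_rec = ista(A, y)
print("relative reconstruction error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

With many fewer measurements than signal samples, the sparse signal is still recovered to small relative error, which is the property the paper evaluates on simulation data of varying sparsity.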
Effect of DM Actuator Errors on the WFIRST/AFTA Coronagraph Contrast Performance
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Shi, Fang
2015-01-01
The WFIRST/AFTA 2.4 m space telescope currently under study includes a stellar coronagraph for the imaging and the spectral characterization of extrasolar planets. The coronagraph employs two sequential deformable mirrors (DMs) to compensate for phase and amplitude errors in creating dark holes. DMs are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Working with a low-order wavefront sensor, the DM that is conjugate to a pupil can also be used to correct low-order wavefront drift during a scientific observation. However, not all actuators in a DM have the same gain. When using such a DM in the low-order wavefront sensing and control subsystem, the actuator gain errors introduce high-spatial frequency errors to the DM surface and thus worsen the contrast performance of the coronagraph. We have investigated the effects of actuator gain errors and the actuator command digitization errors on the contrast performance of the coronagraph through modeling and simulations, and will present our results in this paper.
Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing
2011-01-01
Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high-performance computation allows very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics of the smaller scales, inexpensive subgrid models are employed. The subgrid models are constructed systematically by analyzing well-resolved, generic, representative simulations. By varying the flow conditions in these simulations, correlations are obtained that provide, for each individual coarse mesh cell, a volume force vector and a volume porosity. Moreover, surface porosities are derived for all vertices. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We will describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
Resilient workflows for computational mechanics platforms
NASA Astrophysics Data System (ADS)
Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine
2010-06-01
Workflow management systems have recently been the focus of much interest and of many research and deployment efforts for scientific applications worldwide [26, 27]. Their ability to abstract the applications by wrapping application codes has also stressed the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. Also, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multi-discipline simulations for the decades to come [28]. This supports the goal of full flight dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight-tests and certification of aircraft in the future [23, 24, 29].
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Starr, David (Technical Monitor)
2001-01-01
Fritz Hasler (NASA/Goddard) will demonstrate the latest Blue Marble Digital Earth technology. We will fly in from space through Terra, Landsat 7, to 1 m Ikonos "Spy Satellite" data to Washington, NYC, Chicago, and LA. You will see animations using the new 1 km global datasets from the EOS Terra satellite. Spectacular new animations from Terra, Landsat 7, and SeaWiFS will be presented. See the latest animations of super hurricanes like Floyd, Luis, and Mitch from GOES & TRMM. See movies assembled using new low-cost HDTV nonlinear editing equipment that is revolutionizing the way we communicate scientific results. See climate change in action with Global Land & Ocean productivity changes over the last 20 years. Remote sensing observations of ocean SST, height, winds, color, and El Nino from GOES, AVHRR, SSMI & SeaWiFS are put in context with atmospheric and ocean simulations. Compare symmetrical equatorial eddies observed by GOES with the simulations.
MPAS-Ocean NESAP Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersen, Mark Roger; Arndt, William; Keen, Noel
NESAP performance improvements on MPAS-Ocean have resulted in a 5% to 7% speed-up on each of the examined systems including Cori-KNL, Cori-Haswell, and Edison. These tests were configured to emulate a production workload by using 128 nodes and a high-resolution ocean domain. Overall, the gap between standard and many-core architecture performance has been narrowed, but Cori-KNL remains considerably under-performing relative to Edison. NESAP code alterations affected 600 lines of code, and most of these improvements will benefit other MPAS codes (sea ice, land ice) that are also components within ACME. Modifications are fully tested within MPAS. Testing in ACME across many platforms is underway, and must be completed before the code is merged. In addition, a ten-year production ACME global simulation was conducted on Cori-KNL in late 2016 with the pre-NESAP code in order to test readiness and configurations for scientific studies. Next steps include assessing performance across a range of nodes, threads per node, and ocean resolutions on Cori-KNL.
Hypobaric chamber for the study of oral health problems in a simulated spacecraft environment
NASA Technical Reports Server (NTRS)
Brown, L. R.
1974-01-01
A hypobaric chamber was constructed to house two marmosets simultaneously in a space-simulated environment for periods of 14, 28 and 56 days which coincided with the anticipated Skylab missions. This report details the fabrication, operation, and performance of the chamber and very briefly reviews the scientific data from nine chamber trials involving 18 animals. The possible application of this model system to studies unrelated to oral health or space missions is discussed.
Performance analysis of LDPC codes on OOK terahertz wireless channels
NASA Astrophysics Data System (ADS)
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of deterioration in the transmission quality of terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through the atmospheric channel. The THz wave propagation characteristics and the channel model in the atmosphere are set up. Numerical simulations validate the great performance of LDPC codes against atmospheric fading and demonstrate the huge potential in future ultra-high-speed, beyond-Gbps THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
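Reproducing the LDPC soft-decision decoder is beyond a short sketch, so the code below only simulates the uncoded OOK baseline over an additive white Gaussian noise channel and checks the Monte Carlo bit-error rate against the Q-function expression. This baseline is the reference point that coding schemes such as the one in the paper improve upon; the amplitude and noise level are illustrative.

```python
import numpy as np
from math import erfc, sqrt

def ook_ber_awgn(amplitude, noise_std, n_bits=200_000, seed=0):
    """Monte Carlo BER for uncoded on-off keying with mid-point threshold detection."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    received = amplitude * bits + noise_std * rng.standard_normal(n_bits)
    decided = (received > amplitude / 2).astype(int)
    return np.mean(decided != bits)

A, sigma = 1.0, 0.22
simulated = ook_ber_awgn(A, sigma)
theory = 0.5 * erfc(A / (2 * sigma) / sqrt(2))     # Q(A / (2*sigma))
print(f"simulated BER      : {simulated:.4e}")
print(f"theoretical Q(A/2s): {theory:.4e}")
```

A fading or scintillation channel and an LDPC decoder would be layered on top of this kind of simulation to reproduce the coded-performance curves discussed in the abstract.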
FreeDam - A webtool for free-electron laser-induced damage in femtosecond X-ray crystallography
NASA Astrophysics Data System (ADS)
Jönsson, H. Olof; Östlin, Christofer; Scott, Howard A.; Chapman, Henry N.; Aplin, Steve J.; Tîmneanu, Nicuşor; Caleman, Carl
2018-03-01
Over the last decade X-ray free-electron laser (XFEL) sources have been made available to the scientific community. One of the most successful uses of these new machines has been protein crystallography. When samples are exposed to the intense short X-ray pulses provided by the XFELs, the sample quickly becomes highly ionized and the atomic structure is affected. Here we present a webtool dubbed FreeDam based on non-thermal plasma simulations, for estimation of radiation damage in free-electron laser experiments in terms of ionization, temperatures and atomic displacements. The aim is to make this tool easily accessible to scientists who are planning and performing experiments at XFELs.
NASA Technical Reports Server (NTRS)
Webster, W., Jr.; Frawley, J. J.; Stefanik, M.
1984-01-01
Simulation studies established that the main (core), crustal and electrojet components of the Earth's magnetic field can be observed with greater resolution or over a longer time base than is presently possible by using the capabilities provided by the space station. Two systems are studied. The first, a long-lifetime magnetic monitor, would observe the main field and its time variation. The second, a remotely piloted magnetic probe, would observe the crustal field at low altitude and the electrojet field in situ. The system design and the scientific performance of these systems are assessed. The advantages of the space station are reviewed.
Study of Background Rejection Systems for the IXO Mission.
NASA Astrophysics Data System (ADS)
Laurent, Philippe; Limousin, O.; Tatischeff, V.
2009-01-01
The scientific performance of the IXO mission will necessitate a very low detector background level. This will require thorough background simulations and efficient background rejection systems, as well as very good knowledge of the detectors to be shielded. At APC, Paris, and CEA, Saclay, we gained experience in these activities by conceiving and optimising in parallel the high-energy detector and the active and passive background rejection system of the Simbol-X mission. Considering that this work may be naturally extended to other X-ray missions, we have initiated with CNES an R&D project on the study of background rejection systems, mainly in view of the IXO project. We will detail this activity in the poster.
Fero, Laura J; O'Donnell, John M; Zullo, Thomas G; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T; Hoffman, Leslie A
2010-10-01
This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as 'meeting' or 'not meeting' overall expectations. Test scores were categorized as strong, average, or weak. Most (75.0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0.277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0.001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer's V = 0.444, P = 0.029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer's V = 0.413, P = 0.047). Students' performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting. © 2010 The Authors. Journal of Advanced Nursing © 2010 Blackwell Publishing Ltd.
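The abstract reports Cramér's V statistics for the association between simulation-based performance and critical-thinking scores. The sketch below shows how such a value can be computed from a contingency table of counts using SciPy; the counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V and chi-square p-value for a contingency table of counts."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k)), p

# Hypothetical counts: rows = met / did not meet simulation expectations,
# columns = weak / average / strong critical-thinking disposition scores.
counts = [[2, 3, 4],
          [10, 12, 5]]
v, p = cramers_v(counts)
print(f"Cramer's V = {v:.3f}, p = {p:.3f}")
```

Values of V around 0.4, as reported in the study, indicate a moderate association between the two categorical variables.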
A scientific workflow framework for ¹³C metabolic flux analysis.
Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina
2016-08-20
Metabolic flux analysis (MFA) with ¹³C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of ¹³C MFA is the interactivity of the computational workflow according to which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). Here, the workflow assembly is inevitably determined by the scientist who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling on demand ¹³C MFA workflows. ¹³C MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by ¹³C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.
A Perspective on Coupled Multiscale Simulation and Validation in Nuclear Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. P. Short; D. Gaston; C. R. Stanek
2014-01-01
The field of nuclear materials encompasses numerous opportunities to address and ultimately solve longstanding industrial problems by improving the fundamental understanding of materials through the integration of experiments with multiscale modeling and high-performance simulation. A particularly noteworthy example is an ongoing study of axial power distortions in a nuclear reactor induced by corrosion deposits, known as CRUD (Chalk River unidentified deposits). We describe how progress is being made toward achieving scientific advances and technological solutions on two fronts. Specifically, the study of thermal conductivity of CRUD phases has augmented missing data as well as revealed new mechanisms. Additionally, the development of a multiscale simulation framework shows potential for the validation of a new capability to predict the power distribution of a reactor, in effect direct evidence of technological impact. The material- and system-level challenges identified in the study of CRUD are similar to other well-known vexing problems in nuclear materials, such as irradiation accelerated corrosion, stress corrosion cracking, and void swelling; they all involve connecting materials science fundamentals at the atomistic- and mesoscales to technology challenges at the macroscale.
Goldstone, Robert L; Landy, David H; Son, Ji Y
2010-04-01
Although the field of perceptual learning has mostly been concerned with low- to middle-level changes to perceptual systems due to experience, we consider high-level perceptual changes that accompany learning in science and mathematics. In science, we explore the transfer of a scientific principle (competitive specialization) across superficially dissimilar pedagogical simulations. We argue that transfer occurs when students develop perceptual interpretations of an initial simulation and simply continue to use the same interpretational bias when interacting with a second simulation. In arithmetic and algebraic reasoning, we find that proficiency in mathematics involves executing spatially explicit transformations to notational elements. People learn to attend mathematical operations in the order in which they should be executed, and the extent to which students employ their perceptual attention in this manner is positively correlated with their mathematical experience. For both science and mathematics, relatively sophisticated performance is achieved not by ignoring perceptual features in favor of deep conceptual features, but rather by adapting perceptual processing so as to conform with and support formally sanctioned responses. These "rigged-up perceptual systems" offer a promising approach to educational reform. Copyright © 2009 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Sahraoui, Nassim M.; Houat, Samir; Saidi, Nawal
2017-05-01
We present a simulation study of mixed convection in a horizontal channel heated from below. The lattice Boltzmann method (LBM) is used with the Boussinesq approximation to solve the coupled phenomena that govern the system's thermo-hydrodynamics. The double-population thermal lattice Boltzmann model (TLBM) is used, with the D2Q5 model for the thermal field and the D2Q9 model for the dynamic field. A comparison of the averaged Nusselt number obtained by the TLBM with other references is presented for an area stretching. The streamlines, vortices, isotherms, velocity profiles, and other parameters of the study are presented at a time tT chosen arbitrarily. The results presented here are in good agreement with those reported in the scientific literature, which gives us high confidence in the reliability of the TLBM for simulating this kind of physical phenomenon. Contribution to the topical issue "Materials for Energy harvesting, conversion and storage II (ICOME 2016)", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui
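The study's coupled thermal D2Q5/D2Q9 model with Boussinesq forcing is more elaborate than can be shown briefly. The sketch below implements only the hydrodynamic D2Q9 BGK part on a periodic domain, initialized with a decaying shear wave, to illustrate the collision and streaming steps that also underlie the thermal model. Grid size, relaxation time, and initial condition are arbitrary choices for the demonstration.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """D2Q9 equilibrium distributions for density rho and velocity (ux, uy)."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

nx, ny, tau, steps = 64, 64, 0.8, 200
x = np.arange(nx)[:, None] * 2 * np.pi / nx
rho = np.ones((nx, ny))
ux = np.zeros((nx, ny))
uy = 0.05 * np.sin(x) * np.ones((nx, ny))          # decaying shear wave
f = equilibrium(rho, ux, uy)

for step in range(steps):
    rho = f.sum(axis=0)                             # macroscopic moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau       # BGK collision
    for i in range(9):                              # streaming on a periodic domain
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("mass conserved:", np.isclose(f.sum(), nx * ny))
print("peak |uy| after viscous decay:", np.abs(uy).max())
```

The thermal variant in the abstract adds a second, D2Q5 distribution that carries temperature and a buoyancy force term coupling the two populations.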
Design of 4x1 microstrip patch antenna array for 5.8 GHz ISM band applications
NASA Astrophysics Data System (ADS)
Valjibhai, Gohil Jayesh; Bhatia, Deepak
2013-01-01
This paper describes a new design of a four-element antenna array using the corporate feed technique. The proposed antenna array is developed on Rogers 5880 dielectric material and operates in the 5.8 GHz ISM band. The industrial, scientific and medical (ISM) radio bands are portions of the radio spectrum reserved internationally for the use of radio-frequency (RF) energy for industrial, scientific and medical purposes other than communications. The antenna array has VSWR < 1.6 from 5.725 to 5.875 GHz. The simulated return loss of the antenna array is -39.3 dB at 5.8 GHz, and a gain of 12.3 dB is achieved. The directivity of the broadside radiation pattern is 12.7 dBi at the 5.8 GHz operating frequency. The antenna array is simulated using high-frequency structure simulation software.
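The actual design comes from a full-wave solver. As a quick analytic cross-check on what a 4x1 array can achieve, the sketch below evaluates the uniform-excitation array factor at 5.8 GHz, assuming half-wavelength element spacing (the spacing is an assumption, not a value taken from the paper). The peak array-factor gain of 20·log10(4) ≈ 12 dB over a single element is consistent in magnitude with the reported array gain.

```python
import numpy as np

freq = 5.8e9                       # ISM-band operating frequency [Hz]
c0 = 3e8                           # speed of light [m/s]
lam = c0 / freq                    # free-space wavelength
d = lam / 2                        # assumed element spacing (not from the paper)
N = 4                              # number of elements in the linear array
k = 2 * np.pi / lam                # wavenumber

theta = np.linspace(1e-6, np.pi, 1801)          # angle from the array axis
psi = k * d * np.cos(theta)                     # inter-element phase shift
af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(N), psi)), axis=0))
af_db = 20 * np.log10(af / af.max())

print(f"wavelength: {lam * 1000:.1f} mm, spacing: {d * 1000:.1f} mm")
print(f"peak array-factor gain over one element: {20 * np.log10(N):.1f} dB")
hpbw = np.degrees(np.ptp(theta[af_db >= -3.0]))  # -3 dB beamwidth in the scan plane
print(f"approximate -3 dB beamwidth: {hpbw:.1f} degrees")
```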
Easy GROMACS: A Graphical User Interface for GROMACS Molecular Dynamics Simulation Package
NASA Astrophysics Data System (ADS)
Dizkirici, Ayten; Tekpinar, Mustafa
2015-03-01
GROMACS is a widely used molecular dynamics simulation package. Since it is a command-driven program, it is difficult for molecular biologists, biochemists, new graduate students, and undergraduate researchers who are interested in molecular dynamics simulations to use. To alleviate the problem for those researchers, we wrote a graphical user interface that simplifies protein preparation for a classical molecular dynamics simulation. Our program works with various GROMACS versions and can perform essential analyses of GROMACS trajectories as well as protein preparation. We named our open-source program `Easy GROMACS'. Easy GROMACS gives researchers more time for scientific research instead of dealing with technical intricacies.
Learning Science through Computer Games and Simulations
ERIC Educational Resources Information Center
Honey, Margaret A., Ed.; Hilton, Margaret, Ed.
2011-01-01
At a time when scientific and technological competence is vital to the nation's future, the weak performance of U.S. students in science reflects the uneven quality of current science education. Although young children come to school with innate curiosity and intuitive ideas about the world around them, science classes rarely tap this potential.…
Integrating Numerical Computation into the Modeling Instruction Curriculum
ERIC Educational Resources Information Center
Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.
2014-01-01
Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…
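A typical first exercise in curricula that fold computation into introductory mechanics is a step-by-step (Euler) update of a projectile with air drag, a problem with no simple closed-form solution. The sketch below is a generic version of that exercise, not code from the article; the drag coefficient and launch velocity are arbitrary example values.

```python
# Euler-step simulation of a projectile with quadratic air drag.
g = 9.8            # gravitational acceleration [m/s^2]
b = 0.02           # drag coefficient per unit mass [1/m]
dt = 0.001         # time step [s]

x, y = 0.0, 0.0
vx, vy = 20.0, 20.0          # initial velocity components [m/s]

t = 0.0
while y >= 0.0:
    speed = (vx**2 + vy**2) ** 0.5
    ax = -b * speed * vx                 # drag opposes the motion
    ay = -g - b * speed * vy
    x += vx * dt
    y += vy * dt
    vx += ax * dt
    vy += ay * dt
    t += dt

print(f"range with drag   : {x:.1f} m after {t:.2f} s of flight")
print(f"ideal no-drag range: {2 * 20.0 * 20.0 / g:.1f} m")
```

Comparing the numerical range with the textbook no-drag formula makes the effect of the drag term, and the role of the time step, directly visible to students.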
Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists.
Weller, J M; Bloch, M; Young, S; Maze, M; Oyesola, S; Wyner, J; Dob, D; Haire, K; Durbridge, J; Walker, T; Newble, D
2003-01-01
There is increasing emphasis on performance-based assessment of clinical competence. The High Fidelity Patient Simulator (HPS) may be useful for assessment of clinical practice in anaesthesia, but needs formal evaluation of validity, reliability, feasibility and effect on learning. We set out to assess the reliability of a global rating scale for scoring simulator performance in crisis management. Using a global rating scale, three judges independently rated videotapes of anaesthetists in simulated crises in the operating theatre. Five anaesthetists then independently rated subsets of these videotapes. There was good agreement between raters for medical management, behavioural attributes and overall performance. Agreement was high for both the initial judges and the five additional raters. Using a global scale to assess simulator performance, we found good inter-rater reliability for scoring performance in a crisis. We estimate that two judges should provide a reliable assessment. High fidelity simulation should be studied further for assessing clinical performance.
NASA Astrophysics Data System (ADS)
Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2009-08-01
One of the roles of the VIIRS Ocean Science Team (VOST) is to assess the performance of the instrument and scientific processing software that generates ocean color parameters such as normalized water-leaving radiances and chlorophyll. A VIIRS data simulator is being developed to aid in this work. The simulator will create a sufficient set of simulated Sensor Data Records (SDR) so that the ocean component of the VIIRS processing system can be tested. It will also have the ability to study the impact of instrument artifacts on the derived parameter quality. The simulator will use existing resources available to generate the geolocation information and to transform calibrated radiances to geophysical parameters and vice versa. In addition, the simulator will be able to introduce land features, cloud fields, and expected VIIRS instrument artifacts. The design of the simulator and its progress will be presented.
Rodrigues, João P G L M; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2016-01-01
Molecular modelling and simulations are nowadays an integral part of research in areas ranging from physics to chemistry to structural biology, as well as pharmaceutical drug design. This popularity is due to the development of high-performance hardware and of accurate and efficient molecular mechanics algorithms by the scientific community. These improvements are also benefitting scientific education. Molecular simulations, their underlying theory, and their applications are particularly difficult to grasp for undergraduate students. Having hands-on experience with the methods contributes to a better understanding and solidification of the concepts taught during the lectures. To this end, we have created a computer practical class, which has been running for the past five years, composed of several sessions where students characterize the conformational landscape of small peptides using molecular dynamics simulations in order to gain insights into their binding to protein receptors. In this report, we detail the ingredients and recipe necessary to establish and carry out this practical, as well as some of the questions posed to the students and their expected results. Further, we cite some examples of the students' written reports, provide statistics, and share their feedback on the structure and execution of the sessions. These sessions were implemented alongside a theoretical molecular modelling course but have also been used successfully as a standalone tutorial during specialized workshops. The availability of the material on our web page also facilitates this integration and dissemination and lends strength to the thesis of open-source science and education. © 2016 The International Union of Biochemistry and Molecular Biology.
Extravehicular Activity Operations Concepts Under Communication Latency and Bandwidth Constraints
NASA Technical Reports Server (NTRS)
Beaton, Kara H.; Chappell, Steven P.; Abercromby, Andrew F. J.; Miller, Matthew J.; Nawotniak, Shannon Kobs; Hughes, Scott; Brady, Allyson; Lim, Darlene S. S.
2017-01-01
The Biologic Analog Science Associated with Lava Terrains (BASALT) project is a multi-year program dedicated to iteratively develop, implement, and evaluate concepts of operations (ConOps) and supporting capabilities intended to enable and enhance human scientific exploration of Mars. This paper describes the planning, execution, and initial results from the first field deployment, referred to as BASALT-1, which consisted of a series of 10 simulated extravehicular activities (EVAs) on volcanic flows in Idaho's Craters of the Moon (COTM) National Monument. The ConOps and capabilities deployed and tested during BASALT-1 were based on previous NASA trade studies and analog testing. Our primary research question was whether those ConOps and capabilities work acceptably when performing real (non-simulated) biological and geological scientific exploration under 4 different Mars-to-Earth communication conditions: 5 and 15 min one-way light time (OWLT) communication latencies and low (0.512 Mb/s uplink, 1.54 Mb/s downlink) and high (5.0 Mb/s uplink, 10.0 Mb/s downlink) bandwidth conditions representing the lower and higher limits of technical communication capabilities currently proposed for future human exploration missions. The synthesized results of BASALT-1 with respect to the ConOps and capabilities assessment were derived from a variety of sources, including EVA task timing data, network analytic data, and subjective ratings and comments regarding the scientific and operational acceptability of the ConOps and the extent to which specific capabilities were enabling and enhancing, and are presented here. BASALT-1 established preliminary findings that the baseline ConOps, software systems, and communication protocols were scientifically and operationally acceptable with minor improvements desired by the "Mars" extravehicular (EV) and intravehicular (IV) crewmembers, but unacceptable with improvements required by the "Earth" Mission Support Center. These data will provide a basis for guiding and prioritizing capability development for future BASALT deployments and, ultimately, future human exploration missions.
NASA Astrophysics Data System (ADS)
Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.
2014-12-01
OpenGeoSys (OGS) is a scientific open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems for applications in geoscience and hydrology, e.g. CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that environmental scientists and engineers can address, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by tightly bundling data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated / saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
Assessing the Added Value of Dynamical Downscaling in the Context of Hydrologic Implication
NASA Astrophysics Data System (ADS)
Lu, M.; IM, E. S.; Lee, M. H.
2017-12-01
There is a scientific consensus that high-resolution climate simulations downscaled by Regional Climate Models (RCMs) can provide valuable refined information over the target region. However, a significant body of hydrologic impact assessment has been performed using the climate information provided by Global Climate Models (GCMs) in spite of a fundamental spatial scale gap. This is probably based on the assumption that the substantial biases and spatial scale gap in raw GCM data can simply be removed by applying statistical bias correction and spatial disaggregation. Indeed, many previous studies argue that the benefit of dynamical downscaling using RCMs is minimal when linking climate data with a hydrological model, based on comparisons of the hydrologic impacts of bias-corrected GCM output and bias-corrected RCM output. This may be true for long-term averaged climatological patterns, but it is not necessarily the case when looking at variability across the temporal spectrum. In this study, we investigate the added value of dynamical downscaling, focusing on the performance in capturing climate variability. To do this, we evaluate the performance of a distributed hydrological model over the Korean river basin using the raw output from a GCM and an RCM, and bias-corrected output from the GCM and RCM. The impacts of climate input data on streamflow simulation are comprehensively analyzed. [Acknowledgements] This research is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 17AWMP-B083066-04).
Grachev, S V; Gorodnova, E A
2008-01-01
The authors present original material on the first experience of teaching the theoretical foundations of venture financing of scientific-innovation projects in a higher medical school. The results and conclusions are based on data from a questionnaire administered by the authors. More than 90% of the young physician-scientists recognized the relevance of this problem for translating their research results into practice. Thus, the experience of teaching the theoretical foundations of venture financing of scientific-innovation projects in a higher medical school supports the further development and inclusion of the module "The venture financing of scientifically-innovative projects in biomedicine" in the training plan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, John L.; Bylaska, Eric J.; Bogatko, Stuart A.
DFT-MD simulations (PBE96 and PBE0) with MD-XAFS scattering calculations (FEFF9) show near quantitative agreement with new and existing XAFS measurements for a comprehensive series of transition metal ions which interact with their hydration shells via complex mechanisms (high spin, covalency, charge transfer, etc.). This work was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is operated for the U.S. DOE by Battelle. A portion of the research was performed using EMSL, a national scientific user facility sponsored by the U.S. DOE's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory.
NASA Astrophysics Data System (ADS)
Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.
2014-08-01
Integrated water system modeling is a reasonable approach to provide scientific understanding of, and possible solutions to, the severe water crises faced around the world and to promote the implementation of integrated river basin management. Such modeling is becoming more feasible due to better computing facilities and available data sources. In this study, the process-oriented water system model (HEXM) is developed by integrating multiple water-related processes including hydrology, biogeochemistry, environment and ecology, as well as the interference of human activities. The model was tested in the Shaying River Catchment, the largest, most heavily regulated and most heavily polluted tributary of the Huai River Basin in China. The results show that HEXM is well integrated, with good performance on the key water-related components of this complex catchment. The simulated daily runoff series at all the regulated and less-regulated stations match observations, especially for the high and low flow events; the average values of the correlation coefficient and coefficient of efficiency are 0.81 and 0.63, respectively. The dynamics of observed daily ammonia-nitrogen (NH4N) concentration, an important index for assessing water environmental quality in China, are well captured with an average correlation coefficient of 0.66. Furthermore, the spatial patterns of nonpoint source pollutant load and grain yield are also simulated properly, and the outputs agree well with city-scale statistics. Our model shows clearly superior performance in both calibration and validation in comparison with the widely used SWAT model. This model is expected to provide a strong reference for water system modeling in complex basins, a scientific foundation for the implementation of integrated river basin management worldwide, and technical guidance for the reasonable regulation of dams and sluices and for environmental improvement in river basins.
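For reference, the two goodness-of-fit measures quoted above are commonly computed as the Pearson correlation coefficient and the Nash-Sutcliffe coefficient of efficiency; the short sketch below shows both on a synthetic daily runoff series, interpreting "coefficient of efficiency" as Nash-Sutcliffe, which is the usual convention in hydrology but is an assumption here.

import numpy as np

def pearson_r(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.corrcoef(obs, sim)[0, 1]

def nash_sutcliffe(obs, sim):
    # 1 - SSE / variance of the observations; 1.0 indicates a perfect fit
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
obs = 10.0 + 5.0 * np.sin(np.linspace(0, 12 * np.pi, 365)) + rng.normal(0, 1, 365)
sim = obs + rng.normal(0, 2, 365)   # a synthetic "model" run with some error
print(f"r = {pearson_r(obs, sim):.2f}, NSE = {nash_sutcliffe(obs, sim):.2f}")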
A Systematic Approach for Obtaining Performance on Matrix-Like Operations
NASA Astrophysics Data System (ADS)
Veras, Richard Michael
Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results; thus, a considerable amount of human expert effort is spent on obtaining performance for these scientific codes. However, this is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse and scale-free domains. We argue that it is the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.
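The thesis targets compiled kernels, but the idea of letting the structure of the data drive code generation can be illustrated with a toy sketch: given a fixed sparsity pattern, emit a fully unrolled sparse matrix-vector product specialized to that pattern. The sketch below only illustrates the approach; it is not the generator described in the thesis.

import numpy as np

def generate_spmv(rows, cols):
    """Emit and compile an SpMV specialized to a fixed sparsity pattern.
    rows/cols give the nonzero coordinates; the generated code is fully unrolled."""
    lines = ["def spmv(vals, x, y):"]
    for k, (i, j) in enumerate(zip(rows, cols)):
        lines.append(f"    y[{i}] += vals[{k}] * x[{j}]")
    lines.append("    return y")
    src = "\n".join(lines)
    ns = {}
    exec(src, ns)                 # compile the specialized kernel
    return ns["spmv"], src

# toy 3x3 pattern: nonzeros on the diagonal plus one off-diagonal entry
rows, cols = [0, 1, 2, 0], [0, 1, 2, 2]
vals = np.array([2.0, 3.0, 4.0, 1.0])
spmv, src = generate_spmv(rows, cols)
y = spmv(vals, np.array([1.0, 1.0, 1.0]), np.zeros(3))
print(src)
print("y =", y)                   # [3. 3. 4.]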
McRae, Marion E; Chan, Alice; Hulett, Renee; Lee, Ai Jin; Coleman, Bernice
2017-06-01
There are few reports of the effectiveness of, or satisfaction with, simulation to learn cardiac surgical resuscitation skills. To test the effect of simulation on the self-confidence of nurses to perform cardiac surgical resuscitation skills and nurses' satisfaction with the simulation experience. A convenience sample of sixty nurses rated their self-confidence to perform cardiac surgical resuscitation skills before and after two simulations. Simulation performance was assessed. Subjects completed the Satisfaction with Simulation Experience scale and demographics. Self-confidence scores to perform all cardiac surgical skills, as measured by paired t-tests, were significantly increased after the simulation (d=-0.50 to 1.78). Self-confidence and cardiac surgical work experience were not correlated with time to performance. Total satisfaction scores were high (mean 80.2, SD 1.06), indicating satisfaction with the simulation. There was no correlation of the satisfaction scores with cardiac surgical work experience (τ=-0.05, ns). Self-confidence scores to perform cardiac surgical resuscitation procedures were higher after the simulation. Nurses were highly satisfied with the simulation experience. Copyright © 2016 Elsevier Ltd. All rights reserved.
Millimetron and Earth-Space VLBI
NASA Astrophysics Data System (ADS)
Likhachev, S.
2014-01-01
The main scientific goal of the Millimetron mission operating in Space VLBI (SVLBI) mode will be the exploration of compact radio sources with extremely high angular resolution (better than one microsecond of arc). The space-ground interferometer Millimetron has an orbit around the L2 point of the Earth - Sun system and allows operation with baselines up to a hundred Earth diameters. SVLBI observations will be accomplished by space and ground-based radio telescopes simultaneously. At the space telescope the received baseband signal is digitized and then transferred to the onboard memory storage (up to 100 TB). The scientific and service data transfer to the ground tracking station is performed by means of both synchronization and communication radio links (1 GBps). The array of scientific data is then processed at the correlation center. Due to the (u,v)-plane coverage requirements for SVLBI imaging, it is necessary to observe at two different frequencies and two circular polarizations simultaneously, with frequency switching. The total recording bandwidth (2x2x4 GHz) defines the on-board memory size. The ground-based support of the Millimetron mission in VLBI mode could include the Atacama Large Millimeter Array (ALMA), Pico Veleta (Spain), the Plateau de Bure interferometer (France), the SMT telescope in the US (Arizona), the LMT antenna (Mexico), the SMA array (Mauna Kea, USA), as well as the Green Bank and Effelsberg 100 m telescopes (for 22 GHz observations). We will present simulation results for the Millimetron-ALMA interferometer. The sensitivity estimate of the space-ground interferometer will be compared to the requirements of the scientific goals of the mission. The possibility of multi-frequency synthesis (MFS) to obtain high quality images will also be considered.
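To see how the quoted recording bandwidth translates into on-board storage needs, the short calculation below works through the numbers under two assumptions that are not stated in the abstract: Nyquist-rate real sampling and 2-bit quantization per sample, both typical choices for VLBI recorders.

# Assumptions (not from the abstract): Nyquist-rate real sampling, 2-bit samples.
n_freq, n_pol = 2, 2                       # two frequency bands, two polarizations
bandwidth_hz = 4e9                         # 4 GHz per channel
bits_per_sample = 2                        # assumed quantization
samples_per_s = 2 * bandwidth_hz           # Nyquist rate for a real-sampled band
rate_bits = n_freq * n_pol * samples_per_s * bits_per_sample   # 64 Gbit/s
rate_bytes = rate_bits / 8                                     # 8 GB/s
onboard_bytes = 100e12                                         # 100 TB memory
hours_to_fill = onboard_bytes / rate_bytes / 3600
print(f"{rate_bits / 1e9:.0f} Gbit/s; 100 TB holds {hours_to_fill:.1f} h of data")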
Atmospheric Responses from Radiosonde Observations of the 2017 Total Solar Eclipse
NASA Astrophysics Data System (ADS)
Fowler, J.
2017-12-01
The Atmospheric Responses from Radiosonde Observations project during the August 21st, 2017 Total Solar Eclipse set out to observe the atmospheric response under the shadow of the Moon using both research and operational earth science instruments run primarily by undergraduate students not formally trained in atmospheric science. During the eclipse, approximately 15 teams across the path of totality launched radiosonde balloon platforms in very rapid, serial sonde deployments. Our strategy was to combine a dense ground observation network with multiple radiosonde sites located within and along the margins of the path of totality. This demonstrates how dense observation networks leveraged among various programs can "fill the gaps" in data-sparse regions, enabling research ideas and questions that previously could not be approached with coarser-resolution data and improving the scientific understanding and prediction of geophysical and hazardous phenomena. The core scientific objectives are (1) to make high-resolution surface and upper-air observations at several sites along the eclipse path, (2) to quantitatively study atmospheric responses to the rapid disappearance of the Sun across the United States, and (3) to assess the performance of high-resolution weather forecasting models in simulating the observed response. Such a scientific campaign, especially unique during a total solar eclipse, provides a rare, life-altering opportunity to attract and enable the next generation of observational scientists. It was an ideal "laboratory" for graduate, undergraduate, citizen-scientist and K-12 students and staff to learn, explore and research in STEM.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answering important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities to further improve the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
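The single-source idea behind such kernel meta-programming can be sketched without BOAST itself: one kernel body is rendered into both a CUDA and an OpenCL source string, which a build system would then hand to the respective compilers. The template below is a deliberately tiny stand-in for what the actual tool does.

# Toy illustration (not BOAST): render one SAXPY-like kernel body into both
# CUDA and OpenCL source strings from a single template.
KERNEL_BODY = "if (i < n) y[i] = a * x[i] + y[i];"

CUDA_TEMPLATE = """__global__ void saxpy(int n, float a, const float *x, float *y) {{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    {body}
}}"""

OPENCL_TEMPLATE = """__kernel void saxpy(int n, float a, __global const float *x, __global float *y) {{
    int i = get_global_id(0);
    {body}
}}"""

def generate(target):
    template = CUDA_TEMPLATE if target == "cuda" else OPENCL_TEMPLATE
    return template.format(body=KERNEL_BODY)

print(generate("cuda"))
print(generate("opencl"))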
High Performance Visualization using Query-Driven Visualizationand Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; Campbell, Scott; Dart, Eli
2006-06-15
Query-driven visualization and analytics is a unique approach for high-performance visualization that offers new capabilities for knowledge discovery and hypothesis testing. The new capabilities, akin to finding needles in haystacks, are the result of combining technologies from the fields of scientific visualization and scientific data management. This approach is crucial for rapid data analysis and visualization in the petascale regime. This article describes how query-driven visualization is applied to a hero-sized network traffic analysis problem.
Software aspects of the Geant4 validation repository
NASA Astrophysics Data System (ADS)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto
2017-10-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
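As a sketch of the programmatic access mentioned above, the snippet below issues an HTTP GET and decodes a JSON record. The base URL, endpoint path and record identifier are placeholders for illustration only; they are not the actual DoSSiER service interface.

# Hypothetical endpoint: the abstract only states that a web service returns
# records in JSON or XML; the URL below is a placeholder, not the real service.
import json
import urllib.request

BASE = "https://example.org/dossier/api"    # placeholder base URL

def fetch_record(record_id, fmt="json"):
    url = f"{BASE}/records/{record_id}?format={fmt}"
    with urllib.request.urlopen(url) as resp:
        payload = resp.read().decode("utf-8")
    return json.loads(payload) if fmt == "json" else payload

# usage sketch: record = fetch_record(42)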
Enhancing GIS Capabilities for High Resolution Earth Science Grids
NASA Astrophysics Data System (ADS)
Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.
2017-12-01
Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. In addition, we'll provide a general overview of OCGIS's parallel subsetting capabilities including challenges in the design and implementation of a scientific data subsetter.
Zhang, Xinyuan; Zheng, Nan; Rosania, Gus R
2008-09-01
Cell-based molecular transport simulations are being developed to facilitate exploratory cheminformatic analysis of virtual libraries of small drug-like molecules. For this purpose, mathematical models of single cells are built from equations capturing the transport of small molecules across membranes. In turn, physicochemical properties of small molecules can be used as input to simulate intracellular drug distribution, through time. Here, with mathematical equations and biological parameters adjusted so as to mimic a leukocyte in the blood, simulations were performed to analyze steady state, relative accumulation of small molecules in lysosomes, mitochondria, and cytosol of this target cell, in the presence of a homogeneous extracellular drug concentration. Similarly, with equations and parameters set to mimic an intestinal epithelial cell, simulations were also performed to analyze steady state, relative distribution and transcellular permeability in this non-target cell, in the presence of an apical-to-basolateral concentration gradient. With a test set of ninety-nine monobasic amines gathered from the scientific literature, simulation results helped analyze relationships between the chemical diversity of these molecules and their intracellular distributions.
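The single-cell models referred to above are built from compartmental transport equations; the sketch below shows the simplest possible instance, passive diffusion of a neutral molecule from a fixed extracellular concentration into the cytosol. All parameter values are illustrative assumptions, not values from the study.

# Minimal one-compartment passive-diffusion sketch; parameters are assumed.
import numpy as np
from scipy.integrate import solve_ivp

P = 1e-4        # membrane permeability, cm/s (assumed)
A = 8e-6        # membrane surface area, cm^2 (assumed)
V_cell = 2e-12  # cytosolic volume, L (assumed)
C_out = 1.0     # fixed extracellular concentration, uM

def dcdt(t, c):
    # permeability * area * concentration difference, scaled by cell volume
    return [P * A * (C_out - c[0]) / (V_cell * 1e3)]   # volume in cm^3 = L * 1e3

sol = solve_ivp(dcdt, (0.0, 200.0), [0.0], max_step=1.0)
print(f"cytosolic conc. after {sol.t[-1]:.0f} s: {sol.y[0, -1]:.3f} uM")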
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which an interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
Machine learning strategies for systems with invariance properties
NASA Astrophysics Data System (ADS)
Ling, Julia; Jones, Reese; Templeton, Jeremy
2016-08-01
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
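A toy version of the comparison described above can be written in a few lines: fit a random forest to a rotation-invariant target either on an invariant input (the squared radius) or on raw coordinates augmented with random rotations, then score both on held-out data. This is only an illustrative stand-in for the paper's turbulence and crystal elasticity case studies.

# Toy comparison of the two invariance strategies on f(x, y) = sin(x^2 + y^2).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
y = np.sin((X ** 2).sum(axis=1))

# (a) invariant input basis: r^2 embeds rotational invariance in the model
rf_inv = RandomForestRegressor(n_estimators=50, random_state=0)
rf_inv.fit((X ** 2).sum(axis=1, keepdims=True), y)

# (b) augmentation: train on randomly rotated copies of the raw inputs
X_aug, y_aug = [X], [y]
for theta in rng.uniform(0, 2 * np.pi, size=5):
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    X_aug.append(X @ R.T)
    y_aug.append(y)
rf_aug = RandomForestRegressor(n_estimators=50, random_state=0)
rf_aug.fit(np.vstack(X_aug), np.concatenate(y_aug))

# evaluate on held-out data; the invariant-feature model is invariant by construction
X_test = rng.uniform(-2, 2, size=(500, 2))
y_test = np.sin((X_test ** 2).sum(axis=1))
print("invariant basis R^2:", rf_inv.score((X_test ** 2).sum(axis=1, keepdims=True), y_test))
print("augmentation   R^2:", rf_aug.score(X_test, y_test))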
Machine learning strategies for systems with invariance properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan
Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
Machine learning strategies for systems with invariance properties
Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan
2016-05-06
Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
What can the programming language Rust do for astrophysics?
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, Sergi; Bolmont, Emeline
2017-06-01
The astrophysics community uses different tools for computational tasks such as complex systems simulations, radiative transfer calculations or big data. Programming languages like Fortran, C or C++ are commonly present in these tools and, generally, the language choice was made based on the need for performance. However, this comes at a cost: safety. For instance, a common source of error is access to invalid memory regions, which produces random execution behaviors and affects the scientific interpretation of the results. In 2015, Mozilla Research released the first stable version of a new programming language named Rust. Many features make this new language attractive to the scientific community: it is open source, and it guarantees memory safety while offering zero-cost abstractions. We explore the advantages and drawbacks of Rust for astrophysics by re-implementing the fundamental parts of Mercury-T, a Fortran code that simulates the dynamical and tidal evolution of multi-planet systems.
Optical eye simulator for laser dazzle events.
Coelho, João M P; Freitas, José; Williamson, Craig A
2016-03-20
An optical simulator of the human eye and its application to laser dazzle events are presented. The simulator combines optical design software (ZEMAX) with a scientific programming language (MATLAB) and allows the user to implement and analyze a dazzle scenario using practical, real-world parameters. Contrary to conventional analytical glare analysis, this work uses ray tracing and the scattering model and parameters for each optical element of the eye. The theoretical background of each such element is presented in relation to the model. The overall simulator's calibration, validation, and performance analysis are achieved by comparison with a simpler model based upon CIE disability glare data. Results demonstrate that this kind of advanced optical eye simulation can be used to represent laser dazzle and has the potential to extend the range of applicability of analytical models.
Scientific Performance of a Nano-satellite MeV Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucchetta, Giulio; Berlato, Francesco; Rando, Riccardo
Over the past two decades, both X-ray and gamma-ray astronomy have experienced great progress. However, the region of the electromagnetic spectrum around ∼1 MeV is not so thoroughly explored. Future medium-sized gamma-ray telescopes will fill this gap in observations. As the timescale for the development and launch of a medium-class mission is ∼10 years, with substantial costs, we propose a different approach for the immediate future. In this paper, we evaluate the viability of a much smaller and cheaper detector: a nano-satellite Compton telescope, based on the CubeSat architecture. The scientific performance of this telescope would be well below that of the instrument expected for the future larger missions; however, via simulations, we estimate that such a compact telescope will achieve a performance similar to that of COMPTEL.
Software quality and process improvement in scientific simulation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrosiano, J.; Webster, R.
1997-11-01
This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.
Gpu Implementation of a Viscous Flow Solver on Unstructured Grids
NASA Astrophysics Data System (ADS)
Xu, Tianhao; Chen, Long
2016-06-01
Graphics processing units have gained popularity in scientific computing over the past several years due to their outstanding parallel computing capability. Computational fluid dynamics applications involve large amounts of calculation, so a recent GPU card, whose peak computing performance and memory bandwidth are much better than those of a contemporary high-end CPU, is preferable. We herein focus on the detailed implementation of our GPU-targeted Reynolds-averaged Navier-Stokes solver based on the finite-volume method. The solver employs a vertex-centered scheme on unstructured grids so that it can handle complex topologies. Multiple optimizations are carried out to improve the memory-access performance and kernel utilization. Both steady and unsteady flow simulation cases are carried out using an explicit Runge-Kutta scheme. The GPU-accelerated solver is demonstrated to have competitive advantages over the CPU-targeted one.
GPS in dynamic monitoring of long-period structures
Celebi, M.
2000-01-01
Global Positioning System (GPS) technology with high sampling rates (≥ 10 samples per second) allows scientifically justified and economically feasible dynamic measurements of relative displacements of long-period structures, otherwise difficult to measure directly by other means such as the most commonly used accelerometers, which require post-processing including double integration. We describe an experiment whereby the displacement responses of a simulated tall building are measured clearly and accurately in real time. Such measurements can be used to assess average drift ratios and changes in dynamic characteristics, and therefore can be used by engineers and building owners or managers to assess the building performance during extreme motions caused by earthquakes and strong winds. By establishing threshold displacements or drift ratios and identifying changing dynamic characteristics, procedures can be developed to use such information to secure public safety and/or take steps to improve the performance of the building. Published by Elsevier Science Ltd.
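For context, the accelerometer post-processing that GPS displacement measurement avoids amounts to double integration with drift removal; a minimal sketch on synthetic data is shown below. The structure period, noise level and detrending choices are illustrative assumptions.

# Double integration of a synthetic accelerometer record (illustrative only).
import numpy as np

def cumtrapz(y, dt):
    """Cumulative trapezoidal integration, same length as y."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
    return out

dt = 0.01                                    # 100 samples/s accelerometer
t = np.arange(0.0, 60.0, dt)
f = 0.2                                      # Hz, i.e. a ~5 s long-period structure
disp_true = 0.05 * np.sin(2 * np.pi * f * t)             # m
accel = -(2 * np.pi * f) ** 2 * disp_true                 # m/s^2
accel += 1e-4 * np.random.default_rng(1).standard_normal(t.size)   # sensor noise

vel = cumtrapz(accel, dt)
vel -= np.polyval(np.polyfit(t, vel, 1), t)    # remove integration drift
disp = cumtrapz(vel, dt)
disp -= np.polyval(np.polyfit(t, disp, 1), t)  # remove drift again
print(f"peak displacement: true {disp_true.max():.3f} m, recovered {disp.max():.3f} m")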
Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model
NASA Astrophysics Data System (ADS)
Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi
Scientific Concept Vocabulary (SCV) has been developed to realize the Cognitive-methodology-based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information for physics and engineering. In the SCV model, all semantic information is related to substantial data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output from a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific viewpoint. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, the currently proposed SCV has difficulty describing complex scientific perceptions. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual numerical simulation data. The results show that the new SCV is able to describe more complex scientific perceptions.
Simulating Sand Behavior through Terrain Subdivision and Particle Refinement
NASA Astrophysics Data System (ADS)
Clothier, M.
2013-12-01
Advances in computer graphics, GPUs, and parallel processing hardware have provided researchers with new methods to visualize scientific data. In fact, these advances have spurred new research opportunities between computer graphics and other disciplines, such as Earth sciences. Through collaboration, Earth and planetary scientists have benefited by using these advances in hardware technology to process large amounts of data for visualization and analysis. At Oregon State University, we are collaborating with the Oregon Space Grant and IGERT Ecosystem Informatics programs to investigate techniques for simulating the behavior of sand. In addition, we have been collaborating with the Jet Propulsion Laboratory's DARTS Lab to exchange ideas on our research. The DARTS Lab specializes in the simulation of planetary vehicles, such as the Mars rovers. One aspect of their work is testing these vehicles in a virtual "sand box" to evaluate their performance in different environments. Our research builds upon this idea to create a sand simulation framework that allows for more complex and diverse environments. As a basis for our framework, we have focused on planetary environments, such as the harsh, sandy regions on Mars. To evaluate our framework, we have used simulated planetary vehicles, such as a rover, to gain insight into the performance and interaction between the surface sand and the vehicle. Unfortunately, simulating the vast number of individual sand particles and their interactions with each other has been a computationally complex problem in the past. However, through the use of high-performance computing, we have developed a technique to subdivide physically active terrain regions across a large landscape. To achieve this, we only subdivide terrain regions where sand particles are actively interacting with another object or force, such as a rover wheel. This is similar to a Level of Detail (LOD) technique, except that the density of subdivisions is determined by proximity to the object or force interacting with the sand. For example, as a rover wheel moves forward and approaches a particular sand region, that region will continue to subdivide until individual sand particles are represented. Conversely, if the rover wheel moves away, previously subdivided sand regions will recombine. Thus, individual sand particles are available when an interacting force is present but are stored away when there is none. As such, this technique allows many particles to be represented without the computational complexity. We have also generalized these subdivision regions in our sand framework into any volumetric area suitable for use in the simulation. This allows for more compact subdivision regions and has fine-tuned our framework so that more emphasis can be placed on regions of actively participating sand. We feel that this increases the framework's usefulness across scientific applications and can provide other research opportunities within the earth and planetary sciences. Through continued collaboration with our academic partners, we continue to build upon our sand simulation framework and look for other opportunities to utilize this research.
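The proximity-driven refinement described above can be illustrated with a small recursive sketch: a terrain cell keeps splitting into quarters while it is close to the contact point and still coarser than a single particle. The cell representation, distance test and thresholds are illustrative choices, not the framework's actual data structures.

# Simplified proximity-driven refinement of a 2D terrain region (illustrative).
def refine(cell, contact, particle_size):
    """cell = (x, y, size); returns the list of leaf cells after refinement."""
    x, y, size = cell
    cx, cy = x + size / 2, y + size / 2
    dist = ((cx - contact[0]) ** 2 + (cy - contact[1]) ** 2) ** 0.5
    # subdivide only near the interacting object and above particle resolution
    if size <= particle_size or dist > 2 * size:
        return [cell]
    half = size / 2
    children = [(x, y, half), (x + half, y, half),
                (x, y + half, half), (x + half, y + half, half)]
    leaves = []
    for child in children:
        leaves.extend(refine(child, contact, particle_size))
    return leaves

leaves = refine((0.0, 0.0, 8.0), contact=(1.0, 1.0), particle_size=0.25)
print(len(leaves), "cells; finest size:", min(s for _, _, s in leaves))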
Preface to advances in numerical simulation of plasmas
NASA Astrophysics Data System (ADS)
Parker, Scott E.; Chacon, Luis
2016-10-01
This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.
[Earth and Space Sciences Project Services for NASA HPCC
NASA Technical Reports Server (NTRS)
Merkey, Phillip
2002-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
[Earth Science Technology Office's Computational Technologies Project
NASA Technical Reports Server (NTRS)
Fischer, James (Technical Monitor); Merkey, Phillip
2005-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.
Anzt, H; Quintana-Ortí, E S
2014-06-28
While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draft and energy spent for this purpose are increasingly becoming a limiting factor to this trend. In this paper, we provide an overview of the current status of energy-efficient scientific computing by reviewing different technologies used to monitor power draft as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
In the field of microwave radiometry, Radio Frequency Interference (RFI) consistently degrades the value of scientific results. Through the use of digital receivers and signal processing, the effects of RFI on scientific measurements can be reduced under certain circumstances. As technology allows us to implement wider-band digital receivers for radiometry, the problem of RFI mitigation changes. Our work focuses on finding a detector that outperforms the real-signal kurtosis detector in wide-band scenarios. The algorithm implemented is a complex-signal kurtosis detector, which was modeled and simulated. The performance of both complex and real signal kurtosis is evaluated for continuous-wave, pulsed continuous-wave, and wide-band quadrature phase shift keying (QPSK) modulations. The use of complex signal kurtosis increased the detectability of interference.
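A simplified form of the underlying statistic can be sketched in a few lines: for circular complex Gaussian noise the normalized fourth moment E|x|^4 / (E|x|^2)^2 equals 2, so a sufficiently large deviation from 2 flags likely RFI. The detector and threshold below are illustrative, not the detector evaluated in the paper.

# Simplified complex-signal kurtosis check on synthetic I/Q samples.
import numpy as np

def complex_kurtosis(x):
    p = np.abs(x) ** 2
    return np.mean(p ** 2) / np.mean(p) ** 2

rng = np.random.default_rng(0)
n = 100_000
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
cw_rfi = 1.0 * np.exp(2j * np.pi * 0.1 * np.arange(n))   # continuous-wave RFI

for label, x in [("noise only", noise), ("noise + CW", noise + cw_rfi)]:
    k = complex_kurtosis(x)
    flagged = abs(k - 2.0) > 0.1          # threshold is illustrative
    print(f"{label}: kurtosis = {k:.3f}, flagged = {flagged}")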
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
ERIC Educational Resources Information Center
Hulshof, Casper D.; de Jong, Ton
2006-01-01
Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, one that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" basis. This study explores the effect of facilitating access to…
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
NASA/ESA CV-990 spacelab simulation
NASA Technical Reports Server (NTRS)
Reller, J. O., Jr.
1976-01-01
Simplified techniques were applied to conduct an extensive spacelab simulation using the airborne laboratory. The scientific payload was selected to perform studies in upper atmospheric physics and infrared astronomy. The mission was successful and provided extensive data relevant to spacelab objectives on overall management of a complex international payload; experiment preparation, testing, and integration; training for proxy operation in space; data handling; multiexperimenter use of common experimenter facilities (telescopes); multiexperiment operation by experiment operators; selection criteria for spacelab experiment operators; and schedule requirements to prepare for such a spacelab mission.
Automatic sentence extraction for the detection of scientific paper relations
NASA Astrophysics Data System (ADS)
Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.
2018-03-01
Relations between scientific papers are very useful for researchers to see the interconnections between papers quickly. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, the methods developed to detect paper relations include machine learning and rule-based methods. However, a problem still arises in the process of extracting sentences from scientific paper documents, which is still done manually. This manual process makes the detection of scientific paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, with paper relations identified based on citation sentences. The performance of the resulting system is then compared with that of the manual extraction system. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.
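As a sketch of what citation-sentence extraction involves, the snippet below splits a paragraph into sentences and keeps those containing a numeric or author-year citation marker. The regular expressions and the sample paragraph are illustrative only and are not taken from the study.

# Illustrative citation-sentence extraction with regular expressions.
import re

CITATION = re.compile(r"\[\d+(?:\s*,\s*\d+)*\]|\([A-Z][A-Za-z-]+(?: et al\.)?,? \d{4}\)")

def citation_sentences(text):
    """Split text into sentences and keep those containing a citation marker."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if CITATION.search(s)]

paragraph = ("Graph-based methods were proposed for relation detection [3]. "
             "Our approach differs from earlier work (Smith et al., 2015) by "
             "extracting sentences automatically. Performance is evaluated on "
             "a manually labelled corpus.")
for sentence in citation_sentences(paragraph):
    print(sentence)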
Chaste: An Open Source C++ Library for Computational Physiology and Biology
Mirams, Gary R.; Arthurs, Christopher J.; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Corrias, Alberto; Davit, Yohan; Dunn, Sara-Jane; Fletcher, Alexander G.; Harvey, Daniel G.; Marsh, Megan E.; Osborne, James M.; Pathmanathan, Pras; Pitt-Francis, Joe; Southern, James; Zemzemi, Nejib; Gavaghan, David J.
2013-01-01
Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials. PMID:23516352
Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)
NASA Astrophysics Data System (ADS)
Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David
2018-01-01
Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions of the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from a planet, whether in emission, scattering, or transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but will also include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or "checking in" new model simulations that comply with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties for the climates of various planetary targets.
NASA Astrophysics Data System (ADS)
Suhandi, A.; Muslim; Samsudin, A.; Hermita, N.; Supriyatman
2018-05-01
In this study, the effectiveness of Question-Driven Levels of Inquiry Based Instruction (QD-LOIBI) assisted by visual-multimedia-supported teaching materials in enhancing senior high school students' scientific explanation ability was examined. QD-LOIBI was designed following the five levels of inquiry proposed by Wenning. The visual multimedia used in the teaching materials included images (photos), virtual simulations and videos of phenomena. The QD-LOIBI-assisted teaching materials supported by visual multimedia were tried out on senior high school students at one high school in one district in West Java. A quasi-experimental method with one experimental group (n = 31) and one control group (n = 32) was used. The experimental group was given QD-LOIBI-assisted teaching materials supported by visual multimedia, whereas the control group was given QD-LOIBI-assisted teaching materials not supported by visual multimedia. Data on scientific explanation ability in both groups were collected with an essay-form scientific explanation ability test on the kinetic theory of gases. The results showed that the number of students whose category and quality of scientific explanation improved was greater in the experimental class than in the control class. These results indicate that the use of multimedia-supported instructional materials developed for the implementation of QD-LOIBI can improve students' ability to provide explanations supported by scientific evidence gained from practicum activities and by applicable concepts, laws, principles or theories.
A random Q-switched fiber laser
Tang, Yulong; Xu, Jianqiu
2015-01-01
Extensive studies have been performed on random lasers, in which multiple-scattering feedback is used to generate coherent emission. Q-switching and mode-locking are well-known routes for achieving high peak power output in conventional lasers. However, in random lasers, the ubiquitous random cavities formed by multiple scattering inhibit energy storage, making Q-switching impossible. In this paper, widespread Rayleigh scattering arising from the intrinsic micro-scale refractive-index irregularities of fiber cores is used to form random cavities along the fiber. The Q-factor of the cavity is rapidly increased by stimulated Brillouin scattering just after the spontaneous emission is enhanced by random cavity resonances, resulting in random Q-switched pulses with high brightness and high peak power. This is the first reported observation of high-brightness random Q-switched laser emission, and it is expected to stimulate new areas of scientific research and applications, including encryption, remote three-dimensional random imaging and the simulation of stellar lasing. PMID:25797520
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin
2014-05-01
During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" on Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. Application software such as CESM needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, each with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560,640 equivalent cores. Scientific applications, such as CESM, are also required to demonstrate a "computational readiness capability" to efficiently scale across and utilize 20% of the entire system. The 0.25 deg configuration of the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), the atmospheric component of CESM, has been demonstrated to scale efficiently across more than 5,000 nodes (80,000 CPU cores) on Titan. The tracer transport routines of CAM-SE have also been ported to take advantage of the hybrid many-core architecture of Titan using GPUs [see EGU2014-4233], yielding over 2X speedup when transporting over 100 tracers. The high-throughput I/O in CESM, based on the Parallel IO Library (PIO), is being further augmented to support even higher resolutions and enhance resiliency. The application performance of the individual runs is archived in a database and routinely analyzed to identify and rectify performance degradation during the course of the experiments. The various resources available at the OLCF now support a scientific workflow to facilitate high-resolution climate modelling. A high-speed, center-wide parallel file system called ATLAS, capable of 1 TB/s, is available on Titan as well as on the clusters used for analysis (Rhea) and visualization (Lens/EVEREST). Long-term archiving is facilitated by the HPSS storage system. The Earth System Grid (ESG), featuring search and discovery, is also used to deliver data. The end-to-end workflow allows OLCF users to efficiently share data and publish results in a timely manner.
The LSST Scheduler from design to construction
NASA Astrophysics Data System (ADS)
Delgado, Francisco; Reuter, Michael A.
2016-07-01
The Large Synoptic Survey Telescope (LSST) will be a highly robotic facility, demanding very high efficiency during its operation. To achieve this, the LSST Scheduler has been envisioned as an autonomous software component of the Observatory Control System (OCS) that selects the sequence of targets in real time. The Scheduler will drive the survey using optimization of a dynamic cost function of more than 200 parameters. Multiple science programs produce thousands of candidate targets for each observation, and multiple telemetry measurements are received to evaluate the external and internal conditions of the observatory. The design of the LSST Scheduler started early in the project, supported by Model Based Systems Engineering, detailed prototyping and scientific validation of the required survey capabilities. In order to build such a critical component, an agile development path in incremental releases is presented, integrated with the development plan of the Operations Simulator (OpSim) to allow constant testing, integration and validation in a simulated OCS environment. The final product is a Scheduler that is also capable of running 2000 times faster than real time in simulation mode for survey studies and scientific validation during commissioning and operations.
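To make the idea of cost-function-driven target selection concrete, here is a deliberately tiny sketch with three invented terms (airmass, slew time, time since last visit); the real LSST Scheduler optimizes a far richer function of more than 200 parameters, so every name and weight below is an illustrative assumption.

```python
# Toy cost-function ranking of candidate targets. Field names, weights and
# normalizations are invented for illustration; they are not the LSST
# Scheduler's actual parameters.
def cost(target, weights):
    return (weights["airmass"] * (target["airmass"] - 1.0)
            + weights["slew"] * target["slew_s"] / 120.0
            - weights["revisit"] * target["hours_since_visit"] / 24.0)

weights = {"airmass": 1.0, "slew": 0.5, "revisit": 0.8}
candidates = [
    {"name": "field_a", "airmass": 1.1, "slew_s": 30, "hours_since_visit": 40},
    {"name": "field_b", "airmass": 1.6, "slew_s": 5, "hours_since_visit": 2},
]
best = min(candidates, key=lambda t: cost(t, weights))
print(best["name"])   # the lowest-cost target is observed next
```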
OASYS (OrAnge SYnchrotron Suite): an open-source graphical environment for x-ray virtual experiments
NASA Astrophysics Data System (ADS)
Rebuffi, Luca; Sanchez del Rio, Manuel
2017-08-01
The evolution of hardware platforms, the modernization of software tools, the access of a large number of young people to the codes, and the popularization of open-source software for scientific applications drove us to design OASYS (OrAnge SYnchrotron Suite), a completely new graphical environment for modelling X-ray experiments. The implemented software architecture provides not only an intuitive and very easy-to-use graphical interface, but also high flexibility and rapidity for interactive simulations, making it possible to change configurations and quickly compare multiple beamline configurations. Its purpose is to integrate in a synergetic way the most powerful calculation engines available. OASYS integrates different simulation strategies via the implementation of adequate simulation tools for X-ray optics (e.g. ray-tracing and wave-optics packages). It provides a language that makes them communicate by sending and receiving encapsulated data. Python has been chosen as the main programming language because of its universality and popularity in scientific computing. The software Orange, developed at the University of Ljubljana (SLO), is the high-level workflow engine that provides the interaction with the user and the communication mechanisms.
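The send/receive pattern described above can be pictured with a hedged, generic sketch; the Payload and MirrorWidget classes below are invented stand-ins and do not reflect the actual OASYS or Orange widget API.

```python
# Generic sketch of workflow nodes exchanging encapsulated data: each widget
# receives a payload, transforms it, and forwards a new payload downstream.
# All names here are illustrative, not the OASYS/Orange API.
class Payload:
    def __init__(self, rays, history):
        self.rays = rays          # e.g. per-ray intensities from a source widget
        self.history = history    # provenance: which widgets touched the data

class MirrorWidget:
    def __init__(self, reflectivity):
        self.reflectivity = reflectivity

    def receive(self, payload):
        scaled = [r * self.reflectivity for r in payload.rays]
        return Payload(scaled, payload.history + ["mirror"])

source = Payload(rays=[1.0, 0.8, 0.6], history=["source"])
result = MirrorWidget(reflectivity=0.9).receive(source)
print(result.rays, result.history)
```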
Exploring the dynamics of collective cognition using a computational model of cognitive dissonance
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sycara, Katia; Richardson, Darren P.
2013-05-01
The socially-distributed nature of cognitive processing in a variety of organizational settings means that there is increasing scientific interest in the factors that affect collective cognition. In military coalitions, for example, there is a need to understand how factors such as communication network topology, trust, cultural differences and the potential for miscommunication affect the ability of distributed teams to generate high-quality plans, to formulate effective decisions and to develop shared situation awareness. The current paper presents a computational model and associated simulation capability for performing in silico experimental analyses of collective sensemaking. This model can be used in combination with the results of human experimental studies in order to improve our understanding of the factors that influence collective sensemaking processes.
Teaching Harmonic Motion in Trigonometry: Inductive Inquiry Supported by Physics Simulations
ERIC Educational Resources Information Center
Sokolowski, Andrzej; Rackley, Robin
2011-01-01
In this article, the authors present a lesson whose goal is to utilise a scientific environment to immerse a trigonometry student in the process of mathematical modelling. The scientific environment utilised during this activity is a physics simulation called "Wave on a String" created by the PhET Interactive Simulations Project at…
Mission Simulation Facility: Simulation Support for Autonomy Development
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Plice, Laura; Neukom, Christian; Flueckiger, Lorenzo; Wagner, Michael
2003-01-01
The Mission Simulation Facility (MSF) supports research in autonomy technology for planetary exploration vehicles. Using HLA (High Level Architecture) across distributed computers, the MSF connects users' autonomy algorithms with provided or third-party simulations of robotic vehicles and planetary surface environments, including onboard components and scientific instruments. Simulation fidelity is variable to meet changing needs as autonomy technology advances in Technical Readiness Level (TRL). A virtual robot operating in a virtual environment offers numerous advantages over actual hardware, including availability, simplicity, and risk mitigation. The MSF is in use by researchers at NASA Ames Research Center (ARC) and has demonstrated basic functionality. Continuing work will support the needs of a broader user base.
NASA Astrophysics Data System (ADS)
Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.
2016-05-01
Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of ^16O, ^208Pb, and ^238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm^-3, proton fraction Y_p = 0.3, and temperature T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.
A generative model for scientific concept hierarchies.
Datta, Srayan; Adar, Eytan
2018-01-01
In many scientific disciplines, each new 'product' of research (method, finding, artifact, etc.) is often built upon previous findings, leading to extension and branching of scientific concepts over time. We aim to understand the evolution of scientific concepts by placing them in phylogenetic hierarchies, where scientific keyphrases from a large, longitudinal academic corpus are used as a proxy for scientific concepts. These hierarchies exhibit several important properties, including a power-law degree distribution, a power-law component size distribution, the existence of a giant component, and a lower probability of extending older concepts. We present a generative model based on preferential attachment to simulate the graphical and temporal properties of these hierarchies, which helps us understand the underlying process behind scientific concept evolution and may be useful in simulating and predicting scientific evolution.
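A minimal sketch of the preferential-attachment mechanism named above is shown here; the growth rule, sizes and seed are simplified assumptions, not the full generative model of the paper.

```python
import random

# Each new concept attaches to an existing one with probability proportional
# to its current degree, the basic mechanism behind the power-law degree
# distributions mentioned above. Sizes and the seed are illustrative.
def grow_hierarchy(n_nodes, seed=0):
    random.seed(seed)
    parents = {0: None}
    degree = {0: 1}
    for new in range(1, n_nodes):
        nodes = list(degree)
        parent = random.choices(nodes, weights=[degree[v] for v in nodes], k=1)[0]
        parents[new] = parent
        degree[parent] += 1
        degree[new] = 1
    return parents

tree = grow_hierarchy(1000)
root_children = sum(1 for p in tree.values() if p == 0)
print(len(tree), "concepts;", root_children, "attach directly to the root")
```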
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of the computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly involving multiple compilers and special instructions depending on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open-source scientific software distribution.
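The physics/numerics decoupling can be pictured with a small hedged sketch: a physics object that only supplies a flux, and a solver object that advances any such physics. The classes below are invented for illustration and are not Proteus's actual interfaces.

```python
# Sketch of separate "physics" and "numerics" roles. The physics class only
# defines the model (here linear advection du/dt + a du/dx = 0); the solver
# advances any physics object without knowing which model it represents.
# Interfaces are illustrative, not the Proteus API.
class AdvectionPhysics:
    def __init__(self, a):
        self.a = a

    def flux(self, u):
        return self.a * u

class UpwindSolver:
    def __init__(self, physics, dx, dt):
        self.physics, self.dx, self.dt = physics, dx, dt

    def step(self, u):
        f = [self.physics.flux(v) for v in u]
        # first-order upwind update with periodic wrap-around
        return [u[i] - self.dt / self.dx * (f[i] - f[i - 1]) for i in range(len(u))]

u = [0.0] * 20
u[5] = 1.0
solver = UpwindSolver(AdvectionPhysics(a=1.0), dx=1.0, dt=0.5)
for _ in range(10):
    u = solver.step(u)
print(round(sum(u), 3))   # total "mass" is conserved by the periodic scheme
```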
NASA Technical Reports Server (NTRS)
1999-01-01
Aeronautical research usually begins with computers, wind tunnels, and flight simulators, but eventually the theories must fly. This is when flight research begins, and aircraft are the primary tools of the trade. Flight research involves doing precision maneuvers in either a specially built experimental aircraft or an existing production airplane that has been modified. For example, the AD-1 was a unique airplane made only for flight research, while the NASA F-18 High Alpha Research Vehicle (HARV) was a standard fighter aircraft that was transformed into a one-of-a-kind aircraft as it was fitted with new propulsion systems, flight controls, and scientific equipment. All research aircraft are able to perform scientific experiments because of the onboard instruments that record data about their systems, aerodynamics, and the outside environment. Since the 1970's, NASA flight research has become more comprehensive, with flights involving everything from Space Shuttles to ultralights. NASA now flies not only the fastest airplanes, but some of the slowest. Flying machines continue to evolve with new wing designs, propulsion systems, and flight controls. As always, a look at today's experimental research aircraft is a preview of the future.
NASA Astrophysics Data System (ADS)
Develaki, Maria
2017-11-01
Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.
On disciplinary fragmentation and scientific progress.
Balietti, Stefano; Mäs, Michael; Helbing, Dirk
2015-01-01
Why are some scientific disciplines, such as sociology and psychology, more fragmented into conflicting schools of thought than other fields, such as physics and biology? Furthermore, why does high fragmentation tend to coincide with limited scientific progress? We analyzed a formal model where scientists seek to identify the correct answer to a research question. Each scientist is influenced by three forces: (i) signals received from the correct answer to the question; (ii) peer influence; and (iii) noise. We observed the emergence of different macroscopic patterns of collective exploration, and studied how the three forces affect the degree to which disciplines fall apart into divergent fragments, or so-called "schools of thought". We conducted two simulation experiments in which we tested (A) whether the three forces foster or hamper progress, and (B) whether disciplinary fragmentation causally affects scientific progress and vice versa. We found that fragmentation critically limits scientific progress. Strikingly, there is no effect in the opposite causal direction. What is more, our results show that at the heart of the mechanisms driving scientific progress are (i) social interactions and (ii) peer disagreement. In fact, fragmentation is increased and progress limited if the simulated scientists are open to influence only by peers with very similar views, or when within-school diversity is lost. Finally, disciplines where the scientists received strong signals from the correct answer were less fragmented and experienced faster progress. We discuss the model's implications for the design of social institutions fostering interdisciplinarity and participation in science.
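A toy version of the three-force update can make the mechanism concrete; the coefficients, the bounded-confidence rule and the population size below are illustrative assumptions, not the paper's calibrated model.

```python
import random

# Each scientist's view moves toward the correct answer (signal), toward the
# mean view of peers within a confidence bound (peer influence), and is
# perturbed by noise. All parameter values are invented for illustration.
def step(views, truth=1.0, signal=0.05, peer=0.3, noise=0.02, bound=0.4):
    updated = []
    for v in views:
        peers = [w for w in views if abs(w - v) <= bound]   # always includes v
        peer_mean = sum(peers) / len(peers)
        updated.append(v + signal * (truth - v)
                         + peer * (peer_mean - v)
                         + random.gauss(0.0, noise))
    return updated

random.seed(1)
views = [random.uniform(-1.0, 1.0) for _ in range(50)]
for _ in range(200):
    views = step(views)
print(round(sum(views) / len(views), 2))   # mean view drifts toward the truth
```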
ERIC Educational Resources Information Center
Robinson, William R.
2000-01-01
Describes a review of research that addresses the effectiveness of simulations in promoting scientific discovery learning and the problems that learners may encounter when using discovery learning. (WRM)
Understanding the Performance and Potential of Cloud Computing for Scientific Applications
Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...
2015-02-19
Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud when running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications: public clouds, private clouds, or hybrid clouds.
Oh, Hong-Choon; Toh, Hong-Guan; Giap Cheong, Eddy Seng
2011-11-01
Using the classical process improvement framework of Plan-Do-Study-Act (PDSA), the diagnostic radiology department of a tertiary hospital identified several patient cycle time reduction strategies. Experimentation with these strategies (which included procurement of new machines, hiring of new staff, redesign of the queue system, etc.) through pilot-scale implementation was impractical because it might incur substantial expenditure or be operationally disruptive. With this in mind, simulation modeling was used to test these strategies via "what if" analyses. Using the output generated by the simulation model, the team was able to identify a cost-free cycle time reduction strategy, which subsequently led to a reduction of patient cycle time and achievement of a management-defined performance target. As healthcare professionals work continually to improve healthcare operational efficiency in response to rising healthcare costs and patient expectations, simulation modeling offers an effective scientific framework that can complement established process improvement frameworks like PDSA to realize healthcare process enhancement. © 2011 National Association for Healthcare Quality.
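As a rough picture of the kind of "what if" experiment described above, the hedged sketch below simulates a single imaging machine as a simple queue and compares mean patient cycle time for two service rates; the arrival and service parameters are invented, not the department's data.

```python
import random

# Minimal single-server queue model of patient cycle time (wait + service).
# Exponential arrivals and service times, with invented parameters; a real
# study would calibrate these to observed department data.
def mean_cycle_time(arrival_mean, service_mean, n_patients=5000, seed=0):
    random.seed(seed)
    t, server_free_at, total = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        t += random.expovariate(1.0 / arrival_mean)   # next arrival
        start = max(t, server_free_at)                # wait if machine is busy
        server_free_at = start + random.expovariate(1.0 / service_mean)
        total += server_free_at - t                   # cycle time for this patient
    return total / n_patients

print(round(mean_cycle_time(10.0, 8.0), 1))   # baseline protocol (minutes)
print(round(mean_cycle_time(10.0, 7.0), 1))   # candidate faster protocol
```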
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, P.; /Fermilab; Cary, J.
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics-process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for the design or performance optimization of all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.
Finite-element approach to Brownian dynamics of polymers.
Cyron, Christian J; Wall, Wolfgang A
2009-12-01
In the last decades, simulation tools for the Brownian dynamics of polymers have attracted more and more interest. Such simulation tools have been applied to a large variety of problems and have accelerated scientific progress significantly. However, the explicit bead models currently used most frequently exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows a wide range of physical phenomena to be simulated at a highly attractive computational cost on the basis of a well-developed mathematical background.
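For contrast with the finite-element framework, the following hedged sketch shows the kind of explicit bead-model update (overdamped Euler-Maruyama) that the paper identifies as limited; all parameters are in reduced, illustrative units.

```python
import math
import random

# One overdamped Langevin (Euler-Maruyama) step for a chain of beads joined
# by harmonic springs: deterministic drift from spring forces plus thermal
# noise. Spring constant, friction and temperature are illustrative.
def bd_step(x, dt, k=1.0, gamma=1.0, kBT=1.0):
    n = len(x)
    updated = []
    for i in range(n):
        force = 0.0
        if i > 0:
            force += -k * (x[i] - x[i - 1])
        if i < n - 1:
            force += -k * (x[i] - x[i + 1])
        noise = math.sqrt(2.0 * kBT * dt / gamma) * random.gauss(0.0, 1.0)
        updated.append(x[i] + dt * force / gamma + noise)
    return updated

random.seed(0)
chain = [float(i) for i in range(10)]       # 10 beads initially stretched on a line
for _ in range(1000):
    chain = bd_step(chain, dt=1e-3)
print(round(chain[-1] - chain[0], 2))       # end-to-end distance after relaxation
```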
Hybrid imaging: a quantum leap in scientific imaging
NASA Astrophysics Data System (ADS)
Atlas, Gene; Wadsworth, Mark V.
2004-01-01
ImagerLabs has advanced its patented next generation imaging technology called the Hybrid Imaging Technology (HIT) that offers scientific quality performance. The key to the HIT is the merging of the CCD and CMOS technologies through hybridization rather than process integration. HIT offers exceptional QE, fill factor, broad spectral response and very low noise properties of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits, and update the latest advances in the performance of this exciting technology.
Advancements in Large-Scale Data/Metadata Management for Scientific Data.
NASA Astrophysics Data System (ADS)
Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.
2017-12-01
Scientific data often come with complex and diverse metadata, which are critical for data discovery and for users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining the OME tool within these centers. The ARM OME is a standards-based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques such as Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and the ARM Data Discovery Tool include improved harvesting performance and migration to new search software, such as Apache Solr 6.0, for serving up data and metadata to scientific communities. Our presentation will highlight future enhancements of these tools that will enable users to retrieve search results quickly, along with parallelizing the retrieval process from online and High Performance Storage Systems. In addition, these improvements will support additional metadata formats such as the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) bundle data.
Fast I/O for Massively Parallel Applications
NASA Technical Reports Server (NTRS)
OKeefe, Matthew T.
1996-01-01
The two primary goals of this report were the design, construction and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work was performed on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.
Benefits of computer screen-based simulation in learning cardiac arrest procedures.
Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc
2010-07-01
What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may otherwise render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
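The fill-in idea can be illustrated with a simplified, single-level Gaussian process stand-in (the paper uses multi-level GP regression with a coarse auxiliary simulator); the synthetic 1-D field, kernel and noise level below are assumptions for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Train a GP on the grid points that survived a simulated failure, then
# predict the field where one rank's data were lost. The sine "solution",
# the RBF kernel and the noise term are illustrative choices only.
rng = np.random.default_rng(0)
x_all = np.linspace(0.0, 1.0, 50)[:, None]
field = np.sin(2.0 * np.pi * x_all[:, 0])             # stand-in fine solution

survived = rng.choice(50, size=35, replace=False)      # points still available
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(x_all[survived], field[survived])

missing = np.setdiff1d(np.arange(50), survived)
recovered = gp.predict(x_all[missing])
print(float(np.max(np.abs(recovered - field[missing]))))   # worst-case fill-in error
```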
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arkin, Adam; Bader, David C.; Coffey, Richard
Understanding the fundamentals of genomic systems or the processes governing impactful weather patterns are examples of the types of simulation and modeling performed on the most advanced computing resources in America. High-performance computing and computational science together provide a necessary platform for the mission science conducted by the Biological and Environmental Research (BER) office at the U.S. Department of Energy (DOE). This report reviews BER's computing needs and their importance for solving some of the toughest problems in BER's portfolio. BER's impact on science has been transformative. Mapping the human genome, including the U.S.-supported international Human Genome Project that DOE began in 1987, initiated the era of modern biotechnology and genomics-based systems biology. And since the 1950s, BER has been a core contributor to atmospheric, environmental, and climate science research, beginning with atmospheric circulation studies that were the forerunners of modern Earth system models (ESMs) and by pioneering the implementation of climate codes onto high-performance computers. See http://exascaleage.org/ber/ for more information.
In-depth analysis of bicycle hydraulic disc brakes
NASA Astrophysics Data System (ADS)
Maier, Oliver; Györfi, Benedikt; Wrede, Jürgen; Arnold, Timo; Moia, Alessandro
2017-10-01
Hydraulic Disc Brakes (HDBs) represent the most recent and innovative bicycle braking system. Electric Bicycles (EBs) in particular, which are becoming more and more popular, are equipped with this powerful, low-wear type of brake, which is largely unaffected by environmental influences. As a consequence of the high braking performance, typical bicycle braking errors lead to more serious accidents. This is the starting point for the development of a Braking Dynamics Assistance system (BDA) to prevent front wheel lockup and nose-over (falling over the handlebars). One of the essential prerequisites for the system design is a better understanding of the characteristics of bicycle HDBs. A physical simulation model and a test bench have been built for this purpose. The results of the virtual and real experiments conducted show a high correlation and allow valuable insights into HDBs on bicycles, which have not been studied scientifically in any depth so far.
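One piece of braking-dynamics reasoning behind such an assistance system can be written down directly: a bicycle pitches over the front wheel once deceleration exceeds roughly g*b/h, where b is the horizontal distance from the combined centre of mass to the front contact patch and h is its height. The geometry values below are assumed for illustration and are not taken from the paper's test bench.

```python
# Back-of-the-envelope nose-over limit. Geometry values are assumptions.
g = 9.81    # gravitational acceleration, m/s^2
b = 0.45    # horizontal distance, centre of mass to front contact patch, m (assumed)
h = 1.10    # height of the combined rider + bike centre of mass, m (assumed)

a_limit = g * b / h
print(f"nose-over expected above ~{a_limit:.1f} m/s^2 of deceleration")
```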
From the desktop to the grid: scalable bioinformatics via workflow conversion.
de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver
2016-03-12
Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers the immediate benefit of identifying bottlenecks and pinpointing sections that could benefit from parallelization, among other things. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community, and therefore each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce the time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of the obtained scientific results.
ERIC Educational Resources Information Center
Abdullah, Sopiah; Shariff, Adilah
2008-01-01
The purpose of the study was to investigate the effects of inquiry-based computer simulation with heterogeneous-ability cooperative learning (HACL) and inquiry-based computer simulation with friendship cooperative learning (FCL) on (a) scientific reasoning (SR) and (b) conceptual understanding (CU) among Form Four students in Malaysian Smart…
Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryne, Robert D.
2006-08-10
Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document "Facilities for the Future of Science: A Twenty-Year Outlook". Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC, including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.
Marquis-Favre, Catherine; Morel, Julien
2015-07-21
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle to noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight the potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment in which participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors, and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term appeared to be the best predictors for the two combined noise sources under study, even with large differences in sound pressure level. These results thus reinforce the need to focus on perceptual models and to improve the prediction of partial annoyances.
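A hedged sketch of a perceptual total-annoyance model with an interaction term is given below; the synthetic ratings and the coefficients used to generate them are invented, and the model form A_total = b0 + b1*A_road + b2*A_ind + b3*A_road*A_ind is only a generic example of the class of models the paper evaluates.

```python
import numpy as np

# Fit A_total = b0 + b1*A_road + b2*A_ind + b3*A_road*A_ind by ordinary
# least squares on synthetic partial-annoyance ratings (0-10 scale). All
# numbers here are invented for illustration.
rng = np.random.default_rng(3)
a_road = rng.uniform(0.0, 10.0, 200)        # partial annoyance, road traffic noise
a_ind = rng.uniform(0.0, 10.0, 200)         # partial annoyance, industrial noise
a_total = (1.0 + 0.5 * a_road + 0.4 * a_ind
           + 0.03 * a_road * a_ind + rng.normal(0.0, 0.5, 200))

X = np.column_stack([np.ones_like(a_road), a_road, a_ind, a_road * a_ind])
coef, *_ = np.linalg.lstsq(X, a_total, rcond=None)
print(np.round(coef, 2))                    # recovered b0, b1, b2, b3
```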
Accuracy of the lattice-Boltzmann method using the Cell processor
NASA Astrophysics Data System (ADS)
Harvey, M. J.; de Fabritiis, G.; Giupponi, G.
2008-11-01
Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony Playstation3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
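To show the structure of such an LB code, here is a minimal single-precision D1Q3 BGK sketch; the one-dimensional, pure-diffusion setup, the lattice size and the relaxation time are illustrative choices and are far simpler than the flows benchmarked in the paper.

```python
import numpy as np

# D1Q3 BGK lattice-Boltzmann sketch in single precision: a density pulse
# relaxes by collision and streaming on a periodic line. Parameters are
# illustrative; the printed total density checks mass conservation.
n, tau = 200, 0.8
w = np.array([4/6, 1/6, 1/6], dtype=np.float32)        # rest, +1, -1 velocities
rho0 = np.ones(n, dtype=np.float32)
rho0[90:110] = 2.0                                      # initial density pulse
f = np.array([wi * rho0 for wi in w], dtype=np.float32) # start at equilibrium

for step in range(500):
    rho = f.sum(axis=0)
    feq = w[:, None] * rho[None, :]
    f += (feq - f) / tau                                # BGK collision
    f[1] = np.roll(f[1], 1)                             # stream right
    f[2] = np.roll(f[2], -1)                            # stream left

print(float(f.sum()))                                   # total density stays ~220
```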
NASA Astrophysics Data System (ADS)
Berckmans, Julie; Hamdi, Rafiq; De Troch, Rozemien; Giot, Olivier
2015-04-01
At the Royal Meteorological Institute of Belgium (RMI), climate simulations are performed with the regional climate model (RCM) ALARO, a version of the ALADIN model with improved physical parameterizations. In order to obtain high-resolution information on the regional climate, lateral boundary conditions (LBC) are prescribed from the global climate model (GCM) ARPEGE. Dynamical downscaling is commonly done as a continuous long-term simulation, with the model initialised at the start and driven by the regularly updated LBCs of the GCM. Recently, there has been growing interest in a dynamical downscaling approach with frequent reinitializations of the climate simulations. In these experiments, the model is initialised daily and driven for 24 hours by the GCM; the surface, however, is either initialised daily together with the atmosphere or left free to evolve continuously. The surface scheme implemented in ALARO is SURFEX, which can be run either in coupled mode or in stand-alone mode. The regional climate is simulated on different domains, at 20 km horizontal resolution over Western Europe and at 4 km horizontal resolution over Belgium. In addition, SURFEX allows a stand-alone, or offline, simulation at 1 km horizontal resolution over Belgium. This research is carried out in the framework of the project MASC: "Modelling and Assessing Surface Change Impacts on Belgian and Western European Climate", a 4-year project funded by the Belgian Federal Government. The overall aim of the project is to study the feedbacks between climate changes and land surface changes in order to improve regional climate model projections at the decennial scale over Belgium and Western Europe, and thus to provide better climate projections and climate change evaluation tools to policy makers, stakeholders and the scientific community.
A 500 megabyte/second disk array
NASA Technical Reports Server (NTRS)
Ruwart, Thomas M.; Okeefe, Matthew T.
1994-01-01
Applications at the Army High Performance Computing Research Center's (AHPCRC) Graphic and Visualization Laboratory (GVL) at the University of Minnesota require a tremendous amount of I/O bandwidth, and this appetite for data is growing. Silicon Graphics workstations are used to perform the post-processing, visualization, and animation of multi-terabyte datasets produced by scientific simulations performed on AHPCRC supercomputers. The M.A.X. (Maximum Achievable Xfer) project was designed to find the maximum achievable I/O performance of the Silicon Graphics CHALLENGE/Onyx-class machines that run these applications. Running a fully configured Onyx machine with 12 150-MHz R4400 processors, 512 MB of 8-way interleaved memory, and 31 fast/wide SCSI-2 channels, each with a Ciprico disk array controller, we were able to achieve a maximum sustained transfer rate of 509.8 megabytes per second. However, after analyzing the results it became clear that the true maximum transfer rate is somewhat beyond this figure, and we will need to do further testing with more disk array controllers in order to find the true maximum.
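The sustained-transfer measurement described above can be sketched in miniature. The following is a hedged, single-file illustration of a sequential write-throughput test, not the M.A.X. benchmark itself; the file path, block size and total size are placeholders far smaller than the study's configuration.

```python
# Hedged sketch: measure sequential write throughput to one file (illustrative only;
# the real study striped across 31 SCSI-2 channels and disk array controllers).
import os
import time

PATH = "throughput_test.bin"       # hypothetical scratch file
BLOCK = 4 * 1024 * 1024            # 4 MiB per write call
COUNT = 256                        # 256 blocks -> 1 GiB total

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())           # ensure the data actually reached the device
elapsed = time.perf_counter() - start

print(f"sequential write: {BLOCK * COUNT / 1e6 / elapsed:.1f} MB/s")
os.remove(PATH)
```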
Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations
NASA Astrophysics Data System (ADS)
Unekis, Michael J.; Rice, Betsy M.
1994-12-01
We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single-processor performance of the Cray X-MP/48 and perform competitively with single-processor performance of the Y-MP8/128 and C-90/16256.
Introducing GHOST: The Geospace/Heliosphere Observation & Simulation Tool-kit
NASA Astrophysics Data System (ADS)
Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.
2013-12-01
Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as Coronal Mass Ejections and High Speed Solar Wind Streams. Advanced post processing of the results of these simulations greatly enhances the utility of these models for scientists and other researchers. Currently, no supported centralized tool exists for performing these processing tasks. With GHOST, we introduce a toolkit for the ParaView visualization environment that provides a centralized suite of tools suited for Space Physics post processing. Building on the work from the Center For Integrated Space Weather Modeling (CISM) Knowledge Transfer group, GHOST is an open-source tool suite for ParaView. The tool-kit plugin currently provides tools for reading LFM and Enlil data sets, and provides automated tools for data comparison with NASA's CDAweb database. As work progresses, many additional tools will be added and through open-source collaboration, we hope to add readers for additional model types, as well as any additional tools deemed necessary by the scientific public. The ultimate end goal of this work is to provide a complete Sun-to-Earth model analysis toolset.
Regional model simulations of New Zealand climate
NASA Astrophysics Data System (ADS)
Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.
1998-03-01
Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.
Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables
2008-12-01
…into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation… [Table residue: relative ethylene production and decay susceptibility ratings for commodities including Asian pears, avocados, bananas, cantaloupe, and cherimoya.] …ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Allcock, William; Beggio, Chris
2014-10-17
U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.
A domain specific language for performance portable molecular dynamics algorithms
NASA Astrophysics Data System (ADS)
Saunders, William Robert; Grant, James; Müller, Eike Hermann
2018-03-01
Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
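The "separation of concerns" idea above can be made concrete with a toy example. The sketch below is a hedged illustration and not the authors' framework or its API: the domain scientist supplies only a per-pair kernel, while the looping and neighbour handling (and, in a real system, the generated parallel code) live entirely on the framework side.

```python
# Hedged illustration of a DSL-style split between science code and framework code
# (not the paper's code-generation system). A real backend could translate the loop
# below into OpenMP, MPI or GPU code without touching the kernel.
import numpy as np

def pairwise_loop(positions, kernel, cutoff):
    """Framework side: apply `kernel` to every pair of particles within `cutoff`."""
    n = len(positions)
    accum = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                e = kernel(r)
                accum[i] += e
                accum[j] += e
    return accum

# Science side: a Lennard-Jones pair energy, written with no parallel code at all.
def lj_energy(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

pos = np.random.default_rng(1).random((100, 3)) * 10.0
per_atom_energy = pairwise_loop(pos, lj_energy, cutoff=2.5)
print("total pair energy:", per_atom_energy.sum() / 2.0)
```

The same split applies to the analysis algorithms mentioned above: a per-atom structure classifier can be written as a kernel over each atom's local neighbourhood and handed to the framework in exactly the same way.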
Curran, Vernon; Fleet, Lisa; White, Susan; Bessell, Clare; Deshpandey, Akhil; Drover, Anne; Hayward, Mark; Valcour, James
2015-03-01
The neonatal resuscitation program (NRP) has been developed to educate physicians and other health care providers about newborn resuscitation and has been shown to improve neonatal resuscitation skills. Simulation-based training is recommended as an effective modality for instructing neonatal resuscitation, and both low- and high-fidelity manikin simulators are used. There is limited research comparing the effect of low- and high-fidelity manikin simulators on NRP learning outcomes, and more specifically on teamwork performance and confidence. The purpose of this study was to examine the effect of using low- versus high-fidelity manikin simulators in NRP instruction. A randomized posttest-only control group study design was conducted. Third-year undergraduate medical students participated in NRP instruction and were assigned to an experimental group (high-fidelity manikin simulator) or control group (low-fidelity manikin simulator). Integrated skills station (megacode) performance, participant satisfaction, confidence and teamwork behaviour scores were compared between the study groups. Participants in the high-fidelity manikin simulator instructional group reported significantly higher total scores in overall satisfaction (p = 0.001) and confidence (p = 0.001). There were no significant differences in teamwork behaviour scores, as observed by two independent raters, nor differences on mandatory integrated skills station performance items at the p < 0.05 level. Medical students reported greater satisfaction and confidence with high-fidelity manikin simulators, but did not demonstrate significantly improved overall teamwork or integrated skills station performance. Low- and high-fidelity manikin simulators facilitate similar levels of objectively measured NRP outcomes for integrated skills station and teamwork performance.
Promising applications of graphene and graphene-based nanostructures
NASA Astrophysics Data System (ADS)
Nguyen, Bich Ha; Hieu Nguyen, Van
2016-06-01
The present article is a review of research works on promising applications of graphene and graphene-based nanostructures. It contains five main scientific subjects. The first one is the research on graphene-based transparent and flexible conductive films for displays and electrodes: efficient methods ensuring uniform and controllable deposition of reduced graphene oxide thin films over large areas, large-scale pattern growth of graphene films for stretchable transparent electrodes, utilization of graphene-based transparent conducting films and graphene oxide-based ones in many photonic and optoelectronic devices and equipment such as the window electrodes of inorganic, organic and dye-sensitized solar cells, organic light-emitting diodes, light-emitting electrochemical cells, touch screens, flexible smart windows, graphene-based saturable absorbers in laser cavities for ultrafast pulse generation, graphene-based flexible, transparent heaters in automobile defogging/deicing systems, heatable smart windows, graphene electrodes for high-performance organic field-effect transistors, flexible and transparent acoustic actuators and nanogenerators, etc. The second scientific subject is the research on conductive inks for printed electronics to revolutionize the electronics industry by producing cost-effective electronic circuits and sensors in very large quantities: preparing high-mobility printable semiconductors, low-sintering-temperature conducting inks, graphene-based ink by liquid-phase exfoliation of graphite in organic solutions, and developing inkjet printing techniques for mass production of high-quality graphene patterns with high resolution and for fabricating a variety of good-performance electronic devices, including transparent conductors, embedded resistors, thin-film transistors and micro-supercapacitors. The third scientific subject is the research on graphene-based separation membranes: molecular dynamics simulation studies on the mechanisms of the transport of molecules, vapors and gases through nanopores in graphene membranes, and experimental works investigating selective transport of different molecules through nanopores in single-layer graphene and graphene-based membranes toward water desalination, chemical mixture separation and gas control. Various applications of graphene in bio-medicine are the contents of the fourth scientific subject of the review. They include DNA translocation through nanopores in graphene membranes toward the fabrication of devices for genomic screening, in particular DNA sequencing; subnanometre trans-electrode membranes with potential applications to the fabrication of very high resolution, high-throughput nanopore-based single-molecule detectors; antibacterial activity of graphene, graphite oxide, graphene oxide and reduced graphene oxide; nanopore sensors for nucleic acid analysis; utilization of graphene multilayers as gates for the sequential release of proteins from surfaces; utilization of graphene-based electroresponsive scaffolds as implants for on-demand drug delivery, etc.
The fifth scientific subject of the review is the research on the utilization of graphene in energy storage devices: ternary self-assembly of ordered metal oxide-graphene nanocomposites for electrochemical energy storage; self-assembled graphene/carbon nanotube hybrid films for supercapacitors; carbon-based supercapacitors fabricated by activation of graphene; functionalized graphene sheet-sulfur nanocomposites for use as cathode materials in rechargeable lithium batteries; tunable three-dimensional pillared carbon nanotube-graphene networks for high-performance capacitance; fabrication of electrochemical micro-capacitors using thin films of carbon nanotubes and chemically reduced graphene; laser scribing of high-performance and flexible graphene-based electrochemical capacitors; emergence of next-generation safe batteries featuring graphene-supported Li metal anodes with exceptionally high energy or power densities; fabrication of anodes for lithium-ion batteries from crumpled graphene-encapsulated Si nanoparticles; liquid-mediated dense integration of graphene materials for compact capacitive energy storage; scalable fabrication of high-power graphene micro-supercapacitors for flexible and on-chip energy storage; superior micro-supercapacitors based on graphene quantum dots; all-graphene core-sheath microfibres for all-solid-state, stretchable fibriform supercapacitors and wearable electronic textiles; micro-supercapacitors with high electrochemical performance based on three-dimensional graphene-carbon nanotube carpets; macroscopic nitrogen-doped graphene hydrogels for ultrafast capacitors; manufacture of scalable ultra-thin and high-power-density graphene electrochemical capacitor electrodes by aqueous exfoliation and spray deposition; scalable synthesis of hierarchically structured carbon nanotube-graphene fibers for capacitive energy storage; and phosphorene-graphene hybrid material as a high-capacity anode material for sodium-ion batteries. Besides the above-presented promising applications of graphene and graphene-based nanostructures, other less widespread, but perhaps not less important, applications of graphene and graphene-based nanomaterials are also briefly discussed.
JASMINE Simulator - construction of framework
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Ueda, Seiji; Kuwabara, Takashi; Yano, Taihei; Gouda, Naoteru
2004-10-01
JASMINE is an abbreviation of Japan Astrometry Satellite Mission for INfrared Exploration, currently planned at the National Astronomical Observatory of Japan. JASMINE stands at a stage where its basic design will be determined in a few years. It is therefore very important for JASMINE to simulate the data stream generated by the astrometric fields in order to support investigations of accuracy, sampling strategy, data compression, data analysis, scientific performance, etc. We have found that the new software technologies of object-oriented methodologies with the Unified Modeling Language are ideal for the simulation system of JASMINE (the JASMINE Simulator). In this paper, we briefly introduce some concepts of such technologies and explain the framework of the JASMINE Simulator, which is constructed with these new technologies. We believe that these technologies are also useful for other future big projects in astronomical research.
X-Ray Spectrometer For ROSAT II (SPECTROSAT)
NASA Astrophysics Data System (ADS)
Predehl, Peter; Brauninger, Heinrich
1986-01-01
The objective transmission grating was one of the earliest inventions in the field of X-ray astronomy and has been incorporated into Skylab, HERO-P, and EXOSAT. In recent years there have been advances in grating technology and spectrometer design. A high-precision mechanical ruling and replication process for manufacturing large self-supporting transmission gratings has been developed by an industrial manufacturer in cooperation with the Max-Planck-Institute (MPI). Theoretical analyses have determined the optimum configuration of the grating facets and the grating surface in order to correct third-order aberrations and obtain maximum resolving power. We have verified experimentally that the predicted efficiencies may be achieved. In addition, an experimental study of large grating assemblies for space telescopes was made in industry with scientific guidance by MPI. The main objectives of this study were the determination of mechanical loads during launch, as well as the design, construction and fabrication of a representative model of a ROSAT grating ring. Performance studies, including instrument properties as well as the simulated radiation from hot plasmas, have shown the ability of SPECTROSAT to perform high-efficiency, high-resolution line spectroscopy on a wide variety of cosmic X-ray sources.
Constructing Scientific Applications from Heterogeneous Resources
NASA Technical Reports Server (NTRS)
Schlichting, Richard D.
1995-01-01
A new model for high-performance scientific applications in which such applications are implemented as heterogeneous distributed programs or, equivalently, meta-computations, is investigated. The specific focus of this grant was a collaborative effort with researchers at NASA and the University of Toledo to test and improve Schooner, a software interconnection system, and to explore the benefits of increased user interaction with existing scientific applications.
NASA Astrophysics Data System (ADS)
Wiwin, E.; Kustijono, R.
2018-03-01
The purpose of the study is to describe the use of a physics practicum to train science process skills and its effect on the scientific attitudes of vocational high school students. The components of science process skills are: observing, classifying, inferring, predicting, and communicating. The scientific attitudes established are: curiosity, honesty, collaboration, responsibility, and open-mindedness. This is an experimental study with a one-shot case study design. The subjects are 30 Multimedia Program students of SMK Negeri 12 Surabaya. The data collection techniques used are observation and performance tests. The scores for science process skills and scientific attitudes are taken from observational and performance instruments. The data analyses used are descriptive statistics and correlation. The results show that: 1) the physics practicum can train science process skills and scientific attitudes to a good level, 2) the relationship between the science process skills and the students' scientific attitudes is in the good category, and 3) student responses to the learning process using the practicum are in the good category. The results of the study lead to the conclusion that the physics practicum can train science process skills and has a significant effect on the scientific attitudes of vocational high school students.
WFIRST: Simulating the Wide-Field Sky
NASA Astrophysics Data System (ADS)
Peeples, Molly; WFIRST Wide Field Imager Simulations Working Group
2018-01-01
As WFIRST's Wide Field Imager (WFI) will be astronomy's first high-resolution, wide-field, multi-mode instrument, simulated data will play a vital role in the planning for and analysis of data from the WFI. Part of the key to WFIRST's scientific success lies in our ability to push the systematics limit, but in order to do so, the WFI pipeline will need to be able to measure and remove those systematics. The efficacy of this pipeline can only be verified with large suites of synthetic data; these data must include both the range of astrophysical sky scenes (from crowded starfields to high-latitude grism observations) and the systematics from the detector and telescope optics that the WFI pipeline aims to mitigate. We summarize here (1) the status of current and planned astrophysical simulations in support of the WFI, (2) the status of current WFI instrument simulators and requirements on future generations thereof, and (3) plans, methods, and requirements for interfacing astrophysical simulations and WFI instrument simulators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prowell, Stacy J; Symons, Christopher T
2015-01-01
Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.
Spooled packaging of shape memory alloy actuators
NASA Astrophysics Data System (ADS)
Redmond, John A.
A vast cross-section of transportation, manufacturing, consumer product, and medical technologies rely heavily on actuation. Accordingly, progress in these industries is often strongly coupled to the advancement of actuation technologies. As the field of actuation continues to evolve, smart materials show significant promise for satisfying the growing needs of industry. In particular, shape memory alloy (SMA) wire actuators present an opportunity for low-cost, high performance actuation, but until now, they have been limited or restricted from use in many otherwise suitable applications by the difficulty in packaging the SMA wires within tight or unusually shaped form constraints. To address this packaging problem, SMA wires can be spool-packaged by wrapping around mandrels to make the actuator more compact or by redirecting around multiple mandrels to customize SMA wire pathways to unusual form factors. The goal of this dissertation is to develop the scientific knowledge base for spooled packaging of low-cost SMA wire actuators that enables high, predictable performance within compact, customizable form factors. In developing the scientific knowledge base, this dissertation defines a systematic general representation of single and multiple mandrel spool-packaged SMA actuators and provides tools for their analysis, understanding, and synthesis. A quasi-static analytical model distills the underlying mechanics down to the three effects of friction, bending, and binding, which enables prediction of the behavior of generic spool-packaged SMA actuators with specifiable geometric, loading, frictional, and SMA material parameters. An extensive experimental and simulation-based parameter study establishes the necessary understanding of how primary design tradeoffs between performance, packaging, and cost are governed by the underlying mechanics of spooled actuators. A design methodology outlines a systematic approach to synthesizing high performance SMA wire actuators with mitigated material, power, and packaging costs and compact, customizable form factors. By examining the multi-faceted connections between performance, packaging, and cost, this dissertation builds a knowledge base that goes beyond implementing SMA actuators for particular applications. Rather, it provides a well-developed strategy for realizing the advantages of SMA actuation for a broadened range of applications, thereby enabling opportunities for new functionality and capabilities in industry.
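One commonly cited building block for the friction effect in wire wrapped on a mandrel is the capstan relation, T_out = T_in·exp(-μθ). The sketch below is a hedged illustration of that relation applied across a sequence of wraps; the friction coefficient, wrap angles and input tension are hypothetical values, and this is not the dissertation's quasi-static model, which also accounts for bending and binding.

```python
# Hedged sketch: capstan-type tension attenuation of a wire routed over mandrels
# (illustrative only; coefficients and angles are made up).
import math

def tension_after_wraps(t_in, mu, wrap_angles_rad):
    """Attenuate the transmitted tension across each wrapped mandrel in turn."""
    t = t_in
    for theta in wrap_angles_rad:
        t *= math.exp(-mu * theta)      # capstan relation applied per wrap
    return t

t_out = tension_after_wraps(t_in=100.0, mu=0.15, wrap_angles_rad=[math.pi, math.pi / 2])
print(f"transmitted force after two wraps: {t_out:.1f} N")
```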
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.
2015-12-01
The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and migrating from file-based communication to MPI messaging, to greatly reduce the I/O demands and node-hour requirements of CyberShake. We will also present performance metrics from CyberShake Study 15.4, and discuss challenges that producers of Big Data on open-science HPC resources face moving forward.
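The step from simulated seismograms to a hazard curve can be summarized in a few lines. The sketch below is a hedged, toy illustration of how a probability-of-exceedance curve is assembled from per-rupture annual rates and simulated peak intensities under a Poisson occurrence assumption; the rupture rates and peak values are fabricated placeholders, and this is not the CyberShake code itself.

```python
# Hedged sketch: assemble a PSHA hazard curve from simulated peak ground motions
# (illustrative only; all inputs below are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
# hypothetical inputs: 50 ruptures, each with an annual rate and 20 simulated peak values (g)
rates = rng.uniform(1e-5, 1e-3, size=50)
peaks = [rng.lognormal(mean=-2.5, sigma=0.6, size=20) for _ in range(50)]

def hazard_curve(levels, rates, peaks, years=1.0):
    """Poisson probability that each intensity level is exceeded within `years`."""
    curve = []
    for x in levels:
        # mean annual rate of exceedance, summed over all ruptures
        lam = sum(r * np.mean(p > x) for r, p in zip(rates, peaks))
        curve.append(1.0 - np.exp(-lam * years))
    return np.array(curve)

levels = np.logspace(-2, 0, 20)     # intensity levels, e.g. spectral acceleration in g
poe_50yr = hazard_curve(levels, rates, peaks, years=50.0)
for x, p in zip(levels, poe_50yr):
    print(f"{x:6.3f} g : P(exceedance in 50 yr) = {p:.4f}")
```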
MW-assisted synthesis of LiFePO4 for high power applications
NASA Astrophysics Data System (ADS)
Beninati, Sabina; Damen, Libero; Mastragostino, Marina
LiFePO4/C was prepared by solid-state reaction from Li3PO4, Fe3(PO4)2·8H2O, carbon and glucose in a few minutes in a scientific MW (microwave) oven with temperature and power control. The material was characterized by X-ray diffraction, scanning electron microscopy and TGA analysis to evaluate carbon content. The electrochemical characterization as a positive electrode in EC (ethylene carbonate)-DMC (dimethyl carbonate) 1 M LiPF6 was performed by galvanostatic charge-discharge cycles at C/10 to evaluate specific capacity, and by sequences of 10 s discharge-charge pulses at different high C-rates (5-45C) to evaluate pulse-specific power in simulated operative conditions for full-HEV application. The maximum pulse-specific power and, particularly, the pulse efficiency values are quite high and make MW synthesis a very promising route for mass production of LiFePO4/C for full-HEV batteries at low energy costs.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
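A runtime-prediction model of the kind mentioned above is often just a small parametric fit to measured timings. The sketch below assumes a simple functional form T(P) = a + b/P + c·log2(P) (a serial term, a parallel term and a communication term) and fits it by least squares; both the model form and the timings are illustrative placeholders, not the paper's actual model or data.

```python
# Hedged sketch: fit a simple strong-scaling runtime model to measured wall-clock
# times and extrapolate to untested processor counts (all numbers are hypothetical).
import numpy as np

procs = np.array([8, 16, 32, 64, 128, 256], dtype=float)
times = np.array([410.0, 215.0, 118.0, 70.0, 46.0, 35.0])   # seconds (made up)

# design matrix for T(P) = a + b/P + c*log2(P)
A = np.column_stack([np.ones_like(procs), 1.0 / procs, np.log2(procs)])
coef, *_ = np.linalg.lstsq(A, times, rcond=None)
a, b, c = coef

def predict(p):
    return a + b / p + c * np.log2(p)

for p in (512, 1024):
    print(f"predicted runtime on {p} processors: {predict(p):.1f} s")
```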
Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application
NASA Technical Reports Server (NTRS)
Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom;
2013-01-01
Exascale computing systems are soon to emerge, and they will pose great challenges due to the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case study to profile and analyze the communication and I/O issues that prevent applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.
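The general idea behind parallelizing the output, as opposed to funnelling all data through one writer process, is shown in the hedged mpi4py sketch below. It is not the GEOS-5/ADIOS implementation: every rank writes its own contiguous slab of a global array at a computed offset in a shared file using a collective write; the file name and slab size are placeholders.

```python
# Hedged sketch of collective parallel output with MPI-IO via mpi4py (illustrative
# only). Run with e.g. `mpirun -n 4 python write_field.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_n = 1_000_000                          # elements owned by this rank (hypothetical)
local = np.full(local_n, rank, dtype=np.float64)

fh = MPI.File.Open(comm, "field.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * local.nbytes                 # contiguous slab decomposition
fh.Write_at_all(offset, local)               # collective write: all ranks participate
fh.Close()

if rank == 0:
    print(f"wrote {size * local.nbytes / 1e6:.1f} MB from {size} ranks")
```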
NASA Astrophysics Data System (ADS)
Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.
2002-12-01
The fastest supercomputer in the world, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multiple Program Multiple Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
NASA Technical Reports Server (NTRS)
Said, Magdi A; Schur, Willi W.; Gupta, Amit; Mock, Gary N.; Seyam, Abdelfattah M.; Theyson, Thomas
2004-01-01
Science and technology development from balloon-borne telescopes and experiments is a rich return on a relatively modest involvement of NASA resources. For the past three decades, the development of increasingly competitive and complex science payloads and observational programs for high-altitude balloon-borne platforms has yielded significant scientific discoveries. The success and capabilities of scientific balloons are closely related to advancements in the textile and plastic industries. This paper presents an overview of scientific balloons as a viable and economical platform for transporting large telescopes and scientific instruments to the upper atmosphere to conduct scientific missions. Additionally, the paper sheds light on the problems associated with ultraviolet (UV) degradation of the high-performance textile components that are used to support the payload of the balloon, and proposes future research to reduce or eliminate UV degradation in order to conduct long-term scientific missions.
Computational Simulations and the Scientific Method
NASA Technical Reports Server (NTRS)
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
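The "component test" the abstract argues for is essentially an independently repeatable check of one piece of physics against a known reference. The example below is a hedged illustration of what such a published test fixture might look like, using pytest; the model component (Sutherland's law for air viscosity), the reference values and the tolerances are illustrative choices, not anything from the paper.

```python
# Hedged sketch of a publishable component test for one model ingredient
# (hypothetical example; requires pytest).
import pytest

def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Candidate model component: Sutherland's law for the dynamic viscosity of air."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

@pytest.mark.parametrize("T, expected", [
    (273.15, 1.716e-5),      # the reference point must be reproduced exactly
    (300.0, 1.846e-5),       # approximate tabulated value
])
def test_sutherland_viscosity(T, expected):
    assert sutherland_viscosity(T) == pytest.approx(expected, rel=1e-2)
```

Publishing a fixture like this alongside a new model gives implementors both an independently repeatable experiment and a quick way to judge the cost-to-benefit ratio of adopting the model.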
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert; Ang, James; Bergman, Keren
2014-02-10
Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable exascale system.
Fundamental performance differences between CMOS and CCD imagers: part III
NASA Astrophysics Data System (ADS)
Janesick, James; Pinter, Jeff; Potter, Robert; Elliott, Tom; Andrews, James; Tower, John; Cheng, John; Bishop, Jeanne
2009-08-01
This paper is a status report on recent scientific CMOS imager developments since previous publications were written. Focus today is being given to CMOS design and process optimization because the fundamental problems affecting performance are now reasonably well understood. Topics found in this paper include discussions of a low-cost custom scientific CMOS fabrication approach, substrate bias for deep-depletion imagers, near-IR and X-ray point-spread performance, custom-fabricated high-resistivity epitaxial and SOI silicon wafers for backside-illuminated imagers, buried-channel MOSFETs for ultra-low-noise performance, 1 e- charge transfer imagers, high-speed transfer pixels, RTS/flicker noise versus MOSFET geometry, pixel offset and gain non-uniformity measurements, high S/N dCDS/aCDS signal processors, pixel thermal dark current sources, radiation damage topics, CCDs fabricated in CMOS, and future large CMOS imagers planned at Sarnoff.
E-GRASP/Eratosthenes: a mission proposal for millimetric TRF realization
NASA Astrophysics Data System (ADS)
Biancale, Richard; Pollet, Arnaud; Coulot, David; Mandea, Mioara
2017-04-01
The ITRF is currently worked out by independent concatenation of space-technique information. GNSS, DORIS, SLR and VLBI data are processed independently by analysis centers before combination centers form mono-technique sets, which are then combined together to produce the official ITRF solutions. This approach currently performs quite well, although systematic differences between techniques remain visible, for instance in the origin or scale parameters of the underlying terrestrial frames. Improvement and homogenization of the TRF are expected in the future, provided that dedicated multi-technique platforms are used to best advantage. The goal set by GGOS of realizing the terrestrial reference system with an accuracy of 1 mm and a long-term stability of 0.1 mm/yr can be achieved with the E-GRASP/Eratosthenes scenario. This mission, proposed to ESA in response to the 2017 Earth Explorer-9 call, was already scientifically well assessed in the 2016 EE9 call. It co-locates all of the fundamental space-based geodetic instruments, GNSS and DORIS receivers, laser retro-reflectors, and a VLBI transmitter on the same satellite platform on a highly eccentric orbit, with particular attention paid to the time and space metrology on board. Different kinds of simulations were performed, both to discriminate the best orbital scenario according to many geometric/technical/physical criteria and to assess the expected performance on the TRF according to the GGOS goals. The presentation will focus on the mission scenario and simulation results.
NASA Astrophysics Data System (ADS)
Masson, V.; Le Moigne, P.; Martin, E.; Faroux, S.; Alias, A.; Alkama, R.; Belamari, S.; Barbu, A.; Boone, A.; Bouyssel, F.; Brousseau, P.; Brun, E.; Calvet, J.-C.; Carrer, D.; Decharme, B.; Delire, C.; Donier, S.; Essaouini, K.; Gibelin, A.-L.; Giordani, H.; Habets, F.; Jidane, M.; Kerdraon, G.; Kourzeneva, E.; Lafaysse, M.; Lafont, S.; Lebeaupin Brossier, C.; Lemonsu, A.; Mahfouf, J.-F.; Marguinaud, P.; Mokhtari, M.; Morin, S.; Pigeon, G.; Salgado, R.; Seity, Y.; Taillefer, F.; Tanguy, G.; Tulet, P.; Vincendon, B.; Vionnet, V.; Voldoire, A.
2013-07-01
SURFEX is a new externalized land and ocean surface platform that describes the surface fluxes and the evolution of four types of surfaces: nature, town, inland water and ocean. It is mostly based on pre-existing, well-validated scientific models that are continuously improved. The motivation for the building of SURFEX is to use strictly identical scientific models in a high range of applications in order to mutualise the research and development efforts. SURFEX can be run in offline mode (0-D or 2-D runs) or in coupled mode (from mesoscale models to numerical weather prediction and climate models). An assimilation mode is included for numerical weather prediction and monitoring. In addition to momentum, heat and water fluxes, SURFEX is able to simulate fluxes of carbon dioxide, chemical species, continental aerosols, sea salt and snow particles. The main principles of the organisation of the surface are described first. Then, a survey is made of the scientific module (including the coupling strategy). Finally, the main applications of the code are summarised. The validation work undertaken shows that replacing the pre-existing surface models by SURFEX in these applications is usually associated with improved skill, as the numerous scientific developments contained in this community code are used to good advantage.
Probabilities and predictions: modeling the development of scientific problem-solving skills.
Stevens, Ron; Johnson, David F; Soller, Amy
2005-01-01
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations and the rapid collection of student performance data, and has already been used in several genetics simulations. The next step is the use of these data to understand and improve student learning in a formative manner. This article describes the development of probabilistic models of undergraduate student problem solving in molecular genetics that detailed the spectrum of strategies students used when problem solving, and how the strategic approaches evolved with experience. The actions of 776 university sophomore biology majors from three molecular biology lecture courses were recorded and analyzed. Performances on each of six simulations were first grouped by artificial neural network clustering to provide individual performance measures, and then sequences of these performances were probabilistically modeled by hidden Markov modeling to provide measures of progress. The models showed that students with different initial problem-solving abilities choose different strategies. Initial and final strategies varied across different sections of the same course and were not strongly correlated with other achievement measures. In contrast to previous studies, we observed no significant gender differences. We suggest that instructor interventions based on early student performances with these simulations may assist students to recognize effective and efficient problem-solving strategies and enhance learning.
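The two-stage modelling idea (cluster each performance into a discrete strategy label, then model the label sequence with a hidden Markov model) can be sketched as follows. This is a hedged illustration, not the authors' IMMEX code: the strategy labels are fabricated stand-ins for the neural-network clusters, and the sketch assumes the hmmlearn package, whose discrete-emission model is `CategoricalHMM` in recent versions.

```python
# Hedged sketch: fit an HMM over per-student sequences of discrete strategy labels
# (all data below are made up; requires hmmlearn).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# hypothetical data: 50 students, each solving 6 problems, strategy labels 0..4
sequences = [rng.integers(0, 5, size=6) for _ in range(50)]

X = np.concatenate(sequences).reshape(-1, 1)   # stacked observations, one column
lengths = [len(s) for s in sequences]          # one sequence length per student

model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(X, lengths)

# decode one student's progression through the latent "progress" states
states = model.predict(sequences[0].reshape(-1, 1))
print("latent state sequence for student 0:", states)
```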
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
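The constrained design-space search described above can be caricatured in a few lines. The toy sketch below assumes entirely hypothetical cost and resource models (it is not the dissertation's formulation): it enumerates candidate architecture parameters, discards configurations that exceed a resource budget, and keeps the one with the smallest modelled execution time.

```python
# Hedged toy version of a resource-constrained architecture parameter search
# (hypothetical models and numbers throughout).
from itertools import product

RESOURCE_BUDGET = 10_000      # available logic resources (hypothetical units)
WORKLOAD = 1_000_000          # vector operations to execute (hypothetical)

def exec_time(lanes, mem_ports):
    compute = WORKLOAD / lanes                  # more lanes -> fewer compute cycles
    memory = WORKLOAD / (2 * mem_ports)         # bandwidth-limited term
    return max(compute, memory)                 # pipeline limited by the slower side

def resources(lanes, mem_ports):
    return 600 * lanes + 900 * mem_ports        # per-unit area costs (hypothetical)

best = None
for lanes, ports in product([1, 2, 4, 8, 16], [1, 2, 4, 8]):
    if resources(lanes, ports) > RESOURCE_BUDGET:
        continue                                # violates the resource constraint
    t = exec_time(lanes, ports)
    if best is None or t < best[0]:
        best = (t, lanes, ports)

print("best feasible configuration (cycles, lanes, memory ports):", best)
```

A real optimizer over a reconfigurable-logic design space is far richer than this exhaustive loop, but the structure (model execution time, enforce resource constraints, minimize) is the same.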
ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean; Potok, Thomas E.; Jones, Todd
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to +20 year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.
An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources
ERIC Educational Resources Information Center
Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.
2013-01-01
High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…
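The core relations such an HPLC simulator is typically built on fit in a few lines. The sketch below is a hedged illustration using textbook isocratic relations, t_R = t0(1 + k) for retention time and sigma = t_R/sqrt(N) for the Gaussian peak width from a plate count N; it is not the simulator described above, and the retention factors, plate count and peak areas are arbitrary examples.

```python
# Hedged sketch: generate a simple isocratic chromatogram from retention factors
# (illustrative textbook relations; all parameters are hypothetical).
import numpy as np

def chromatogram(t, t0, plate_count, retention_factors, areas):
    signal = np.zeros_like(t)
    for k, area in zip(retention_factors, areas):
        t_r = t0 * (1.0 + k)                    # retention time of this analyte
        sigma = t_r / np.sqrt(plate_count)      # peak standard deviation from plate count
        signal += area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - t_r) / sigma) ** 2)
    return signal

t = np.linspace(0, 10, 2000)                    # minutes
detector = chromatogram(t, t0=1.0, plate_count=10_000,
                        retention_factors=[1.2, 2.5, 4.0], areas=[1.0, 0.6, 0.3])
print("expected retention times (min):", [1.0 * (1 + k) for k in (1.2, 2.5, 4.0)])
print("maximum detector response:", detector.max())
```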
NASA Astrophysics Data System (ADS)
Kemp, Gregory Elijah
Ultra-intense laser (> 1018 W/cm2) interactions with matter are capable of producing relativistic electrons which have a variety of applications in state-of-the-art scientific and medical research conducted at universities and national laboratories across the world. Control of various aspects of these hot-electron distributions is highly desired to optimize a particular outcome. Hot-electron generation in low-contrast interactions, where significant amounts of under-dense pre-plasma are present, can be plagued by highly non-linear relativistic laser-plasma instabilities and quasi-static magnetic field generation, often resulting in less than desirable and predictable electron source characteristics. High-contrast interactions offer more controlled interactions but often at the cost of overall lower coupling and increased sensitivity to initial target conditions. An experiment studying the differences in hot-electron generation between high and low-contrast pulse interactions with solid density targets was performed on the Titan laser platform at the Jupiter Laser Facility at Lawrence Livermore National Laboratory in Livermore, CA. To date, these hot-electrons generated in the laboratory are not directly observable at the source of the interaction. Instead, indirect studies are performed using state-of-the-art simulations, constrained by the various experimental measurements. These measurements, more-often-than-not, rely on secondary processes generated by the transport of these electrons through the solid density materials which can susceptible to a variety instabilities and target material/geometry effects. Although often neglected in these types of studies, the specularly reflected light can provide invaluable insight as it is directly influenced by the interaction. In this thesis, I address the use of (personally obtained) experimental specular reflectivity measurements to indirectly study hot-electron generation in the context of high-contrast, relativistic laser-plasma interactions. Spatial, temporal and spectral properties of the incident and specular pulses, both near and far away from the interaction region where experimental measurements are obtained, are used to benchmark simulations designed to infer dominant hot-electron acceleration mechanisms and their corresponding energy/angular distributions. To handle this highly coupled interaction, I employed particle-in-cell modeling using a wide variety of algorithms (verified to be numerically stable and consistent with analytic expressions) and physical models (validated by experimental results) to reasonably model the interaction's sweeping range of plasma densities, temporal and spatial scales, electromagnetic wave propagation and its interaction with solid density matter. Due to the fluctuations in the experimental conditions and limited computational resources, only a limited number of full-scale simulations were performed under typical experimental conditions to infer the relevant physical phenomena in the interactions. I show the usefulness of the often overlooked specular reflectivity measurements in constraining both high and low-contrast simulations, as well as limitations of their experimental interpretations. Using these experimental measurements to reasonably constrain the simulation results, I discuss the sensitivity of relativistic electron generation in ultra-intense laser plasma interactions to initial target conditions and the dynamic evolution of the interaction region.
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and the Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience with software porting, performance and energy efficiency, and evaluate the potential for the use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
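The performance-per-watt comparison used as the evaluation metric above reduces to dividing measured throughput by measured power. The sketch below shows that calculation; the platform names are taken from the abstract but the throughput and power numbers are placeholders, not measurements from the paper.

```python
# Hedged sketch: compare platforms by throughput per watt (numbers are hypothetical).
benchmarks = {
    # platform: (events processed per second, average power draw in watts)
    "Xeon (reference)": (120.0, 190.0),
    "Xeon Phi (MIC)":   (150.0, 225.0),
    "X-Gene ARMv8 SoC": (35.0,  45.0),
}

for name, (throughput, watts) in benchmarks.items():
    print(f"{name:18s}: {throughput / watts:.3f} events/s per watt")
```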
NASA Technical Reports Server (NTRS)
2012-01-01
Topics include: Bioreactors Drive Advances in Tissue Engineering; Tooling Techniques Enhance Medical Imaging; Ventilator Technologies Sustain Critically Injured Patients; Protein Innovations Advance Drug Treatments, Skin Care; Mass Analyzers Facilitate Research on Addiction; Frameworks Coordinate Scientific Data Management; Cameras Improve Navigation for Pilots, Drivers; Integrated Design Tools Reduce Risk, Cost; Advisory Systems Save Time, Fuel for Airlines; Modeling Programs Increase Aircraft Design Safety; Fly-by-Wire Systems Enable Safer, More Efficient Flight; Modified Fittings Enhance Industrial Safety; Simulation Tools Model Icing for Aircraft Design; Information Systems Coordinate Emergency Management; Imaging Systems Provide Maps for U.S. Soldiers; High-Pressure Systems Suppress Fires in Seconds; Alloy-Enhanced Fans Maintain Fresh Air in Tunnels; Control Algorithms Charge Batteries Faster; Software Programs Derive Measurements from Photographs; Retrofits Convert Gas Vehicles into Hybrids; NASA Missions Inspire Online Video Games; Monitors Track Vital Signs for Fitness and Safety; Thermal Components Boost Performance of HVAC Systems; World Wind Tools Reveal Environmental Change; Analyzers Measure Greenhouse Gasses, Airborne Pollutants; Remediation Technologies Eliminate Contaminants; Receivers Gather Data for Climate, Weather Prediction; Coating Processes Boost Performance of Solar Cells; Analyzers Provide Water Security in Space and on Earth; Catalyst Substrates Remove Contaminants, Produce Fuel; Rocket Engine Innovations Advance Clean Energy; Technologies Render Views of Earth for Virtual Navigation; Content Platforms Meet Data Storage, Retrieval Needs; Tools Ensure Reliability of Critical Software; Electronic Handbooks Simplify Process Management; Software Innovations Speed Scientific Computing; Controller Chips Preserve Microprocessor Function; Nanotube Production Devices Expand Research Capabilities; Custom Machines Advance Composite Manufacturing; Polyimide Foams Offer Superior Insulation; Beam Steering Devices Reduce Payload Weight; Models Support Energy-Saving Microwave Technologies; Materials Advance Chemical Propulsion Technology; and High-Temperature Coatings Offer Energy Savings.
Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...
2017-04-18
In this study, quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
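The core idea of embedded ensemble propagation, advancing a whole group of samples through the same solver sweep so that mesh data and memory traffic are shared, can be illustrated with a toy explicit diffusion step whose diffusivity is the uncertain input. This is a minimal NumPy sketch of the concept only; the approach described in the paper is built on C++ templates inside Trilinos.

```python
import numpy as np

def step_single(u, kappa, dx, dt):
    """One explicit diffusion step for a single sample (1D, periodic)."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + dt * kappa * lap

def step_ensemble(U, kappas, dx, dt):
    """Same update, but U has shape (n_samples, n_cells): the whole ensemble
    advances together, so the stencil work and memory traffic are shared."""
    lap = (np.roll(U, -1, axis=1) - 2.0 * U + np.roll(U, 1, axis=1)) / dx**2
    return U + dt * kappas[:, None] * lap

n_samples, n_cells = 32, 1024
dx, dt = 1.0 / n_cells, 1e-7
rng = np.random.default_rng(0)
kappas = rng.uniform(0.5, 1.5, n_samples)        # uncertain input parameter
U = np.tile(np.sin(2 * np.pi * np.linspace(0.0, 1.0, n_cells)), (n_samples, 1))
U = step_ensemble(U, kappas, dx, dt)              # all samples in one sweep
```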
Publicly Releasing a Large Simulation Dataset with NDS Labs
NASA Astrophysics Data System (ADS)
Goldbaum, Nathan
2016-03-01
Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.
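Accessing such a released Enzo dataset would typically go through the yt package mentioned in the abstract. A minimal sketch follows; the dataset path is a hypothetical stand-in for one output of the released simulation, and the field names assume yt's usual ("gas", "density") convention.

```python
import yt

# Hypothetical path to one Enzo output of the released dataset.
ds = yt.load("DD0046/DD0046")

# Inspect a field over the whole domain.
ad = ds.all_data()
print(ad["gas", "density"].min(), ad["gas", "density"].max())

# Quick-look visualization, the kind of plot an accompanying notebook might show.
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.save("density_slice.png")
```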
A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator
Engelmann, Christian; Naughton, III, Thomas J.
2016-03-22
Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
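MPI message matching, the subject of the second improvement, pairs posted receives with incoming messages on (source, tag), with wildcards allowed. The toy sketch below illustrates only the matching semantics with FIFO lists; it is not xSim's algorithm, which is engineered for heavily oversubscribed simulated ranks.

```python
ANY_SOURCE = -1
ANY_TAG = -1

class MessageMatcher:
    """Toy MPI-style matching: posted receives and unexpected messages wait in
    FIFO lists and are matched on (source, tag), honoring wildcards."""

    def __init__(self):
        self.posted_recvs = []       # receives with no matching message yet
        self.unexpected_msgs = []    # messages with no matching receive yet

    @staticmethod
    def _matches(recv, msg):
        source_ok = recv["source"] in (ANY_SOURCE, msg["source"])
        tag_ok = recv["tag"] in (ANY_TAG, msg["tag"])
        return source_ok and tag_ok

    def post_receive(self, source, tag):
        recv = {"source": source, "tag": tag}
        for i, msg in enumerate(self.unexpected_msgs):
            if self._matches(recv, msg):
                return self.unexpected_msgs.pop(i)   # matched a queued message
        self.posted_recvs.append(recv)               # otherwise wait
        return None

    def deliver(self, source, tag, payload):
        msg = {"source": source, "tag": tag, "payload": payload}
        for i, recv in enumerate(self.posted_recvs):
            if self._matches(recv, msg):
                self.posted_recvs.pop(i)
                return msg                           # completes that receive
        self.unexpected_msgs.append(msg)             # otherwise queue it
        return None

m = MessageMatcher()
m.post_receive(source=ANY_SOURCE, tag=7)
print(m.deliver(source=3, tag=7, payload="hello"))   # matches the wildcard receive
```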
Research in human performance related to space: A compilation of three projects/proposals
NASA Technical Reports Server (NTRS)
Hasson, Scott M.
1989-01-01
Scientific projects were developed in order to maximize performance in space and assure physiological homeostasis upon return. Three projects that are related to this common goal were either initiated or formulated during the Faculty Fellowship Summer Program. The projects were entitled: (1) Effect of simulated weightlessness (bed rest) on muscle performance and morphology; (2) Effect of submaximal eccentric muscle contractions on muscle injury, soreness and performance: A grant proposal; and (3) Correlation between isolated joint dynamic muscle strength to end-effector strength of the push and pull extravehicular activity (EVA) ratchet maneuver. The purpose is to describe each of these studies in greater detail.
NASA Astrophysics Data System (ADS)
Leclaire, Sebastien
The computer-assisted simulation of the dynamics of fluid flow has been a highly rewarding topic of research for several decades now, in terms of the number of scientific problems that have been solved as a result, both in the academic world and in industry. In the fluid dynamics field, simulating multiphase immiscible fluid flow remains a challenge, because of the complexity of the interactions at the flow phase interfaces. Various numerical methods are available to study these phenomena, and the lattice Boltzmann method has been shown in recent years to be well adapted to solving this type of complex flow. In this thesis, a lattice Boltzmann model for the simulation of two-phase immiscible flows is studied. The main objective of the thesis is to develop this promising method further, with a view to enhancing its validity. To achieve this objective, the research is divided into five distinct themes. The first two focus on correcting some of the deficiencies of the original model. The third generalizes the model to support the simulation of N-phase immiscible fluid flows. The fourth is aimed at modifying the model itself, to enable the simulation of immiscible fluid flows in which the density of the phases varies. With the lattice Boltzmann class of models studied here, this density variation has been inadequately modeled, and after 20 years the issue still has not been resolved. The fifth, which complements this thesis, is connected with the lattice Boltzmann method, in that it generalizes the theory of 2D and 3D isotropic gradients for a high order of spatial precision. These themes have each been the subject of a scientific article, as listed in the appendix to this thesis, and together they constitute a synthesis that explains the links between the articles, as well as their scientific contributions, and satisfies the main objective of this research. Globally, a number of qualitative and quantitative test cases based on the theory of multiphase fluid flows have highlighted issues plaguing the simulation model. These test cases have resulted in various modifications to the model, which have reduced or eliminated some numerical artifacts that were problematic. They also allowed us to validate the extensions that were applied to the original model.
Transfer of training for aerospace operations: How to measure, validate, and improve it
NASA Technical Reports Server (NTRS)
Cohen, Malcolm M.
1993-01-01
It has been a commonly accepted practice to train pilots and astronauts in expensive, extremely sophisticated, high fidelity simulators, with as much of the real-world feel and response as possible. High fidelity and high validity have often been assumed to be inextricably interwoven, although this assumption may not be warranted. The Project Mercury rate-damping task on the Naval Air Warfare Center's Human Centrifuge Dynamic Flight Simulator, the shuttle landing task on the NASA-ARC Vertical Motion Simulator, and the almost complete acceptance by the airline industry of full-up Boeing 767 flight simulators, are just a few examples of this approach. For obvious reasons, the classical models of transfer of training have never been adequately evaluated in aerospace operations, and there have been few, if any, scientifically valid replacements for the classical models. This paper reviews some of the earlier work involving transfer of training in aerospace operations, and discusses some of the methods by which appropriate criteria for assessing the validity of training may be established.
Podolsky, Dale J; Fisher, David M; Wong Riff, Karen W; Szasz, Peter; Looi, Thomas; Drake, James M; Forrest, Christopher R
2018-06-01
This study assessed technical performance in cleft palate repair using a newly developed assessment tool and high-fidelity cleft palate simulator through a longitudinal simulation training exercise. Three residents each performed five, and one resident performed nine, consecutive endoscopically recorded cleft palate repairs using a cleft palate simulator. Two fellows in pediatric plastic surgery and two expert cleft surgeons also performed recorded simulated repairs. The Cleft Palate Objective Structured Assessment of Technical Skill (CLOSATS) and end-product scales were developed to assess performance. Two blinded cleft surgeons assessed the recordings and the final repairs using the CLOSATS, end-product scale, and a previously developed global rating scale. The average procedure-specific (CLOSATS), global rating, and end-product scores increased logarithmically after each successive simulation session for the residents. Reliability of the CLOSATS (average item intraclass correlation coefficient (ICC), 0.85 ± 0.093) and global ratings (average item ICC, 0.91 ± 0.02) among the raters was high. Reliability of the end-product assessments was lower (average item ICC, 0.66 ± 0.15). Standard-setting linear regression using an overall cutoff score of 7 of 10 corresponded to a pass score for the CLOSATS and the global score of 44 (maximum, 60) and 23 (maximum, 30), respectively. Using logarithmic best-fit curves, 6.3 simulation sessions are required to reach the minimum standard. A high-fidelity cleft palate simulator has been developed that improves technical performance in cleft palate repair. The simulator and technical assessment scores can be used to determine performance before operating on patients.
SIGNUM: A Matlab, TIN-based landscape evolution model
NASA Astrophysics Data System (ADS)
Refice, A.; Giachetta, E.; Capolongo, D.
2012-08-01
Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift (which can be used to simulate changes in base level), thrust and faulting, as well as effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure, which allows users to easily modify and upgrade the simulated physical processes to suit virtually any need. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for development of new modules and algorithms are proposed.
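The governing processes listed above (linear hillslope diffusion, detachment-limited fluvial incision, and uplift) can be illustrated on a simple 1D profile. The sketch below is a regular-grid Python toy, not SIGNUM's Matlab/TIN implementation, and all parameter values and the drainage-area proxy are illustrative assumptions.

```python
import numpy as np

def evolve_profile(z, area, dx, dt, nsteps, kappa=0.01, K=1e-5, m=0.5, n=1.0, uplift=1e-4):
    """Explicit toy LEM step on a 1D profile: linear hillslope diffusion plus
    detachment-limited stream-power incision E = K * A^m * S^n, with uniform uplift."""
    z = z.copy()
    for _ in range(nsteps):
        slope = np.abs(np.gradient(z, dx))
        incision = K * area**m * slope**n
        diffusion = kappa * np.gradient(np.gradient(z, dx), dx)
        z += dt * (uplift + diffusion - incision)
    return z

x = np.linspace(0.0, 10_000.0, 201)        # 10 km profile, 50 m spacing
z0 = 500.0 * (1.0 - x / x.max())           # initial linear ramp
area = 1.0 + (x.max() - x)**1.7            # crude downstream drainage-area proxy
z = evolve_profile(z0, area, dx=50.0, dt=50.0, nsteps=2000)
```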
Learning Relative Motion Concepts in Immersive and Non-immersive Virtual Environments
NASA Astrophysics Data System (ADS)
Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria
2013-12-01
The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop virtual environment (DVE) conditions. Our results show that after the simulation activities, both IVE and DVE groups exhibited a significant shift toward a scientific understanding in their conceptual models and epistemological beliefs about the nature of relative motion, and also a significant improvement on relative motion problem-solving tests. In addition, we analyzed students' performance on one-dimensional and two-dimensional questions in the relative motion problem-solving test separately and found that after training in the simulation, the IVE group performed significantly better than the DVE group on solving two-dimensional relative motion problems. We suggest that egocentric encoding of the scene in IVE (where the learner constitutes a part of a scene they are immersed in), as compared to allocentric encoding on a computer screen in DVE (where the learner is looking at the scene from "outside"), is more beneficial than DVE for studying more complex (two-dimensional) relative motion problems. Overall, our findings suggest that such aspects of virtual realities as immersivity, first-hand experience, and the possibility of changing different frames of reference can facilitate understanding abstract scientific phenomena and help in displacing intuitive misconceptions with more accurate mental models.
Simulations as a tool for higher mass resolution spectrometer: Lessons from existing observations
NASA Astrophysics Data System (ADS)
Nicolaou, Georgios; Yamauchi, Masatoshi; Nilsson, Hans; Wieser, Martin; Fedorov, Andrei
2017-04-01
Scientific requirements of each mission are crucial for the instrument's design. Ion tracing simulations of instruments can be helpful to characterize their performance, identify their limitations, and improve the design for future missions. However, simulations provide the best performance in the ideal case, and the actual response is determined by many other factors. Therefore, simulations should be compared with observations when possible. Characterizing the actual response of a running instrument gives valuable lessons for the future design of test instruments with the same detection principle before spending resources to build and calibrate them. In this study we use an ion tracing simulation of the Ion Composition Analyser (ICA) on board ROSETTA, in order to characterize its response and to compare it with the observations. It turned out that, due to the complicated unexpected response of the running instrument, the heavy cometary ions and molecules are sometimes difficult to resolve. However, preliminary simulation of a slightly modified design predicts much higher mass resolution. Even after considering the complicated unexpected response, we safely expect that the modified design can resolve the most abundant heavy atomic ions (e.g., O^+) and molecular ions (e.g., N_2^+ and O_2^+). We show the simulation results for both designs and ICA data.
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
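The weak- and strong-scaling efficiencies quoted above are simple ratios of measured runtimes. A small sketch of the arithmetic follows; the timings are hypothetical placeholders, not measurements from the VSC-2 runs.

```python
def weak_scaling_efficiency(t_base, t_n):
    """Weak scaling: problem size grows with core count, so the ideal runtime is
    constant; efficiency = T(reference) / T(N units)."""
    return t_base / t_n

def strong_scaling_efficiency(t_serial, t_parallel, n_cores):
    """Strong scaling: fixed problem size; efficiency = T(1) / (N * T(N))."""
    return t_serial / (n_cores * t_parallel)

# Hypothetical timings in seconds.
print(weak_scaling_efficiency(t_base=120.0, t_n=140.0))                          # ~0.86
print(strong_scaling_efficiency(t_serial=3600.0, t_parallel=9.0, n_cores=512))   # ~0.78
```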
Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr
2015-08-17
Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
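The computational core that gives HPCG its name is the conjugate-gradient iteration. The sketch below is the textbook, unpreconditioned dense version for illustration only; HPCG itself runs a preconditioned sparse CG on a structured 3D problem and measures the full benchmark workflow, not this kernel alone.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook (unpreconditioned) CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system; only a stand-in for HPCG's sparse stencil problem.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```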
Performance assessments of nuclear waste repositories--A dialogue on their value and limitations
Ewing, Rodney C.; Tierney, Martin S.; Konikow, Leonard F.; Rechard, Rob P.
1999-01-01
Performance Assessment (PA) is the use of mathematical models to simulate the long-term behavior of engineered and geologic barriers in a nuclear waste repository; methods of uncertainty analysis are used to assess effects of parametric and conceptual uncertainties associated with the model system upon the uncertainty in outcomes of the simulation. PA is required by the U.S. Environmental Protection Agency as part of its certification process for geologic repositories for nuclear waste. This paper is a dialogue to explore the value and limitations of PA. Two “skeptics” acknowledge the utility of PA in organizing the scientific investigations that are necessary for confident siting and licensing of a repository; however, they maintain that the PA process, at least as it is currently implemented, is an essentially unscientific process with shortcomings that may provide results of limited use in evaluating actual effects on public health and safety. Conceptual uncertainties in a PA analysis can be so great that results can be confidently applied only over short time ranges, the antithesis of the purpose behind long-term, geologic disposal. Two “proponents” of PA agree that performance assessment is unscientific, but only in the sense that PA is an engineering analysis that uses existing scientific knowledge to support public policy decisions, rather than an investigation intended to increase fundamental knowledge of nature; PA has different goals and constraints than a typical scientific study. The “proponents” describe an ideal, six-step process for conducting generalized PA, here called probabilistic systems analysis (PSA); they note that virtually all scientific content of a PA is introduced during the model-building steps of a PSA; they contend that a PA based on simple but scientifically acceptable mathematical models can provide useful and objective input to regulatory decision makers. The value of the results of any PA must lie between these two views and will depend on the level of knowledge of the site, the degree to which models capture actual physical and chemical processes, the time over which extrapolations are made, and the proper evaluation of health risks attending implementation of the repository. The challenge is in evaluating whether the quality of the PA matches the needs of decision makers charged with protecting the health and safety of the public.
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.
2014-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
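A goodness-of-fit comparison between simulated and observed ground motions can be illustrated with natural-log residuals of spectral acceleration, period by period. This is a generic sketch with made-up numbers, not the Broadband Platform's actual goodness-of-fit modules or data products.

```python
import numpy as np

def spectral_bias(obs_sa, sim_sa):
    """Simple goodness-of-fit proxy: ln(observed / simulated) spectral acceleration,
    period by period; a mean of 0 with small scatter indicates a good match."""
    resid = np.log(np.asarray(obs_sa) / np.asarray(sim_sa))
    return resid.mean(), resid.std()

# Hypothetical 5%-damped SA values (g) at a few periods for one station.
obs = [0.42, 0.35, 0.21, 0.12]
sim = [0.39, 0.40, 0.18, 0.10]
mean_bias, scatter = spectral_bias(obs, sim)
print(f"mean ln-residual = {mean_bias:.3f}, sigma = {scatter:.3f}")
```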
Falls Risk and Simulated Driving Performance in Older Adults
Gaspar, John G.; Neider, Mark B.; Kramer, Arthur F.
2013-01-01
Declines in executive function and dual-task performance have been related to falls in older adults, and recent research suggests that older adults at risk for falls also show impairments on real-world tasks, such as crossing a street. The present study examined whether falls risk was associated with driving performance in a high-fidelity simulator. Participants were classified as high or low falls risk using the Physiological Profile Assessment and completed a number of challenging simulated driving assessments in which they responded quickly to unexpected events. High falls risk drivers had slower response times (~2.1 seconds) to unexpected events compared to low falls risk drivers (~1.7 seconds). Furthermore, when asked to perform a concurrent cognitive task while driving, high falls risk drivers showed greater costs to secondary task performance than did low falls risk drivers, and low falls risk older adults also outperformed high falls risk older adults on a computer-based measure of dual-task performance. Our results suggest that attentional differences between high and low falls risk older adults extend to simulated driving performance. PMID:23509627
Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication
ERIC Educational Resources Information Center
Wolf, Michael Maclean
2009-01-01
Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…
Hambrick, David Z; Libarkin, Julie C; Petcovic, Heather L; Baker, Kathleen M; Elkins, Joe; Callahan, Caitlin N; Turner, Sheldon P; Rench, Tara A; Ladue, Nicole D
2012-08-01
Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco Root Mountains of Montana. A Visuospatial Ability × Geological Knowledge interaction was found, such that visuospatial ability positively predicted mapping performance at low, but not high, levels of geological knowledge. This finding suggests that high levels of domain knowledge may sometimes enable circumvention of performance limitations associated with cognitive abilities.
The Gaia On-Board Scientific Data Handling
NASA Astrophysics Data System (ADS)
Arenou, F.; Babusiaux, C.; Chéreau, F.; Mignot, S.
2005-01-01
Because Gaia will perform a continuous all-sky survey at a medium (Spectro) or very high (Astro) angular resolution, the on-board processing needs to cope with a wide variety of objects and densities, which calls for generic and adaptive algorithms at the detection level and beyond. Consequently, the Pyxis scientific algorithms developed for the on-board data handling cover a large range of applications: detection and confirmation of astronomical objects, background sky estimation, classification of detected objects, on-board detection of Near-Earth Objects, and window selection and positioning. Very dense fields, where the real-time computing requirements should remain within fixed bounds, are particularly challenging. Another constraint stems from the limited telemetry bandwidth, and an additional compromise has to be found between scientific requirements and constraints in terms of the mass, volume and power budgets of the satellite. The rationale for the on-board data handling procedure is described here, together with the developed algorithms, the main issues and the expected scientific performances in the Astro and Spectro instruments.
Crack propagation of brittle rock under high geostress
NASA Astrophysics Data System (ADS)
Liu, Ning; Chu, Weijiang; Chen, Pingzhi
2018-03-01
Based on fracture mechanics and numerical methods, the characteristics and failure criteria of wall-rock cracks, including initiation, propagation, and coalescence, are analyzed systematically under different conditions. In order to account for the interaction among cracks, a sliding multi-crack model is adopted to simulate the splitting failure of rock under axial compression. Reinforcement of the rock mass by bolts and shotcrete support can effectively control crack propagation. Both theoretical analysis and numerical simulation are used to study the mechanism by which this propagation is controlled, and the optimal installation angle of the bolts is calculated. ANSYS is then used to simulate the crack-arrest effect of a bolt on a crack, and the influence of different factors on the stress intensity factor is analyzed. The method offers a more scientific and rational criterion for evaluating the splitting failure of underground engineering under high geostress.
The discovery of the causes of leprosy: A computational analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corruble, V.; Ganascia, J.G.
1996-12-31
The role played by inductive inference has been studied extensively in the field of Scientific Discovery. The work presented here tackles the problem of induction in medical research. The discovery of the causes of leprosy is analyzed and simulated using computational means. An inductive algorithm is proposed, which is successful in simulating some essential steps in the progress of the understanding of the disease. It also allows us to simulate the false reasoning of previous centuries through the introduction of some medical a priori assumptions inherited from archaic medicine. Corroborating previous research, this problem illustrates the importance of the social and cultural environment on the way inductive inference is performed in medicine.
ERIC Educational Resources Information Center
Tomera, Audrey N.
Investigated were two problems in science education, the retention and positive lateral transfer of the scientific processes of observation and comparison. Data for this study were collected from two junior high school settings, urban and rural. A total sample of 172 seventh- and eighth-grade students were instructed in the skills of observation…
Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B
2011-09-01
Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications, these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that, in a single pass, extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step, we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, as well as conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation. In particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.
Zhang, C; Wijnen, B; Pearce, J M
2016-08-01
The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.
NASA Astrophysics Data System (ADS)
Behrman, K. D.; Johnson, M. V. V.; Atwood, J. D.; Norfleet, M. L.
2016-12-01
Recent algal blooms in Western Lake Erie Basin (WLEB) have renewed the scientific community's interest in developing process-based models to better understand and predict the drivers of eutrophic conditions in the lake. At the same time, in order to prevent future blooms, farmers, local communities and policy makers are interested in developing spatially explicit nutrient and sediment management plans at various scales, from field to watershed. These interests have fueled several modeling exercises intended to locate "hotspots" in the basin where targeted adoption of additional agricultural conservation practices could provide the most benefit to water quality. The models have also been used to simulate various scenarios representing potential agricultural solutions. The Soil and Water Assessment Tool (SWAT) and its sister model, the Agricultural Policy Environmental eXtender (APEX), have been used to simulate hydrology of interacting land uses in thousands of scientific studies around the world. High performance computing allows SWAT and APEX users to continue to improve and refine the model specificity to make predictions at small spatial scales. Consequently, data inputs and calibration/validation data are now becoming the limiting factor for model performance. Water quality data for the tributaries and rivers that flow through WLEB is spatially and temporally limited. Land management data, including conservation practice and nutrient management data, are not publicly available at fine spatial and temporal scales. Here we show the data uncertainties associated with modeling WLEB croplands at a relatively large spatial scale (HUC-4) using site management data from over 1,000 farms collected by the Conservation Effects Assessment Project (CEAP). The error associated with downscaling this data to the HUC-8 and HUC-12 scale is shown. Simulations of spatially explicit dynamics can be very informative, but care must be taken when policy decisions are made based on models with unstated but implicit assumptions. As we interpret modeling results, we must communicate the spatial and temporal scale for which the model was developed and at which the data is valid. When there is little to no data to enable appropriate validation and calibration, the results must be interpreted with appropriate skepticism.
NASA Technical Reports Server (NTRS)
Strybel, Thomas Z.; Vu, Kim-Phuong L.; Battiste, Vernol; Dao, Arik-Quang; Dwyer, John P.; Landry, Steven; Johnson, Walter; Ho, Nhut
2011-01-01
A research consortium of scientists and engineers from California State University Long Beach (CSULB), San Jose State University Foundation (SJSUF), California State University Northridge (CSUN), Purdue University, and The Boeing Company was assembled to evaluate the impact of changes in roles and responsibilities and of new automated technologies being introduced in the Next Generation Air Transportation System (NextGen) on operator situation awareness (SA) and workload. To meet these goals, consortium members performed systems analyses of NextGen concepts and airspace scenarios, and concurrently evaluated SA, workload, and performance measures to assess their appropriateness for evaluations of NextGen concepts and tools. The following activities and accomplishments were supported by the NRA: a distributed simulation, metric development, systems analysis, part-task simulations, and large-scale simulations. As a result of this NRA, we have gained a greater understanding of situation awareness and its measurement, and have shared our knowledge with the scientific community. This network provides a mechanism for consortium members, colleagues, and students to pursue research on other topics in air traffic management and aviation, thus enabling them to make greater contributions to the field.
Teaching Science and Mathematics Subjects Using the Excel Spreadsheet Package
ERIC Educational Resources Information Center
Ibrahim, Dogan
2009-01-01
The teaching of scientific subjects usually require laboratories where students can put the theory they have learned into practice. Traditionally, electronic programmable calculators, dedicated software, or expensive software simulation packages, such as MATLAB have been used to simulate scientific experiments. Recently, spreadsheet programs have…
Data Container Study for Handling array-based data using Hive, Spark, MongoDB, SciDB and Rasdaman
NASA Astrophysics Data System (ADS)
Xu, M.; Hu, F.; Yang, J.; Yu, M.; Yang, C. P.
2017-12-01
Geoscience communities have come up with various big data storage solutions, such as Rasdaman and Hive, to address the grand challenges for massive Earth observation data management and processing. To examine the readiness of current solutions in supporting big Earth observation, we propose to investigate and compare five popular data container solutions: Rasdaman, Hive, Spark, SciDB and MongoDB. Using different types of spatial and non-spatial queries, datasets stored in common scientific data formats (e.g., NetCDF and HDF), and two applications (i.e. dust storm simulation data mining and MERRA data analytics), we systematically compare and evaluate the features and performance of these data containers in terms of data discovery and access. The computing resources (e.g. CPU, memory, hard drive, network) consumed while performing various queries and operations are monitored and recorded for the performance evaluation. The initial results show that 1) the popular data container clusters are able to handle large volumes of data, but their performances vary in different situations. Meanwhile, there is a trade-off between data preprocessing, disk saving, query-time saving, and resource consuming. 2) ClimateSpark, MongoDB and SciDB perform the best among all the containers in all the query tests, and Hive performs the worst. 3) These studied data containers can be applied to other array-based datasets, such as high-resolution remote sensing data and model simulation data. 4) Rasdaman clustering configuration is more complex than the others. A comprehensive report will detail the experimental results, and compare their pros and cons regarding system performance, ease of use, accessibility, scalability, compatibility, and flexibility.
2014-09-23
conduct simulations with a high-latitude data assimilation model. The specific objectives are to study magnetosphere-ionosphere (M-I) coupling processes...based on three physics-based models, including a magnetosphere-ionosphere (M-I) electrodynamics model, an ionosphere model, and a magnetic...inversion code. The ionosphere model is a high-resolution version of the Ionosphere Forecast Model (IFM), which is a 3-D, multi-ion model of the ionosphere.
Soapy: an adaptive optics simulation written purely in Python for rapid concept development
NASA Astrophysics Data System (ADS)
Reeves, Andrew
2016-07-01
Soapy is a newly developed Adaptive Optics (AO) simulation which aims to be a flexible and fast-to-use toolkit for many applications in the field of AO. It is written purely in the Python language, adding to and taking advantage of the already rich ecosystem of scientific libraries and programs. The simulation has been designed to be extremely modular, such that each component can be used stand-alone for projects which do not require a full end-to-end simulation. Ease of use, modularity and code clarity have been prioritised at the expense of computational performance. Though this means the code is not yet suitable for large studies of Extremely Large Telescope AO systems, it is well suited to education, exploration of new AO concepts and investigations of current generation telescopes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Cameron, K.W.
1998-11-24
Workload characterization has proven to be an essential tool for architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction) or IPC (instructions per cycle), etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints, especially for large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight into the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
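Turning raw counter values into the kind of abstract workload parameters described above is largely arithmetic. The sketch below shows typical derived metrics; the counter readings and the two-FLOPs-per-cycle peak model are hypothetical assumptions for illustration only.

```python
def derived_metrics(cycles, instructions, flops, l1_misses, clock_hz, wall_time_s):
    """Common metrics derived from raw hardware-counter values."""
    cpi = cycles / instructions                   # cycles per instruction
    ipc = instructions / cycles                   # instructions per cycle
    flops_rate = flops / wall_time_s              # achieved FLOP/s
    misses_per_kinst = 1000.0 * l1_misses / instructions
    peak_fraction = flops_rate / (clock_hz * 2)   # assumes a 2-FLOPs-per-cycle peak
    return {"CPI": cpi, "IPC": ipc, "FLOP/s": flops_rate,
            "L1 misses/kinst": misses_per_kinst, "fraction of peak": peak_fraction}

# Hypothetical counter readings from one run.
print(derived_metrics(cycles=8.0e11, instructions=6.0e11, flops=1.5e11,
                      l1_misses=9.0e9, clock_hz=2.0e9, wall_time_s=400.0))
```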
Toward Exascale Earthquake Ground Motion Simulations for Near-Fault Engineering Analysis
Johansen, Hans; Rodgers, Arthur; Petersson, N. Anders; ...
2017-09-01
Modernizing SW4 for massively parallel time-domain simulations of earthquake ground motions in 3D earth models increases resolution and provides ground motion estimates for critical infrastructure risk evaluations. Simulations of ground motions from large (M ≥ 7.0) earthquakes require domains on the order of 100 to 500 km and spatial granularity on the order of 1 to 5 m, resulting in hundreds of billions of grid points. Surface-focused structured mesh refinement (SMR) allows for more constant grid-point-per-wavelength scaling in typical Earth models, where wavespeeds increase with depth. In fact, SMR allows simulations to double the frequency content relative to a fixed-grid calculation on a given resource. The authors report improvements to the SW4 algorithm developed while porting the code to the Cori Phase 2 (Intel Xeon Phi) systems at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. As a result, investigations of the performance of the innermost loop of the calculations found that reorganizing the order of operations can improve performance for massive problems.
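The grid-point counts quoted above follow directly from the domain size and grid spacing. The sketch below does the arithmetic for an assumed 250 km x 250 km x 40 km box at uniform 5 m spacing; the specific box dimensions are illustrative, and surface-focused mesh refinement is what brings such counts down toward the hundreds of billions cited.

```python
def grid_points(domain_km, depth_km, spacing_m):
    """Rough count of grid points for a domain_km x domain_km x depth_km box at
    uniform spacing (ignores mesh refinement, which reduces the count)."""
    nx = int(domain_km * 1000 / spacing_m)
    nz = int(depth_km * 1000 / spacing_m)
    return nx * nx * nz

# 250 km x 250 km x 40 km at 5 m uniform spacing -> ~2e13 points before refinement.
print(f"{grid_points(250, 40, 5):.2e}")
```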
NASA Technical Reports Server (NTRS)
Fogleman, Guy (Editor); Huntington, Judith L. (Editor); Schwartz, Deborah E. (Editor); Fonda, Mark L. (Editor)
1989-01-01
An overview of the Gas-Grain Simulation Facility (GGSF) project and its current status is provided. The proceedings of the Gas-Grain Simulation Facility Experiments Workshop are recorded. The goal of the workshop was to define experiments for the GGSF--a small particle microgravity research facility. The workshop addressed the opportunity for performing, in Earth orbit, a wide variety of experiments that involve single small particles (grains) or clouds of particles. Twenty experiments from the fields of exobiology, planetary science, astrophysics, atmospheric science, biology, physics, and chemistry were described at the workshop and are outlined in Volume 2. Each experiment description included specific scientific objectives, an outline of the experimental procedure, and the anticipated GGSF performance requirements. Since these experiments represent the types of studies that will ultimately be proposed for the facility, they will be used to define the general science requirements of the GGSF. Also included in the second volume is a physics feasibility study and abstracts of example Gas-Grain Simulation Facility experiments and related experiments in progress.
Visualizing staggered fields and analyzing electromagnetic data with PerceptEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shasharina, Svetlana
This project resulted in VSimSP: software for simulating large photonic devices on high-performance computers. It includes: a GUI for photonics simulations; a high-performance meshing algorithm; a 2nd-order multi-materials algorithm; a mode solver for waveguides; a 2nd-order material dispersion algorithm; S-parameter calculation; a high-performance workflow at NERSC; and setups for large photonic device simulations. We believe we became the only company in the world that can simulate large photonic devices in 3D on modern supercomputers without the need to split them into subparts or to do low-fidelity modeling. We started commercial engagement with a manufacturing company.
Energy Innovation Hubs: A Home for Scientific Collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Steven
Secretary Chu will host a live, streaming Q&A session with the directors of the Energy Innovation Hubs on Tuesday, March 6, at 2:15 p.m. EST. The directors will be available for questions regarding their teams' work and the future of American energy. Ask your questions in the comments below, or submit them on Facebook, Twitter (@energy), or send an e-mail to newmedia@hq.doe.gov, prior or during the live event. Dr. Hank Foley is the director of the Greater Philadelphia Innovation Cluster for Energy-Efficient Buildings, which is pioneering new data intensive techniques for designing and operating energy efficient buildings, including advanced computer modeling. Dr. Douglas Kothe is the director of the Consortium for Advanced Simulation of Light Water Reactors, which uses powerful supercomputers to create "virtual" reactors that will help improve the safety and performance of both existing and new nuclear reactors. Dr. Nathan Lewis is the director of the Joint Center for Artificial Photosynthesis, which focuses on how to produce fuels from sunlight, water, and carbon dioxide. The Energy Innovation Hubs are major integrated research centers, with researchers from many different institutions and technical backgrounds. Each hub is focused on a specific high priority goal, rapidly accelerating scientific discoveries and shortening the path from laboratory innovation to technological development and commercial deployment of critical energy technologies. Ask your questions in the comments below, or submit them on Facebook, Twitter (@energy), or send an e-mail to newmedia@energy.gov, prior or during the live event. The Energy Innovation Hubs are major integrated research centers, with researchers from many different institutions and technical backgrounds. Each Hub is focused on a specific high priority goal, rapidly accelerating scientific discoveries and shortening the path from laboratory innovation to technological development and commercial deployment of critical energy technologies. Dr. Hank Foley is the director of the Greater Philadelphia Innovation Cluster for Energy-Efficient Buildings, which is pioneering new data intensive techniques for designing and operating energy efficient buildings, including advanced computer modeling. Dr. Douglas Kothe is the director of the Modeling and Simulation for Nuclear Reactors Hub, which uses powerful supercomputers to create "virtual" reactors that will help improve the safety and performance of both existing and new nuclear reactors. Dr. Nathan Lewis is the director of the Joint Center for Artificial Photosynthesis Hub, which focuses on how to produce biofuels from sunlight, water, and carbon dioxide.
Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.
2017-12-01
SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g., high-resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: Considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.
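Dynamic rupture codes such as SeisSol couple wave propagation to a fault friction law. One standard choice is linear slip weakening, sketched below; the parameter values are illustrative assumptions, and no claim is made that these are the values or the law used in the Sumatra scenario.

```python
def slip_weakening_mu(slip, mu_s=0.6, mu_d=0.3, d_c=0.5):
    """Linear slip-weakening friction coefficient: strength drops linearly from the
    static value mu_s to the dynamic value mu_d over the critical slip distance d_c."""
    if slip >= d_c:
        return mu_d
    return mu_s - (mu_s - mu_d) * slip / d_c

def fault_strength(normal_stress_pa, slip, **kw):
    """Shear strength on the fault: tau = mu(slip) * effective normal stress."""
    return slip_weakening_mu(slip, **kw) * normal_stress_pa

for s in (0.0, 0.25, 0.5, 1.0):            # slip in metres
    print(s, fault_strength(50e6, s))      # assumes 50 MPa effective normal stress
```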
Efficiently passing messages in distributed spiking neural network simulation.
Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan
2013-01-01
Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased, so has the size of the computing systems required to simulate them. In addition, the information exchange among these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with InfiniBand hardware, is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
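One MPI mechanism commonly used for spike exchange is a collective all-to-all, where each rank buckets the spikes its local neurons emitted by the destination rank that owns the targets. The mpi4py sketch below illustrates the pattern with made-up spike data and destinations; it is not the paper's MVAPICH-specific benchmark code. Run it with, e.g., mpiexec -n 4 python spike_exchange.py.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Bucket locally generated spikes (neuron id, spike time) by destination rank.
# The neuron ids, times, and destination rule are made up for illustration.
outgoing = [[] for _ in range(size)]
for neuron_id in range(rank * 10, rank * 10 + 10):
    dest = neuron_id % size
    outgoing[dest].append((neuron_id, 0.1 * neuron_id))

# Collective exchange: rank i receives the i-th bucket from every other rank.
incoming = comm.alltoall(outgoing)
spikes = [s for bucket in incoming for s in bucket]
print(f"rank {rank} received {len(spikes)} spikes")
```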
National Science Education Standards.
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC.
The National Science Education Standards present a vision of a scientifically literate populace. The standards outline what students need to know, understand, and be able to do to be scientifically literate at different grade levels. They describe an educational system in which all students demonstrate high levels of performance, teachers are…
NASA Astrophysics Data System (ADS)
Janesick, James; Gunawan, Ferry; Dosluoglu, Taner; Tower, John; McCaffrey, Niel
2002-08-01
High performance CMOS pixels are introduced, and their development is discussed. 3T (3-transistor) photodiode, 5T pinned diode, 6T photogate and 6T photogate back illuminated CMOS pixels are examined in detail, and the latter three are considered as scientific pixels. The advantages and disadvantages of these options for scientific CMOS pixels are examined. Pixel characterization, which is used to gain a better understanding of CMOS pixels themselves, is also discussed.
NASA Astrophysics Data System (ADS)
Janesick, J.; Gunawan, F.; Dosluoglu, T.; Tower, J.; McCaffrey, N.
High performance CMOS pixels are introduced and their development is discussed. 3T (3-transistor) photodiode, 5T pinned diode, 6T photogate and 6T photogate back illuminated CMOS pixels are examined in detail, and the latter three are considered as scientific pixels. The advantages and disadvantages of these options for scientific CMOS pixels are examined. Pixel characterization, which is used to gain a better understanding of CMOS pixels themselves, is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svetlana Shasharina
The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in the applications, and modifying the tools to be more usable.
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications of one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the petascale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study the I/O load imbalance problems using I/O performance data collected from the Spider storage system.
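As a rough illustration of the kind of Pareto model mentioned above, the sketch below draws synthetic inter-arrival times and recovers the shape parameter with the standard maximum-likelihood estimator; the scale and shape values are made up and have no relation to the Spider measurements.

```python
# Illustrative only: fitting a Pareto distribution to synthetic request inter-arrival times.
import numpy as np

rng = np.random.default_rng(0)
x_m, alpha = 0.5, 1.8                     # assumed scale (minimum) and shape parameters
inter_arrival = x_m * (1 + rng.pareto(alpha, size=100_000))   # samples >= x_m

# Maximum-likelihood estimates for a Pareto(x_m, alpha) distribution:
#   x_m_hat = min(x),  alpha_hat = n / sum(log(x / x_m_hat))
x_m_hat = inter_arrival.min()
alpha_hat = inter_arrival.size / np.log(inter_arrival / x_m_hat).sum()
print(f"estimated x_m = {x_m_hat:.3f}, alpha = {alpha_hat:.3f}")
```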
Ssalmon - The Solar Simulations For The Atacama Large Millimeter Observatory Network
NASA Astrophysics Data System (ADS)
Wedemeyer, Sven; Ssalmon Group
2016-07-01
The Atacama Large Millimeter/submillimeter Array (ALMA) provides a new powerful tool for observing the solar chromosphere at high spatial, temporal, and spectral resolution, which will allow for addressing a wide range of scientific topics in solar physics. Numerical simulations of the solar atmosphere and modeling of instrumental effects are valuable tools for constraining, preparing and optimizing future observations with ALMA and for interpreting the results. In order to co-ordinate related activities, the Solar Simulations for the Atacama Large Millimeter Observatory Network (SSALMON) was initiated on September 1st, 2014, in connection with the NA- and EU-led solar ALMA development studies. As of April, 2015, SSALMON has grown to 83 members from 18 countries (plus ESO and ESA). Another important goal of SSALMON is to promote the scientific potential of solar science with ALMA, which has resulted in two major publications so far. During 2015, the SSALMON Expert Teams produced a White Paper with potential science cases for Cycle 4, which will be the first time regular solar observations will be carried out. Registration and more information at http://www.ssalmon.uio.no.
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; Riccardi, Armando; Esposito, Simone
2004-10-01
We present our latest results concerning the simulation studies performed for the first-light adaptive optics (AO) system of the Large Binocular Telescope (LBT), namely WLBT. After a brief description of the "raw" performance evaluation results, in terms of Strehl ratios attained in the various considered bands (from V to K), we focus on the "scientific" performance that will be obtained when considering the subsequent instrumentation that will benefit from the correction given by the AO system WLBT and the adaptive secondary mirrors LBT 672. In particular, we discuss the performance of the coupling with the instrument LUCIFER, working at near-infrared bands, in terms of signal-to-noise values and limiting magnitudes, in both the cases of spectroscopy and photometric detection. We also give the encircled energies that are expected in the visible bands, a result relevant on the one hand for the instrument PEPSI, and on the other hand for the "technical viewer" that will be on board the WLBT system itself.
NASA Astrophysics Data System (ADS)
Xing, Jacques
The dielectric barrier discharge (DBD) plasma actuator is a proposed device for active flow control, intended to improve the performance of aircraft and turbomachines. Essentially, these actuators are made of two electrodes separated by a layer of dielectric material and convert electrical energy directly into flow momentum. Because of the high costs associated with experiments in realistic operating conditions, there is a need to develop a robust numerical model that can predict the plasma body force and the effects of various parameters on it. Indeed, this plasma body force can be affected by atmospheric conditions (temperature, pressure, and humidity), velocity of the neutral flow, applied voltage (amplitude, frequency, and waveform), and by the actuator geometry. In that respect, the purpose of this thesis is to implement a plasma model for the DBD actuator that has the potential to consider the effects of these various parameters. In DBD actuator modelling, two types of approach are commonly proposed: low-order (phenomenological) modelling and high-order (scientific) modelling. However, a critical analysis presented in this thesis showed that phenomenological models are not robust enough to predict the plasma body force without artificial calibration for each specific case. Moreover, they are based on erroneous assumptions. Hence, the selected approach to model the plasma body force is a scientific drift-diffusion model with four chemical species (electrons, positive ions, negative ions, and neutrals). This model was chosen because it gives numerical results consistent with experimental data. Moreover, this model has great potential to include the effect of temperature, pressure, and humidity on the plasma body force and requires only a reasonable computational time. This model was independently implemented in the C++ programming language and validated with several test cases. This model was later used to simulate the effect of the plasma body force on the laminar-turbulent transition on an airfoil in order to validate the performance of this model in practical CFD simulation. Numerical results show that this model gives a better prediction of the effect of the plasma on the fluid flow for a practical case in aerospace than a phenomenological model.
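For orientation, a drift-diffusion model advances species densities with a continuity equation whose flux combines an electric-field drift term and a diffusion term. The sketch below is a heavily simplified single-species 1D version with a frozen electric field and invented coefficients; the thesis model couples four species, a Poisson solver, chemistry, and surface charge on the dielectric, none of which are reproduced here.

```python
# Toy 1D drift-diffusion update for one charged species (illustrative values only).
import numpy as np

nx, dx, dt = 200, 1e-5, 1e-9          # cells, cell size [m], time step [s]
mu, D = 0.05, 1e-3                    # assumed ion mobility [m^2/(V s)] and diffusivity [m^2/s]
E = np.full(nx, 1e5)                  # frozen electric field [V/m]; a real model solves Poisson
n = np.exp(-((np.arange(nx) - nx / 2.0) ** 2) / 50.0)   # initial density bump (arbitrary units)

for _ in range(150):
    drift = mu * E * n                                   # advective flux, E > 0 everywhere here
    adv = np.zeros_like(n)
    adv[1:] = (drift[1:] - drift[:-1]) / dx              # first-order upwind derivative
    diff = np.zeros_like(n)
    diff[1:-1] = D * (n[2:] - 2 * n[1:-1] + n[:-2]) / dx ** 2
    n = n - dt * adv + dt * diff                         # continuity: dn/dt = -d(flux)/dx
    n[0] = n[-1] = 0.0                                   # crude absorbing boundaries

print("density remaining in the domain:", n.sum() * dx)
```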
NASA Astrophysics Data System (ADS)
Loring, B.; Karimabadi, H.; Rortershteyn, V.
2015-10-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
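As a point of reference for the technique itself, the snippet below is a naive, CPU-only LIC over a 2D vector field: each pixel's value is the average of a white-noise texture sampled along a short streamline traced through that pixel. It is only meant to illustrate the convolution idea; the screen-space, sort-last-parallel GPU algorithm described above is substantially more involved, and all sizes and step counts here are arbitrary.

```python
# Minimal CPU-only line integral convolution (LIC) over a 2D vector field.
import numpy as np

def lic(vx, vy, noise, length=15, h=0.5):
    ny, nx = noise.shape
    out = np.zeros_like(noise)
    for j in range(ny):
        for i in range(nx):
            acc, cnt = 0.0, 0
            for sign in (+1.0, -1.0):                 # integrate both directions
                x, y = float(i), float(j)
                for _ in range(length):
                    u = vx[int(y) % ny, int(x) % nx]
                    v = vy[int(y) % ny, int(x) % nx]
                    norm = np.hypot(u, v) or 1.0
                    x += sign * h * u / norm          # step along the streamline
                    y += sign * h * v / norm
                    acc += noise[int(y) % ny, int(x) % nx]
                    cnt += 1
            out[j, i] = acc / cnt                     # average noise along the streamline
    return out

# Circular vector field convolved with white noise:
ny, nx = 64, 64
Y, X = np.mgrid[0:ny, 0:nx]
vx, vy = -(Y - ny / 2.0), (X - nx / 2.0)
image = lic(vx, vy, np.random.default_rng(0).random((ny, nx)))
print(image.shape, image.min(), image.max())
```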
Examination of Daily Weather in the NCAR CCM
NASA Astrophysics Data System (ADS)
Cocke, S. D.
2006-05-01
The NCAR CCM is one of the most extensively studied climate models in the scientific community. However, most studies focus primarily on the long term mean behavior, typically monthly or longer time scales. In this study we examine the daily weather in the GCM by performing a series of daily or weekly 10 day forecasts for one year at moderate (T63) and high (T126) resolution. The model is initialized with operational "AVN" and ECMWF analyses, and model performance is compared to that of major operational centers, using conventional skill scores used by the major centers. Such a detailed look at the CCM at shorter time scales may lead to improvements in physical parameterizations, which may in turn lead to improved climate simulations. One finding from this study is that the CCM has a significant drying tendency in the lower troposphere compared to the operational analyses. Another is that the large scale predictability of the GCM is competitive with most of the operational models, particularly in the southern hemisphere.
Automated problem scheduling and reduction of synchronization delay effects
NASA Technical Reports Server (NTRS)
Saltz, Joel H.
1987-01-01
It is anticipated that in order to make effective use of many future high performance architectures, programs will have to exhibit at least a medium-grained parallelism. A framework is presented for partitioning very sparse triangular systems of linear equations that is designed to produce favorable performance results in a wide variety of parallel architectures. Efficient methods for solving these systems are of interest because: (1) they provide a useful model problem for use in exploring heuristics for the aggregation, mapping and scheduling of relatively fine grained computations whose data dependencies are specified by directed acyclic graphs, and (2) such efficient methods can find direct application in the development of parallel algorithms for scientific computation. Simple expressions are derived that describe how to schedule computational work with varying degrees of granularity. The Encore Multimax was used as a hardware simulator to investigate the performance effects of using the partitioning techniques presented in shared memory architectures with varying relative synchronization costs.
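One standard way to expose the medium-grained parallelism described above for a sparse triangular solve is level-set (wavefront) scheduling: each row is assigned the length of its longest dependency chain, and rows within the same level can be processed concurrently. The sketch below computes such levels for a SciPy CSR lower-triangular matrix; it illustrates the DAG-based grouping idea only, not the aggregation and mapping heuristics studied in the paper.

```python
# Level-set (wavefront) scheduling for a sparse lower-triangular solve.
import numpy as np
from scipy.sparse import csr_matrix

def level_schedule(L):
    """Group rows of a CSR lower-triangular matrix into dependency levels."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
        deps = cols[cols < i]                      # strictly-lower entries = dependencies
        level[i] = level[deps].max() + 1 if deps.size else 0
    return [np.flatnonzero(level == k) for k in range(level.max() + 1)]

# Small example: rows 0 and 2 are independent, row 1 needs row 0, row 3 needs rows 1 and 2.
L = csr_matrix(np.array([[1., 0., 0., 0.],
                         [2., 1., 0., 0.],
                         [0., 0., 1., 0.],
                         [0., 3., 4., 1.]]))
print(level_schedule(L))    # -> [array([0, 2]), array([1]), array([3])]
```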
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggess, A.
Existing models and simulants of tank disposition media at SRS have presumed the presence of high concentrations of inorganic mercury. However, recent quarterly tank analyses show that mercury is present as organomercurial species at concentrations that may present challenges to remediation and disposition and may exceed the Saltstone Waste Acceptance Criteria (WAC). To date, methylmercury analysis for Savannah River Remediation (SRR) has been performed off-site by Eurofins Scientific (Lancaster, PA). A series of optimization and validation experiments has been performed at SRNL, which has resulted in the development of on-site organomercury speciation capabilities using purge and trap gas chromatography coupled with thermal desorption cold vapor atomic fluorescence spectroscopy (P&T GC/CVAFS). Speciation has been achieved for methylmercury, with a method reporting limit (MRL) of 1.42 pg for methylmercury. Results obtained by SRNL from the analysis of past quarterly samples from tanks 21, 40, and 50 have demonstrated statistically indistinguishable concentration values compared with the concentration data obtained from Eurofins, while the data from SRNL have demonstrated significantly improved precision and processing time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim
2014-07-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
APEX - the Hyperspectral ESA Airborne Prism Experiment
Itten, Klaus I.; Dell'Endice, Francesco; Hueni, Andreas; Kneubühler, Mathias; Schläpfer, Daniel; Odermatt, Daniel; Seidel, Felix; Huber, Silvia; Schopfer, Jürg; Kellenberger, Tobias; Bühler, Yves; D'Odorico, Petra; Nieke, Jens; Alberti, Edoardo; Meuleman, Koen
2008-01-01
The airborne ESA-APEX (Airborne Prism Experiment) hyperspectral mission simulator is described with its distinct specifications to provide high quality remote sensing data. The concept of an automatic calibration, performed in the Calibration Home Base (CHB) by using the Control Test Master (CTM), the In-Flight Calibration facility (IFC), quality flagging (QF) and specific processing in a dedicated Processing and Archiving Facility (PAF), and vicarious calibration experiments are presented. A preview on major applications and the corresponding development efforts to provide scientific data products up to level 2/3 to the user is presented for limnology, vegetation, aerosols, general classification routines and rapid mapping tasks. BRDF (Bidirectional Reflectance Distribution Function) issues are discussed and the spectral database SPECCHIO (Spectral Input/Output) introduced. The optical performance as well as the dedicated software utilities make APEX a state-of-the-art hyperspectral sensor, capable of (a) satisfying the needs of several research communities and (b) helping the understanding of the Earth's complex mechanisms. PMID:27873868
Workload Characterization of a Leadership Class Storage Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M
2010-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences
NASA Technical Reports Server (NTRS)
Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri
2014-01-01
This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.
ERIC Educational Resources Information Center
Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu
2013-01-01
With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…
Schlairet, Maura C; Schlairet, Timothy James; Sauls, Denise H; Bellflowers, Lois
2015-03-01
Establishing the impact of the high-fidelity simulation environment on student performance, as well as identifying factors that could predict learning, would refine simulation outcome expectations among educators. The purpose of this quasi-experimental pilot study was to explore the impact of simulation on emotion and cognitive load among beginning nursing students. Forty baccalaureate nursing students participated in teaching simulations, rated their emotional state and cognitive load, and completed evaluation simulations. Two principal components of emotion were identified, representing the pleasant activation and pleasant deactivation components of affect. Mean rating of cognitive load following simulation was high. Linear regression identified slight but statistically nonsignificant positive associations between principal components of emotion and cognitive load. Logistic regression identified a negative but statistically nonsignificant effect of cognitive load on assessment performance. Among lower ability students, a more pronounced effect of cognitive load on assessment performance was observed; this also was statistically nonsignificant. Copyright 2015, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Green, Joel D.; Smith, Denise A.; Lawton, Brandon L.; Jirdeh, Hussein; Meinke, Bonnie K.
2016-01-01
The James Webb Space Telescope is the successor to the Hubble Space Telescope. STScI and the Office of Public Outreach are committed to bringing awareness of the technology, the excitement, and the future science potential of this great observatory to the public, to educators and students, and to the scientific community, prior to its 2018 launch. The challenges in ensuring the high profile of JWST (understanding the infrared, the vast distance to the telescope's final position, and the unfamiliar science territory) requires us to lay the proper background. We currently engage the full range of the public and scientific communities using a variety of high impact, memorable initiatives, in combination with modern technologies to extend reach, linking the science goals of Webb to the ongoing discoveries being made by Hubble. We have injected Webb-specific content into ongoing E/PO programs: for example, simulated scientifically inspired but aesthetic JWST scenes, illustrating the differences between JWST and previous missions; partnering with high impact science communicators such as MinutePhysics to produce timely and concise content; educational materials in vast networks of schools through products like the Star Witness News.
Radial SI latches vibration test data review
NASA Technical Reports Server (NTRS)
Harrison, P. M.; Smith, J. L.
1984-01-01
Dynamic testing of the Space Telescope Scientific Instrument Radial Latches was performed as specified by the designated test criteria. No structural failures were observed during the test. The alignment stability of the instrument simulator was within required tolerances after testing. Particulates were discovered around the latch bases, after testing, due to wearing at the B and C latch interface surfaces. This report covers criteria derivation, testing, and test results.
NASA Astrophysics Data System (ADS)
Das, Santanu; Choudhary, Kamal; Chernatynskiy, Aleksandr; Choi Yim, Haein; Bandyopadhyay, Asis K.; Mukherjee, Sundeep
2016-06-01
High-performance magnetic materials have immense industrial and scientific importance in wide-ranging electronic, electromechanical, and medical device technologies. Metallic glasses with a fully amorphous structure are particularly suited for advanced soft-magnetic applications. However, fundamental scientific understanding is lacking for the spin-exchange interaction between metal and metalloid atoms, which typically constitute a metallic glass. Using an integrated experimental and molecular dynamics approach, we demonstrate the mechanism of electron interaction between transition metals and metalloids. Spin-exchange interactions were investigated for a Fe-Co metallic glass system of composition [(Co1-xFex)0.75B0.2Si0.05]96Cr4. The saturation magnetization increased with higher Fe concentration, but the trend significantly deviated from simple rule of mixtures. Ab initio molecular dynamics simulation was used to identify the ferromagnetic/anti-ferromagnetic interaction between the transition metals and metalloids. The overlapping band-structure and density of states represent ‘Stoner type’ magnetization for the amorphous alloys in contrast to ‘Heisenberg type’ in crystalline iron. The enhancement of magnetization by increasing iron was attributed to the interaction between Fe 3d and B 2p bands, which was further validated by valence-band study.
Operational Issues: What Science in Available?
NASA Technical Reports Server (NTRS)
Rosekind, Mark R.; Neri, David F.
1997-01-01
Flight/duty/rest considerations involve two highly complex factors: the diverse demands of aviation operations and human physiology (especially sleep and circadian rhythms). Several core operational issues related to fatigue have been identified, such as minimum rest requirements, duty length, flight time considerations, crossing multiple time zones, and night flying. Operations also can involve on-call reserve status and callout, delays due to unforeseen circumstances (e.g., weather, mechanical), and on-demand flights. Over 40 years of scientific research is now available to apply to these complex issues of flight/duty/rest requirements. This research involves controlled laboratory studies, simulations, and data collected during regular flight operations. When flight/duty/rest requirements are determined they are typically based on a variety of considerations, such as operational demand, safety, economic, etc. Rarely has the available, state-of-the-art science been a consideration along with these other factors when determining flight/duty/rest requirements. While the complexity of the operational demand and human physiology precludes an absolute solution, there is an opportunity to take full advantage of the current scientific data. Incorporating these data in a rational operational manner into flight/duty/rest requirements can improve flight crew performance, alertness, and ultimately, aviation safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Chan; Mori, W.
2013-10-21
This is the final report on the DOE grant number DE-FG02-92ER40727 titled, "Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators." During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, has made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we will focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under this grant. The four tasks are focused on: Plasma Wakefield Accelerator Research at FACET, SLAC National Accelerator Laboratory; In House Research at UCLA's Neptune and 20 TW Laser Laboratories; Laser-Wakefield Acceleration (LWFA) in the Self Guided Regime: Experiments at the Callisto Laser at LLNL; and Theory and Simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals, graduation and continued training of high quality Ph.D. level students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.
Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage
NASA Technical Reports Server (NTRS)
Sibille, L.; Carpenter, P.; Schlagheck, R.; French, R. A.
2006-01-01
Experience gained during the Apollo program demonstrated the need for extensive testing of surface systems in relevant environments, including regolith materials similar to those encountered on the lunar surface. As NASA embarks on a return to the Moon, it is clear that the current lunar sample inventory is not only insufficient to support lunar surface technology and system development, but its scientific value is too great to be consumed by destructive studies. Every effort must be made to utilize standard simulant materials, which will allow developers to reduce the cost, development, and operational risks to surface systems. The Lunar Regolith Simulant Materials Workshop held in Huntsville, AL, on January 24-26, 2005, identified the need for widely accepted standard reference lunar simulant materials to perform research and development of technologies required for lunar operations. The workshop also established a need for a common, traceable, and repeatable process regarding the standardization, characterization, and distribution of lunar simulants. This document presents recommendations for the standardization, production and usage of lunar regolith simulant materials.
OpenSim: open-source software to create and analyze dynamic simulations of movement.
Delp, Scott L; Anderson, Frank C; Arnold, Allison S; Loan, Peter; Habib, Ayman; John, Chand T; Guendelman, Eran; Thelen, Darryl G
2007-11-01
Dynamic simulations of movement allow one to study neuromuscular coordination, analyze athletic performance, and estimate internal loading of the musculoskeletal system. Simulations can also be used to identify the sources of pathological movement and establish a scientific basis for treatment planning. We have developed a freely available, open-source software system (OpenSim) that lets users develop models of musculoskeletal structures and create dynamic simulations of a wide variety of movements. We are using this system to simulate the dynamics of individuals with pathological gait and to explore the biomechanical effects of treatments. OpenSim provides a platform on which the biomechanics community can build a library of simulations that can be exchanged, tested, analyzed, and improved through a multi-institutional collaboration. Developing software that enables a concerted effort from many investigators poses technical and sociological challenges. Meeting those challenges will accelerate the discovery of principles that govern movement control and improve treatments for individuals with movement pathologies.
Molecular dynamics simulations through GPU video games technologies
Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia
2016-01-01
Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the de novo techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. This has become a state-of-the-art technique and now plays a key role in many areas of exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations could require enormous amounts of computer memory and time, and therefore their execution has been a big problem. Despite the huge computational cost, molecular dynamics have been implemented using traditional computers with a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed with the goal of improving video games, by rapidly creating and displaying images in a frame buffer for output to screens. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology to perform a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations including MD simulations. Herein, we describe the new methodologies developed initially for video games and how they are now applied in MD simulations. PMID:27525251
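For readers unfamiliar with what the GPU kernels in such codes actually accelerate, the sketch below shows the core of an MD step: an O(N^2) Lennard-Jones force evaluation followed by velocity-Verlet integration, written in plain NumPy with invented parameters. Production GPU engines replace this with neighbour lists, periodic boundaries, and hand-tuned kernels; this is a reference illustration only.

```python
# Tiny NumPy Lennard-Jones / velocity-Verlet loop (illustrative, not a production MD engine).
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    d = pos[:, None, :] - pos[None, :, :]            # pairwise displacement vectors
    r2 = (d ** 2).sum(-1) + np.eye(len(pos))         # add identity to avoid /0 on the diagonal
    inv6 = (sigma ** 2 / r2) ** 3
    fmag = 24 * eps * (2 * inv6 ** 2 - inv6) / r2    # |F|/r for each pair
    np.fill_diagonal(fmag, 0.0)                      # no self-interaction
    return (fmag[:, :, None] * d).sum(axis=1)

grid = np.arange(4) * 1.5                            # 64 particles on a cubic lattice
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
vel = np.zeros_like(pos)
dt, mass = 1e-3, 1.0
f = lj_forces(pos)
for _ in range(100):                                 # velocity-Verlet integration
    pos += vel * dt + 0.5 * f / mass * dt ** 2
    f_new = lj_forces(pos)
    vel += 0.5 * (f + f_new) / mass * dt
    f = f_new
print("kinetic energy:", 0.5 * mass * (vel ** 2).sum())
```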
CALLISTO: scientific projects performed by high school students
NASA Astrophysics Data System (ADS)
Boer, Michel
The Callisto project was initiated in 2002 by the "Lycée de l'Arc" (a high school in Orange, France) and the "Observatoire de Haute Provence". Its goal is to give the students motivation for scientific and technical studies: they have the possibility to perform scientific projects together with professional astronomers. The pupils work in groups of 3 to 4, each having a specific theme: geophysics, variable stars, small bodies of the solar system, mechanical and optical instrumentation. They follow a whole scientific approach, from question to answer: the instrumental setup, acquisition, data reduction, and publication. During a week they are invited to observe using the OHP 1.20m and 0.80m telescopes, with the support of a professional astronomer. Some projects have in fact been derived from actual proposals accepted at OHP (e.g. rotation curves of binary asteroids). The best projects are considered for some competitions like ESO "catch a star", "Olympiade de Physique", etc. Since 2005, three high schools have participated in this project. The Callisto initiative has also produced the basis of a teacher training course. Callisto is an example of a successful collaboration between an interdisciplinary team of teachers (physics, maths, philosophy, English...), a research institution (the OHP), and researchers.
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
A Rich Metadata Filesystem for Scientific Data
ERIC Educational Resources Information Center
Bui, Hoang
2012-01-01
As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…
Scientific Investigations of Elementary School Children
ERIC Educational Resources Information Center
Valanides, Nicos; Papageorgiou, Maria; Angeli, Charoula
2014-01-01
The study provides evidence concerning elementary school children's ability to conduct a scientific investigation. Two hundred and fifty sixth-grade students and 248 fourth-grade students were administered a test, and based on their performance, they were classified into high-ability and low-ability students. The sample of this study was…
Training Elementary Teachers to Prepare Students for High School Authentic Scientific Research
NASA Astrophysics Data System (ADS)
Danch, J. M.
2017-12-01
The Woodbridge Township New Jersey School District has a 4-year high school Science Research program that depends on the enrollment of students with the prerequisite skills to conduct authentic scientific research at the high school level. A multifaceted approach to training elementary teachers in the methods of scientific investigation, data collection and analysis, and communication of results was undertaken in 2017. Teachers of predominantly grades 4 and 5 participated in hands-on workshops at a Summer Tech Academy, an EdCamp, a District Inservice Day, and a series of in-class workshops for teachers and students together. Aspects of the instruction for each of these activities were facilitated by high school students currently enrolled in the High School Science Research Program. Many of the training activities centered on a "Learning With Students" model where teachers and their students simultaneously learn to perform inquiry activities and conduct scientific research, fostering inquiry as it is meant to be: where participants produce original data and are not merely working to obtain previously determined results.
Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing
NASA Astrophysics Data System (ADS)
Amooie, M. A.; Moortgat, J.
2017-12-01
We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1 GB of RAM, and a 32 GB microSD card for local storage. Therefore, the cluster has a total RAM of 128 GB distributed across the individual nodes and a flash capacity of 4 TB with 512 processor cores, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between nodes. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance computing (HPC) and the handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively parallelized, scalable code. We present benchmarking results for the computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and feasible learning platform for challenging engineering and scientific problems.
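A first benchmark one might run on such a cluster is an MPI ping-pong between two nodes to estimate effective network bandwidth. The hypothetical mpi4py sketch below (message size and repetition count chosen arbitrarily) is not the authors' benchmark suite, just an illustration of the kind of measurement involved.

```python
# Hypothetical MPI ping-pong bandwidth micro-benchmark (run with: mpiexec -n 2 ...).
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = np.zeros(1 << 20, dtype=np.uint8)      # 1 MiB payload
reps = 50

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1); comm.Recv(msg, source=1)
    elif rank == 1:
        comm.Recv(msg, source=0); comm.Send(msg, dest=0)
t1 = time.perf_counter()

if rank == 0:
    bytes_moved = 2 * reps * msg.nbytes      # each repetition moves the payload both ways
    print(f"effective bandwidth: {bytes_moved / (t1 - t0) / 1e6:.1f} MB/s")
```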
Characterizing water surface elevation under different flow conditions for the upcoming SWOT mission
NASA Astrophysics Data System (ADS)
Domeneghetti, A.; Schumann, G. J.-P.; Frasson, R. P. M.; Wei, R.; Pavelsky, T. M.; Castellarin, A.; Brath, A.; Durand, M. T.
2018-06-01
The Surface Water and Ocean Topography satellite mission (SWOT), scheduled for launch in 2021, will deliver two-dimensional observations of water surface heights for lakes, rivers wider than 100 m and oceans. Even though the scientific literature has highlighted several fields of application for the expected products, detailed simulations of the SWOT radar performance for a realistic river scenario have not been presented in the literature. Understanding the error of the most fundamental "raw" SWOT hydrology product is important in order to have a greater awareness about strengths and limits of the forthcoming satellite observations. This study focuses on a reach (∼140 km in length) of the middle-lower portion of the Po River, in Northern Italy, and, to date, represents one of the few real-case analyses of the spatial patterns in water surface elevation accuracy expected from SWOT. The river stretch is characterized by a main channel varying from 100 to 500 m in width and a large floodplain (up to 5 km) delimited by a system of major embankments. The simulation of the water surface along the Po River for different flow conditions (high, low and mean annual flows) is performed with inputs from a quasi-2D model implemented using detailed topographic and bathymetric information (LiDAR, 2 m resolution). By employing a simulator that mimics many SWOT satellite sensor characteristics and generates proxies of the remotely sensed hydrometric data, this study characterizes the spatial observations potentially provided by SWOT. We evaluate SWOT performance under different hydraulic conditions and assess possible effects of river embankments, river width, river topography and distance from the satellite ground track. Despite analyzing errors from the raw radar pixel cloud, which receives minimal processing, the present study highlights the promising potential of this Ka-band interferometer for measuring water surface elevations, with mean elevation errors of 0.1 cm and 21 cm for high and low flows, respectively. Results of the study characterize the expected performance of the upcoming SWOT mission and provide additional insights into potential applications of SWOT observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Hendrickson, Bruce
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.
Idle waves in high-performance computing
NASA Astrophysics Data System (ADS)
Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre
2015-01-01
The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through processes in scientific applications with a local information exchange between the two processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work also is a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
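The stated relation between phase velocity and busy time can be illustrated with a toy bulk-synchronous model in which each rank may start an iteration only after its nearest neighbours have finished the previous one, so a one-off delay spreads outward at roughly one rank per busy period. The sketch below is such a toy model, not the analytical treatment in the paper; all timings and counts are invented.

```python
# Toy model of idle-wave propagation under nearest-neighbour data dependencies.
import numpy as np

nprocs, nsteps, t_busy = 64, 8, 1.0
finish = np.zeros((nsteps + 1, nprocs))          # finish[k, r]: time rank r ends iteration k
extra = np.zeros(nprocs)
extra[nprocs // 2] = 5.0                         # inject a one-off 5 s delay at the middle rank

for k in range(1, nsteps + 1):
    for r in range(nprocs):
        neighbours = finish[k - 1, max(r - 1, 0):r + 2]
        start = neighbours.max()                 # wait for neighbours' previous iteration
        finish[k, r] = start + t_busy + (extra[r] if k == 1 else 0.0)

for k in (2, 4, 8):
    affected = np.flatnonzero(finish[k] - k * t_busy > 1e-9)
    print(f"after {k} iterations the idle wave spans {affected.size} ranks")

# The affected region grows by ~2 ranks per iteration, i.e. the wave front moves at
# ~1 rank per t_busy of computation, so a longer busy time means a slower idle wave.
```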
Human and Robotic Mission to Small Bodies: Mapping, Planning and Exploration
NASA Technical Reports Server (NTRS)
Neffian, Ara V.; Bellerose, Julie; Beyer, Ross A.; Archinal, Brent; Edwards, Laurence; Lee, Pascal; Colaprete, Anthony; Fong, Terry
2013-01-01
This study investigates the requirements, performs a gap analysis and makes a set of recommendations for mapping products and exploration tools required to support operations and scientific discovery for near-term and future NASA missions to small bodies. The mapping products and their requirements are based on the analysis of current mission scenarios (rendezvous, docking, and sample return) and recommendations made by the NEA Users Team (NUT) in the framework of human exploration. The mapping products that satisfy operational, scientific, and public outreach goals include topography, images, albedo, gravity, mass, density, subsurface radar, mineralogical and thermal maps. The gap analysis points to a need for incremental generation of mapping products from low (flyby) to high-resolution data needed for anchoring and docking, real-time spatial data processing for hazard avoidance and astronaut or robot localization in low gravity, high dynamic environments, and motivates a standard for coordinate reference systems capable of describing irregular body shapes. Another aspect investigated in this study is the set of requirements and the gap analysis for exploration tools that support visualization and simulation of operational conditions including soil interactions, environment dynamics, and communications coverage. Building robust, usable data sets and visualisation/simulation tools is the best way for mission designers and simulators to make correct decisions for future missions. In the near term, it is the most useful way to begin building capabilities for small body exploration without needing to commit to specific mission architectures.
Zhang, Xinyuan; Zheng, Nan
2008-01-01
Cell-based molecular transport simulations are being developed to facilitate exploratory cheminformatic analysis of virtual libraries of small drug-like molecules. For this purpose, mathematical models of single cells are built from equations capturing the transport of small molecules across membranes. In turn, physicochemical properties of small molecules can be used as input to simulate intracellular drug distribution, through time. Here, with mathematical equations and biological parameters adjusted so as to mimic a leukocyte in the blood, simulations were performed to analyze steady state, relative accumulation of small molecules in lysosomes, mitochondria, and cytosol of this target cell, in the presence of a homogenous extracellular drug concentration. Similarly, with equations and parameters set to mimic an intestinal epithelial cell, simulations were also performed to analyze steady state, relative distribution and transcellular permeability in this non-target cell, in the presence of an apical-to-basolateral concentration gradient. With a test set of ninety-nine monobasic amines gathered from the scientific literature, simulation results helped analyze relationships between the chemical diversity of these molecules and their intracellular distributions. Electronic supplementary material The online version of this article (doi:10.1007/s10822-008-9194-7) contains supplementary material, which is available to authorized users. PMID:18338229
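At its simplest, the membrane-transport equations referred to above reduce to an ODE for the intracellular concentration driven by passive permeation across the plasma membrane. The sketch below integrates such a two-compartment toy model with SciPy; the permeability, area, and volume values are invented, and the published cell models additionally track lysosomes, mitochondria, ionization states, and membrane potential, none of which appear here.

```python
# Deliberately simplified two-compartment (extracellular <-> cytosol) transport model.
import numpy as np
from scipy.integrate import solve_ivp

P = 1e-6        # assumed membrane permeability [cm/s]
A = 8e-6        # assumed membrane area [cm^2]
V_cell = 1e-12  # assumed cytosolic volume [L]
C_out = 1.0     # fixed extracellular concentration [uM]

def dCdt(t, y):
    C_in = y[0]
    flux = P * A * (C_out - C_in) / 1000.0      # [umol/s]; /1000 converts cm^3 to L
    return [flux / V_cell]                      # concentration change [uM/s]

sol = solve_ivp(dCdt, (0.0, 5000.0), [0.0], max_step=10.0)
print("intracellular concentration approaches", sol.y[0, -1], "uM")
```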
Atomistic simulation of graphene-based polymer nanocomposites
NASA Astrophysics Data System (ADS)
Rissanou, Anastassia N.; Bačová, Petra; Harmandaris, Vagelis
2016-05-01
Polymer/graphene nanostructured systems are hybrid materials which have attracted great attention in recent years for both scientific and technological reasons. In the present work atomistic Molecular Dynamics simulations are performed for the study of graphene-based polymer nanocomposites composed of pristine, hydrogenated and carboxylated graphene sheets dispersed in polar (PEO) and nonpolar (PE) short polymer matrices (i.e., matrices containing chains of low molecular weight). Our focus is twofold: one is the study of the structural and dynamical properties of short polymer chains and the way they are affected by functionalized graphene sheets, while the other is the effect of the polymer matrices on the behavior of the graphene sheets.
Airborne simulation of Shuttle/Spacelab management and operation
NASA Technical Reports Server (NTRS)
Mulholland, D. R.; Neel, C. B.
1976-01-01
The ASSESS (Airborne Science/Spacelab Experiments System Simulation) program is discussed. A simulated Spacelab operation was carried out aboard the CV-990 airborne laboratory at Ames Research Center. A scientific payload was selected to conduct studies in upper atmospheric physics and infrared astronomy with principal investigators from France, the Netherlands, England and the U.S. Two experiment operators (EOs) from the U.S. and two from Europe were trained to function as proxies for the principal investigators in operating, maintaining, and repairing the scientific instruments. The simulated mission, in which the EOs and a Mission Manager were confined to the aircraft and living quarters for a 1-week period while making scientific observations during nightly flights, provided experience in the overall management of a complex international payload, experiment preparation, testing, and integration, the training and selection of proxy operators, and data handling.
An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform
NASA Astrophysics Data System (ADS)
Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang
2018-06-01
This work discusses a fast and efficient method for constructing an open 3D manipulator virtual simulation platform, which makes it easier for teachers and students to learn about the forward and inverse kinematics of a robot manipulator. The method was carried out using MATLAB, in which the Robotics Toolbox, the MATLAB GUI, and 3D animation, with the help of modelling in SolidWorks, were fully applied to produce a good visualization of the system. The advantages of this approach are its powerful input and output functions and its ability to simulate a 3D manipulator realistically. In this article, a Schunk six-DOF modular manipulator constructed by the author's research group is used as an example. The implementation steps of the method are described in detail, and thereafter an open, realistic, high-level manipulator 3D virtual simulation platform is achieved. With the graphs obtained from simulation, the test results show that the manipulator 3D virtual simulation platform can be constructed quickly with good usability and high maneuverability, and that it can meet the needs of scientific research and teaching.
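As an indication of the kinematics such a platform visualizes, the sketch below computes forward kinematics from a standard Denavit-Hartenberg table in NumPy rather than MATLAB; the six-joint DH parameters are invented for illustration and are not the Schunk arm's actual geometry.

```python
# Forward kinematics from a (hypothetical) standard Denavit-Hartenberg table.
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform for one DH link."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh(theta, d, a, alpha)      # chain the link transforms base -> tool
    return T

# Invented 6-DOF DH table: (d, a, alpha) per joint, lengths in metres.
dh_table = [(0.30, 0.0, np.pi / 2), (0.0, 0.35, 0.0), (0.0, 0.08, np.pi / 2),
            (0.30, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2), (0.10, 0.0, 0.0)]
q = np.deg2rad([10, -30, 45, 0, 60, 0])
print("end-effector position:", forward_kinematics(q, dh_table)[:3, 3])
```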
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bland, Arthur S Buddy; Hack, James J; Baker, Ann E
Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.
He, Shuijian; Chen, Wei
2015-04-28
Because of the excellent intrinsic properties, especially the strong mechanical strength, extraordinarily high surface area and extremely high conductivity, graphene is deemed as a versatile building block for fabricating functional materials for energy production and storage applications. In this article, the recent progress in the assembly of binder-free and self-standing graphene-based materials, as well as their application in supercapacitors are reviewed, including electrical double layer capacitors, pseudocapacitors, and asymmetric supercapacitors. Various fabrication strategies and the influence of structures on the capacitance performance of 3D graphene-based materials are discussed. We finally give concluding remarks and an outlook on the scientific design of binder-free and self-standing graphene materials for achieving better capacitance performance.
NASA Astrophysics Data System (ADS)
He, Shuijian; Chen, Wei
2015-04-01
Because of the excellent intrinsic properties, especially the strong mechanical strength, extraordinarily high surface area and extremely high conductivity, graphene is deemed as a versatile building block for fabricating functional materials for energy production and storage applications. In this article, the recent progress in the assembly of binder-free and self-standing graphene-based materials, as well as their application in supercapacitors are reviewed, including electrical double layer capacitors, pseudocapacitors, and asymmetric supercapacitors. Various fabrication strategies and the influence of structures on the capacitance performance of 3D graphene-based materials are discussed. We finally give concluding remarks and an outlook on the scientific design of binder-free and self-standing graphene materials for achieving better capacitance performance.
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises
Marquis-Favre, Catherine; Morel, Julien
2015-01-01
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight the potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment in which participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of certain acoustical factors, non-acoustical factors, and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested on data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term appeared to be the best predictors for the two combined noise sources under study, even with large differences in sound pressure level. These results reinforce the need to focus on perceptual models and to improve the prediction of partial annoyances. PMID:26197326
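As an illustration of the kind of perceptual model the abstract refers to, the sketch below fits a total-annoyance model with an interaction term to hypothetical partial-annoyance ratings. The linear-plus-interaction form and all numbers are assumptions for illustration, not the authors' exact model.

```python
# Minimal sketch (not the authors' model): fit a perceptual total-annoyance model
# with an interaction term from partial annoyance ratings.
import numpy as np

# Hypothetical ratings on a 0-10 scale: partial annoyance due to road traffic (A_r),
# partial annoyance due to industrial noise (A_i), and reported total annoyance (A_t).
A_r = np.array([2.0, 5.5, 7.0, 3.5, 8.0, 6.0])
A_i = np.array([1.0, 4.0, 2.5, 6.0, 7.5, 3.0])
A_t = np.array([2.5, 6.0, 7.0, 6.5, 9.0, 6.5])

# Design matrix for A_t ~ b0 + b1*A_r + b2*A_i + b3*A_r*A_i (interaction term).
X = np.column_stack([np.ones_like(A_r), A_r, A_i, A_r * A_i])
coef, *_ = np.linalg.lstsq(X, A_t, rcond=None)
print("fitted coefficients (b0, b1, b2, b3):", coef)
print("predicted total annoyance:", X @ coef)
```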
Prospects for Cherenkov Telescope Array Observations of the Young Supernova Remnant RX J1713.7-3946
NASA Astrophysics Data System (ADS)
Acero, F.; Aloisio, R.; Amans, J.; Amato, E.; Antonelli, L. A.; Aramo, C.; Armstrong, T.; Arqueros, F.; Asano, K.; Ashley, M.; Backes, M.; Balazs, C.; Balzer, A.; Bamba, A.; Barkov, M.; Barrio, J. A.; Benbow, W.; Bernlöhr, K.; Beshley, V.; Bigongiari, C.; Biland, A.; Bilinsky, A.; Bissaldi, E.; Biteau, J.; Blanch, O.; Blasi, P.; Blazek, J.; Boisson, C.; Bonanno, G.; Bonardi, A.; Bonavolontà, C.; Bonnoli, G.; Braiding, C.; Brau-Nogué, S.; Bregeon, J.; Brown, A. M.; Bugaev, V.; Bulgarelli, A.; Bulik, T.; Burton, M.; Burtovoi, A.; Busetto, G.; Böttcher, M.; Cameron, R.; Capalbi, M.; Caproni, A.; Caraveo, P.; Carosi, R.; Cascone, E.; Cerruti, M.; Chaty, S.; Chen, A.; Chen, X.; Chernyakova, M.; Chikawa, M.; Chudoba, J.; Cohen-Tanugi, J.; Colafrancesco, S.; Conforti, V.; Contreras, J. L.; Costa, A.; Cotter, G.; Covino, S.; Covone, G.; Cumani, P.; Cusumano, G.; D'Ammando, F.; D'Urso, D.; Daniel, M.; Dazzi, F.; De Angelis, A.; De Cesare, G.; De Franco, A.; De Frondat, F.; de Gouveia Dal Pino, E. M.; De Lisio, C.; de los Reyes Lopez, R.; De Lotto, B.; de Naurois, M.; De Palma, F.; Del Santo, M.; Delgado, C.; della Volpe, D.; Di Girolamo, T.; Di Giulio, C.; Di Pierro, F.; Di Venere, L.; Doro, M.; Dournaux, J.; Dumas, D.; Dwarkadas, V.; Díaz, C.; Ebr, J.; Egberts, K.; Einecke, S.; Elsässer, D.; Eschbach, S.; Falceta-Goncalves, D.; Fasola, G.; Fedorova, E.; Fernández-Barral, A.; Ferrand, G.; Fesquet, M.; Fiandrini, E.; Fiasson, A.; Filipovíc, M. D.; Fioretti, V.; Font, L.; Fontaine, G.; Franco, F. J.; Freixas Coromina, L.; Fujita, Y.; Fukui, Y.; Funk, S.; Förster, A.; Gadola, A.; Garcia López, R.; Garczarczyk, M.; Giglietto, N.; Giordano, F.; Giuliani, A.; Glicenstein, J.; Gnatyk, R.; Goldoni, P.; Grabarczyk, T.; Graciani, R.; Graham, J.; Grandi, P.; Granot, J.; Green, A. J.; Griffiths, S.; Gunji, S.; Hakobyan, H.; Hara, S.; Hassan, T.; Hayashida, M.; Heller, M.; Helo, J. C.; Hinton, J.; Hnatyk, B.; Huet, J.; Huetten, M.; Humensky, T. B.; Hussein, M.; Hörandel, J.; Ikeno, Y.; Inada, T.; Inome, Y.; Inoue, S.; Inoue, T.; Inoue, Y.; Ioka, K.; Iori, M.; Jacquemier, J.; Janecek, P.; Jankowsky, D.; Jung, I.; Kaaret, P.; Katagiri, H.; Kimeswenger, S.; Kimura, S.; Knödlseder, J.; Koch, B.; Kocot, J.; Kohri, K.; Komin, N.; Konno, Y.; Kosack, K.; Koyama, S.; Kraus, M.; Kubo, H.; Kukec Mezek, G.; Kushida, J.; La Palombara, N.; Lalik, K.; Lamanna, G.; Landt, H.; Lapington, J.; Laporte, P.; Lee, S.; Lees, J.; Lefaucheur, J.; Lenain, J.-P.; Leto, G.; Lindfors, E.; Lohse, T.; Lombardi, S.; Longo, F.; Lopez, M.; Lucarelli, F.; Luque-Escamilla, P. L.; López-Coto, R.; Maccarone, M. C.; Maier, G.; Malaguti, G.; Mandat, D.; Maneva, G.; Mangano, S.; Marcowith, A.; Martí, J.; Martínez, M.; Martínez, G.; Masuda, S.; Maurin, G.; Maxted, N.; Melioli, C.; Mineo, T.; Mirabal, N.; Mizuno, T.; Moderski, R.; Mohammed, M.; Montaruli, T.; Moralejo, A.; Mori, K.; Morlino, G.; Morselli, A.; Moulin, E.; Mukherjee, R.; Mundell, C.; Muraishi, H.; Murase, K.; Nagataki, S.; Nagayoshi, T.; Naito, T.; Nakajima, D.; Nakamori, T.; Nemmen, R.; Niemiec, J.; Nieto, D.; Nievas-Rosillo, M.; Nikołajuk, M.; Nishijima, K.; Noda, K.; Nogues, L.; Nosek, D.; Novosyadlyj, B.; Nozaki, S.; Ohira, Y.; Ohishi, M.; Ohm, S.; Okumura, A.; Ong, R. A.; Orito, R.; Orlati, A.; Ostrowski, M.; Oya, I.; Padovani, M.; Palacio, J.; Palatka, M.; Paredes, J. 
M.; Pavy, S.; Pe'er, A.; Persic, M.; Petrucci, P.; Petruk, O.; Pisarski, A.; Pohl, M.; Porcelli, A.; Prandini, E.; Prast, J.; Principe, G.; Prouza, M.; Pueschel, E.; Pühlhofer, G.; Quirrenbach, A.; Rameez, M.; Reimer, O.; Renaud, M.; Ribó, M.; Rico, J.; Rizi, V.; Rodriguez, J.; Rodriguez Fernandez, G.; Rodríguez Vázquez, J. J.; Romano, P.; Romeo, G.; Rosado, J.; Rousselle, J.; Rowell, G.; Rudak, B.; Sadeh, I.; Safi-Harb, S.; Saito, T.; Sakaki, N.; Sanchez, D.; Sangiorgi, P.; Sano, H.; Santander, M.; Sarkar, S.; Sawada, M.; Schioppa, E. J.; Schoorlemmer, H.; Schovanek, P.; Schussler, F.; Sergijenko, O.; Servillat, M.; Shalchi, A.; Shellard, R. C.; Siejkowski, H.; Sillanpää, A.; Simone, D.; Sliusar, V.; Sol, H.; Stanič, S.; Starling, R.; Stawarz, Ł.; Stefanik, S.; Stephan, M.; Stolarczyk, T.; Szanecki, M.; Szepieniec, T.; Tagliaferri, G.; Tajima, H.; Takahashi, M.; Takeda, J.; Tanaka, M.; Tanaka, S.; Tejedor, L. A.; Telezhinsky, I.; Temnikov, P.; Terada, Y.; Tescaro, D.; Teshima, M.; Testa, V.; Thoudam, S.; Tokanai, F.; Torres, D. F.; Torresi, E.; Tosti, G.; Townsley, C.; Travnicek, P.; Trichard, C.; Trifoglio, M.; Tsujimoto, S.; Vagelli, V.; Vallania, P.; Valore, L.; van Driel, W.; van Eldik, C.; Vandenbroucke, J.; Vassiliev, V.; Vecchi, M.; Vercellone, S.; Vergani, S.; Vigorito, C.; Vorobiov, S.; Vrastil, M.; Vázquez Acosta, M. L.; Wagner, S. J.; Wagner, R.; Wakely, S. P.; Walter, R.; Ward, J. E.; Watson, J. J.; Weinstein, A.; White, M.; White, R.; Wierzcholska, A.; Wilcox, P.; Williams, D. A.; Wischnewski, R.; Wojcik, P.; Yamamoto, T.; Yamamoto, H.; Yamazaki, R.; Yanagita, S.; Yang, L.; Yoshida, T.; Yoshida, M.; Yoshiike, S.; Yoshikoshi, T.; Zacharias, M.; Zampieri, L.; Zanin, R.; Zavrtanik, M.; Zavrtanik, D.; Zdziarski, A.; Zech, A.; Zechlin, H.; Zhdanov, V.; Ziegler, A.; Zorn, J.
2017-05-01
We perform simulations for future Cherenkov Telescope Array (CTA) observations of RX J1713.7-3946, a young supernova remnant (SNR) and one of the brightest sources ever discovered in very high energy (VHE) gamma rays. Special attention is paid to exploring possible spatial (anti)correlations of gamma rays with emission at other wavelengths, in particular X-rays and CO/H I emission. We present a series of simulated images of RX J1713.7-3946 for CTA based on a set of observationally motivated models for the gamma-ray emission. In these models, VHE gamma rays produced by high-energy electrons are assumed to trace the nonthermal X-ray emission observed by XMM-Newton, whereas those originating from relativistic protons delineate the local gas distributions. The local atomic and molecular gas distributions are deduced by the NANTEN team from CO and H I observations. Our primary goal is to show how one can distinguish the emission mechanism(s) of the gamma rays (i.e., hadronic versus leptonic, or a mixture of the two) through information provided by their spatial distribution, spectra, and time variation. This work is the first attempt to quantitatively evaluate the capabilities of CTA to achieve various proposed scientific goals by observing this important cosmic particle accelerator.
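The template-mixing idea can be sketched as a toy prediction that blends a leptonic template (tracing the X-ray map) with a hadronic template (tracing the gas map) and draws a Poisson realization of a counts map. The arrays, mixing fraction, and exposure scaling below are hypothetical; the actual study uses CTA instrument response functions rather than this simplification.

```python
# Minimal sketch (not the CTA pipeline): mix a leptonic template (tracing nonthermal
# X-rays) with a hadronic template (tracing gas), then draw a Poisson realization to
# mimic a counts map. The map arrays below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64
xray_map = rng.random((ny, nx))   # stand-in for an XMM-Newton X-ray template
gas_map = rng.random((ny, nx))    # stand-in for a CO/H I gas-column template

def predicted_counts(f_hadronic, exposure=5e3):
    """Mix templates (each normalized to unit sum) and scale to expected counts."""
    lep = xray_map / xray_map.sum()
    had = gas_map / gas_map.sum()
    model = (1.0 - f_hadronic) * lep + f_hadronic * had
    return exposure * model

counts = rng.poisson(predicted_counts(f_hadronic=0.7))
print("total simulated counts:", counts.sum())
```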
Multigrid treatment of implicit continuum diffusion
NASA Astrophysics Data System (ADS)
Francisquez, Manaure; Zhu, Ben; Rogers, Barrett
2017-10-01
Implicit treatment of diffusive terms of various differential orders, common in continuum mechanics modeling such as computational fluid dynamics, is investigated with spectral and multigrid algorithms in non-periodic 2D domains. In doubly periodic time-dependent problems these terms can be handled efficiently and implicitly by spectral methods, but in non-periodic systems solved with distributed-memory parallel computing and 2D domain decomposition, this efficiency is lost for large numbers of processors. We have built and present here a multigrid algorithm for these types of problems, which outperforms a spectral solution that employs the highly optimized FFTW library. This multigrid algorithm is not only suitable for high-performance computing but may also be able to treat implicit diffusion of arbitrary order efficiently by introducing auxiliary equations of lower order. We test these solvers for fourth- and sixth-order diffusion with idealized harmonic test functions as well as a turbulent 2D magnetohydrodynamic simulation. It is also shown that an anisotropic operator without cross-terms can improve model accuracy and speed, and we examine the impact that the various diffusion operators have on the energy, the enstrophy, and the qualitative aspect of a simulation. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
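For context, the periodic spectral baseline mentioned in the abstract reduces an implicit hyperdiffusion step to a pointwise division in Fourier space. The sketch below assumes backward-Euler time stepping and fourth-order diffusion on a doubly periodic grid; it is an illustrative baseline under those assumptions, not the multigrid solver itself.

```python
# Minimal sketch (assumptions: backward-Euler step, fourth-order hyperdiffusion,
# doubly periodic domain) of the spectral baseline:
# (1 + dt * nu * k^4) u_hat^{n+1} = u_hat^n, solved pointwise in Fourier space.
import numpy as np

def implicit_hyperdiffusion_step(u, dt, nu, Lx=2*np.pi, Ly=2*np.pi):
    ny, nx = u.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    u_hat = np.fft.fft2(u)
    u_hat /= (1.0 + dt * nu * k2**2)   # k^4 symbol of the biharmonic operator
    return np.real(np.fft.ifft2(u_hat))

# Example: a harmonic test function decays at the expected analytic rate.
x = np.linspace(0, 2*np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, x)
u0 = np.sin(3*X) * np.cos(3*Y)
u1 = implicit_hyperdiffusion_step(u0, dt=1e-3, nu=1.0)
print("amplitude ratio after one step:", np.max(np.abs(u1)) / np.max(np.abs(u0)))
```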
An equivalent circuit model for terahertz quantum cascade lasers: Modeling and experiments
NASA Astrophysics Data System (ADS)
Yao, Chen; Xu, Tian-Hong; Wan, Wen-Jian; Zhu, Yong-Hao; Cao, Jun-Cheng
2015-09-01
Terahertz quantum cascade lasers (THz QCLs) emitting at 4.4 THz are fabricated and characterized. An equivalent circuit model is established based on five-level rate equations to describe their characteristics. To illustrate the capability of the model, the steady-state and dynamic performance of the fabricated THz QCLs is simulated with the model. Compared with sophisticated numerical methods, the presented model offers fast calculation and good compatibility with circuit simulation for system-level designs and optimizations. The validity of the model is verified by the experimental and numerical results. Project supported by the National Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61404149), the Major National Development Project of Scientific Instrument and Equipment, China (Grant No. 2011YQ150021), the National Science and Technology Major Project, China (Grant No. 2011ZX02707), the Major Project, China (Grant No. YYYJ-1123-1), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology, China (Grant Nos. 14530711300).
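The five-level rate equations and circuit parameters are not reproduced in the abstract, so the sketch below integrates a generic two-variable laser rate-equation system (upper-level population and photon number) to illustrate the turn-on dynamics an equivalent-circuit model is meant to capture. This is not the paper's model, and all parameter values are hypothetical placeholders.

```python
# Minimal sketch (not the paper's five-level model): generic two-variable laser rate
# equations (upper-level population N, photon number S) integrated with SciPy.
# All parameter values below are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

tau_n, tau_p = 1e-11, 5e-12     # carrier and photon lifetimes (s), illustrative
g, pump = 1e4, 5e21             # gain coefficient and pump rate, illustrative

def rates(t, y):
    N, S = y
    dN = pump - N / tau_n - g * N * S
    dS = g * N * S - S / tau_p
    return [dN, dS]

sol = solve_ivp(rates, (0.0, 2e-9), [0.0, 1.0], method="LSODA", max_step=1e-12)
print("final photon number:", sol.y[1, -1])
```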
Sánchez-Montero, Rocío; Camacho-Gómez, Carlos; López-Espí, Pablo-Luís; Salcedo-Sanz, Sancho
2018-06-21
This paper proposes a low-profile textile-modified meander line Inverted-F Antenna (IFA) with variable width and spacing meanders, for Industrial Scientific Medical (ISM) 2.4-GHz Wireless Body Area Networks (WBAN), optimized with a novel metaheuristic algorithm. Specifically, a metaheuristic known as Coral Reefs Optimization with Substrate Layer (CRO-SL) is used to obtain an optimal antenna for sensor systems, which properly and resiliently covers the 2.4-2.45-GHz ISM band. Flexible pad foam with a 1.1-mm thickness has been used to fabricate the designed prototype. We have used a version of the algorithm that is able to combine different search operators within a single population of solutions. This approach is well suited to hard optimization problems, such as the design of the proposed meander line IFA. During the optimization phase with the CRO-SL, the proposed antenna has been simulated using CST Microwave Studio software, linked to the CRO-SL by means of a MATLAB implementation and Visual Basic for Applications (VBA) code. We fully describe the antenna design process, the adaptation of the CRO-SL approach to this problem, several practical aspects of the optimization, and details of the algorithm's performance. To validate the simulation results, we have constructed and measured two prototypes of the antenna designed with the proposed algorithm. Several practical aspects, such as sensitivity during antenna manufacturing and the agreement between the simulated and constructed antennas, are also detailed in the paper.
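CRO-SL itself and the CST Microwave Studio coupling are not reproduced here; the sketch below shows a generic population-based loop over hypothetical meander widths and spacings, with a mock fitness function standing in for the electromagnetic solver, just to illustrate how a metaheuristic drives such a design.

```python
# Minimal sketch (not CRO-SL): a generic population-based metaheuristic over meander
# widths and spacings, with a mock fitness function replacing the CST simulation.
# All bounds and targets below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_meanders, pop_size, n_gen = 5, 30, 50
lo, hi = 0.5, 3.0                      # mm bounds per width/spacing (illustrative)

def fitness(x):
    # Placeholder for an electromagnetic solver call: reward designs whose total
    # meander length approaches a hypothetical target of 20 mm.
    return -abs(x.sum() - 20.0)

pop = rng.uniform(lo, hi, size=(pop_size, 2 * n_meanders))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # keep the better half
    children = parents + rng.normal(0.0, 0.05, parents.shape)   # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = max(pop, key=fitness)
print("best design (widths and spacings, mm):", np.round(best, 2))
```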