Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, S.; Havloej, F.; Lago, D.
2013-07-01
The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major refactoring of the ORIGEN code architecture, part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper will present an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework. (authors)
The Therapy Process Observational Coding System for Child Psychotherapy Strategies Scale
ERIC Educational Resources Information Center
McLeod, Bryce D.; Weisz, John R.
2010-01-01
Most everyday child and adolescent psychotherapy does not follow manuals that document the procedures. Consequently, usual clinical care has remained poorly understood and rarely studied. The Therapy Process Observational Coding System for Child Psychotherapy-Strategies scale (TPOCS-S) is an observational measure of youth psychotherapy procedures…
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy. © The Author(s) 2015.
Technology Infusion of CodeSonar into the Space Network Ground Segment
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2009-01-01
This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off the shelf system that analyzes programs written in C, C++ or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study is focused on large scale software developed using formal processes. The systems studied are mission critical in nature but some use commodity computer systems.
ERIC Educational Resources Information Center
Fjermestad, Krister W.; McLeod, Bryce D.; Heiervang, Einar R.; Havik, Odd E.; Ost, Lars-Goran; Haugland, Bente S. M.
2012-01-01
The aim of this study was to examine the factor structure and psychometric properties of an observer-rated youth alliance measure, the Therapy Process Observational Coding System for Child Psychotherapy-Alliance scale (TPOCS-A). The sample was 52 youth diagnosed with anxiety disorders ("M" age = 12.43, "SD" = 2.23, range = 15;…
ERIC Educational Resources Information Center
Salisbury, Amy L.; Fallone, Melissa Duncan; Lester, Barry
2005-01-01
This review provides an overview and definition of the concept of neurobehavior in human development. Two neurobehavioral assessments used by the authors in current fetal and infant research are discussed: the NICU Network Neurobehavioral Assessment Scale and the Fetal Neurobehavior Coding System. This review will present how the two assessments…
Robust Control of Multivariable and Large Scale Systems.
1986-03-14
(Report documentation page, garbled in extraction. Recoverable details: DTIC report AD-A175 (accession suffix illegible), Honeywell Systems and Research Center, 3660 Marshall Street NE, Minneapolis, MN; J. C. Doyle et al.; monitoring organization: Air Force Office of Scientific Research, Building 410.)
Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs
NASA Astrophysics Data System (ADS)
Gratadour, Damien
2011-09-01
Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for VLT-sized AO systems and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
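As an illustration of the centroiding step described above, the following is a minimal NumPy sketch of center-of-gravity centroiding on a single Shack-Hartmann sub-aperture image. It is illustrative only and is not the authors' CUDA implementation; the array shape and the synthetic spot are assumptions.

```python
import numpy as np

def cog_centroid(subap: np.ndarray):
    """Center-of-gravity centroid (x, y) of one sub-aperture image, in pixels."""
    total = subap.sum()
    if total <= 0:
        return np.nan, np.nan                  # empty or dark sub-aperture
    ys, xs = np.indices(subap.shape)
    return (xs * subap).sum() / total, (ys * subap).sum() / total

# Example: a synthetic 16x16 sub-aperture with a spot offset from the center
img = np.zeros((16, 16))
img[9, 11] = 1.0
print(cog_centroid(img))  # -> (11.0, 9.0)
```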
FAPRS Manual: Manual for the Functional Analytic Psychotherapy Rating Scale
ERIC Educational Resources Information Center
Callaghan, Glenn M.; Follette, William C.
2008-01-01
The Functional Analytic Psychotherapy Rating Scale (FAPRS) is a behavioral coding system designed to capture those essential client and therapist behaviors that occur during Functional Analytic Psychotherapy (FAP). The FAPRS manual presents the purpose and rules for documenting essential aspects of FAP. The FAPRS codes are exclusive and exhaustive…
Klein, Gerwin; Andronick, June; Keller, Gabriele; Matichuk, Daniel; Murray, Toby; O'Connor, Liam
2017-10-13
We present recent work on building and scaling trustworthy systems with formal, machine-checkable proof from the ground up, including the operating system kernel, at the level of binary machine code. We first give a brief overview of the seL4 microkernel verification and how it can be used to build verified systems. We then show two complementary techniques for scaling these methods to larger systems: proof engineering, to estimate verification effort; and code/proof co-generation, for scalable development of provably trustworthy applications. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).
Quantum error correction in crossbar architectures
NASA Astrophysics Data System (ADS)
Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie
2018-07-01
A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
HEC Applications on Columbia Project
NASA Technical Reports Server (NTRS)
Taft, Jim
2004-01-01
NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems is 3 TFLOP/s in peak performance - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single system image machine with one Linux OS, one high performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as to provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's Missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.
Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.
Uzun, Vassilya; Bilgin, Sami
2016-01-01
For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
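As a rough sketch of how such a tag could be produced, the snippet below encodes a hypothetical patient-record URL into a QR code image using the third-party Python qrcode package. The URL, file name, and choice of library are assumptions made for illustration, not details of the deployed Turkish system.

```python
import qrcode  # third-party package: pip install qrcode[pil]

# Hypothetical URL of a patient's record page (illustrative only)
record_url = "https://example.org/qr-identity/patients/12345"

# Generate the QR code image and save it for printing on a bracelet or ID card
img = qrcode.make(record_url)
img.save("patient_12345_tag.png")
```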
Establishment and assessment of code scaling capability
NASA Astrophysics Data System (ADS)
Lim, Jaehyok
In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small break loss of coolant (SB LOCA) accident scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS as an emergency core cooling system provided adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system that is composed of Drywell (DW) and Wetwell (WW) below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactor (LWR). The major components of the CSAU methodology that were highlighted particularly focused on the scaling issues of experiments and models and their applicability to the nuclear power plant transient and accidents. The major thermal-hydraulic phenomena to be analyzed were identified and the predictive models adopted in RELAP5/MOD3.3 (Patch03) code were briefly reviewed.
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits computational efficiency since similarity can be efficiently measured by the Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
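The Hamming-distance comparison mentioned above can be made concrete with a small sketch; the 256-bit code length and the random codes are assumptions for illustration, and the snippet is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 256-bit binary SIFT codes, packed as 32 uint8 bytes each
code_a = rng.integers(0, 256, size=32, dtype=np.uint8)
code_b = rng.integers(0, 256, size=32, dtype=np.uint8)

# Hamming distance = number of differing bits, computed via XOR + bit count
hamming = int(np.unpackbits(code_a ^ code_b).sum())
print(hamming)
```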
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
Optimizing fusion PIC code performance at scale on Cori Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, T. S.; Deslippe, J.
In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
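The roofline methodology referenced above bounds attainable performance by the smaller of peak compute throughput and memory traffic; the sketch below shows the basic relation with made-up hardware numbers (they are not the measured Knights Landing figures from the paper).

```python
def roofline(peak_flops: float, mem_bandwidth: float, arithmetic_intensity: float) -> float:
    """Attainable FLOP/s under the simple roofline model."""
    return min(peak_flops, mem_bandwidth * arithmetic_intensity)

# Hypothetical numbers: 3 TFLOP/s peak, 400 GB/s bandwidth, 2 FLOP/byte kernel
print(roofline(3.0e12, 4.0e11, 2.0))  # 8.0e11 FLOP/s -> this kernel is bandwidth-bound
```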
Parallel Scaling Characteristics of Selected NERSC User ProjectCodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skinner, David; Verdier, Francesca; Anand, Harsh
This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.
Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys
2018-04-01
Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
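The inter-rater reliability figures quoted above are percent-agreement statistics; a minimal sketch of such a computation is shown below. The item codes are invented, and this is not the project's actual scoring script.

```python
def percent_agreement(codes_a, codes_b) -> float:
    """Fraction of items on which two coders assigned the same code."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Two raters scoring the same six video segments on one checklist item
print(percent_agreement([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]))  # ~0.83
```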
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
Bosse, Stefan
2015-02-16
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
NASA Astrophysics Data System (ADS)
Harfst, S.; Portegies Zwart, S.; McMillan, S.
2008-12-01
We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a ``Noah's Ark'' milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
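The module/framework split described above can be illustrated with a toy coupling loop; the class names and interface below are invented for illustration and do not reproduce the actual MUSE API or its inter-language binding layer.

```python
class Module:
    """Minimal interface each domain solver is assumed to expose to the framework."""
    def evolve(self, t_end: float) -> None:
        raise NotImplementedError

class StellarDynamics(Module):
    def evolve(self, t_end: float) -> None:
        print(f"advancing N-body dynamics to t = {t_end}")

class StellarEvolution(Module):
    def evolve(self, t_end: float) -> None:
        print(f"advancing stellar evolution to t = {t_end}")

def couple(modules, t_final: float, dt: float) -> None:
    """Interleave solvers on a shared coupling step (valid when timescales are well separated)."""
    t = 0.0
    while t < t_final:
        t = min(t + dt, t_final)
        for m in modules:
            m.evolve(t)

couple([StellarDynamics(), StellarEvolution()], t_final=3.0, dt=1.0)
```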
High-speed inlet research program and supporting analysis
NASA Technical Reports Server (NTRS)
Coltrin, Robert E.
1990-01-01
The technology challenges faced by the high speed inlet designer are discussed by describing the considerations that went into the design of the Mach 5 research inlet. It is shown that the emerging three dimensional viscous computational fluid dynamics (CFD) flow codes, together with small scale experiments, can be used to guide larger scale full inlet systems research. Then, in turn, the results of the large scale research, if properly instrumented, can be used to validate or at least to calibrate the CFD codes.
Emotional Availability Scale Among Three U.S. Race/Ethnic Groups.
Derscheid, Della J; Fogg, Louis F; Julion, Wrenetha; Johnson, Mary E; Tucker, Sharon; Delaney, Kathleen R
2018-05-01
This study used a cross-sectional design to conduct a subgroup psychometric analysis of the Emotional Availability Scale among matched Hispanic ( n = 20), African American ( n = 20), and European American ( n = 10) English-speaking mother-child dyads in the United States. Differences by race/ethnicity were tested ( p < .05) among (a) Emotional Availability Scale dimensions with ANOVA, and (b) relationships of Emotional Availability Scale dimensions with select Dyadic Parent-Child Interaction Coding System variables with Pearson correlation and matched moderated regression. Internal consistency was .950 (Cronbach's α; N = 50). No significant differences in the six Emotional Availability Scale dimension scores by race/ethnicity emerged. Two Dyadic Parent-Child Interaction Coding System behaviors predicted two Emotional Availability Scale dimensions each for Hispanic and African American mother-child dyads. Results suggest emotional availability similarity among race/ethnic subgroups with few predictive differences of emotional availability dimensions by specific behaviors for Hispanic and African American subgroups.
Improving the sensitivity and specificity of the abbreviated injury scale coding system.
Kramer, C F; Barancik, J I; Thode, H C
1990-01-01
The Abbreviated Injury Scale with Epidemiologic Modifications (AIS 85-EM) was developed to make it possible to code information about anatomic injury types and locations that, although generally available from medical records, is not codable under the standard Abbreviated Injury Scale, published by the American Association for Automotive Medicine in 1985 (AIS 85). In a population-based sample of 3,223 motor vehicle trauma cases, 68 percent of the patients had one or more injuries that were coded to the AIS 85 body region nonspecific category external. When the same patients' injuries were coded using the AIS 85-EM coding procedure, only 15 percent of the patients had injuries that could not be coded to a specific body region. With AIS 85-EM, the proportion of codable head injury cases increased from 16 percent to 37 percent, thereby improving the potential for identifying cases with head and threshold brain injury. The data suggest that body region coding of all injuries is necessary to draw valid and reliable conclusions about changes in injury patterns and their sequelae. The increased specificity of body region coding improves assessments of the efficacy of injury intervention strategies and countermeasure programs using epidemiologic methodology. PMID:2116633
Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.
Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun
2016-04-18
In this paper, we consider the design of a space code for an intensity modulated direct detection multi-input-multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale diversity and small-scale diversity gains and prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all the high dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is that it allows low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code currently available for such systems.
[The Abbreviated Injury Scale (AIS). Options and problems in application].
Haasper, C; Junge, M; Ernstberger, A; Brehme, H; Hannawald, L; Langer, C; Nehmzow, J; Otte, D; Sander, U; Krettek, C; Zwipp, H
2010-05-01
The new AIS (Abbreviated Injury Scale) was released with an update by the AAAM (Association for the Advancement of Automotive Medicine) in 2008. It is a universal scoring system in the field of trauma, applicable in the clinic and in research. In engineering it is used as a classification system for vehicle safety. The AIS can therefore be considered an international, interdisciplinary and universal code of injury severity. This review focuses on a historical overview, potential applications and new coding options in the current version and also outlines the associated problems.
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
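For orientation, the relative sensitivity coefficient discussed above is conventionally written as the fractional change in the eigenvalue k per fractional change in a parameter α (for example, a cross section); this is the standard form rather than a quotation from the report:

\[
S_{k,\alpha} \;=\; \frac{\partial k / k}{\partial \alpha / \alpha} \;=\; \frac{\alpha}{k}\,\frac{\partial k}{\partial \alpha}.
\]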
DiClemente, Carlo C; Crouch, Taylor Berens; Norwood, Amber E Q; Delahanty, Janine; Welsh, Christopher
2015-03-01
Screening, brief intervention, and referral to treatment (SBIRT) has become an empirically supported and widely implemented approach in primary and specialty care for addressing substance misuse. Accordingly, training of providers in SBIRT has increased exponentially in recent years. However, the quality and fidelity of training programs and subsequent interventions are largely unknown because of the lack of SBIRT-specific evaluation tools. The purpose of this study was to create a coding scale to assess quality and fidelity of SBIRT interactions addressing alcohol, tobacco, illicit drugs, and prescription medication misuse. The scale was developed to evaluate performance in an SBIRT residency training program. Scale development was based on training protocol and competencies with consultation from Motivational Interviewing coding experts. Trained medical residents practiced SBIRT with standardized patients during 10- to 15-min videotaped interactions. This study included 25 tapes from the Family Medicine program coded by 3 unique coder pairs with varying levels of coding experience. Interrater reliability was assessed for overall scale components and individual items via intraclass correlation coefficients. Coder pair-specific reliability was also assessed. Interrater reliability was excellent overall for the scale components (>.85) and nearly all items. Reliability was higher for more experienced coders, though still adequate for the trained coder pair. Descriptive data demonstrated a broad range of adherence and skills. Subscale correlations supported concurrent and discriminant validity. Data provide evidence that the MD3 SBIRT Coding Scale is a psychometrically reliable coding system for evaluating SBIRT interactions and can be used to evaluate implementation skills for fidelity, training, assessment, and research. Recommendations for refinement and further testing of the measure are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1991-01-01
The primary methodology during the grant period has been the use of micro- or meso-scale simulations to address specific questions concerning magnetospheric processes related to the aurora and substorm morphology. This approach, while useful in providing some answers, has its limitations. Many of the problems relating to the magnetosphere are inherently global and kinetic. Effort during the last year of the grant period has increasingly focused on development of a global-scale hybrid code to model the entire, coupled magnetosheath - magnetosphere - ionosphere system. In particular, numerical procedures for curvilinear coordinate generation and exactly conservative differencing schemes for hybrid codes in curvilinear coordinates have been developed. The new computer algorithms and the massively parallel computer architectures now make this global code a feasible proposition. Support provided by this project has played an important role in laying the groundwork for the eventual development of a global-scale code to model and forecast magnetospheric weather.
Secure web-based invocation of large-scale plasma simulation codes
NASA Astrophysics Data System (ADS)
Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.
2004-12-01
We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.
Requirements for migration of NSSD code systems from LTSS to NLTSS
NASA Technical Reports Server (NTRS)
Pratt, M.
1984-01-01
The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of change are examined including a description of the computational environment and users in ND. The discussion then turns to requirements, first in a general way, followed by specific requirements, including a proposal for managing the transition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Marshall, William BJ J
In the course of criticality code validation, outlier cases are frequently encountered. Historically, the causes of these unexpected results could be diagnosed only through comparison with other similar cases or through the known presence of a unique component of the critical experiment. The sensitivity and uncertainty (S/U) analysis tools available in the SCALE 6.1 code system provide a much broader range of options to examine underlying causes of outlier cases. This paper presents some case studies performed as a part of the recent validation of the KENO codes in SCALE 6.1 using S/U tools to examine potential causes of biases.
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environment, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system.
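The dimensionality-reduction idea described above can be sketched with a generic POD/SVD-style construction; the snapshot sizes and truncation tolerance below are arbitrary, and this sketch is not the hybrid algorithm of the paper.

```python
import numpy as np

# Snapshot matrix: each column is one full-order model response (hypothetical sizes)
rng = np.random.default_rng(1)
snapshots = rng.random((5000, 40))

# Truncated SVD basis capturing the dominant input-output relationships
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :rank]

# A new full-order response is compressed to `rank` coefficients and reconstructed
y = snapshots[:, 0]
coeffs = basis.T @ y
y_reduced = basis @ coeffs
print(rank, np.linalg.norm(y - y_reduced) / np.linalg.norm(y))
```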
EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.
Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D
2012-01-01
Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (eg, sequence conservation, orthology, synteny …) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.
Moving from Batch to Field Using the RT3D Reactive Transport Modeling System
NASA Astrophysics Data System (ADS)
Clement, T. P.; Gautam, T. R.
2002-12-01
The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs the operator-split strategy, which allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D-reaction package. Further, a utility code, known as BATCHRXN, allows the users to independently test and debug their reaction package. To analyze a new reaction system at a batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of Tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of Tetrachloroethane (PCA) and its daughter products. Reference: Clement, T.P., RT3D - A modular computer code for simulating reactive multi-species transport in 3-Dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report, PNNL-SA-28967, September 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
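As a batch-scale illustration of the kind of reaction system described in the first example, the snippet below integrates a sequential first-order chain PCE → TCE → DCE → VC with SciPy. The rate constants are hypothetical, and this Python sketch only stands in for (it does not reproduce) an RT3D/BATCHRXN reaction package, which per the abstract would be written as a Fortran-90 subroutine.

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical first-order rate constants (1/day); not site-specific values
k_pce, k_tce, k_dce, k_vc = 0.005, 0.003, 0.002, 0.001

def chain(c, t):
    """Sequential first-order degradation PCE -> TCE -> DCE -> VC."""
    pce, tce, dce, vc = c
    return [
        -k_pce * pce,
        k_pce * pce - k_tce * tce,
        k_tce * tce - k_dce * dce,
        k_dce * dce - k_vc * vc,
    ]

t = np.linspace(0.0, 1000.0, 101)              # days
conc = odeint(chain, [1.0, 0.0, 0.0, 0.0], t)  # initial PCE = 1 mg/L, daughters = 0
print(conc[-1])                                # concentrations at t = 1000 days
```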
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613
NASA Astrophysics Data System (ADS)
Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang
2017-02-01
LS3DF, namely the linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently in about 10 minutes using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge leadership computing facility (OLCF). Such speedup is achieved by (a) careful redesign of the computationally heavy kernels and (b) redesign of the communication pattern for heterogeneous supercomputers.
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N²) while full hierarchy searches can be as expensive as O(N³), making analysis highly inefficient for multiple data sets with N≳10. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
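The basic test behind identifying "mutually bound" pairs is whether the two-body orbital energy is negative; the helper below is an illustrative NumPy version of that check and is not code from the companion package.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def mutually_bound(m1, r1, v1, m2, r2, v2) -> bool:
    """True if the pair's relative kinetic plus potential energy is negative."""
    dr = np.linalg.norm(np.asarray(r1, float) - np.asarray(r2, float))
    dv = np.linalg.norm(np.asarray(v1, float) - np.asarray(v2, float))
    mu = m1 * m2 / (m1 + m2)  # reduced mass
    return 0.5 * mu * dv**2 - G * m1 * m2 / dr < 0.0

# Example: two 1e12 kg bodies 10 km apart moving slowly relative to each other
print(mutually_bound(1e12, [0, 0, 0], [0, 0, 0], 1e12, [1e4, 0, 0], [0, 0.05, 0]))  # True
```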
Visual attention mitigates information loss in small- and large-scale neural codes
Sprague, Thomas C; Saproo, Sameer; Serences, John T
2015-01-01
The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
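The Golomb-Rice entropy coding mentioned above encodes a non-negative prediction residual as a unary quotient followed by k binary remainder bits; the sketch below shows the basic (non-adaptive) encoder and is an illustration, not the flight implementation.

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice codeword for non-negative integer n with parameter k (M = 2**k)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")  # unary quotient, then k remainder bits

# Example: small residuals get short codewords when k matches their statistics
for value in (0, 1, 5, 9):
    print(value, rice_encode(value, k=2))
# 0 -> '000', 1 -> '001', 5 -> '1001', 9 -> '11001'
```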
Software engineering and automatic continuous verification of scientific software
NASA Astrophysics Data System (ADS)
Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.
2011-12-01
Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use Bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features, and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing the code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparison to analytical solutions via the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.
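The method of manufactured solutions mentioned at the end can be illustrated independently of Fluidity itself. In the hedged Python sketch below (a toy 1D Poisson problem, not one of the Fluidity tests), the exact solution is chosen in advance, the matching source term is derived analytically, and the observed convergence order is checked against the expected second order.

    import numpy as np

    def poisson_error(n):
        """Max error of a second-order finite-difference solve of -u'' = f on (0,1)
        with u(0) = u(1) = 0, for the manufactured solution u(x) = sin(pi x)."""
        x = np.linspace(0.0, 1.0, n + 1)
        h = 1.0 / n
        f = np.pi**2 * np.sin(np.pi * x[1:-1])        # f = -u'' for the chosen u
        A = (np.diag(2.0 * np.ones(n - 1))
             - np.diag(np.ones(n - 2), 1)
             - np.diag(np.ones(n - 2), -1)) / h**2    # standard tridiagonal Laplacian
        u = np.linalg.solve(A, f)
        return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

    errors = [poisson_error(n) for n in (16, 32, 64)]
    orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
    print(errors, orders)   # observed orders should be close to 2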
Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Bergel, Itsik; Perets, Yona; Shamai, Shlomo
2016-05-01
In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas only increases logarithmically with the number of receive antennas.
CPMIP: measurements of real computational performance of Earth system models in CMIP6
NASA Astrophysics Data System (ADS)
Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett
2017-01-01
A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure the actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
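The abstract does not enumerate the metrics; among those defined by CPMIP are simulated years per day (SYPD) and core-hours per simulated year (CHSY), both computable from an ordinary job log, as the illustrative sketch below shows (the job numbers are invented).

    def sypd(simulated_years, wallclock_hours):
        """Simulated years per wall-clock day."""
        return simulated_years / (wallclock_hours / 24.0)

    def chsy(cores, wallclock_hours, simulated_years):
        """Core-hours consumed per simulated year."""
        return cores * wallclock_hours / simulated_years

    # Hypothetical run: 10 simulated years in 20 wall-clock hours on 1920 cores.
    print(sypd(10, 20))        # 12.0 simulated years per day
    print(chsy(1920, 20, 10))  # 3840.0 core-hours per simulated year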
Supporting Source Code Comprehension during Software Evolution and Maintenance
ERIC Educational Resources Information Center
Alhindawi, Nouh
2013-01-01
This dissertation addresses the problems of program comprehension to support the evolution of large-scale software systems. The research concerns how software engineers locate features and concepts along with categorizing changes within very large bodies of source code along with their versioned histories. More specifically, advanced Information…
TRAC-PF1 code verification with data from the OTIS test facility. [Once-Through Integral System]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childerson, M.T.; Fujita, R.K.
1985-01-01
A computer code (TRAC-PF1/MOD1) developed for predicting transient thermal and hydraulic integral nuclear steam supply system (NSSS) response was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled, experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small break LOCA data set for TRAC verification. The major phases of a small break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool and auxiliary-feedwater initiated boiler-condenser mode heat transfer.
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations on an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
Visual attention mitigates information loss in small- and large-scale neural codes.
Sprague, Thomas C; Saproo, Sameer; Serences, John T
2015-04-01
The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.
2017-12-01
Recently, continuous dense surface deformation data have become obtainable on land and partly on the sea floor; however, the obtained data are not fully utilized for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and the material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. An unstructured FE non-linear seismic wave simulation code has been developed, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. The code for crustal deformation has been further improved and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been improved to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. We are developing methods for forecasting the slip velocity variation on the plate interface. Although the prototype is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model. Furthermore, large-scale simulation codes for monitoring are being implemented on GPU clusters, and analysis tools are being developed to include other functions such as the examination of model errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleck, J.A. Jr.; Morris, J.R.; Thompson, P.F.
1976-10-01
The FLAC code (Fourier Laser Amplifier Code) was used to simulate the CYCLOPS laser system up to the third B-module and to calculate the maximum ripple gain spectrum. The model of this portion of CYCLOPS consists of 33 segments that correspond to 20 optical elements (simulation of the cell requires 2 segments and 12 external air spaces). (MHR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultz, Peter Andrew
The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Competitive code-based fast palmprint identification using a set of cover trees
NASA Astrophysics Data System (ADS)
Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan
2009-06-01
A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor from among all the templates in a database. When applied on a large-scale identification system, it is often necessary to speed up the nearest-neighbor searching process. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate the fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute force searching.
The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.
2006-12-01
Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
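As a minimal, generic illustration of the implicit time stepping described above (a 1D diffusion toy problem, not the extended MHD system), the sketch below advances the solution with Crank-Nicolson steps and a dense direct solve for the implicit system, so the step size is not limited by the explicit CFL condition.

    import numpy as np

    def crank_nicolson_step(u, dt, dx, kappa):
        """One Crank-Nicolson step for u_t = kappa * u_xx with fixed boundary
        values, using a direct solve for the implicit linear system."""
        n = len(u)
        r = kappa * dt / (2.0 * dx**2)
        L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        L[0, :] = 0.0
        L[-1, :] = 0.0                       # keep boundary points fixed
        A = np.eye(n) - r * L                # implicit half of the operator
        B = np.eye(n) + r * L                # explicit half of the operator
        return np.linalg.solve(A, B @ u)

    x = np.linspace(0.0, 1.0, 101)
    u = np.sin(np.pi * x)
    for _ in range(100):                     # dt far above the explicit stability limit
        u = crank_nicolson_step(u, dt=1e-2, dx=x[1] - x[0], kappa=1.0)
    print(u.max())                           # decays roughly as exp(-pi**2 * t)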
GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.
E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N
2018-03-01
GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide
Tang, William; Wang, Bei; Ethier, Stephane; ...
2016-11-01
The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code, including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
Tohira, Hideo; Jacobs, Ian; Mountain, David; Gibson, Nick; Yeo, Allen; Ueno, Masato; Watanabe, Hiroaki
2011-12-01
The Abbreviated Injury Scale 2008 (AIS 2008) is the most recent injury coding system. A mapping table from the previous AIS 98 to AIS 2008 is available. However, AIS 98 codes that are unmappable to AIS 2008 codes exist in this table. Furthermore, some AIS 98 codes can be mapped to multiple candidate AIS 2008 codes with different severities. We aimed to modify the original table to adjust the severities and to validate these changes. We modified the original table by adding links from unmappable AIS 98 codes to AIS 2008 codes. For AIS 98 codes that mapped to multiple candidates with different severities, we also assigned the weighted average of the candidate severities as an adjusted severity. The proportion of cases whose injury severity scores (ISSs) were computable was compared. We also compared the agreement of the ISS and New ISS (NISS) between manually determined AIS 2008 codes (MAN) and codes mapped using our table (MAP), with unadjusted or adjusted severities. ISSs could be computed for all cases with our modified table and for 72.3% of cases with the original table. The agreement between MAN and MAP with respect to the ISS and NISS was substantial (intraclass correlation coefficient = 0.939 for ISS and 0.943 for NISS). Using adjusted severities, the agreement for the ISS and NISS improved to 0.953 (p = 0.11) and 0.963 (p = 0.007), respectively. Our modified mapping table seems to allow more ISSs to be computed than the original table. Severity scores exhibited substantial agreement between MAN and MAP. The use of adjusted severities improved these agreements further.
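By its standard definition, the ISS referred to here is the sum of the squares of the highest AIS severity in each of the three most severely injured body regions (capped at 75 when any injury has severity 6). The sketch below computes an ISS from (region, severity) pairs and a weighted-average adjusted severity for a multi-candidate mapping; the example data are invented, not taken from the study.

    def iss(injuries):
        """Injury Severity Score from (body_region, ais_severity) pairs."""
        worst = {}
        for region, severity in injuries:
            worst[region] = max(worst.get(region, 0), severity)
        if 6 in worst.values():                       # unsurvivable-injury convention
            return 75
        top3 = sorted(worst.values(), reverse=True)[:3]
        return sum(s * s for s in top3)

    def adjusted_severity(candidates):
        """Weighted-average severity over (weight, severity) candidate pairs."""
        return sum(w * s for w, s in candidates) / sum(w for w, _ in candidates)

    print(iss([("head", 4), ("chest", 3), ("chest", 5), ("abdomen", 2)]))  # 5^2 + 4^2 + 2^2 = 45
    print(adjusted_severity([(2, 3), (1, 4)]))                             # 3.33...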
The SERGISAI procedure for seismic risk assessment
NASA Astrophysics Data System (ADS)
Zonno, G.; Garcia-Fernandez, M.; Jimenez, M.J.; Menoni, S.; Meroni, F.; Petrini, V.
The European project SERGISAI developed a computational tool where a methodology for seismic risk assessment at different geographical scales has been implemented. Experts of various disciplines, including seismologists, engineers, planners, geologists, and computer scientists, co-operated in an actual multidisciplinary process to develop this tool. Standard procedural codes, Geographical Information Systems (GIS), and Artificial Intelligence (AI) techniques compose the whole system, that will enable the end user to carry out a complete seismic risk assessment at three geographical scales: regional, sub-regional and local. At present, single codes or models that have been incorporated are not new in general, but the modularity of the prototype, based on a user-friendly front-end, offers potential users the possibility of updating or replacing any code or model if desired. The proposed procedure is a first attempt to integrate tools, codes and methods for assessing expected earthquake damage, and it was mainly designed to become a useful support for civil defence and land use planning agencies. Risk factors have been treated in the most suitable way for each one, in terms of level of detail, kind of parameters and units of measure. Identifying various geographical scales is not a mere question of dimension, since entities to be studied correspond to areas defined by administrative and geographical borders. The procedure was applied in the following areas: Toscana in Italy, for the regional scale, the Garfagnana area in Toscana, for the sub-regional scale, and a part of Barcelona city, Spain, for the local scale.
TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFANS against full-scale and ADP 22" fan rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.
Aerodynamic Simulation of the MARINTEK Braceless Semisubmersible Wave Tank Tests
NASA Astrophysics Data System (ADS)
Stewart, Gordon; Muskulus, Michael
2016-09-01
Model scale experiments of floating offshore wind turbines are important both for platform design in industry and for numerical model validation in the research community. An important consideration in the wave tank testing of offshore wind turbines is scaling effects, especially the tension between accurate scaling of hydrodynamic and aerodynamic forces. The recent MARINTEK braceless semisubmersible wave tank experiment utilizes a novel aerodynamic force actuator to decouple the scaling of the aerodynamic forces. This actuator consists of an array of motors that pull on cables to provide aerodynamic forces that are calculated by a blade-element momentum code in real time as the experiment is conducted. This type of system has the advantage of supplying realistically scaled aerodynamic forces that include dynamic forces from platform motion, but does not provide the insights into the accuracy of the aerodynamic models that an actual model-scale rotor could provide. The modeling of this system presents an interesting challenge, as there are two ways to simulate the aerodynamics: either by using the turbulent wind fields as inputs to the aerodynamic model of the design code, or by bypassing the aerodynamic model and using the forces applied to the experimental turbine as direct inputs to the simulation. This paper investigates best practices for modeling this type of novel aerodynamic actuator using a modified wind turbine simulation tool, and demonstrates that bypassing the dynamic aerodynamics solver of design codes can lead to erroneous results.
Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1999-01-01
An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.
Scaling in nature: From DNA through heartbeats to weather
NASA Astrophysics Data System (ADS)
Havlin, S.; Buldyrev, S. V.; Bunde, A.; Goldberger, A. L.; Ivanov, P. Ch.; Peng, C.-K.; Stanley, H. E.
1999-12-01
The purpose of this talk is to describe some recent progress in applying scaling concepts to various systems in nature. We review several systems characterized by scaling laws, such as DNA sequences, heartbeat rates and weather variations. We discuss the finding that the exponent α quantifying the scaling in DNA is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the scaling exponent α is smaller during sleep periods compared to wake periods. We also discuss recent findings that suggest a universal scaling exponent characterizing weather fluctuations.
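The scaling exponent alpha discussed in this and the following entry is commonly estimated with detrended fluctuation analysis (DFA). The sketch below is a plain DFA implementation run on synthetic white noise (not the authors' data or code), for which alpha should come out near 0.5.

    import numpy as np

    def dfa_alpha(series, window_sizes):
        """Estimate the DFA scaling exponent alpha of a one-dimensional series."""
        y = np.cumsum(series - np.mean(series))                # integrated profile
        fluct = []
        for n in window_sizes:
            rms = []
            for i in range(len(y) // n):
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
                rms.append(np.mean((seg - trend) ** 2))
            fluct.append(np.sqrt(np.mean(rms)))
        slope, _ = np.polyfit(np.log(window_sizes), np.log(fluct), 1)
        return slope

    rng = np.random.default_rng(0)
    print(dfa_alpha(rng.normal(size=10000), [16, 32, 64, 128, 256]))   # ~0.5 for white noise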
Scaling in nature: from DNA through heartbeats to weather
NASA Technical Reports Server (NTRS)
Havlin, S.; Buldyrev, S. V.; Bunde, A.; Goldberger, A. L.; Peng, C. K.; Stanley, H. E.
1999-01-01
The purpose of this report is to describe some recent progress in applying scaling concepts to various systems in nature. We review several systems characterized by scaling laws, such as DNA sequences, heartbeat rates and weather variations. We discuss the finding that the exponent alpha quantifying the scaling in DNA is smaller for coding than for noncoding sequences. We also discuss the application of fractal scaling analysis to the dynamics of heartbeat regulation, and report the recent finding that the scaling exponent alpha is smaller during sleep periods compared to wake periods. We also discuss recent findings that suggest a universal scaling exponent characterizing weather fluctuations.
Entanglement renormalization, quantum error correction, and bulk causality
NASA Astrophysics Data System (ADS)
Kim, Isaac H.; Kastoryano, Michael J.
2017-04-01
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of the holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
NASA Astrophysics Data System (ADS)
Hori, Takane; Ichimura, Tsuyoshi; Takahashi, Narumi
2017-04-01
Here we propose a system for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. Although we can obtain continuous dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and the material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2015, SC15) have developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. Ichimura et al. (2013, GJI) have developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. Fujita et al. (2016, SC16) have improved the code for crustal deformation and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, Errol et al. (2012, BSSA) have developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, AGU Fall Meeting) have improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), which introduces an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model.
Kinetic turbulence simulations at extreme scale on leadership-class systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bei; Ethier, Stephane; Tang, William
2013-01-01
Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low memory per core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).
Correlated Errors in the Surface Code
NASA Astrophysics Data System (ADS)
Lopez, Daniel; Mucciolo, E. R.; Novais, E.
2012-02-01
A milestone step in the development of quantum information technology would be the ability to design and operate a reliable quantum memory. The greatest obstacle to creating such a device has been decoherence due to the unavoidable interaction between the quantum system and its environment. Quantum error correction is therefore an essential ingredient in any quantum information device. A great deal of attention has been given to surface codes, since they have very good scaling properties. In this seminar, we discuss the time evolution of a qubit encoded in the logical basis of a surface code. The system is interacting with a bosonic environment at zero temperature. Our results show how detrimental spatial and temporal correlations can be to the efficiency of the code.
Scaling NASA Applications to 1024 CPUs on Origin 3K
NASA Technical Reports Server (NTRS)
Taft, Jim
2002-01-01
The long and highly successful joint SGI-NASA research effort in ever larger SSI systems was to a large degree the result of the successful development of the MLP scalable parallel programming paradigm developed at ARC: 1) MLP scaling in real production codes justified ever larger systems at NAS; 2) MLP scaling on the 256p Origin 2000 gave SGI the impetus to productize 256p; 3) MLP scaling on 512p gave SGI the confidence to build the 1024p O3K; and 4) the history of MLP success resulted in the IBM Star Cluster based MLP effort.
NCI HPC Scaling and Optimisation in Climate, Weather, Earth system science and the Geosciences
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Bermous, I.; Freeman, J.; Roberts, D. S.; Ward, M. L.; Yang, R.
2016-12-01
The Australian National Computational Infrastructure (NCI) has a national focus on the Earth system sciences, including climate, weather, ocean, water management, environment and geophysics. NCI leads a Program across its partners from the Australian science agencies and research communities to identify priority computational models to scale up. Typically, these cases place a large overall demand on the available computer time, need to scale to higher resolutions, make heavy use of scarce resources such as large memory or bandwidth, or in some cases need to meet requirements for transition to a separate operational forecasting system with set time windows. The model codes include the UK Met Office Unified Model atmospheric model (UM), GFDL's Modular Ocean Model (MOM), both the UK Met Office's GC3 and Australian ACCESS coupled-climate systems (including sea ice), 4D-Var data assimilation and satellite processing, the Regional Ocean Modeling System (ROMS), and WaveWatch3, as well as geophysics codes covering hazards, magnetotellurics, seismic inversions, and geodesy. Many of these codes use significant compute resources both for research applications and within the operational systems. Some of these models are particularly complex, and their behaviour had not been critically analysed for effective use of the NCI supercomputer or how they could be improved. As part of the Program, we have established a common profiling methodology that uses a suite of open source tools for performing scaling analyses. The most challenging cases are profiling multi-model coupled systems where the component models have their own complex algorithms and performance issues. We have also found issues within the current suite of profiling tools, and no single tool fully exposes the nature of the code performance. As a result of this work, international collaborations are now in place to ensure that improvements are incorporated within the community models, and our effort can be targeted in a coordinated way. This coordination has involved user stakeholders, the model developer community, and dependent software libraries. For example, we have spent significant time characterising I/O scalability and improving the use of libraries such as NetCDF and HDF5.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of these flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are shown to illustrate some of the novel features of the code.
Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems
2010-12-01
the software for reevaluation. Once the reevaluation process is completed, CERT provides the client a report detailing the software's conformance... Flagged Nonconformities (FNC):

Software System              TP/FNC   Ratio
Mozilla Firefox version 2.0  6/12     50%
Linux kernel version 2.6.15  10/126   8%
Wine                         ...

...inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with energy system software will help
Analysis of Delays in Transmitting Time Code Using an Automated Computer Time Distribution System
1999-12-01
jlevine@clock.bldrdoc.gov Abstract: An automated computer time distribution system broadcasts standard time to users using computers and modems via... contributed to delays - software platform (50% of the delay), transmission speed of time-codes (25%), telephone network (15%), modem and others (10%). The... modems, and telephone lines. Users dial the ACTS server to receive time traceable to the national time scale of Singapore, UTC(PSB). The users can in
Mountainous Coasts: A change to the GFS post codes will remove a persistent, spurious high pressure system. ... The National Centers for Environmental Prediction (NCEP) will upgrade the GFS post processor. The primary effort behind this upgrade will be to unify the post processing code for the North American Mesoscale (NAM) model and the GFS into
Multi-scale modeling of irradiation effects in spallation neutron source materials
NASA Astrophysics Data System (ADS)
Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.
2011-07-01
Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) code for nuclear reactions, and modeled the interactions between high energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte-Carlo (kMC) methods in each subcascade. The third part considered damage structure evolution estimated by reaction kinetics analysis. The fourth part involved the estimation of mechanical property changes using three-dimensional discrete dislocation dynamics (DDD). Using the above four-part code, stress-strain curves for high energy proton irradiated Ni were obtained.
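As a generic illustration of the kinetic Monte-Carlo ingredient named above (not the actual defect-evolution model), the sketch below performs standard residence-time kMC steps: an event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed increment.

    import numpy as np

    def kmc_step(rates, rng):
        """One residence-time kinetic Monte-Carlo step over an array of event
        rates; returns (chosen event index, time increment)."""
        total = np.sum(rates)
        event = np.searchsorted(np.cumsum(rates), rng.random() * total)
        dt = -np.log(1.0 - rng.random()) / total
        return event, dt

    rng = np.random.default_rng(1)
    rates = np.array([1.0, 0.5, 0.1])          # hypothetical defect-event rates
    t, counts = 0.0, np.zeros(len(rates), dtype=int)
    for _ in range(10000):
        event, dt = kmc_step(rates, rng)
        counts[event] += 1
        t += dt
    print(counts / counts.sum(), t)            # event frequencies track the rates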
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
Combat injury coding: a review and reconfiguration.
Lawnick, Mary M; Champion, Howard R; Gennarelli, Thomas; Galarneau, Michael R; D'Souza, Edwin; Vickers, Ross R; Wing, Vern; Eastridge, Brian J; Young, Lee Ann; Dye, Judy; Spott, Mary Ann; Jenkins, Donald H; Holcomb, John; Blackbourne, Lorne H; Ficke, James R; Kalin, Ellen J; Flaherty, Stephen
2013-10-01
The current civilian Abbreviated Injury Scale (AIS), designed for automobile crash injuries, yields important information about civilian injuries. It has been recognized for some time, however, that both the AIS and AIS-based scores such as the Injury Severity Score (ISS) are inadequate for describing penetrating injuries, especially those sustained in combat. Existing injury coding systems do not adequately describe (they actually exclude) combat injuries such as the devastating multi-mechanistic injuries resulting from attacks with improvised explosive devices (IEDs). After quantifying the inapplicability of current coding systems, the Military Combat Injury Scale (MCIS), which includes injury descriptors that accurately characterize combat anatomic injury, and the Military Functional Incapacity Scale (MFIS), which indicates immediate tactical functional impairment, were developed by a large tri-service military and civilian group of combat trauma subject-matter experts. Assignment of MCIS severity levels was based on urgency, level of care needed, and risk of death from each individual injury. The MFIS was developed based on the casualty's ability to shoot, move, and communicate, and comprises four levels ranging from "Able to continue mission" to "Lost to military." Separate functional impairments were identified for injuries aboard ship. Preliminary evaluation of MCIS discrimination, calibration, and casualty disposition was performed on 992 combat-injured patients using two modeling processes. Based on combat casualty data, the MCIS is a new, simpler, comprehensive severity scale with 269 codes (vs. 1999 in AIS) that specifically characterize and distinguish the many unique injuries encountered in combat. The MCIS integrates with the MFIS, which associates immediate combat functional impairment with minor and moderate-severity injuries. Predictive validation on combat datasets shows improved performance over AIS-based tools in addition to improved face, construct, and content validity and coding inter-rater reliability. Thus, the MCIS has greater relevance, accuracy, and precision for many military-specific applications. Over a period of several years, the Military Combat Injury Scale and Military Functional Incapacity Scale were developed, tested and validated by teams of civilian and tri-service military expertise. MCIS shows significant promise in documenting the nature, severity and complexity of modern combat injury.
Probabilistic structural analysis methods for select space propulsion system components
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
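NESSUS couples finite element analysis with fast probability integration; as a generic stand-in for the probabilistic step, the hedged sketch below estimates a probability of failure P[g(X) < 0] for an invented stress-strength limit state by plain Monte Carlo sampling (which FPI is designed to avoid, but which serves as a reference).

    import numpy as np

    def probability_of_failure(limit_state, sampler, n=200_000, seed=0):
        """Monte-Carlo estimate of P[g(X) < 0] for a limit-state function g."""
        rng = np.random.default_rng(seed)
        return float(np.mean(limit_state(sampler(rng, n)) < 0.0))

    def g(x):
        strength, stress = x          # failure when stress exceeds strength
        return strength - stress

    def sampler(rng, n):
        strength = rng.normal(300.0, 30.0, n)   # hypothetical strength, MPa
        stress = rng.normal(200.0, 40.0, n)     # hypothetical load-induced stress, MPa
        return strength, stress

    print(probability_of_failure(g, sampler))   # close to Phi(-2), about 0.023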
RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2012-11-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) is an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. The low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this paper as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) code system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large scale integrated (LSI) circuits, the number of iterations in decoding LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of LDPC codes are compared with Reed-Solomon (RS) codes in different conditions. In the simulation, the interleaving of LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a substitute for traditional RS codes with D-FPSs, and all of the executable code files are open for researchers who have a practical LSI platform for PMD mitigation.
PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2013-09-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) is an effective method to mitigate polarization mode dispersion (PMD) in high-speed optical fiber communication systems. The low-density parity-check (LDPC) codes are newly introduced into the PMD mitigation scheme with D-FPSs in this article as one of the promising FEC codes to achieve better performance. The scrambling speed of the FPS for the LDPC (2040, 1903) code system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large scale integrated (LSI) circuits, the number of iterations in decoding LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of LDPC codes are compared with Reed-Solomon (RS) codes in different conditions. In the simulation, the interleaving of LDPC codes brings an incremental improvement in error correction, and the PMD tolerance is 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a substitute for traditional RS codes with D-FPSs, and all of the executable code files are open for researchers who have a practical LSI platform for PMD mitigation.
DNA Barcoding through Quaternary LDPC Codes
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10−2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10−9 at the expense of a rate of read losses just in the order of 10−6. PMID:26492348
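The trade-off the authors quantify between read losses and misidentification can be illustrated with ordinary minimum-distance decoding. In the sketch below (invented 8-nt barcodes, far shorter than the 24-nt LDPC barcodes discussed), an observed barcode is assigned to the nearest codeword if it lies within the correction radius and is otherwise rejected as a read loss.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def demultiplex(observed, codewords, t):
        """Assign an observed barcode to the nearest codeword if it lies within
        the correction radius t; otherwise reject the read (a read loss)."""
        d, best = min((hamming(observed, c), c) for c in codewords)
        return best if d <= t else None

    # Invented barcodes with pairwise Hamming distance >= 6, so up to t = 2
    # mismatches per read can be corrected unambiguously.
    codewords = ["ACGTACGT", "TTTTCCCC", "GGGGAAAA"]
    print(demultiplex("ACGTACGA", codewords, t=2))   # one mismatch -> "ACGTACGT"
    print(demultiplex("ACTTCCGA", codewords, t=2))   # three mismatches -> None (read loss)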
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10(-9) at the expense of a rate of read losses just in the order of 10(-6).
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger … To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
Visual analysis of inter-process communication for large-scale parallel computing.
Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu
2009-01-01
In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
Lightweight computational steering of very large scale molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beazley, D.M.; Lomdahl, P.S.
1996-09-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
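As a rough illustration of the script-level steering pattern the abstract describes, the sketch below drives a stand-in simulation kernel from a Python command loop. The kernel interface (advance, set_param, snapshot) and the command syntax are invented for illustration; the paper wraps its own C molecular dynamics code through an extension-building tool rather than the toy class shown here.

```python
# Hedged sketch of script-level computational steering: a Python loop drives a
# simulation kernel and lets the user adjust parameters between blocks of timesteps.
class FakeKernel:
    """Stand-in for a compiled extension module exposing the simulation."""
    def __init__(self):
        self.params = {"temperature": 300.0}
        self.step = 0
    def set_param(self, name, value):
        self.params[name] = value
    def advance(self, nsteps):
        self.step += nsteps            # a real kernel would integrate the equations of motion
    def snapshot(self):
        return {"step": self.step, **self.params}

def steer(kernel, commands):
    """Interleave user commands ('set temperature 350', 'run 100', 'dump') with the run."""
    for line in commands:
        parts = line.split()
        if parts[0] == "set":
            kernel.set_param(parts[1], float(parts[2]))
        elif parts[0] == "run":
            kernel.advance(int(parts[1]))
        elif parts[0] == "dump":
            print(kernel.snapshot())

steer(FakeKernel(), ["run 100", "set temperature 350", "run 100", "dump"])
```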
Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System
NASA Technical Reports Server (NTRS)
Taft, James R.
2000-01-01
The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector based code. The OVERFLOW-MLP code is now in production on the inhouse Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256 CPU steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state of the art plot of OVERFLOW-MLP performance on the 512 CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35 million point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512 CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16 CPU C90 result and strongly demonstrates the viability of the new parallel systems rapidly solving very large simulations in a production environment.
Experimental program for real gas flow code validation at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Deiwert, George S.; Strawa, Anthony W.; Sharma, Surendra P.; Park, Chul
1989-01-01
The experimental program for validating real gas hypersonic flow codes at NASA Ames Research Center is described. Ground-based test facilities used include ballistic ranges, shock tubes and shock tunnels, arc jet facilities and heated-air hypersonic wind tunnels. Also included are large-scale computer systems for kinetic theory simulations and benchmark code solutions. Flight tests consist of the Aeroassist Flight Experiment, the Space Shuttle, Project Fire 2, and planetary probes such as Galileo, Pioneer Venus, and PAET.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).
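To make the "memory footprint as a function of scalability" idea concrete, the sketch below records each MPI rank's peak resident memory and aggregates it at rank 0, so the total can be compared across runs at different rank counts. This is a generic measurement pattern, not the authors' in-house tool; mpi4py and the kilobyte units of ru_maxrss on Linux are assumptions.

```python
# Hedged sketch: record each MPI rank's peak resident memory so that the total
# footprint can be plotted against rank count (the abstract's "memory efficiency").
import resource
from mpi4py import MPI

def peak_rss_kb() -> int:
    # On Linux ru_maxrss is reported in kilobytes (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

comm = MPI.COMM_WORLD
local_kb = peak_rss_kb()
total_kb = comm.reduce(local_kb, op=MPI.SUM, root=0)
if comm.rank == 0:
    print(f"ranks={comm.size} total_peak_rss={total_kb} kB "
          f"per_rank_avg={total_kb / comm.size:.0f} kB")
```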
Cui, Laizhong; Lu, Nan; Chen, Fu
2014-01-01
Most large-scale peer-to-peer (P2P) live streaming systems use mesh to organize peers and leverage pull scheduling to transmit packets for providing robustness in dynamic environments. The pull scheduling brings large packet delay. Network coding makes the push scheduling feasible in mesh P2P live streaming and improves the efficiency. However, it may also introduce some extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are as follows: (i) We introduce a new network coding method to increase the content diversity and reduce the complexity of scheduling; (ii) we formulate the push scheduling as an optimization problem and transform it to a min-cost flow problem for solving it in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and do extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
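The abstract's point (ii) maps push scheduling onto a min-cost flow problem. The toy instance below only illustrates that solver step with networkx: segments are routed to peers that have limited upload capacity and a per-segment cost. The graph construction, capacities, and costs are invented for illustration and are not the paper's actual transformation.

```python
# Hedged toy example of the min-cost flow step: 3 segments must be pushed to peers,
# each peer has limited upload capacity and a per-segment cost (e.g., expected delay).
import networkx as nx

G = nx.DiGraph()
G.add_node("src", demand=-3)                     # 3 segments to schedule
for seg in ("seg1", "seg2", "seg3"):
    G.add_edge("src", seg, capacity=1, weight=0)
for peer, cap, cost in (("peerA", 2, 1), ("peerB", 1, 3)):
    for seg in ("seg1", "seg2", "seg3"):
        G.add_edge(seg, peer, capacity=1, weight=cost)
    G.add_edge(peer, "sink", capacity=cap, weight=0)
G.add_node("sink", demand=3)

flow = nx.min_cost_flow(G)                       # polynomial-time min-cost flow
schedule = {s: p for s in ("seg1", "seg2", "seg3")
            for p, f in flow[s].items() if f == 1}
print(schedule)   # e.g., {'seg1': 'peerA', 'seg2': 'peerA', 'seg3': 'peerB'}
```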
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware that is able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should reduce the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented, and used to provide new insight into radiation belt dynamics.
NASA Astrophysics Data System (ADS)
Hori, T.; Ichimura, T.
2015-12-01
Here we propose a system for monitoring and forecasting of crustal activity, especially great interplate earthquake generation and its preparation processes in subduction zones. Basically, we model great earthquake generation as frictional instability on the subducting plate boundary. So, spatio-temporal variation in slip velocity on the plate interface should be monitored and forecasted. Although we can obtain continuous dense surface deformation data on land and partly at the sea bottom, the data obtained are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and the material property such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), (3) inverse analysis or data assimilation code both for structure and fault slip using (1)&(2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Actually, Ichimura et al. (2014, SC14) have developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 10.7 BlnDOF x 30 K time-step. Ichimura et al. (2013, GJI) have developed a high fidelity FEM simulation code with mesh generator to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry for 1 km mesh. Further, for inverse analyses, Errol et al. (2012, BSSA) have developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, this meeting) have improved the high fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing the methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we will apply it to a 3D heterogeneous structure with the high fidelity FE model.
New Challenges in Computational Thermal Hydraulics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadigaroglu, George; Lakehal, Djamel
New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.
A review and empirical study of the composite scales of the Das–Naglieri cognitive assessment system
McCrea, Simon M
2009-01-01
Alexander Luria’s model of the working brain consisting of three functional units was formulated through the examination of hundreds of focal brain-injury patients. Several psychometric instruments based on Luria’s syndrome analysis and accompanying qualitative tasks have been developed since the 1970s. In the mid-1970s, JP Das and colleagues defined a specific cognitive processes model based directly on Luria’s two coding units termed simultaneous and successive by studying diverse cross-cultural, ability, and socioeconomic strata. The cognitive assessment system is based on the PASS model of cognitive processes and consists of four composite scales of Planning–Attention–Simultaneous–Successive (PASS) devised by Naglieri and Das in 1997. Das and colleagues developed the two new scales of planning and attention to more closely model Luria’s theory of higher cortical functions. In this paper a theoretical review of Luria’s theory, Das and colleagues elaboration of Luria’s model, and the neural correlates of PASS composite scales based on extant studies is summarized. A brief empirical study of the neuropsychological specificity of the PASS composite scales in a sample of 33 focal cortical stroke patients using cluster analysis is then discussed. Planning and simultaneous were sensitive to right hemisphere lesions. These findings were integrated with recent functional neuroimaging studies of PASS scales. In sum it was found that simultaneous is strongly dependent on dual bilateral occipitoparietal interhemispheric coordination whereas successive demonstrated left frontotemporal specificity with some evidence of interhemispheric coordination across the prefrontal cortex. Hence, support for the validity of the PASS composite scales was found as well as for the axiom of the independence of code content from code type originally specified in 1994 by Das, Naglieri, and Kirby. PMID:22110322
Critiquing 'pore connectivity' as basis for in situ flow in geothermal systems
NASA Astrophysics Data System (ADS)
Kenedi, C. L.; Leary, P.; Malin, P.
2013-12-01
Geothermal system in situ flow systematics derived from detailed examination of grain-scale structures, fabrics, mineral alteration, and pore connectivity may be extremely misleading if/when extrapolated to reservoir-scale flow structure. In oil/gas field clastic reservoir operations, it is standard to assume that small scale studies of flow fabric - notably the Kozeny-Carman and Archie's Law treatments at the grain-scale and well-log/well-bore sampling of formations/reservoirs at the cm-m scale - are adequate to define the reservoir-scale flow properties. In the case of clastic reservoirs, however, a wide range of reservoir-scale data wholly discredits this extrapolation: Well-log data show that grain-scale fracture density fluctuation power scales inversely with spatial frequency k, S(k) ~ 1/k^β, 1.0 < β < 1.2, 1 cycle/km < k < 1 cycle/cm; the scaling is a 'universal' feature of well-logs (neutron porosity, sonic velocity, chemical abundance, mass density, resistivity, in many forms of clastic rock and instances of shale bodies, for both horizontal and vertical wells). Grain-scale fracture density correlates with in situ porosity; spatial fluctuations of porosity φ in well-core correlate with spatial fluctuations in the logarithm of well-core permeability, δφ ~ δlog(κ) with typical correlation coefficient ~ 85%; a similar relation is observed in consolidating sediments/clays, indicating a generic coupling between fluid pressure and solid deformation at pore sites. In situ macroscopic flow systems are lognormally distributed according to κ ~ κ0 exp(α(φ-φ0)), with α >> 1 an empirical parameter for the degree of in situ fracture connectivity; the lognormal distribution applies to well-productivities in US oil fields and NZ geothermal fields, 'frack productivity' in oil/gas shale body reservoirs, ore grade distributions, and trace element abundances. Although presently available evidence for these properties in geothermal reservoirs is limited, there are indications that geothermal system flow essentially obeys the same 'universal' in situ flow rules as does clastic rock: Well-log data from Los Azufres, MX, show power-law scaling S(k) ~ 1/k^β, 1.2 < β < 1.4, for spatial frequency range 2 cycles/km to 0.5 cycle/m; higher β-values are likely due to the relatively fresh nature of geothermal systems. Well-core at Bulalo (PH) and Ohaaki (NZ) show statistically significant spatial correlation, δφ ~ δlog(κ); well productivity at Ohaaki/Ngawha (NZ) and in geothermal systems elsewhere is lognormally distributed; K/Th/U abundances are lognormally distributed in Los Azufres well-logs. We therefore caution that small-scale evidence for in situ flow fabric in geothermal systems that is interpreted in terms of 'pore connectivity' may in fact not reflect how small-scale chemical processes are integrated into a large-scale geothermal flow structure. Rather such small scale studies should (perhaps) be considered in terms of the above flow rules. These flow rules are easily incorporated into standard flow simulation codes, in particular the OPM = Open Porous Media open-source industry-standard flow code. Geochemical transport data relevant to geothermal systems can thus be expected to be well modeled by OPM or equivalent (e.g., INL/LANL) codes.
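The empirical rule quoted in the abstract, κ ~ κ0 exp(α(φ-φ0)) with α >> 1, implies that roughly normal porosity fluctuations produce a lognormal permeability (and hence productivity) distribution. The short sketch below just demonstrates that consequence numerically; all parameter values are invented for illustration and do not come from the cited fields.

```python
# Hedged sketch of the abstract's empirical flow rule: normally distributed porosity
# fluctuations plus kappa = kappa0 * exp(alpha * (phi - phi0)), alpha >> 1, give a
# lognormal permeability distribution. Numerical values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
phi0, sigma_phi = 0.10, 0.02            # mean porosity and its fluctuation scale
alpha, kappa0 = 30.0, 1e-15             # connectivity parameter (>>1), reference permeability in m^2

phi = rng.normal(phi0, sigma_phi, size=100_000)
kappa = kappa0 * np.exp(alpha * (phi - phi0))

log_k = np.log(kappa)                   # log(kappa) is normal, i.e. kappa is lognormal
print(f"mean(log kappa) = {log_k.mean():.2f}, std = {log_k.std():.2f}")
print(f"kappa spans {kappa.min():.2e} .. {kappa.max():.2e} m^2")
```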
Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2017-10-01
Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.
Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
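The basic MPI pattern behind such a distributed-memory port is domain decomposition with halo (ghost-cell) exchanges between neighbouring ranks. The sketch below shows that pattern in one dimension with mpi4py; it is a generic illustration of the programming model, not ROMS source code, and the grid size and field values are invented.

```python
# Hedged sketch of a 1-D domain-decomposition halo exchange with MPI: each rank owns
# a slab of the grid and swaps one ghost cell with each neighbour every time step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.rank, comm.size
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

nx_local = 8                                   # interior cells owned by this rank
field = np.full(nx_local + 2, float(rank))     # one ghost cell at each end (nx_local + 2 total)

# Send my first interior cell left while receiving the right neighbour's into my right ghost, and vice versa.
comm.Sendrecv(sendbuf=field[1:2], dest=left, sendtag=0,
              recvbuf=field[-1:], source=right, recvtag=0)
comm.Sendrecv(sendbuf=field[-2:-1], dest=right, sendtag=1,
              recvbuf=field[0:1], source=left, recvtag=1)

print(f"rank {rank}: ghosts = ({field[0]}, {field[-1]})")
```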
Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko
2017-10-01
We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide treatment (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and compact bone, were in good agreement with those in literature using other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lesko, Mehdi M; Woodford, Maralyn; White, Laura; O'Brien, Sarah J; Childs, Charmaine; Lecky, Fiona E
2010-08-06
The purpose of Abbreviated Injury Scale (AIS) is to code various types of Traumatic Brain Injuries (TBI) based on their anatomical location and severity. The Marshall CT Classification is used to identify those subgroups of brain injured patients at higher risk of deterioration or mortality. The purpose of this study is to determine whether and how AIS coding can be translated to the Marshall Classification. Initially, a Marshall Class was allocated to each AIS code through cross-tabulation. This was agreed upon through several discussion meetings with experts from both fields (clinicians and AIS coders). Furthermore, in order to make this translation possible, some necessary assumptions with regards to coding and classification of mass lesions and brain swelling were essential, which were all approved and made explicit. The proposed method involves two stages: firstly to determine all possible Marshall Classes which a given patient can attract based on allocated AIS codes, via cross-tabulation; and secondly to assign one Marshall Class to each patient through an algorithm. This method can be easily programmed in computer software and it would enable future important TBI research programs using trauma registry data.
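A schematic of the two-stage procedure the abstract describes is sketched below: a cross-tabulation from each AIS code to its possible Marshall classes, followed by a rule that collapses the candidate set to a single class per patient. The lookup entries and the "most severe candidate" rule are invented placeholders, not the paper's published table or algorithm.

```python
# Hedged sketch of the two-stage AIS-to-Marshall translation described in the abstract.
# Stage-1 cross-tabulation: AIS code -> set of possible Marshall classes (I..VI as 1..6).
AIS_TO_MARSHALL = {
    "140650": {2},        # hypothetical entry: small contusion -> diffuse injury II
    "140652": {2, 3},     # hypothetical entry: contusion, size-dependent
    "140684": {5},        # hypothetical entry: large evacuated mass lesion
}

def marshall_class(ais_codes):
    """Stage 2: combine candidate classes from all of a patient's AIS codes."""
    candidates = set()
    for code in ais_codes:
        candidates |= AIS_TO_MARSHALL.get(code, set())
    return max(candidates) if candidates else None   # placeholder rule: most severe class

print(marshall_class(["140650", "140652"]))   # -> 3 under the placeholder rule
```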
NASA Astrophysics Data System (ADS)
Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu
2017-01-01
This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
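For context, the propagation of cross-section covariances to a response such as k∞ in TSUNAMI/SAMS-style analyses is usually written as the first-order "sandwich" rule below. This is the standard textbook form quoted for orientation, not a formula reproduced from the paper itself.

```latex
% First-order ("sandwich") propagation of cross-section covariances to a response R
% (e.g., k-infinity), with S the vector of relative sensitivities and C the relative
% covariance matrix assembled from the multigroup library:
\[
  \left(\frac{\Delta R}{R}\right)^{2} \;=\; S^{\mathsf{T}} \, C \, S ,
  \qquad
  S_i \;=\; \frac{\sigma_i}{R}\,\frac{\partial R}{\partial \sigma_i}.
\]
```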
Automated target recognition using passive radar and coordinated flight models
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2003-09-01
Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
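A toy version of the final classification step is sketched below: each candidate target class is scored by a Rician likelihood of the observed RCS amplitudes and the best-scoring class is chosen. The per-class parameters and the observations are invented; in the paper they would come from FISC simulations scaled by AREPS propagation losses and NEC2 antenna gains.

```python
# Hedged toy sketch of Rician-likelihood target classification from RCS amplitudes.
import numpy as np
from scipy.stats import rice

candidates = {                    # class -> (Rician shape b, scale); invented numbers
    "airliner": (4.0, 1.0),
    "fighter": (1.5, 0.6),
    "light aircraft": (0.8, 0.3),
}

def classify(measured_amplitudes):
    scores = {name: rice.logpdf(measured_amplitudes, b, scale=s).sum()
              for name, (b, s) in candidates.items()}
    return max(scores, key=scores.get), scores

obs = np.array([3.8, 4.3, 4.1, 3.6])          # hypothetical received amplitudes
best, scores = classify(obs)
print(best, {k: round(v, 2) for k, v in scores.items()})
```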
1991-08-01
specifications are taken primarily from the 1983 version of the ASME Boiler and Pressure Vessel Code. Other design requirements were developed from standard safe...rules and practices of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code to provide a safe and reliable system
Guffanti, Marianne C.; Miller, Thomas
2013-01-01
An alert-level system for communicating volcano hazard information to the aviation industry was devised by the Alaska Volcano Observatory (AVO) during the 1989–1990 eruption of Redoubt Volcano. The system uses a simple, color-coded ranking that focuses on volcanic ash emissions: Green—normal background; Yellow—signs of unrest; Orange—precursory unrest or minor ash eruption; Red—major ash eruption imminent or underway. The color code has been successfully applied on a regional scale in Alaska for a sustained period. During 2002–2011, elevated color codes were assigned by AVO to 13 volcanoes, eight of which erupted; for that decade, one or more Alaskan volcanoes were at Yellow on 67 % of days and at Orange or Red on 12 % of days. As evidence of its utility, the color code system is integrated into procedures of agencies responsible for air-traffic management and aviation meteorology in Alaska. Furthermore, it is endorsed as a key part of globally coordinated protocols established by the International Civil Aviation Organization to provide warnings of ash hazards to aviation worldwide. The color code and accompanying structured message (called a Volcano Observatory Notice for Aviation) comprise an effective early-warning message system according to the United Nations International Strategy for Disaster Reduction. The aviation color code system currently is used in the United States, Russia, New Zealand, Iceland, and partially in the Philippines, Papua New Guinea, and Indonesia. Although there are some barriers to implementation, with continued education and outreach to Volcano Observatories worldwide, greater use of the aviation color code system is achievable.
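As a small illustration, the four-level code summarized in the abstract maps naturally onto a simple data structure. The helper below is illustrative only; real Volcano Observatory Notices for Aviation carry much more structured content than this one-line message.

```python
# Simple encoding of the four-level aviation color code as summarized in the abstract.
from enum import Enum

class AviationColorCode(Enum):
    GREEN = "normal background"
    YELLOW = "signs of unrest"
    ORANGE = "precursory unrest or minor ash eruption"
    RED = "major ash eruption imminent or underway"

def notice(volcano: str, code: AviationColorCode) -> str:
    return f"{volcano}: {code.name} - {code.value}"

print(notice("Redoubt", AviationColorCode.ORANGE))
```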
InterProScan 5: genome-scale protein function classification
Jones, Philip; Binns, David; Chang, Hsin-Yu; Fraser, Matthew; Li, Weizhong; McAnulla, Craig; McWilliam, Hamish; Maslen, John; Mitchell, Alex; Nuka, Gift; Pesseat, Sebastien; Quinn, Antony F.; Sangrador-Vegas, Amaia; Scheremetjew, Maxim; Yong, Siew-Yit; Lopez, Rodrigo; Hunter, Sarah
2014-01-01
Motivation: Robust large-scale sequence analysis is a major challenge in modern genomic science, where biologists are frequently trying to characterize many millions of sequences. Here, we describe a new Java-based architecture for the widely used protein function prediction software package InterProScan. Developments include improvements and additions to the outputs of the software and the complete reimplementation of the software framework, resulting in a flexible and stable system that is able to use both multiprocessor machines and/or conventional clusters to achieve scalable distributed data analysis. InterProScan is freely available for download from the EMBL-EBI FTP site and the open source code is hosted at Google Code. Availability and implementation: InterProScan is distributed via FTP at ftp://ftp.ebi.ac.uk/pub/software/unix/iprscan/5/ and the source code is available from http://code.google.com/p/interproscan/. Contact: http://www.ebi.ac.uk/support or interhelp@ebi.ac.uk or mitchell@ebi.ac.uk PMID:24451626
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Robert Cameron; Steiner, Don
2004-06-15
The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energy, 10 to 300 MeV, and may potentially cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Council of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the Quickest third-order accurate and stable explicit finite difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. The verification of the QTTN thermal code with analytical solutions shows that the code with the Quickest method is better than 99.9% accurate. The benchmarking of the EPQ code system and QTTN versus experiments showed that QTTN's erosion tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints. QTTN and EPQ are verified and validated as able to calculate the temperature distribution, phase change, and surface erosion successfully.
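To give a feel for the explicit finite-difference thermal update at the heart of such a code, the sketch below advances a 1-D forward-time centred-space (FTCS) heat-conduction step. QTTN itself solves the 2-D cylindrical equation with the third-order Quickest scheme and handles melting and erosion; this sketch, with invented material parameters, only illustrates the general idea and its stability limit.

```python
# Hedged sketch of an explicit finite-difference heat-conduction step (FTCS, 1-D).
import numpy as np

def ftcs_step(T, alpha, dx, dt):
    """One explicit update of dT/dt = alpha * d2T/dx2 (stable for alpha*dt/dx**2 <= 0.5)."""
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new                                  # boundary values held fixed

nx, dx = 101, 1e-3                                # 10 cm slab, 1 mm cells
alpha = 1e-4                                      # thermal diffusivity, m^2/s (illustrative)
dt = 0.4 * dx**2 / alpha                          # respect the explicit stability limit
T = np.full(nx, 300.0)
T[40:60] = 1500.0                                 # hypothetical runaway-electron heated band
for _ in range(1000):
    T = ftcs_step(T, alpha, dx, dt)
print(f"peak temperature after {1000*dt:.3f} s: {T.max():.1f} K")
```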
Comprehensive Analysis Modeling of Small-Scale UAS Rotors
NASA Technical Reports Server (NTRS)
Russell, Carl R.; Sekula, Martin K.
2017-01-01
Multicopter unmanned aircraft systems (UAS), or drones, have continued their explosive growth in recent years. With this growth comes demand for increased performance as the limits of existing technologies are reached. In order to better design multicopter UAS aircraft, better performance prediction tools are needed. This paper presents the results of a study aimed at using the rotorcraft comprehensive analysis code CAMRAD II to model a multicopter UAS rotor in hover. Parametric studies were performed to determine the level of fidelity needed in the analysis code inputs to achieve results that match test data. Overall, the results show that CAMRAD II is well suited to model small-scale UAS rotors in hover. This paper presents the results of the parametric studies as well as recommendations for the application of comprehensive analysis codes to multicopter UAS rotors.
Efficient coding and detection of ultra-long IDs for visible light positioning systems.
Zhang, Hualong; Yang, Chuanchuan
2018-05-14
Visible light positioning (VLP) is a promising technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), with the advantages of low cost and high accuracy. The situation becomes even more crucial for indoor environments, where satellite signals are weak or even unavailable. For large-scale application of VLP, there would be a considerable number of light emitting diode (LED) IDs, which creates a demand for detecting long LED IDs. In particular, to provision indoor localization globally, a convenient way is to program a unique ID into each LED during manufacture. This poses a big challenge for image sensors, such as the CMOS camera in everybody's hands, since the long ID covers the span of multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling shutter cameras. By analyzing the pattern of data loss in each frame, we proposed a novel coding technique to improve the efficiency of LED ID detection. We studied the performance of Reed-Solomon (RS) codes in this system and designed a new coding method which considered the trade-off between performance and decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the accuracy of detection. Numerical and experimental results show that the detected LED ID can be much longer with the coding technique. Besides, our proposed coding method is proved to achieve a performance close to that of RS codes while the decoding complexity is much lower.
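To convey why coding across camera frames helps, the toy scheme below splits an LED ID into frame-sized chunks and appends a single XOR parity frame, so the ID survives the loss of any one frame. This is a deliberately simple single-erasure sketch, not the Reed-Solomon construction or the new code proposed in the paper, and the ID and frame length are invented.

```python
# Hedged toy illustration of coding an LED ID across frames with one XOR parity frame.
from functools import reduce

def encode_frames(led_id: bytes, frame_len: int):
    chunks = [led_id[i:i + frame_len] for i in range(0, len(led_id), frame_len)]
    chunks = [c.ljust(frame_len, b"\x00") for c in chunks]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return chunks + [parity]

def decode_frames(frames, id_len: int):
    """frames: list with at most one entry replaced by None (a missed frame)."""
    if None in frames:
        missing = frames.index(None)
        others = [f for f in frames if f is not None]
        frames[missing] = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), others)
    return b"".join(frames[:-1])[:id_len]         # drop parity, trim padding

led_id = b"\x12\x34\x56\x78\x9a\xbc"              # hypothetical 48-bit LED ID
frames = encode_frames(led_id, frame_len=2)
frames[1] = None                                  # simulate one lost frame
assert decode_frames(frames, len(led_id)) == led_id
print("recovered:", decode_frames(encode_frames(led_id, 2), len(led_id)).hex())
```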
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher D.; Meredith, Jeremy S.; Blanco, Marc
Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when executing both torus and dragonfly network models on up to 4K Blue Gene/Q nodes using 32K MPI ranks; Durango also avoids the overheads and complexities associated with extreme-scale trace files.
NEAMS Update. Quarterly Report for October - December 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.
2012-02-16
The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011, a detailed discussion of this new simulation tool is given; (2) A coolant sub-channel model and a preliminary UO{sub 2} smeared-cracking model were implemented in BISON, the single-pin fuel code, more information on how these models were developed and benchmarked is given; (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations and a discussion of the significance of this advance is given; (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given; (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP. This is a new initiative; (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability; (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed.
This important bridge between subcontinuum and continuum phenomena is discussed; (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm; (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed. An explanation of the difficulty of this simulation is given; (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron; (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report; (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE). More details on the planned NEAMS computing environment are given; and (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.
Tan, N C; Ang, A; Heng, D; Chen, J; Wong, H B
2007-01-01
The survey aims to describe the epidemiology of playground related injuries in Singapore based on the ICD-9, AIS/ISS and PTS scoring systems, and mechanisms and causes of such injuries according to E codes and ICECI codes. A cross-sectional questionnaire survey examined children (< 16 years old), who sought treatment for or died of unintentional injuries in the ED of three hospitals, two primary care centers and the sole Forensic Medicine Department of Singapore. A data dictionary was compiled using guidelines from CDC/WHO. The ISS, AIS and PTS, ICD-9, ICECI v1 and E codes were used to describe the details of the injuries. 19,094 childhood injuries were recorded in the database, of which 1617 were playground injuries (8.5%). The injured children (mean age=6.8 years, SD 2.9 years) were predominantly male (M:F ratio = 1.71:1). Falls were the most frequent injuries (70.7%) using ICECI. 25.0% of injuries involved radial and ulnar fractures (ICD-9 code). 99.4% of these injuries were minor, with PTS scores of 9-12. Children aged 6-10 years were prone to upper limb injuries (71.1%) based on AIS. The use of international coding systems in injury surveillance facilitated standardisation of description and comparison of playground injuries.
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup. © 2015 Wiley Periodicals, Inc.
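For orientation, the sketch below evaluates the truncated Lennard-Jones energy that such an MD code accelerates, using a brute-force NumPy loop-free sum with a cutoff standing in for the neighbor list. The actual code evaluates this (plus Gay-Berne and Ewald terms) in double precision on CUDA GPUs; the particle positions and reduced units here are invented for illustration.

```python
# Hedged reference sketch of a truncated Lennard-Jones energy (O(N^2), CPU-only).
import numpy as np

def lj_energy(pos, epsilon=1.0, sigma=1.0, rcut=2.5):
    """Total truncated Lennard-Jones energy for positions pos of shape (N, 3)."""
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)           # each pair counted once
    r = r[iu]
    r = r[r < rcut]                               # crude stand-in for a neighbor list
    sr6 = (sigma / r) ** 6
    return float(np.sum(4.0 * epsilon * (sr6 ** 2 - sr6)))

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 5.0, size=(32, 3))         # 32 particles in a 5x5x5 box (no periodic boundaries here)
print(f"U_LJ = {lj_energy(pos):.3f} (reduced units)")
```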
PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas
The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operations of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales with more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high fidelity analyses of magnetically confined plasmas at unprecedented scales.
Visualization Co-Processing of a CFD Simulation
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
1999-01-01
OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on a SGI Origin 2000 computer(O2K) system. The shared memory version of the solver is used with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress which is aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of the explicit message passing.
Dispersion interactions in Density Functional Theory
NASA Astrophysics Data System (ADS)
Andrinopoulos, Lampros; Hine, Nicholas; Mostofi, Arash
2012-02-01
Semilocal functionals in Density Functional Theory (DFT) achieve high accuracy simulating a wide range of systems, but miss the effect of dispersion (vdW) interactions, important in weakly bound systems. We study two different methods to include vdW in DFT: First, we investigate a recent approach [1] to evaluate the vdW contribution to the total energy using maximally-localized Wannier functions. Using a set of simple dimers, we show that it has a number of shortcomings that hamper its predictive power; we then develop and implement a series of improvements [2] and obtain binding energies and equilibrium geometries in closer agreement to quantum-chemical coupled-cluster calculations. Second, we implement the vdW-DF functional [3], using Soler's method [4], within ONETEP [5], a linear-scaling DFT code, and apply it to a range of systems. This method within a linear-scaling DFT code allows the simulation of weakly bound systems of larger scale, such as organic/inorganic interfaces, biological systems and implicit solvation models. [1] P. Silvestrelli, JPC A 113, 5224 (2009). [2] L. Andrinopoulos et al, JCP 135, 154105 (2011). [3] M. Dion et al, PRL 92, 246401 (2004). [4] G. Román-Pérez, J.M. Soler, PRL 103, 096102 (2009). [5] C. Skylaris et al, JCP 122, 084119 (2005).
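For context, the nonlocal correlation energy of the vdW-DF functional cited as [3] takes the standard double-integral form below, which is what Soler's method [4] factorizes for efficient evaluation. This is quoted from the general literature rather than from the abstract itself.

```latex
% Nonlocal correlation term of the vdW-DF functional (Dion et al., ref. [3]), added on
% top of a semilocal exchange-correlation energy; phi is the tabulated two-point kernel
% that Soler's method (ref. [4]) factorizes for efficient evaluation:
\[
  E_{c}^{\mathrm{nl}} \;=\; \frac{1}{2}\int\!\!\int
  n(\mathbf{r})\,\phi(\mathbf{r},\mathbf{r}')\,n(\mathbf{r}')\,
  d\mathbf{r}\,d\mathbf{r}' .
\]
```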
Feasibility and Top Level Design of a Scalable Emergency Response System for Oceangoing Assets
2008-10-20
hazard response. The DC is responsible for the initial response. In a small-scale hazard situation, the DC will assign a Risk Assessment Code (RAC) and ... easily be used for other environmental remediation efforts including Superfund sites, decommissioned Navy vessels and Brownfield locations, among others.
Validation and Intercomparison Studies Within GODAE
2009-09-01
During the Global Ocean Data Assimilation Experiment (GODAE), seven international... global-ocean and basin-scale forecasting systems of different countries in routine interaction and continuous operation, (2) to assess the quality and...
NASA Astrophysics Data System (ADS)
Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John
The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic scale resolution. However, the governing equation of the XPFC model is an integral-partial-differential-equation (IPDE), which poses challenges in implementation onto high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed memory HPC solver for the XPFC model, which combines parallel multigrid and P3DFFT. The performance benchmarking on the Stampede supercomputer indicates near linear strong and weak scaling for both multigrid and transfer time between multigrid and FFT modules up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to achieve 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL Many-Core Architecture. This optimizes the code for coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
Cheng, Juan; Chen, Xun; Liu, Aiping; Peng, Hu
2015-01-01
Sign language recognition (SLR) is an important communication tool between the deaf and the external world. It is highly necessary to develop a worldwide continuous and large-vocabulary-scale SLR system for practical usage. In this paper, we propose a novel phonology- and radical-coded Chinese SLR framework to demonstrate the feasibility of continuous SLR using accelerometer (ACC) and surface electromyography (sEMG) sensors. The continuous Chinese characters, consisting of coded sign gestures, are first segmented into active segments using EMG signals by means of a moving-average algorithm. Then, features of each component are extracted from both ACC and sEMG signals of the active segments (i.e., palm orientation represented by the mean and variance of ACC signals, hand movement represented by the fixed-point ACC sequence, and hand shape represented by both the mean absolute value (MAV) and autoregressive model coefficients (ARs)). Afterwards, palm orientation is classified first, distinguishing “Palm Downward” sign gestures from “Palm Inward” ones. Only the “Palm Inward” gestures are sent for further hand movement and hand shape recognition by the dynamic time warping (DTW) algorithm and hidden Markov models (HMM), respectively. Finally, the component recognition results are integrated to identify one particular coded gesture. Experimental results demonstrate that the proposed SLR framework with a vocabulary scale of 223 characters can achieve an average recognition accuracy of 96.01% ± 0.83% for coded gesture recognition tasks and 92.73% ± 1.47% for character recognition tasks. The results also demonstrate that sEMG signals are rather consistent for a given hand shape independent of hand movements. Hence, the number of training samples will not increase significantly when the vocabulary scale increases, since the number of newly proposed coded gestures is constant and limited, and the transition movements that connect successive signs require no training samples to model even when the same coded gesture is performed in different characters. This work opens up a possible new way to realize a practical Chinese SLR system. PMID:26389907
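The abstract names DTW for matching hand-movement features but gives no implementation detail. The following is a minimal, hypothetical sketch (not the authors' code) of a DTW distance over short feature sequences; the array names and toy templates are invented for illustration.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two feature sequences.

    seq_a, seq_b: arrays of shape (T, D), e.g. fixed-point ACC samples
    describing a hand-movement trajectory (illustrative only).
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # extend the cheapest of the three admissible warping steps
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# toy usage: match a query gesture against two stored templates
rng = np.random.default_rng(0)
query = np.cumsum(rng.normal(size=(40, 3)), axis=0)
templates = {"sign_A": query[::2] + 0.05,
             "sign_B": np.cumsum(rng.normal(size=(35, 3)), axis=0)}
best = min(templates, key=lambda name: dtw_distance(query, templates[name]))
print("closest template:", best)
```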
Pore-scale simulation of CO2-water-rock interactions
NASA Astrophysics Data System (ADS)
Deng, H.; Molins, S.; Steefel, C. I.; DePaolo, D. J.
2017-12-01
In Geologic Carbon Storage (GCS) systems, the migration of scCO2 versus CO2-acidified brine ultimately determines the extent of mineral trapping and caprock integrity, i.e. the long-term storage efficiency and security. While continuum scale multiphase reactive transport models are valuable for large scale investigations, they typically (over-)simplify pore-scale dynamics and cannot capture local heterogeneities that may be important. Therefore, pore-scale models are needed in order to provide mechanistic understanding of how fine scale structural variations and heterogeneous processes influence the transport and geochemistry in the context of multiphase flow, and to inform parameterization of continuum scale modeling. In this study, we investigate the interplay of different processes at the pore scale (e.g., diffusion, reactions, and multiphase flow) through the coupling of a well-developed multiphase flow simulator with a sophisticated reactive transport code. The objectives are to understand where brine displaced by scCO2 will reside in a rough pore/fracture, and how the CO2-water-rock interactions may affect the redistribution of different phases.
Clinical application of ICF key codes to evaluate patients with dysphagia following stroke
Dong, Yi; Zhang, Chang-Jie; Shi, Jie; Deng, Jinggui; Lan, Chun-Na
2016-01-01
This study aimed to identify and evaluate International Classification of Functioning (ICF) key codes for dysphagia in stroke patients. Thirty patients with dysphagia after stroke were enrolled. To evaluate the ICF dysphagia scale, 6 scales were used as comparisons, namely the Barthel Index (BI), Repetitive Saliva Swallowing Test (RSST), Kubota Water Swallowing Test (KWST), Frenchay Dysarthria Assessment, Mini-Mental State Examination (MMSE), and the Montreal Cognitive Assessment (MoCA). Multiple regression analysis was performed to quantify the relationship between the ICF scale and the comparison scales. In addition, 60 ICF scales were analyzed by the least absolute shrinkage and selection operator (LASSO) method. A total of 21 ICF codes were identified that were closely related to the other scales. These included 13 codes from Body Function, 1 from Body Structure, 3 from Activities and Participation, and 4 from Environmental Factors. A topographic network map with 30 ICF key codes was also generated to visualize their relationships. The number of ICF codes identified is in line with other well-established evaluation methods. The network topographic map generated here could be used as an instruction tool in future evaluations. We also found that attention functions and biting were critical codes among these scales and could be used as treatment targets. PMID:27661012
Multi-scale computational modeling of developmental biology.
Setty, Yaki
2012-08-01
Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally discuss the applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. Contact: yaki.setty@gmail.com. Supplementary data are available at Bioinformatics online.
Measuring homework completion in behavioral activation.
Busch, Andrew M; Uebelacker, Lisa A; Kalibatseva, Zornitsa; Miller, Ivan W
2010-07-01
The aim of this study was to develop and validate an observer-based coding system for the characterization and completion of homework assignments during Behavioral Activation (BA). Existing measures of homework completion are generally unsophisticated, and there is no current measure of homework completion designed to capture the particularities of BA. The tested scale sought to capture the type of assignment, the realm of functioning targeted, the extent of completion, and assignment difficulty. Homework assignments were drawn from 12 clients (mean age = 48, 83% female) in two trials of a 10-session BA manual targeting treatment-resistant depression in primary care. The two coders demonstrated acceptable or better reliability on most codes, and unreliable codes were dropped from the proposed scale. In addition, correlations between homework completion and outcome were strong, providing some support for construct validity. Ultimately, this line of research aims to develop a user-friendly, reliable measure of BA homework completion that can be completed by a therapist during session.
Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources
Leeson, Mark S.
2014-01-01
The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms they face serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, that is, one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information about network topologies. Besides the SRHC strategy, some further useful design choices are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential to be extended to other large-scale complex problems. PMID:24883371
Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement
2013-11-08
audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit ... Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory
NASA Astrophysics Data System (ADS)
Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin
2015-09-01
Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high-performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. What is needed is a fully run-time configurable, high-performance, dependable storage concept requiring only a minimal set of logic or software. The solution presented here is based on composite erasure coding and can be adjusted for altered mission durations or changing environmental conditions.
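The composite erasure code itself is not specified in the abstract. Purely as a hedged illustration of the basic idea behind erasure coding for lost flash pages, the sketch below implements the simplest possible single-parity (XOR) stripe that can rebuild any one lost block; the stripe layout, page size, and function names are assumptions, not the paper's scheme.

```python
import os

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode_stripe(data_blocks):
    """Append one parity block so any single lost block per stripe is recoverable."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def recover(stripe, lost_index):
    """Rebuild the block at lost_index from the surviving blocks of the stripe."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)

# toy usage: 4 data pages plus 1 parity page; lose page 2 and rebuild it
pages = [os.urandom(16) for _ in range(4)]
stripe = encode_stripe(pages)
assert recover(stripe, 2) == pages[2]
```

A real composite code would combine stronger symbol-level codes across pages and dies; this single-parity stripe only shows the recover-from-survivors mechanic.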
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high-performance computing capabilities. Over the past several years, high-fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the lower-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvement in wind plant performance and enhancements to the transmission infrastructure will also be discussed.
Terminal Area Simulation System User's Guide - Version 10.0
NASA Technical Reports Server (NTRS)
Switzer, George F.; Proctor, Fred H.
2014-01-01
The Terminal Area Simulation System (TASS) is a three-dimensional, time-dependent, large eddy simulation model that has been developed for studies of wake vortex and weather hazards to aviation, along with other atmospheric turbulence, and cloud-scale weather phenomenology. This document describes the source code for TASS version 10.0 and provides users with the documentation needed to run the model. The source code is programmed in Fortran and is formulated to take advantage of vectorization and efficient multi-processor scaling for execution on massively parallel supercomputer clusters. The code contains different initialization modules allowing the study of aircraft wake vortex interaction with the atmosphere and ground, atmospheric turbulence, atmospheric boundary layers, precipitating convective clouds, hail storms, gust fronts, microburst windshear, supercell and mesoscale convective systems, tornadic storms, and ring vortices. The model is able to operate in either two or three dimensions with equations numerically formulated on a Cartesian grid. The primary output from TASS consists of time-dependent domain fields generated by the prognostic equations, along with diagnosed variables. This document will enable a user to understand the general logic of TASS, and will show how to configure and initialize the model domain. Also described are the formats of the input and output files, as well as the parameters that control the input and output.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey
2001-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between the then NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designers' requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team comprises Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Glenn Research Center (LeRC), and Pratt & Whitney (P&W). The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes a grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source, multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration. The development of the NCC beta version was essentially completed in June 1998. Technical details of the NCC elements are given in the Reference List. Elements such as the baseline flow solver, turbulence module, and chemistry module have been extensively validated, and their parallel performance on large-scale parallel systems has been evaluated and optimized. However, the scalar PDF module and the spray module, as well as their coupling with the baseline flow solver, were developed in a small-scale distributed computing environment. As a result, the validation of the NCC beta version as a whole was quite limited. Current effort has been focused on the validation of the integrated code and the evaluation/optimization of its overall performance on large-scale parallel systems.
A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management
NASA Astrophysics Data System (ADS)
Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.
2017-12-01
Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e. mostly empty) arrays. SciDB arrays have 64-bit indexes. SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simplest uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1 m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. An HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
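A minimal sketch of the first (integerized latitude-longitude) scheme described above, assuming the stated scale factor of 100,000; the function names are illustrative and this is not the authors' SciDB implementation. Handling of the longitudinal discontinuity and any shift to non-negative indexes is left open, as in the text.

```python
SCALE = 100_000  # scale factor from the text: roughly 1 m resolution at the equator

def latlon_to_index(lat_deg, lon_deg, scale=SCALE):
    """Map (lat, lon) in degrees to two integer array coordinates.

    Negative indexes are kept as-is for simplicity (an assumption; a real
    deployment might shift them into a non-negative range).
    """
    return round(lat_deg * scale), round(lon_deg * scale)

def index_to_latlon(i_lat, i_lon, scale=SCALE):
    """Inverse mapping, accurate to the grid resolution."""
    return i_lat / scale, i_lon / scale

# toy usage: round-trip an arbitrary location
idx = latlon_to_index(34.41234, -119.84567)
print(idx, index_to_latlon(*idx))
```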
NASA Astrophysics Data System (ADS)
Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.
2017-10-01
The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ∼ k^(−γ) with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^(−γ), with k ⩾ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
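As a hedged illustration of the degree distribution discussed above, the snippet below samples node degrees from p(k) = C(k+α)^(−γ) with k ⩾ 2 and checks the tail exponent empirically. The parameter values and the degree cutoff are assumptions, and constructing the actual parity-check graph and decoder is left out.

```python
import numpy as np

def sample_degrees(n, gamma=2.1, alpha=0.5, k_min=2, k_max=200, seed=0):
    """Draw n node degrees from p(k) = C * (k + alpha)**(-gamma), k_min <= k <= k_max."""
    rng = np.random.default_rng(seed)
    k = np.arange(k_min, k_max + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()                      # normalisation constant C
    return rng.choice(k, size=n, p=p)

degrees = sample_degrees(10_000)
print("mean degree:", degrees.mean())

# empirical check of the tail exponent on a log-log scale
ks, counts = np.unique(degrees, return_counts=True)
slope = np.polyfit(np.log(ks), np.log(counts / counts.sum()), 1)[0]
print("fitted exponent ~", slope)     # should come out near -gamma
```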
A comparison of KABCO and AIS injury severity metrics using CODES linked data.
Burch, Cynthia; Cook, Lawrence; Dischinger, Patricia
2014-01-01
The research objective is to compare the consistency of distributions between crash-assigned (KABCO) and hospital-assigned (Abbreviated Injury Scale, AIS) injury severity scoring systems for 2 states. The hypothesis is that AIS scores will be more consistent between the 2 studied states (Maryland and Utah) than KABCO. The analysis involved Crash Outcome Data Evaluation System (CODES) data from 2 states, Maryland and Utah, for years 2006-2008. Crash report and hospital inpatient data were linked probabilistically, and International Classification of Diseases (CMS 2013) codes from hospital records were translated into AIS codes. KABCO scores from police crash reports were compared to the AIS scores within and between the 2 study states. Maryland appears to have the more severe crash report KABCO scoring for injured crash participants, with close to 50 percent of all injured persons being coded as level B or worse, whereas Utah observes approximately 40 percent in this group. When analyzing AIS scores, some fluctuation was seen within states over time, but the distribution of MAIS is much more comparable between states. Maryland had approximately 85 percent of hospitalized injured cases coded as MAIS = 1 or minor; in Utah this percentage was close to 80 percent for all 3 years. This is quite different from the KABCO distributions, where Maryland had a smaller percentage of cases in the lowest injury severity category as compared to Utah. This analysis examined the distribution of 2 injury severity metrics that differ in both design and collection and found that both classifications are consistent within each state from 2006 to 2008. However, the distribution of both KABCO and Maximum Abbreviated Injury Scale (MAIS) varies between the states, and MAIS was found to be more consistent between states than KABCO.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
Time Dependent Data Mining in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Joshua Joseph; Chen, Jun; Patel, Japan Ketan
RAVEN is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. The goal of this type of analysis is to understand the response of such systems, in particular with respect to their probabilistic behavior, and to understand their predictability and its drivers, or lack thereof. Data mining capabilities are the cornerstone of such deep learning of system responses, and for this reason static data mining capabilities were added last fiscal year (FY 15). In real applications, when dealing with complex multi-scale, multi-physics systems, it seems natural that the relevance of the different scales and physics would evolve over time during transients. For these reasons the data mining capabilities have been extended to allow their application over time. This report describes the new RAVEN capabilities together with several simple analytical tests that explain their application and verify the proper implementation. The report concludes with the application of the newly implemented capabilities to the analysis of a simulation performed with the Bison code.
Analysis of measurement system as the mechatronics system
NASA Astrophysics Data System (ADS)
Giniotis, V.; Grattan, K. T. V.; Rybokas, M.; Bručas, D.
2010-07-01
The paper deals with a mechatronic arrangement for angle-measuring-system applications. The objects to be measured are circular raster scales, rotary encoders, and coded scales. The task of the measuring system is to determine the bias of an angle measurement standard, such as a circular scale, and to use the results for error correction and accuracy improvement of metal-cutting machines, coordinate measuring machines, robots, etc. Technical solutions are presented that apply active materials in smart piezoactuators implemented at several positions in the angular measuring equipment. The mechatronic measuring system is analysed as a complex integrated system, and some of its elements can be used as separate units. All these functional elements are described and discussed in the paper, with diagrams, error graphs, and examples of microdisplacement devices using the mechatronic elements.
Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knight, Samuel; Baker, Gavin Matthew; Gamell, Marc
2015-10-01
Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor developed sufficiently for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion and UINTAH) to examine the feasibility of these models and whether they permit insertion of new programming model elements into an existing code base.
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations.
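The SAUNA parallelization itself is not reproduced here. As a generic, hedged illustration of domain decomposition with encapsulated message passing in an SPMD style, the sketch below performs a 1D halo exchange with mpi4py (assumed available) before a local stencil update; every size and name is illustrative.

```python
# Generic 1D domain-decomposition halo exchange (illustrative, not the SAUNA code).
# Run with, e.g.:  mpiexec -n 4 python halo_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                   # interior cells owned by this rank
u = np.full(n_local + 2, float(rank))         # one ghost (halo) cell at each end

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# every rank runs the same code (SPMD); exchange halo values with the neighbours
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# with valid halos, a local smoothing sweep proceeds independently on each rank
# (physical-boundary halos simply keep their initial value in this toy)
u[1:-1] = 0.5 * (u[:-2] + u[2:])
print(rank, u)
```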
José, Marco V.; Govezensky, Tzipe; García, José A.; Bobadilla, Juan R.
2009-01-01
Herein, two genetic codes through which the primeval RNA code could have given rise to the standard genetic code (SGC) are derived. One of them, called extended RNA code type I, consists of all codons of the type RNY (purine-any base-pyrimidine) plus codons obtained by considering the RNA code in the second (NYR type) and third (YRN type) reading frames. The extended RNA code type II comprises all codons of the type RNY plus codons that arise from transversions of the RNA code in the first (YNY type) and third (RNR type) nucleotide bases. In order to test whether putative nucleotide sequences in the RNA World and in both extended RNA codes share the same scaling and statistical properties as those encountered in current prokaryotes, we used the genomes of four Eubacteria and three Archaea. For each prokaryote, we obtained the respective genomes obeying the RNA code or the extended RNA codes types I and II. In each case, we estimated the scaling properties of triplet sequences via a renormalization group approach, and we calculated the frequency distributions of distances for each codon. Remarkably, the scaling properties of the distance series of some codons from the RNA code and most codons from both extended RNA codes turned out to be identical or very close to the scaling properties of codons of the SGC. To test the robustness of these results, we show, via computer simulation experiments, that random mutations of current genomes, at rates of 10⁻¹⁰ per site per year over three billion years, were not enough to destroy the observed patterns. Therefore, we conclude that most current prokaryotes may still contain relics of the primeval RNA World and that both extended RNA codes may well represent two plausible evolutionary paths between the RNA code and the current SGC. PMID:19183813
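As a hedged illustration of the kind of analysis described (classifying codons by the RNY criterion and collecting frequency distributions of distances between successive occurrences of a codon), the snippet below processes a toy RNA string. It is not the authors' renormalization-group code; the sequence and all names are invented.

```python
from collections import defaultdict

PURINES, PYRIMIDINES = set("AG"), set("CU")

def is_rny(codon):
    """True for codons of the RNY type (purine - any base - pyrimidine)."""
    return codon[0] in PURINES and codon[2] in PYRIMIDINES

def codon_distance_series(rna, frame=0):
    """Distances (in codons) between successive occurrences of each codon."""
    codons = [rna[i:i + 3] for i in range(frame, len(rna) - 2, 3)]
    last_seen, distances = {}, defaultdict(list)
    for pos, codon in enumerate(codons):
        if codon in last_seen:
            distances[codon].append(pos - last_seen[codon])
        last_seen[codon] = pos
    return distances

# toy usage on a short artificial RNA sequence (illustrative only)
rna = "AUGGCUACGUAGCGAUACGAUGGCUACGUAA"
dists = codon_distance_series(rna)
rny_only = {c: d for c, d in dists.items() if is_rny(c)}
print(rny_only)
```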
NASA Astrophysics Data System (ADS)
Benettin, Paolo; Bertuzzo, Enrico
2018-04-01
This paper presents the tran-SAS package, which includes a set of codes to model solute transport and water residence times through a hydrological system. The model is based on a catchment-scale approach that aims at reproducing the integrated response of the system at one of its outlets. The codes are implemented in MATLAB and are meant to be easy to edit, so that users with minimal programming knowledge can adapt them to the desired application. The problem of large-scale solute transport has both theoretical and practical implications. On the one hand, the ability to represent the ensemble of water flow trajectories through a heterogeneous system helps unravel streamflow generation processes and allows us to make inferences on plant-water interactions. On the other hand, transport models are a practical tool that can be used to estimate the persistence of solutes in the environment. The core of the package is based on the implementation of an age master equation (ME), which is solved using general StorAge Selection (SAS) functions. The age ME is first converted into a set of ordinary differential equations, each addressing the transport of an individual precipitation input through the catchment, and then discretized using an explicit numerical scheme. Results show that the implementation is efficient and allows the model to run quickly. The numerical accuracy is critically evaluated and shown to be satisfactory in most cases of hydrologic interest. Additionally, a higher-order implementation is provided within the package to evaluate and, if necessary, improve the numerical accuracy of the results. The codes can be used to model streamflow age and solute concentration, but a number of additional outputs can be obtained by editing the codes to further advance the ability to understand and model catchment transport processes.
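tran-SAS itself is written in MATLAB. Purely as a language-neutral illustration (in Python) of an explicit, parcel-based time stepping in the spirit of the age master equation, the sketch below tracks each precipitation input separately and applies a uniform (random-sampling) SAS function. The uniform SAS choice, the forward-Euler stepping, and all names are assumptions, not the package's actual numerics, which support general SAS functions.

```python
import numpy as np

def run_uniform_sas(J, Q, S0, dt=1.0):
    """Explicit parcel-tracking scheme with a uniform (random-sampling) SAS function.

    J, Q : arrays of precipitation and discharge fluxes per time step
    S0   : initial storage, treated as a single parcel of age zero at t = 0
    Returns the mean age of discharge at every step (in units of dt).
    """
    parcels, ages = [S0], [0.0]       # water volume and age of each stored input
    mean_age_Q = []
    for t in range(len(J)):
        ages = [a + dt for a in ages]
        parcels.append(J[t] * dt)     # new precipitation enters as a fresh parcel
        ages.append(0.0)
        S = sum(parcels)
        frac = Q[t] * dt / S          # uniform SAS: discharge samples storage evenly
        mean_age_Q.append(sum(a * p for a, p in zip(ages, parcels)) / S)
        parcels = [p * (1.0 - frac) for p in parcels]  # every parcel loses the same fraction
    return np.array(mean_age_Q)

# toy usage: constant rain and discharge over 100 steps
age_of_discharge = run_uniform_sas(J=np.full(100, 2.0), Q=np.full(100, 2.0), S0=200.0)
print(age_of_discharge[-1])
```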
Characterization of Proxy Application Performance on Advanced Architectures. UMT2013, MCB, AMG2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howell, Louis H.; Gunney, Brian T.; Bhatele, Abhinav
2015-10-09
Three codes were tested at LLNL as part of a Tri-Lab effort to make detailed assessments of several proxy applications on various advanced architectures, with the eventual goal of extending these assessments to codes of programmatic interest running more realistic simulations. Teams from Sandia and Los Alamos tested proxy apps of their own. The focus in this report is on the LLNL codes UMT2013, MCB, and AMG2013. We present weak and strong MPI scaling results and studies of OpenMP efficiency on a large BG/Q system at LLNL, with comparison against similar tests on an Intel Sandy Bridge TLCC2 system. The hardware counters on BG/Q provide detailed information on many aspects of on-node performance, while information from the mpiP tool gives insight into the reasons for the differing scaling behavior on these two different architectures. Results from three more speculative tests are also included: one that exploits NVRAM as extended memory, one that studies performance under a power bound, and one that illustrates the effects of changing the torus network mapping on BG/Q.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the regime of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.
Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hales, J. D.; Tonks, M. R.; Chockalingam, K.
2015-03-01
Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
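The abstract does not give the AEH cell-problem equations. As a hedged one-dimensional illustration of extracting a homogenized parameter from a resolved microstructure (the case in which the AEH result for thermal conductivity reduces to the harmonic mean of the cell conductivities), consider the following sketch with invented phase properties.

```python
import numpy as np

def effective_conductivity_1d(k_cells, dT=1.0):
    """Steady 1D conduction across a laminate of unit-width cells.

    With fixed temperatures at both ends, the heat flux follows from the
    series resistances, and the effective conductivity k_eff = q * L / dT
    equals the harmonic mean (the 1D homogenization result).
    """
    L = len(k_cells)
    q = dT / np.sum(1.0 / k_cells)   # total resistance is the sum of 1/k_i
    return q * L / dT

# two-phase microstructure: 30% low-conductivity phase (illustrative values)
rng = np.random.default_rng(0)
k_cells = np.where(rng.random(1000) < 0.3, 1.0, 10.0)
print("effective k    :", effective_conductivity_1d(k_cells))
print("harmonic mean  :", 1.0 / np.mean(1.0 / k_cells))
print("arithmetic mean:", k_cells.mean())
```

In two or three dimensions the AEH cell problem must be solved numerically rather than in closed form; this sketch only conveys the idea of a microscale calculation feeding a macroscale constant.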
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
NASA Technical Reports Server (NTRS)
Mehta, Manish; Seaford, Mark; Kovarik, Brian; Dufrene, Aaron; Solly, Nathan
2014-01-01
ATA-002 Technical Team has successfully designed, developed, tested and assessed the SLS Pathfinder propulsion systems for the Main Base Heating Test Program. Major outcomes of the Pathfinder Test Program: reached 90% of full-scale chamber pressure; achieved all engine/motor design parameter requirements; reached steady plume flow behavior in less than 35 msec; held steady chamber pressure for 60 to 100 msec during engine/motor operation; demonstrated model engine/motor performance similar to the full-scale SLS system; and mitigated nozzle throat and combustor thermal erosion. Test data show good agreement with numerical prediction codes. Next phase of the ATA-002 Test Program: design and development of the SLS OML for the Main Base Heating Test; tweak the BSRM design to optimize performance; and tweak the CS-REM design to increase robustness. MSFC Aerosciences and CUBRC have the capability to develop sub-scale propulsion systems to meet desired performance requirements for short-duration testing.
2006-06-01
The authors thank Denise Aylor (613) and Erick Satchell (613) for performing the cavitation erosion measurements and JoAnn Burkholder (North Carolina ...)
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena
2016-01-01
Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
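The nonlinear theory itself is not spelled out in the abstract. As a hedged numerical illustration of the prior concept it generalizes (information-limiting, or differential, correlations in a linear population code), the sketch below computes the linear Fisher information I = f'ᵀ Σ⁻¹ f' with Σ = Σ₀ + ε f' f'ᵀ and shows that it saturates near 1/ε as the population grows. The tuning-curve slopes and noise levels are invented.

```python
import numpy as np

def linear_fisher_info(n, eps=0.01, seed=0):
    """Linear Fisher information of an n-neuron population with
    differential (information-limiting) correlations of strength eps."""
    rng = np.random.default_rng(seed)
    f_prime = rng.normal(1.0, 0.2, size=n)           # tuning-curve slopes
    sigma0 = np.diag(rng.uniform(0.5, 1.5, size=n))  # independent noise covariance
    sigma = sigma0 + eps * np.outer(f_prime, f_prime)
    return f_prime @ np.linalg.solve(sigma, f_prime)

for n in (10, 100, 1000):
    print(n, linear_fisher_info(n))   # grows with n but saturates below 1/eps = 100
```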
Digital Systems Validation Handbook. Volume 2. Chapter 18. Avionic Data Bus Integration Technology
1993-11-01
interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion software, which make up digital...1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error detection and...formulate all the significant behavior of a system. MULTIVERSION PROGRAMMING. N-version programming. N-VERSION PROGRAMMING. The independent coding of a
TRANSITION FROM KINETIC TO MHD BEHAVIOR IN A COLLISIONLESS PLASMA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Tulasi N.; Matthaeus, William H.; Shay, Michael A.
The study of kinetic effects in heliospheric plasmas requires representation of dynamics at sub-proton scales, but in most cases the system is driven by magnetohydrodynamic (MHD) activity at larger scales. The latter requirement challenges available computational resources, which raises the question of how large such a system must be to exhibit MHD traits at large scales while kinetic behavior is accurately represented at small scales. Here we study this implied transition from kinetic to MHD-like behavior using particle-in-cell (PIC) simulations, initialized using an Orszag–Tang Vortex. The PIC code treats protons, as well as electrons, kinetically, and we address the question of interest by examining several different indicators of MHD-like behavior.
Trinity Phase 2 Open Science: CTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggirello, Kevin Patrick; Vogler, Tracy
CTH is an Eulerian hydrocode developed by Sandia National Laboratories (SNL) to solve a wide range of shock wave propagation and material deformation problems. Adaptive mesh refinement is also used to improve efficiency for problems with a wide range of spatial scales. The code has a history of running on a variety of computing platforms ranging from desktops to massively parallel distributed-data systems. For the Trinity Phase 2 Open Science campaign, CTH was used to study mesoscale simulations of the hypervelocity penetration of granular SiC powders. The simulations were compared to experimental data. A scaling study of CTH up to 8192 KNL nodes was also performed, and several improvements were made to the code to improve the scalability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.
Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of $k_\perp^{-1.3}$. The kinetic code shows a spectral slope of $k_\perp^{-1.5}$ for smaller simulation domain, and $k_\perp^{-1.3}$ for larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.
2005-07-01
Rather than emitting pulses, passive radar systems rely on "illuminators of opportunity," such as TV and FM radio, to illuminate potential targets. These systems are attractive since they allow receivers to operate without emitting energy, rendering them covert. Until recently, most of the research regarding passive radar has focused on detecting and tracking targets. This dissertation focuses on extending the capabilities of passive radar systems to include automatic target recognition. The target recognition algorithm described in this dissertation uses the radar cross section (RCS) of potential targets, collected over a short period of time, as the key information for target recognition. To make the simulated RCS as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. An extended Kalman filter (EKF) estimates the target's orientation (and uncertainty in the estimate) from velocity measurements obtained from the passive radar tracker. Coupling the aircraft orientation and state with the known antenna locations permits computation of the incident and observed azimuth and elevation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of potential target classes as a function of these angles. Thus, the approximated incident and observed angles allow the appropriate RCS to be extracted from a database of FISC results. Using this process, the RCS of each aircraft in the target class is simulated as though each is executing the same maneuver as the target detected by the system. Two additional scaling processes are required to transform the RCS into a power profile (magnitude only) simulating the signal in the receiver. First, the RCS is scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. Then, the Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, further scaling the RCS. A Rician likelihood model compares the scaled RCS of the illuminated aircraft with those of the potential targets. To improve the robustness of the result, the algorithm jointly optimizes over feasible orientation profiles and target types via dynamic programming.
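The final classification step described above lends itself to a compact illustration: compare the measured power profile against each candidate's simulated, propagation-scaled RCS profile under a Rician measurement model and keep the best-scoring class. The sketch below is only a minimal illustration of that likelihood comparison; the profile values, class names, and noise level are invented, and the FISC/AREPS/NEC2 scaling chain and the dynamic-programming search over orientation profiles are not reproduced.

```python
# Hypothetical sketch: classify a target by comparing a measured amplitude
# profile against pre-simulated, propagation-scaled RCS profiles under a
# Rician measurement model. Profile values and class names are made up.
import numpy as np
from scipy.stats import rice

def rician_log_likelihood(measured, predicted, sigma):
    """Sum of Rician log-densities of the measured amplitudes given predicted ones."""
    b = np.asarray(predicted) / sigma          # noncentrality / noise scale
    return rice.logpdf(np.asarray(measured), b, scale=sigma).sum()

def classify(measured, candidate_profiles, sigma=0.5):
    """Return the candidate class whose scaled RCS profile best explains the data."""
    scores = {name: rician_log_likelihood(measured, prof, sigma)
              for name, prof in candidate_profiles.items()}
    return max(scores, key=scores.get), scores

# Toy profiles standing in for FISC/AREPS/NEC2-scaled RCS time histories.
profiles = {"class_A": [1.0, 1.2, 0.9, 1.1], "class_B": [0.4, 0.5, 0.6, 0.5]}
measured = [1.1, 1.0, 0.8, 1.2]
best, scores = classify(measured, profiles)
print(best, scores)
```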
The Design of PSB-VVER Experiments Relevant to Accident Management
NASA Astrophysics Data System (ADS)
Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander
Experimental programs carried out in integral test facilities are relevant for validating best-estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing measured and calculated parameters and determining whether a computer code has an adequate capability to predict the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix up to the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the integral test facility concerned (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of thermal losses to the environment. This issue particularly affects small-scale facilities and bears on the scaling approach related to the power and volume of the facility.
Fully kinetic 3D simulations of the Hermean magnetosphere under realistic conditions: a new approach
NASA Astrophysics Data System (ADS)
Amaya, Jorge; Gonzalez-Herrero, Diego; Lembège, Bertrand; Lapenta, Giovanni
2017-04-01
Simulations of the magnetosphere of planets are usually performed using the MHD and the hybrid approaches. However, these two methods still rely on approximations for the computation of the pressure tensor, and require the neutrality of the plasma at every point of the domain by construction. These approximations undermine the role of electrons in the emergence of plasma features in the magnetosphere of planets. The high mobility of electrons, their characteristic time and space scales, and the lack of perfect neutrality are the source of many observed phenomena in the magnetospheres, including the turbulence energy cascade, magnetic reconnection, particle acceleration in the shock front and the formation of current systems around the magnetosphere. Fully kinetic codes are extremely demanding of computing time, and have been unable to perform simulations of the full magnetosphere at the real scales of a planet with realistic plasma conditions. There are two main reasons for this: 1) explicit codes must resolve the electron scales, limiting the time and space discretisation, and 2) current versions of semi-implicit codes are unstable for cell sizes larger than a few Debye lengths. In this work we present new simulations performed with ECsim, an Energy Conserving semi-implicit method [1], that can overcome these two barriers. We compare the solutions obtained with ECsim with the solutions obtained by the classic semi-implicit code iPic3D [2]. The new simulations with ECsim demand a larger computational effort, but the time and space discretisations are larger than those in iPic3D, allowing for a faster simulation of the full planetary environment. The new code, ECsim, can reach a resolution allowing the capture of significant large scale physics without losing kinetic electron information, such as wave-electron interaction and non-Maxwellian electron velocity distributions [3]. The code is able to better capture the thickness of the different boundary layers of the magnetosphere of Mercury. Electron kinetics are consistent with the spatial and temporal scale resolutions. Simulations are compared with measurements from the MESSENGER spacecraft, showing a better fit when compared against the classic fully kinetic code iPic3D. These results show that the new generation of Energy Conserving semi-implicit codes can be used for an accurate analysis and interpretation of particle data from magnetospheric missions like BepiColombo and MMS, including electron velocity distributions and electron temperature anisotropies. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint arXiv:1602.06326. [2] Markidis, S., & Lapenta, G. (2010). Multi-scale simulations of plasma with iPIC3D. Mathematics and Computers in Simulation, 80(7), 1509-1519. [3] Lapenta, G., Gonzalez-Herrero, D., & Boella, E. (2016). Multiple scale kinetic simulations with the energy conserving semi implicit particle in cell (PIC) method. arXiv preprint arXiv:1612.08289.
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
Android platform based smartphones for a logistical remote association repair framework.
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-06-25
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use.
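The Linear Projective Transform step amounts to rectifying the captured QR-code region by mapping its four detected corner points onto a square. The abstract does not give the exact formulation, so the following is a generic homography sketch with hypothetical corner coordinates; it is not the paper's LPT implementation.

```python
# Generic sketch of projective (homography) rectification of a QR-code image
# region from four detected corner points; corner coordinates are hypothetical.
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 projective transform mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    """Map (x, y) points through H, dividing by the projective coordinate."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]

corners = [(102, 88), (410, 95), (398, 402), (110, 390)]   # detected, skewed
target  = [(0, 0), (300, 0), (300, 300), (0, 300)]         # rectified square
H = homography_from_points(corners, target)
print(apply_homography(H, corners))   # ~ the rectified square corners
```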
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., code lengths usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR generates a one-bit binary code for each dimension and simultaneously ranks the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR achieves performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
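The bit-generation and ranking idea can be sketched compactly: threshold each data dimension to obtain one candidate bit, score each bit with a cost function, and keep only the top-ranked bits. The actual MCR cost function is not reproduced below; the variance-based proxy, the feature dimensions, and the code length are illustrative assumptions.

```python
# Illustrative sketch of "one bit per dimension, keep the top-ranked bits".
# The ranking cost here (negative variance of the centered dimension) is a
# stand-in, not the min-cost ranking objective from the paper.
import numpy as np

def learn_short_codes(X, n_bits):
    thresholds = np.median(X, axis=0)            # one threshold per dimension
    bits = (X > thresholds).astype(np.uint8)     # one candidate bit per dimension
    cost = -np.var(X - thresholds, axis=0)       # proxy cost: lower = more informative
    keep = np.argsort(cost)[:n_bits]             # top-ranked (minimum-cost) bits
    return bits[:, keep], thresholds, keep

def encode(X, thresholds, keep):
    return (X > thresholds).astype(np.uint8)[:, keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))                 # hypothetical 256-d features
codes, thr, keep = learn_short_codes(X, n_bits=64)
print(codes.shape)                               # (1000, 64) -- a 64-bit code
```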
A multiphysics and multiscale software environment for modeling astrophysical systems
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; McMillan, Steve; Harfst, Stefan; Groen, Derek; Fujii, Michiko; Nualláin, Breanndán Ó.; Glebbeek, Evert; Heggie, Douglas; Lombardi, James; Hut, Piet; Angelou, Vangelis; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Gaburov, Evghenii; Izzard, Rob; Jurić, Mario; Justham, Stephen; Sottoriva, Andrea; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel
2009-05-01
We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multiscale and multiphysics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe three examples calculated using MUSE: the merger of two galaxies, the merger of two evolving stars, and a hybrid N-body simulation. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
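A useful way to picture the MUSE approach is as a thin, uniform contract that every domain solver implements so a driver can advance heterogeneous codes together. The sketch below is schematic only; the class and method names are invented and do not correspond to the real MUSE interface.

```python
# Schematic sketch of a MUSE-like module contract: each domain solver exposes
# the same minimal interface and a driver advances them in lock-step.
# Class and method names are illustrative, not the real MUSE API.
from abc import ABC, abstractmethod

class DomainModule(ABC):
    @abstractmethod
    def evolve(self, t_end):          # advance internal state to time t_end
        ...
    @abstractmethod
    def get_particles(self):          # expose state for cross-domain coupling
        ...

class ToyDynamics(DomainModule):
    def __init__(self):
        self.t, self.state = 0.0, {"positions": [0.0]}
    def evolve(self, t_end):
        self.t = t_end                # placeholder for a real N-body step
    def get_particles(self):
        return self.state

def coupled_run(modules, t_final, dt):
    t = 0.0
    while t < t_final:
        t += dt
        for m in modules:
            m.evolve(t)               # each solver advances with its own internals
        # ... exchange data between modules here (e.g. masses, positions) ...

coupled_run([ToyDynamics(), ToyDynamics()], t_final=1.0, dt=0.1)
```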
Source Code Analysis Laboratory (SCALe)
2012-04-01
Versus Flagged Nonconformities (FNC) Software System TP/FNC Ratio Mozilla Firefox version 2.0 6/12 50% Linux kernel version 2.6.15 10/126 8...is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular...servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux . A
Real science at the petascale.
Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V
2009-06-28
We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.
Arabian, Sandra S; Marcus, Michael; Captain, Kevin; Pomphrey, Michelle; Breeze, Janis; Wolfe, Jennefer; Bugaev, Nikolay; Rabinovici, Reuven
2015-09-01
Analyses of data aggregated in state and national trauma registries provide the platform for clinical, research, development, and quality improvement efforts in trauma systems. However, the interhospital variability and accuracy in data abstraction and coding have not yet been directly evaluated. This multi-institutional, Web-based, anonymous study examines interhospital variability and accuracy in data coding and scoring by registrars. Eighty-two American College of Surgeons (ACS)/state-verified Level I and II trauma centers were invited to determine different data elements including diagnostic, procedure, and Abbreviated Injury Scale (AIS) coding as well as selected National Trauma Data Bank definitions for the same fictitious case. Variability and accuracy in data entries were assessed by the maximal percent agreement among the registrars for the tested data elements, and 95% confidence intervals were computed to compare this level of agreement to the ideal value of 100%. Variability and accuracy in all elements were compared (χ testing) based on Trauma Quality Improvement Program (TQIP) membership, level of trauma center, ACS verification, and registrar's certifications. Fifty registrars (61%) completed the survey. The overall accuracy for all tested elements was 64%. Variability was noted in all examined parameters except for the place of occurrence code in all groups and the lower extremity AIS code in Level II trauma centers and in the Certified Specialist in Trauma Registry- and Certified Abbreviated Injury Scale Specialist-certified registrar groups. No differences in variability were noted when groups were compared based on TQIP membership, level of center, ACS verification, and registrar's certifications, except for prehospital Glasgow Coma Scale (GCS), where TQIP respondents agreed more than non-TQIP centers (p = 0.004). There is variability and inaccuracy in interhospital data coding and scoring of injury information. This finding casts doubt on the validity of registry data used in all aspects of trauma care and injury surveillance.
JDFTx: Software for joint density-functional theory
Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...
2017-11-14
Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.
Computing with scale-invariant neural representations
NASA Astrophysics Data System (ADS)
Howard, Marc; Shankar, Karthik
The Weber-Fechner law is perhaps the oldest quantitative relationship in psychology. Consider the problem of the brain representing a function f(x). Different neurons have receptive fields that support different parts of the range, such that the i-th neuron has a receptive field centered at x_i. Weber-Fechner scaling refers to the finding that the width of the receptive field scales with x_i, as does the difference between the centers of adjacent receptive fields. Weber-Fechner scaling is exponentially resource-conserving. Neurophysiological evidence suggests that neural representations obey Weber-Fechner scaling in the visual system and perhaps other systems as well. We describe an optimality constraint that is solved by Weber-Fechner scaling, providing an information-theoretic rationale for this principle of neural coding. Weber-Fechner scaling can be generated within a mathematical framework using the Laplace transform. Within this framework, simple computations such as translation, correlation and cross-correlation can be accomplished. This framework can in principle be extended to provide a general computational language for brain-inspired cognitive computation on scale-invariant representations. Supported by NSF PHY 1444389 and the BU Initiative for the Physics and Mathematics of Neural Systems.
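Stated compactly in the notation above (this is a generic restatement, not the authors' derivation): if both the receptive-field width and the spacing between adjacent centers grow in proportion to the center value x_i, the centers form a geometric sequence, so the number of neurons needed to tile a stimulus range grows only logarithmically with that range.

```latex
% Weber-Fechner scaling of receptive-field centers x_i and widths sigma_i
x_{i+1} - x_{i} \;\propto\; x_{i}
\quad\Longrightarrow\quad
x_{i} \;\approx\; x_{0}\,(1 + c)^{i},
\qquad \sigma_{i} \;\propto\; x_{i},
\qquad N \;\propto\; \log\!\left(\frac{x_{\max}}{x_{0}}\right).
```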
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
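As a minimal illustration of the kind of kernel such a framework generates from a PDE description, the sketch below applies a standard fourth-order central finite-difference first derivative along one axis of a periodic grid. The stencil coefficients are the usual ones; the grid size and test function are arbitrary choices, and this is not Chemora-generated code.

```python
# Minimal sketch of a generated finite-difference kernel: 4th-order central
# first derivative along axis 0 of a periodic grid (standard coefficients).
import numpy as np

def ddx_4th_order(f, dx):
    """(f[i-2] - 8 f[i-1] + 8 f[i+1] - f[i+2]) / (12 dx), periodic boundaries."""
    return (np.roll(f, 2, axis=0) - 8.0 * np.roll(f, 1, axis=0)
            + 8.0 * np.roll(f, -1, axis=0) - np.roll(f, -2, axis=0)) / (12.0 * dx)

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
f = np.sin(x)
err = np.max(np.abs(ddx_4th_order(f, x[1] - x[0]) - np.cos(x)))
print(f"max error vs cos(x): {err:.2e}")   # ~2e-7 here; error scales as dx**4
```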
Role of Utility and Inference in the Evolution of Functional Information
Sharov, Alexei A.
2009-01-01
Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are constructed within each communication system to represent reality and they evolve towards higher adaptability on a long time scale. PMID:20160960
NR-code: Nonlinear reconstruction code
NASA Astrophysics Data System (ADS)
Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming
2018-04-01
NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.
Performance of the fusion code GYRO on four generations of Cray computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahey, Mark R
2014-01-01
GYRO is a code used for the direct numerical simulation of plasma microturbulence. It has been ported to a variety of modern MPP platforms including several modern commodity clusters, IBM SPs, and Cray XC, XT, and XE series machines. We briefly describe the mathematical structure of the equations, the data layout, and the redistribution scheme. Also, while the performance and scaling of GYRO on many of these systems has been shown before, here we show the comparative performance and scaling on four generations of Cray supercomputers including the newest addition - the Cray XC30. The more recently added hybrid OpenMP/MPI implementation also shows a great deal of promise on custom HPC systems that utilize fast CPUs and proprietary interconnects. Four machines of varying sizes were used in the experiment, all of which are located at the National Institute for Computational Sciences at the University of Tennessee at Knoxville and Oak Ridge National Laboratory. The advantages, limitations, and performance of using each system are discussed.
A microkernel design for component-based parallel numerical software systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.
1999-01-13
What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely--a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
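A toy sketch of the two groups of services listed above follows, using Python's importlib as a stand-in for run-time loading of dynamic link libraries; the class and method names are illustrative and do not come from the ALICE project.

```python
# Toy sketch of a microkernel's object services: load components at run time,
# count references, and destroy objects when the last reference is released.
# importlib stands in for dynamic-link-library loading; names are illustrative.
import importlib

class MicroKernel:
    def __init__(self):
        self._objects = {}            # handle -> [object, refcount]
        self._next = 0

    def load_component(self, module_name, factory_name, *args):
        """Load a module by name and construct an object via its factory."""
        module = importlib.import_module(module_name)
        obj = getattr(module, factory_name)(*args)
        handle, self._next = self._next, self._next + 1
        self._objects[handle] = [obj, 1]
        return handle

    def retain(self, handle):
        self._objects[handle][1] += 1

    def release(self, handle):
        entry = self._objects[handle]
        entry[1] -= 1
        if entry[1] == 0:             # object life ends with its last reference
            del self._objects[handle]

kernel = MicroKernel()
h = kernel.load_component("collections", "OrderedDict")  # any importable factory
kernel.retain(h); kernel.release(h); kernel.release(h)
print(len(kernel._objects))           # 0 -- object destroyed
```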
2013 R&D 100 Award: "Miniapps" Bolster High Performance Computing
Belak, Jim; Richards, David
2018-06-12
Two Livermore computer scientists served on a Sandia National Laboratories-led team that developed Mantevo Suite 1.0, the first integrated suite of small software programs, also called "miniapps," to be made available to the high performance computing (HPC) community. These miniapps facilitate the development of new HPC systems and the applications that run on them. Miniapps (miniature applications) serve as stripped down surrogates for complex, full-scale applications that can require a great deal of time and effort to port to a new HPC system because they often consist of hundreds of thousands of lines of code. The miniapps are a prototype that contains some or all of the essentials of the real application but with many fewer lines of code, making the miniapp more versatile for experimentation. This allows researchers to more rapidly explore options and optimize system design, greatly improving the chances the full-scale application will perform successfully. These miniapps have become essential tools for exploring complex design spaces because they can reliably predict the performance of full applications.
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, M
2006-12-12
ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.
Makwana, K. D.; Zhdankin, V.; Li, H.; ...
2015-04-10
We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of $k_\perp^{-1.3}$. The kinetic code shows a spectral slope of $k_\perp^{-1.5}$ for smaller simulation domain, and $k_\perp^{-1.3}$ for larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code, implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
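The large-scale piece that COLA-style codes handle analytically is Lagrangian perturbation theory; at lowest order (the Zel'dovich approximation) particles are displaced from their unperturbed grid positions by a displacement field derived from the linear density field. The one-dimensional toy below (not pycola's API) shows just that step with an FFT; the grid size, power-spectrum slope, and growth factor are arbitrary choices.

```python
# 1D toy of the Zel'dovich (first-order LPT) displacement that COLA-type codes
# use for the large-scale dynamics: psi_k = i * delta_k / k, x = q + D * psi(q).
# Grid size, power-law spectrum, and growth factor D are arbitrary choices here.
import numpy as np

rng = np.random.default_rng(1)
N, L = 256, 100.0                       # grid points, box size (arbitrary units)
q = np.arange(N) * L / N                # unperturbed (Lagrangian) positions
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

# A random linear density field with a simple falling power spectrum.
delta_k = np.fft.fft(rng.normal(size=N))
delta_k[1:] *= np.abs(k[1:]) ** -0.5
delta_k[0] = 0.0                        # zero mean

psi_k = np.zeros_like(delta_k)
psi_k[1:] = 1j * delta_k[1:] / k[1:]    # d(psi)/dx = -delta  =>  psi_k = i delta_k / k
psi = np.real(np.fft.ifft(psi_k))

D = 0.3                                 # linear growth factor (toy value)
x = (q + D * psi) % L                   # displaced (Eulerian) particle positions
print(x[:5])
```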
Zonfrillo, Mark R; Weaver, Ashley A; Gillich, Patrick J; Price, Janet P; Stitzel, Joel D
2015-01-01
There has been a longstanding desire for a map to convert International Classification of Diseases (ICD) injury codes to Abbreviated Injury Scale (AIS) codes to reflect the severity of those diagnoses. The Association for the Advancement of Automotive Medicine (AAAM) was tasked by European Union representatives to create a categorical map classifying diagnoses codes as serious injury (Abbreviated Injury Scale [AIS] 3+), minor/moderate injury (AIS 1/2), or indeterminate. This study's objective was to map injury-related ICD-9-CM (clinical modification) and ICD-10-CM codes to these severity categories. Approximately 19,000 ICD codes were mapped, including injuries from the following categories: amputations, blood vessel injury, burns, crushing injury, dislocations/sprains/strains, foreign body, fractures, internal organ, nerve/spinal cord injury, intracranial, laceration, open wounds, and superficial injury/contusion. Two parallel activities were completed to create the maps: (1) An in-person expert panel and (2) an electronic survey. The panel consisted of expert users of AIS and ICD from North America, the United Kingdom, and Australia. The panel met in person for 5 days, with follow-up virtual meetings to create and revise the maps. Additional qualitative data were documented to resolve potential discrepancies in mapping. The electronic survey was completed by 95 injury coding professionals from North America, Spain, Australia, and New Zealand over 12 weeks. ICD-to-AIS maps were created for: ICD-9-CM and ICD-10-CM. Both maps indicated whether the corresponding AIS 2005/Update 2008 severity score for each ICD code was AIS 3+, 1/2, or indeterminable. Though some ICD codes could be mapped to multiple AIS codes, the maximum severity of all potentially mapped injuries determined the final severity categorization. The in-person panel consisted of 13 experts, with 11 Certified AIS specialists (CAISS) with a median of 8 years and an average of 15 years of coding experience. Consensus was reached for AIS severity categorization for all injury-related ICD codes. There were 95 survey respondents, with a median of 8 years of injury coding experience. Approximately 15 survey responses were collected per ICD code. Results from the 2 activities were compared, and any discrepancies were resolved using additional qualitative and quantitative data from the in-person panel and survey results, respectively. Robust maps of ICD-9-CM and ICD-10-CM injury codes to AIS severity categories (3+ versus <3) were successfully created from an in-person panel discussion and electronic survey. These maps provide a link between the common ICD diagnostic lexicons and the AIS severity coding system and are of value to injury researchers, public health scientists, and epidemiologists using large databases without available AIS coding.
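The categorization rule itself is simple to state: when an ICD code can correspond to several AIS severities, the maximum mapped severity decides the category, and codes with no usable severity information are indeterminate. The sketch below uses made-up placeholder codes, not entries from the actual AAAM map.

```python
# Sketch of the categorization rule described above, using placeholder codes
# and AIS severities (not the actual AAAM map): the maximum mapped severity
# decides "serious" (AIS 3+) vs "minor/moderate" (AIS 1/2) vs indeterminate.
POSSIBLE_AIS = {                  # placeholder codes -> possible AIS severities
    "EXAMPLE-FRACTURE": [2, 3],   # could map to moderate or serious injury
    "EXAMPLE-CONTUSION": [1],
    "EXAMPLE-UNSPECIFIED": [],    # no usable severity information
}

def severity_category(icd_code):
    severities = POSSIBLE_AIS.get(icd_code, [])
    if not severities:
        return "indeterminate"
    return "AIS 3+" if max(severities) >= 3 else "AIS 1/2"

for code in POSSIBLE_AIS:
    print(code, "->", severity_category(code))
```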
Architectural Visualization of C/C++ Source Code for Program Comprehension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panas, T; Epperly, T W; Quinlan, D
2006-09-01
Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.
RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations
NASA Astrophysics Data System (ADS)
Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy
RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.
Collisionless stellar hydrodynamics as an efficient alternative to N-body methods
NASA Astrophysics Data System (ADS)
Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard
2013-01-01
The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
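For reference, the first two velocity moments of the collisionless Boltzmann equation, which a scheme of this kind evolves on the mesh, take the familiar conservation-law form below (this is the generic form of the Jeans equations; the specific closure for the dispersion tensor used by the authors is not reproduced).

```latex
% First two moments of the collisionless Boltzmann equation for density rho,
% mean velocity v_i, velocity-dispersion tensor Pi_ij, and potential Phi.
\frac{\partial \rho}{\partial t}
  + \frac{\partial}{\partial x_j}\!\left(\rho\, v_j\right) = 0,
\qquad
\frac{\partial \left(\rho\, v_i\right)}{\partial t}
  + \frac{\partial}{\partial x_j}\!\left(\rho\, v_i v_j + \Pi_{ij}\right)
  = -\,\rho\,\frac{\partial \Phi}{\partial x_i}.
```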
ELEFANT: a user-friendly multipurpose geodynamics code
NASA Astrophysics Data System (ADS)
Thieulot, C.
2014-07-01
A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique and Lagrangian markers are used to track materials in the simulation domain which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics presented as case study illustrates the potential of the code to model large scale high resolution thermo-mechanically coupled free surface flows.
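The Marker-in-Cell bookkeeping described above reduces, each time step, to advecting a cloud of material markers through the computed velocity field. The sketch below shows a minimal second-order (midpoint) marker advection step; the rotational velocity field, marker count, and material assignment are stand-ins for the finite element solution and are not taken from ELEFANT.

```python
# Minimal sketch of marker advection as used in marker-in-cell methods:
# markers carry a material id and are moved with a 2nd-order midpoint step.
# The velocity field (solid-body rotation) stands in for the FE solution.
import numpy as np

def velocity(x, y):
    return -y, x                       # toy rotational flow about the origin

def advect_markers(px, py, dt, n_steps):
    for _ in range(n_steps):
        u, v = velocity(px, py)                    # velocity at marker positions
        xm, ym = px + 0.5 * dt * u, py + 0.5 * dt * v
        um, vm = velocity(xm, ym)                  # midpoint (RK2) evaluation
        px, py = px + dt * um, py + dt * vm
    return px, py

rng = np.random.default_rng(2)
px, py = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)
material = (px < 0).astype(int)        # two materials tracked by the markers
px, py = advect_markers(px, py, dt=0.01, n_steps=100)
print(px[:3], py[:3], material[:3])
```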
Measuring Attitudes That Bias Teacher Observation.
ERIC Educational Resources Information Center
Marston, Paul T.; Clements, Richard O.
It was hypothesized that individual differences in observer attitudes might affect the way observers code teacher behavior. A questionnaire assessing educational attitudes was given to 22 subjects who were trained on 3 classroom observation systems: the Flanders Interaction Analysis, the Classroom Observation Scales, and the Classroom Observation…
NASA Astrophysics Data System (ADS)
Aleksandrov, A. P.; Berezovoy, A. N.; Galper, A. M.; Grachev, V. M.; Dmitrenko, V. V.; Kirillov-Ugryumov, V. G.; Lebedev, V. V.; Lyakhov, V. A.; Moiseyev, A. A.; Ulin, S. Y.
1985-09-01
Coding collimators are used to improve the angular resolution of gamma-ray telescopes at energies above 50 MeV. However, the interaction of cosmic rays with the collimation material can lead to the appearance of a gamma-ray background flux which can have a deleterious effect on measurement efficiency. An experiment was performed on the Salyut-6-Soyuz spacecraft system with the Elena-F small-scale gamma-ray telescope in order to measure the magnitude of this background. It is shown that, even at a zenith angle of approximately zero degrees (the angle at which the gamma-ray observations are made), the coding collimator has only an insignificant effect on the background conditions.
Tools and techniques for estimating high intensity RF effects
NASA Astrophysics Data System (ADS)
Zacharias, Richard L.; Pennock, Steve T.; Poggio, Andrew J.; Ray, Scott L.
1992-01-01
Tools and techniques for estimating and measuring coupling and component disturbance for avionics and electronic controls are described. A finite-difference time-domain (FD-TD) modeling code, TSAR, which is used to predict coupling, is described. This code can quickly generate a mesh model to represent the test object. Some recent applications as well as the advantages and limitations of using such a code are described. Facilities and techniques for making low-power coupling measurements and for making direct injection test measurements of device disturbance are also described. Some scaling laws for coupling and device effects are presented. A method for extrapolating these low-power test results to high-power full-system effects is presented.
Avionic Data Bus Integration Technology
1991-12-01
address the hardware-software interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion ...the SCP. In 1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error... MULTIVERSION PROGRAMMING. N-version programming. 226 N-VERSION PROGRAMMING. The independent coding of a number, N, of redundant computer programs that
GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.
Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik
2008-03-01
Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides high simulation performance on quite modest numbers of standard cluster nodes.
Barone, Lavinia; Guiducci, Valentina
2009-07-01
Mental representations of attachment in a sample of adults with Eating Disorders (ED) were assessed using the Adult Attachment Interview (AAI). Sixty subjects participated in the study: 30 non-clinical and 30 clinical. The results obtained showed a specific distribution of attachment patterns in the clinical sample: 10% Free/Autonomous (F), 47% Insecure-Dismissing (Ds), 17% Insecure-Entangled/Preoccupied (E) and about 26% disorganized (CC/U). The two samples differed in their attachment pattern distribution and were significantly different on some coding system scales. Further information was obtained by analyzing differences between the three ED subtypes considered (i.e. Anorexia Nervosa, Bulimia Nervosa and Binge Eating Disorder) and by investigating the differential role of the two parental figures in the definition of attachment representations. Results showed potential benefits in using the AAI coding system scales in addition to the main classifications in order to understand better the developmental issues involved in these disorders. Implications for developmental research and clinical nosology are discussed.
Multiscale integral analysis of a HT leakage in a fusion nuclear power plant
NASA Astrophysics Data System (ADS)
Velarde, M.; Fradera, J.; Perlado, J. M.; Zamora, I.; Martínez-Saban, E.; Colomer, C.; Briani, P.
2016-05-01
This work presents an example of the application of an integral methodology based on a multiscale analysis that covers the whole tritium cycle within a nuclear fusion power plant, from a micro scale, analyzing key components where tritium is leaked through permeation, to a macro scale, considering its atmospheric transport. A leakage from the nuclear power plant (NPP) primary to the secondary side of a heat exchanger (HEX) is considered for the present example. Both primary and secondary loop coolants are assumed to be He. The leakage is placed inside the HEX, releasing tritium in elemental tritium (HT) form to the secondary loop, where it permeates through the piping structural material to the exterior. The Heating, Ventilation and Air Conditioning (HVAC) system removes the leaked tritium towards the NPP exhaust. The HEX is modelled with system codes coupled to Computational Fluid Dynamics (CFD) codes to account for tritium dispersion inside the plant buildings and the site environment. Finally, tritium dispersion is calculated with an atmospheric transport code and a dosimetry analysis is carried out. Results show how the implemented methodology is capable of assessing the impact of tritium from the microscale to the atmospheric scale, including the dosimetric aspect.
Density Functional O(N) Calculations
NASA Astrophysics Data System (ADS)
Ordejón, Pablo
1998-03-01
We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling (P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)). The method uses arbitrarily flexible and complete Atomic Orbital (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code (D. Sánchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)), in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant temperature and constant pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performances are limited by the inhomogeneous background in the sensor field of view, which impacts strongly on target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code, called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of the small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
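As a rough illustration of the synthesis step described above, the sketch below sums a grid of temperature fluctuations along each line of sight, weighted by a per-altitude transfer function; the grid shape, the transfer-function values, and the choice of integration axis are assumptions for illustration, not the BRUTE3D algorithm itself.

```python
import numpy as np

# Hypothetical 3-D grid of normalized temperature fluctuations dT/T (nz, ny, nx).
rng = np.random.default_rng(0)
dT = rng.normal(scale=0.01, size=(64, 128, 128))

# Assumed SAMM-2-style transfer function: radiance response per unit normalized
# temperature fluctuation, here a simple decreasing function of altitude index.
transfer = np.linspace(1.0, 0.2, dT.shape[0])[:, None, None]

# Synthesize an image of radiance fluctuations by integrating each
# (illustrative) line of sight along the altitude axis.
radiance_fluct = (transfer * dT).sum(axis=0)
print(radiance_fluct.shape)  # (128, 128) image seen by the sensor
```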
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This was an exploratory study to enhance our understanding of problems involved in developing large-scale applications in a heterogeneous distributed environment. It is likely that the large-scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
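The grid-to-grid translation step discussed above can be sketched with a regular-grid interpolator plus a unit conversion; the field, the grids, and the kelvin-to-Celsius conversion are illustrative assumptions, not any specific coupled model.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Source module: sea-surface temperature in kelvin on a coarse lat/lon grid (assumed).
lat_src = np.linspace(-90, 90, 91)
lon_src = np.linspace(0, 359, 360)
sst_kelvin = 288.0 + 10.0 * np.cos(np.deg2rad(lat_src))[:, None] * np.ones((1, lon_src.size))

interp = RegularGridInterpolator((lat_src, lon_src), sst_kelvin,
                                 bounds_error=False, fill_value=None)

# Target module: a finer grid and a different unit system (degrees Celsius).
lat_dst = np.linspace(-89.5, 89.5, 180)
lon_dst = np.linspace(0.5, 359.5, 360)
pts = np.array(np.meshgrid(lat_dst, lon_dst, indexing="ij")).reshape(2, -1).T

# Interpolate onto the target grid, then convert units for the receiving code.
sst_celsius_dst = interp(pts).reshape(lat_dst.size, lon_dst.size) - 273.15
```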
Medicine, material science and security: the versatility of the coded-aperture approach.
Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A
2014-03-06
The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of six HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
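A modern analogue of the "advanced seeds" idea mentioned above is to derive one independent, reproducible random stream per tracked history from a single root seed, so results do not depend on how histories are spread across processors; the sketch below uses NumPy's SeedSequence and is an assumption about the general approach, not the KENO-Va implementation.

```python
import numpy as np

# One root seed for the whole run; every neutron history gets its own
# deterministic child stream, independent of which processor tracks it.
root = np.random.SeedSequence(20240715)
n_histories = 8
streams = [np.random.default_rng(s) for s in root.spawn(n_histories)]

# Whether histories 0..7 run on one processor or on four, history i always
# draws the same random numbers, so parallel results remain reproducible.
samples = [rng.random(3) for rng in streams]
```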
User's manual for the BNW-II optimization code for dry/wet-cooled power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Bamberger, J.A.; Braun, D.J.
1978-05-01
The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.
GenomeDiagram: a python package for the visualization of large-scale genomic data.
Pritchard, Leighton; White, Jennifer A; Birch, Paul R J; Toth, Ian K
2006-03-01
We present GenomeDiagram, a flexible, open-source Python module for the visualization of large-scale genomic, comparative genomic and other data with reference to a single chromosome or other biological sequence. GenomeDiagram may be used to generate publication-quality vector graphics, rastered images and in-line streamed graphics for webpages. The package integrates with datatypes from the BioPython project, and is available for Windows, Linux and Mac OS X systems. GenomeDiagram is freely available as source code (under GNU Public License) at http://bioinf.scri.ac.uk/lp/programs.html, and requires Python 2.3 or higher, and recent versions of the ReportLab and BioPython packages. A user manual, example code and images are available at http://bioinf.scri.ac.uk/lp/programs.html.
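A minimal usage sketch follows, written against the Bio.Graphics.GenomeDiagram interface into which the package was later folded (assuming a current Biopython and ReportLab install); the feature coordinates, names, and output file are invented for illustration.

```python
# Draw a single-track linear diagram with two hypothetical CDS features.
from Bio.Graphics import GenomeDiagram
from Bio.SeqFeature import SeqFeature, FeatureLocation
from reportlab.lib import colors

diagram = GenomeDiagram.Diagram("example plasmid")
track = diagram.new_track(1, name="CDS features", greytrack=True)
feature_set = track.new_set()

for start, end, strand in [(100, 900, +1), (1200, 2300, -1)]:
    feature = SeqFeature(FeatureLocation(start, end, strand=strand))
    feature_set.add_feature(feature, color=colors.lightblue, label=True,
                            sigil="ARROW")

diagram.draw(format="linear", pagesize="A5", fragments=1, start=0, end=2500)
diagram.write("example_plasmid.png", "PNG")
```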
Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.
2004-07-01
We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
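For reference, the Arrhenius-type decomposition kinetics named above has the familiar form shown in this sketch; the pre-exponential factor, activation energy, and temperatures are illustrative values, not parameters from the ALE3D model.

```python
import math

R = 8.314      # J/(mol*K), gas constant
A = 1.0e15     # 1/s, assumed pre-exponential factor
Ea = 180e3     # J/mol, assumed activation energy

def arrhenius_rate(T):
    """First-order decomposition rate constant k(T) = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

# Illustrative temperatures from slow heating toward thermal runaway: the rate
# spans many orders of magnitude, which is why the relevant time scale
# collapses from days to microseconds once ignition is approached.
for T in (450.0, 550.0, 650.0):
    print(T, arrhenius_rate(T))
```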
Performance Analysis, Modeling and Scaling of HPC Applications and Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav
2016-01-13
Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
Perceptual scale expansion: an efficient angular coding strategy for locomotor space.
Durgin, Frank H; Li, Zhi
2011-08-01
Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
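A small numerical illustration of how the reported expansion of perceived gaze declination (by a factor of about 1.5) can produce the explicit distance underestimation mentioned above; the eye height and target distance are assumed values and the geometry is the standard angle-of-declination model, not a calculation taken from the paper.

```python
import math

eye_height = 1.6        # metres, assumed
true_distance = 8.0     # metres to a target on the ground, assumed
expansion = 1.5         # approximate scale-expansion factor reported above

true_declination = math.atan2(eye_height, true_distance)
perceived_declination = expansion * true_declination

# If eye height is taken as veridical, the distance implied by the expanded
# declination angle is shorter than the true distance.
perceived_distance = eye_height / math.tan(perceived_declination)
print(round(true_distance, 2), round(perceived_distance, 2))  # 8.0 vs ~5.2
```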
The linearly scaling 3D fragment method for large scale electronic structure calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Zhengji; Meza, Juan; Lee, Byounghak
2009-07-28
The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k_eff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k_eff results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.
Ion beams in radiotherapy - from tracks to treatment planning
NASA Astrophysics Data System (ADS)
Krämer, M.; Scifoni, E.; Wälzlein, C.; Durante, M.
2012-07-01
Several dozen clinical sites around the world apply beams of fast light ions for radiotherapeutic purposes. Thus there is a vested interest in the various physical and radiobiological processes governing the interaction of ion beams with matter, specifically living systems. We discuss the various modelling steps which lead from basic interactions to the application in actual patient treatment planning. The nano- and microscopic scale is covered by sample calculations with our TRAX code. On the macroscopic scale we feature the TRiP98 treatment planning system, which was clinically used in GSI's radiotherapy pilot project.
A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics
NASA Astrophysics Data System (ADS)
Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.
2016-02-01
The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
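The flavour of a smooth, well-posed setup of the kind advocated above can be sketched as below; the shear-layer width, perturbation amplitude, and profiles are illustrative assumptions and not the exact initial conditions or parameters used in the paper.

```python
import numpy as np

nx, nz = 256, 256
x = np.linspace(0.0, 1.0, nx, endpoint=False)
z = np.linspace(0.0, 2.0, nz, endpoint=False)
X, Z = np.meshgrid(x, z, indexing="ij")

a, amp, sigma = 0.05, 0.01, 0.2   # assumed smoothing width and perturbation size

# Smooth double shear layer (periodic in z) instead of a discontinuous jump.
vx = 0.5 * (np.tanh((Z - 0.5) / a) - np.tanh((Z - 1.5) / a) - 1.0)

# Smooth, single-mode vertical velocity perturbation to seed the instability
# deterministically rather than from grid-scale noise.
vz = amp * np.sin(2.0 * np.pi * X) * (
    np.exp(-((Z - 0.5) ** 2) / sigma**2) + np.exp(-((Z - 1.5) ** 2) / sigma**2)
)

# A smooth density profile tracking the shear layer (for the density-jump variant).
rho = 1.0 + 0.5 * (np.tanh((Z - 0.5) / a) - np.tanh((Z - 1.5) / a))
```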
Exclusively visual analysis of classroom group interactions
NASA Astrophysics Data System (ADS)
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-12-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data only—without audio—as when using both visual and audio data to code. Also, interrater reliability is high when comparing use of visual and audio data to visual-only data. We see a small bias to code interactions as group discussion when visual and audio data are used compared with video-only data. This work establishes that meaningful educational observation can be made through visual information alone. Further, it suggests that after initial work to create a coding scheme and validate it in each environment, computer-automated visual coding could drastically increase the breadth of qualitative studies and allow for meaningful educational analysis on a far greater scale.
ogs6 - a new concept for porous-fractured media simulations
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf
2015-04-01
OpenGeoSys (OGS) is a scientific open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS targets mainly applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has also been successfully applied to new topics in energy storage recently. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy-sciences, several shortcomings became obvious concerning computational efficiency, and the code structure became too complex for further efficient development. OGS-5 was designed for object-oriented FEM applications. However, in many multi-field problems a certain flexibility of tailored numerical schemes is essential. Therefore, a new concept was designed to overcome existing bottlenecks. The paradigms for ogs6 are: flexibility of numerical schemes (FEM, FVM, FDM); computational efficiency (PetaScale ready); and developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Usage of a linear algebra library (Eigen3) for the mathematical operations together with the ISO C++11 standard increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template meta-programming technique used for compile-time optimizations more compact. We have transitioned the main code development to the GitHub code hosting system (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and the code review options, improves the code quality and the development process in general. The continuous testing procedure of the benchmarks as it was established for OGS-5 is maintained. Additionally, unit testing, which is automatically triggered by any code changes, is executed by two continuous integration frameworks (Jenkins CI, Travis CI) which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To improve the testing possibilities further, XML-based file input formats are introduced, helping with automatic validation of the user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. Next steps are envisaged towards transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp.
http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Beta Testing of Persistent Passive Acoustic Monitors
2012-10-01
The three platforms provide the capability to work over a wide range of spatial and temporal scales. Hardware and software integration of the DMONs in ... closely with Richard M. Ead (Sensors and Sonar Systems Department, Naval Undersea Warfare Center, NUWC Code 1535), Ted Ioannides (PS 4013) and Dave ...
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing three simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address three Grand Challenge space physics problems that reflect our research interests: 1) to develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle; 2) to develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions; and 3) to develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time support and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new-generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, P.B.; Yatabe, M.
1987-01-01
In this report the Nuclear Criticality Safety Analytical Methods Resource Center describes a new interactive version of CESAR, a critical experiments storage and retrieval program available on the Nuclear Criticality Information System (NCIS) database at Lawrence Livermore National Laboratory. The original version of CESAR did not include interactive search capabilities. The CESAR database was developed to provide a convenient, readily accessible means of storing and retrieving code input data for the SCALE Criticality Safety Analytical Sequences and the codes comprising those sequences. The database includes data for both cross section preparation and criticality safety calculations. 3 refs., 1 tab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, P.B.; Yatabe, M.
1987-01-01
The Nuclear Criticality Safety Analytical Methods Resource Center announces the availability of a new interactive version of CESAR, a critical experiments storage and retrieval program available on the Nuclear Criticality Information System (NCIS) data base at Lawrence Livermore National Laboratory. The original version of CESAR did not include interactive search capabilities. The CESAR data base was developed to provide a convenient, readily accessible means of storing and retrieving code input data for the SCALE criticality safety analytical sequences and the codes comprising those sequences. The data base includes data for both cross-section preparation and criticality safety calculations.
Android Platform Based Smartphones for a Logistical Remote Association Repair Framework
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-01-01
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603
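The Linear Projective Transform step mentioned above amounts to estimating a homography from the four detected corner points of the code and rectifying that region; the corner coordinates, file names, and output size below are assumptions, and OpenCV is used as a stand-in rather than the paper's own implementation.

```python
import cv2
import numpy as np

# Four detected QR-code corners in the captured photo (assumed pixel coordinates).
src = np.float32([[132, 87], [415, 101], [402, 388], [119, 370]])

# Map them to an axis-aligned 300x300 square, i.e. the calibrated code image.
dst = np.float32([[0, 0], [300, 0], [300, 300], [0, 300]])

H = cv2.getPerspectiveTransform(src, dst)    # 3x3 projective transform
image = cv2.imread("board_photo.jpg")        # hypothetical captured image
rectified = cv2.warpPerspective(image, H, (300, 300))
cv2.imwrite("qr_rectified.png", rectified)
```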
Dental students' consistency in applying the ICDAS system within paediatric dentistry.
Foley, J I
2012-12-01
To examine dental students' consistency in utilising the International Caries Detection and Assessment System (ICDAS) one and three months after training. A prospective study. All clinical dental students (Year Two: BDS2; Year Three: BDS3; Year Four: BDS4) as part of their education in Paediatric Dentistry at Aberdeen Dental School (n = 56) received baseline training by two "gold-standard" examiners and were advised to complete the 90-minute ICDAS e-learning program. Study One: One month later, the occlusal surface of 40 extracted primary and permanent molar teeth were examined and assigned both a caries (0-6 scale) and restorative code (0-9 scale). Study Two: The same teeth were examined three months later. Kappa statistics were used to determine inter- and intra-examiner reliability at baseline and after three months. In total, 31 students (BDS2: n = 9; BDS3: n = 8; BDS4: n = 14) completed both examinations. The inter-examiner reliability kappa scores for restoration codes for Study One and Study Two were: BDS2: 0.47 and 0.38; BDS3: 0.61 and 0.52 and BDS4: 0.56 and 0.52. The caries scores for the two studies were: BDS2: 0.31 and 0.20; BDS3: 0.45 and 0.32 and BDS4: 0.35 and 0.34. The intra-examiner reliability range for restoration codes were: BDS2: 0.20 to 0.55; BDS3: 0.34 to 0.72 and BDS4: 0.28 to 0.80. The intra-examiner reliability range for caries codes were: BDS2: 0.35 to 0.62; BDS3: 0.22 to 0.53 and BDS4: 0.22 to 0.65. The consistency of ICDAS codes varied between students and also, between year groups. In general, consistency was greater for restoration codes.
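For reference, the kappa statistic used above to quantify examiner agreement can be computed as in the following sketch; the two rating vectors are invented, not study data.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Observed vs. chance agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ICDAS caries codes (0-6) assigned to ten teeth by two examiners.
examiner_1 = [0, 2, 2, 3, 1, 0, 5, 6, 2, 1]
examiner_2 = [0, 2, 1, 3, 1, 0, 5, 6, 3, 1]
print(round(cohen_kappa(examiner_1, examiner_2), 2))  # 0.75 for this made-up data
```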
NASA Astrophysics Data System (ADS)
Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.
2017-12-01
StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.
An Evaluation of Operational Airspace Sectorization Integrated System (OASIS) Advisory Tool
NASA Technical Reports Server (NTRS)
Lee, Paul U.; Mogford, Richard H.; Bridges, Wayne; Buckley, Nathan; Evans, Mark; Gujral, Vimmy; Lee, Hwasoo; Peknik, Daniel; Preston, William
2013-01-01
In January 2013, a human-in-the-loop evaluation of the Operational Airspace Sectorization Integrated System (OASIS) was conducted in the Airspace Operations Laboratory of the Human Systems Integration Division (Code TH) in conjunction with the Aviation Systems Division (Code AF). The development of OASIS is a major activity of the Dynamic Airspace Configuration (DAC) research focus area within the Aeronautics Research Mission Directorate (ARMD) Airspace Systems Program. OASIS is an advisory tool to assist Federal Aviation Administration (FAA) En Route Area Supervisors in their planning of sector combine/decombine operations as well as opening/closing of Data-side (D-side) control positions. These advisory solutions are tailored to the predicted traffic demand over the next few hours. During the experiment, eight retired FAA personnel served as participants for a part-task evaluation of OASIS functionality, covering the user interface as well as the underlying algorithm. Participants gave positive feedback on both the user interface and the algorithm solutions for airspace configuration, including an excellent average rating of 94 on the tool usability scales. They also suggested various enhancements to the OASIS tool, which will be incorporated into the next tool development cycle for the full-scale human-in-the-loop evaluation to be conducted later this year.
FOG: Fighting the Achilles' Heel of Gossip Protocols with Fountain Codes
NASA Astrophysics Data System (ADS)
Champel, Mary-Luc; Kermarrec, Anne-Marie; Le Scouarnec, Nicolas
Gossip protocols are well known to provide reliable and robust dissemination protocols in highly dynamic systems. Yet, they suffer from high redundancy in the last phase of the dissemination. In this paper, we combine fountain codes (rateless erasure-correcting codes) together with gossip protocols for a robust and fast content dissemination in large-scale dynamic systems. The use of fountain codes makes it possible to eliminate the unnecessary redundancy of gossip protocols. We propose the design of FOG, which fully exploits the first exponential growth phase (where the data is disseminated exponentially fast) of gossip protocols while avoiding the need for the shrinking phase by using fountain codes. FOG voluntarily increases the number of disseminations but limits those disseminations to the exponential growth phase. In addition, FOG creates a split-graph overlay that splits the peers between encoders and forwarders. Forwarder peers become encoders as soon as they have received the whole content. In order to benefit further and more quickly from encoders, FOG biases the dissemination towards the most advanced peers to make them complete earlier.
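A toy rateless encoder in the spirit of the fountain codes discussed above is sketched below; the degree distribution is a simplified stand-in (not a tuned Soliton distribution) and the content split is an assumption, so this illustrates the encoding idea only, not FOG itself.

```python
import random

def fountain_encode(blocks, seed):
    """Produce one encoded symbol: the XOR of a random subset of source blocks."""
    rng = random.Random(seed)
    degree = rng.choice([1, 1, 2, 2, 2, 3, 4])      # simplified degree distribution
    chosen = rng.sample(range(len(blocks)), degree)
    symbol = bytes(len(blocks[0]))
    for i in chosen:
        symbol = bytes(a ^ b for a, b in zip(symbol, blocks[i]))
    return chosen, symbol   # receivers can regenerate 'chosen' from the seed

# Hypothetical content split into equal-size source blocks.
content = [bytes([i]) * 16 for i in range(8)]

# An encoder peer can emit as many distinct symbols as its neighbours need,
# so late disseminations are never redundant repeats of earlier ones.
stream = [fountain_encode(content, seed) for seed in range(12)]
```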
Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.
2006-07-01
The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime-dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series of experiments (i.e., tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and relevant underlying thermal-hydraulic and heat transfer models. (authors)
Riva Crugnola, Cristina; Ierardi, Elena; Canevini, Maria Paola
2018-02-01
The study evaluated reflective functioning (RF), maternal attachment, mind-mindedness, and emotional availability among 44 adolescent mother-infant dyads and 41 adult mother-infant dyads. At infant age 3 months, mother-infant interaction was coded with the mind-mindedness coding system and the Emotional Availability Scales; maternal attachment and RF were evaluated with the Adult Attachment Interview (AAI). Adolescent mothers (vs. adult mothers) were more insecure and had lower RF; they were also less sensitive, more intrusive and hostile, and less structuring of their infant's activity; they used fewer attuned mind-related comments and fewer mind-related comments appropriate to infant development. In adult mothers, the Mother Idealizing and Lack of Memory AAI scales were correlated with non-attuned mind-related comments and the Father Anger scale with negative mind-related comments. In adult mothers, RF was associated with sensitivity. This was not the case with adolescent mothers.
Huffhines, Lindsay; Tunno, Angela M; Cho, Bridget; Hambrick, Erin P; Campos, Ilse; Lichty, Brittany; Jackson, Yo
2016-08-01
State social service agency case files are a common mechanism for obtaining information about a child's maltreatment history, yet these documents are often challenging for researchers to access, and then to process in a manner consistent with the requirements of social science research designs. Specifically, accessing and navigating case files is an extensive undertaking, and a task that many researchers have had to maneuver with little guidance. Even after the files are in hand and the research questions and relevant variables have been clarified, case file information about a child's maltreatment exposure can be idiosyncratic, vague, inconsistent, and incomplete, making coding such information into useful variables for statistical analyses difficult. The Modified Maltreatment Classification System (MMCS) is a popular tool used to guide the process, and though comprehensive, this coding system cannot cover all idiosyncrasies found in case files. It is not clear from the literature how researchers implement this system while accounting for issues outside of the purview of the MMCS or that arise during MMCS use. Finally, a large yet reliable file coding team is essential to the process, however, the literature lacks training guidelines and methods for establishing reliability between coders. In an effort to move the field toward a common approach, the purpose of the present discussion is to detail the process used by one large-scale study of child maltreatment, the Studying Pathways to Adjustment and Resilience in Kids (SPARK) project, a longitudinal study of resilience in youth in foster care. The article addresses each phase of case file coding, from accessing case files, to identifying how to measure constructs of interest, to dealing with exceptions to the coding system, to coding variables reliably, to training large teams of coders and monitoring for fidelity. Implications for a comprehensive and efficient approach to case file coding are discussed.
Turbulence dissipation challenge: particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.
2015-12-01
We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation for a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
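A minimal sketch of the non-negative sparse coding step described above, using projected gradient descent with an L1 penalty; the dictionary size, penalty weight, and random data are assumptions, and the Fisher discriminative term and multi-scale pooling described in the abstract are omitted.

```python
import numpy as np

def nn_sparse_code(x, D, lam=0.1, lr=0.01, steps=500):
    """Solve min_{a >= 0}  0.5*||x - D a||^2 + lam*||a||_1 by projected gradient."""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x) + lam      # gradient of the smooth + L1 terms (a >= 0)
        a = np.maximum(0.0, a - lr * grad)  # gradient step, then project onto a >= 0
    return a

rng = np.random.default_rng(0)
D = rng.random((64, 128))                   # hypothetical non-negative dictionary (64-d features)
D /= np.linalg.norm(D, axis=0)              # normalize dictionary atoms
x = rng.random(64)                          # one local descriptor from one scale layer

codes = nn_sparse_code(x, D)
print(np.count_nonzero(codes), codes.shape)  # sparse, non-negative representation
```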
An Analysis of the Defense Acquisition Strategy for Unmanned Systems
2013-11-20
Product Service Code RAA Rapid Acquisition Authority RCS Radar Cross Section REF Rapid Equipping Force RFID Radio Frequency Identification RDT...the radio frequency identification (RFID) chip also provides a useful basis for comparison. WWII served as the proving ground for RFID technology...enabling miniaturized Free Space Optical Communications systems capable of scaling across data rates, distances, and platforms and integrating with radio
Cache Coherence Protocols for Large-Scale Multiprocessors
1990-09-01
and is compared with the other protocols for large-scale machines. In later analysis, this coherence method is designated by the acronym OCPD , which...private read misses 2 6 6 ( OCPD ) private write misses 2 6 6 Table 4.2: Transaction Types and Costs. the performance of the memory system. These...methodologies. Figure 4-2 shows the processor utiliza- tions of the Weather program, with special code in the dyn-nic post-mortem sched- 94 OCPD DlrINB
KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Stephen M
2008-09-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of using SCALE/KENO-VI for criticality analyses; the SCALE/KENO-VI manual provides information on the use of SCALE/KENO-VI and all its modules. The primer also contains an appendix with sample input files.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
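A minimal sketch of one level of a HOP-style pyramid transform, illustrative only. Assumptions not stated in the abstract: the hexagonal image is stored as a flat array, a precomputed index array groups pixels into non-overlapping 7-pixel neighborhoods (center plus six neighbors), and K is a 7x7 orthogonal kernel matrix whose row 0 is the non-oriented blob (low-pass) kernel and rows 1-6 are the oriented edge/bar kernels.

    import numpy as np

    def hop_level(pixels, neighborhoods, K):
        """Return (lowpass, oriented): M blob coefficients that feed the next
        pyramid level, and an (M, 6) array of oriented high-pass coefficients."""
        groups = pixels[neighborhoods]      # (M, 7) neighborhood values
        coeffs = groups @ K.T               # project each neighborhood onto the 7 kernels
        return coeffs[:, 0], coeffs[:, 1:]

    def hop_pyramid(pixels, neighborhoods_per_level, K):
        """Recursively apply hop_level; each level keeps 1/7 as many low-pass coefficients."""
        layers = []
        low = pixels
        for nbhd in neighborhoods_per_level:
            low, oriented = hop_level(low, nbhd, K)
            layers.append(oriented)         # six oriented high-pass layers per level
        return layers, low                  # oriented layers plus the final blob layer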
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Brian; Gutowska, Izabela; Chiger, Howard
Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems that are being simulated due to cost considerations. Therefore, smaller scale test facilities must be designed and constructed in such a fashion as to ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events: (1) passive natural circulation core cooling system, (2) pebble bed gas reactor concept, (3) General Atomics Energy Multiplier Module reactor, and (4) prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin the detailed design development and material procurement. As part of this project, applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF). The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise; however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion on an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.
Once-through integral system (OTIS): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J R
1986-09-01
A scaled experimental facility, designated the once-through integral system (OTIS), was used to acquire post-small break loss-of-coolant accident (SBLOCA) data for benchmarking system codes. OTIS was also used to investigate the application of the Abnormal Transient Operating Guidelines (ATOG) used in the Babcock and Wilcox (B and W) designed nuclear steam supply system (NSSS) during the course of an SBLOCA. OTIS was a single-loop facility with a plant to model power scale factor of 1686. OTIS maintained the key elevations, approximate component volumes, and loop flow resistances, and simulated the major component phenomena of a B and W raised-loop nuclear plant. A test matrix consisting of 15 tests divided into four categories was performed. The largest group contained 10 tests and was defined to parametrically obtain an extensive set of plant-typical experimental data for code benchmarking. Parameters such as leak size, leak location, and high-pressure injection (HPI) shut-off head were individually varied. The remaining categories were specified to study the impact of the ATOGs (2 tests), to note the effect of guard heater operation on observed phenomena (2 tests), and to provide a data set for comparison with previous test experience (1 test). A summary of the test results and a detailed discussion of Test 220100 is presented. Test 220100 was the nominal or reference test for the parametric studies. This test was performed with a scaled 10-cm² leak located in the cold leg suction piping.
The O*Net Jobs Classification System: A Primer for Family Researchers
ERIC Educational Resources Information Center
Crouter, Ann C.; Lanza, Stephanie T.; Pirretti, Amy; Goodman, W. Benjamin; Neebe, Eloise
2006-01-01
We introduce family researchers to the Occupational Information Network, or O*Net, an electronic database on the work characteristics of over 950 occupations. The paper here is a practical primer that covers data collection, selecting occupational characteristics, coding occupations, scale creation, and construct validity, with empirical…
Compressed Scaling of Abstract Numerosity Representations in Adult Humans and Monkeys
ERIC Educational Resources Information Center
Merten, Katharina; Nieder, Andreas
2009-01-01
There is general agreement that nonverbal animals and humans endowed with language possess an evolutionary precursor system for representing and comparing numerical values. However, whether nonverbal numerical representations in human and nonhuman primates are quantitatively similar and whether linear or logarithmic coding underlies such magnitude…
Discovering and Mitigating Software Vulnerabilities through Large-Scale Collaboration
ERIC Educational Resources Information Center
Zhao, Mingyi
2016-01-01
In today's rapidly digitizing society, people place their trust in a wide range of digital services and systems that deliver latest news, process financial transactions, store sensitive information, etc. However, this trust does not have a solid foundation, because software code that supports this digital world has security vulnerabilities. These…
NASA Astrophysics Data System (ADS)
Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.
2003-12-01
We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes with magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess the scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
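A short sketch of the kind of magnitude-frequency statistic mentioned above, computed from a simulated earthquake catalog. This is generic, not the Virtual California code; the bin width and completeness magnitude are illustrative assumptions.

    import numpy as np

    def magnitude_frequency(magnitudes, bin_width=0.1):
        """Return magnitude bins and log10 of the cumulative count N(>=M),
        the form usually compared against the Gutenberg-Richter relation
        log10 N = a - b*M."""
        mags = np.sort(np.asarray(magnitudes))
        bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
        cumulative = np.array([(mags >= m).sum() for m in bins])
        return bins, np.log10(np.maximum(cumulative, 1))

    def estimate_b_value(magnitudes, m_min):
        """Aki (1965) maximum-likelihood b-value for events above completeness m_min."""
        m = np.asarray(magnitudes)
        m = m[m >= m_min]
        return np.log10(np.e) / (m.mean() - m_min)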
A genetic scale of reading frame coding.
Michel, Christian J
2014-08-21
The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). In addition, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. Copyright © 2014 Elsevier Ltd. All rights reserved.
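A small sketch of the comma-free property invoked above: a trinucleotide code is comma-free if no trinucleotide read in a shifted frame across the concatenation of any two codewords is itself a codeword, which guarantees reading-frame retrieval. Checking general circularity needs more machinery (e.g. the flower automaton), so only the simpler comma-free test is shown; the example codes are assumed toy sets, not the paper's code X.

    def is_comma_free(code):
        """True if no shifted-frame trinucleotide across any pair of codewords is in the code."""
        code = set(code)
        for x in code:
            for y in code:
                w = x + y                        # six letters: frame-0 reading of xy
                if w[1:4] in code or w[2:5] in code:
                    return False                 # a shifted frame also decodes
        return True

    print(is_comma_free({'AAC', 'GTC'}))         # True  (toy comma-free code)
    print(is_comma_free({'AAC', 'GTC', 'AAA'}))  # False ('AAA' overlaps itself in shifted frames)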
Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis
NASA Technical Reports Server (NTRS)
Kopp, H.; Trettau, R.; Zolotar, B.
1984-01-01
The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
NASA Astrophysics Data System (ADS)
Aleksandrov, A. P.; Berezovoj, A. N.; Gal'Per, A. M.; Grachev, V. M.; Dmitrenko, V. V.; Kirillov-Ugryumov, V. G.; Lebedev, V. V.; Lyakhov, V. A.; Moiseev, A. A.; Ulin, S. E.; Shchvets, N. I.
1984-11-01
Coding collimators are used to improve the angular resolution of gamma-ray telescopes at energies above 50 MeV. However, the interaction of cosmic rays with the collimator material can lead to the appearance of a gamma-ray background flux which can have a deleterious effect on measurement efficiency. An experiment was performed on the Salyut-6-Soyuz spacecraft system with the Elena-F small-scale gamma-ray telescope in order to measure the magnitude of this background. It is shown that, even at a zenith angle of approximately zero degrees (the angle at which the gamma-ray observations are made), the coding collimator has only an insignificant effect on the background conditions.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2017-04-15
In order to understand how the brain's coding strategies are adapted to the statistics of the sensory stimuli experienced during everyday life, the use of animal models is essential. Mice and non-human primates have become common models for furthering our knowledge of the neuronal coding of natural stimuli, but differences in their natural environments and behavioural repertoire may impact optimal coding strategies. Here we investigated the structure and statistics of the vestibular input experienced by mice versus non-human primates during natural behaviours, and found important differences. Our data establish that the structure and statistics of natural signals in non-human primates more closely resemble those observed previously in humans, suggesting similar coding strategies for incoming vestibular input. These results help us understand how the effects of active sensing and biomechanics will differentially shape the statistics of vestibular stimuli across species, and have important implications for sensory coding in other systems. It is widely believed that sensory systems are adapted to the statistical structure of natural stimuli, thereby optimizing coding. Recent evidence suggests that this is also the case for the vestibular system, which senses self-motion and in turn contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. However, little is known about the statistics of self-motion stimuli actually experienced by freely moving animals in their natural environments. Accordingly, here we examined the natural self-motion signals experienced by mice and monkeys: two species commonly used to study vestibular neural coding. First, we found that probability distributions for all six dimensions of motion (three rotations, three translations) in both species deviated from normality due to long tails. Interestingly, the power spectra of natural rotational stimuli displayed similar structure for both species and were not well fitted by power laws. This result contrasts with reports that the natural spectra of other sensory modalities (i.e. vision, auditory and tactile) instead show a power-law relationship with frequency, which indicates scale invariance. Analysis of natural translational stimuli revealed important species differences as power spectra deviated from scale invariance for monkeys but not for mice. By comparing our results to previously published data for humans, we found the statistical structure of natural self-motion stimuli in monkeys and humans more closely resemble one another. Our results thus predict that, overall, neural coding strategies used by vestibular pathways to encode natural self-motion stimuli are fundamentally different in rodents and primates. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
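A minimal sketch of the spectral analysis described above: estimate the power spectrum of a recorded self-motion signal and fit a power law S(f) ~ f^(-alpha); a straight line in log-log coordinates is the usual signature of scale invariance, and deviations from it are what the study reports for some stimulus dimensions. The sampling rate, frequency band, and Welch segment length are illustrative assumptions, not the study's settings.

    import numpy as np
    from scipy.signal import welch

    def power_law_fit(signal, fs, fmin=0.1, fmax=10.0):
        """Return (alpha, log10_intercept) from a least-squares fit of
        log10 S(f) = intercept - alpha * log10 f over the band [fmin, fmax]."""
        f, S = welch(signal, fs=fs, nperseg=1024)
        band = (f >= fmin) & (f <= fmax)
        slope, intercept = np.polyfit(np.log10(f[band]), np.log10(S[band]), 1)
        return -slope, intercept

    # Example with a synthetic angular-velocity trace sampled at 100 Hz:
    # alpha, _ = power_law_fit(angular_velocity, fs=100.0)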
Exposure calculation code module for reactor core analysis: BURNER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Cunningham, G.W.
1979-02-01
The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
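A hedged sketch of the core operation in an exposure/decay step of this kind: advance nuclide concentrations N over a time interval under a constant transition matrix A (decay plus flux-dependent transmutation), so N(t+dt) = expm(A dt) N(t). The two-nuclide chain and all rate constants below are toy values, not BURNER's data or method.

    import numpy as np
    from scipy.linalg import expm

    decay_1, decay_2 = 1e-5, 3e-6        # decay constants [1/s] (toy values)
    capture_1 = 2e-6                      # flux*cross-section loss rate for nuclide 1 (toy)
    A = np.array([[-(decay_1 + capture_1), 0.0],
                  [decay_1,               -decay_2]])   # nuclide 1 decays into nuclide 2

    N0 = np.array([1.0e20, 0.0])          # beginning-of-period concentrations
    dt = 30 * 24 * 3600.0                 # one month of operation [s]
    N_end = expm(A * dt) @ N0             # end-of-period concentrations
    print(N_end)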
NASA Astrophysics Data System (ADS)
Wang, H. H.; Shi, Y. P.; Li, X. H.; Ni, K.; Zhou, Q.; Wang, X. H.
2018-03-01
In this paper, a scheme to measure the position of precision stages with high precision is presented. The encoder is composed of a scale grating and a compact two-probe reading head, which read the zero-position pulse signal and the continuous incremental displacement signal. The scale grating contains different codes: multiple reference codes with different spacings superimposed onto the incremental grooves, which have an equal-spacing structure. The code of the reference mask in the reading head is the same as the reference codes on the scale grating, and it generates a pulse signal that coarsely locates the reference position as the reading head moves along the scale grating. After locating the reference position within a section by means of the pulse signal, the reference position can be located precisely with the amplitude of the incremental displacement signal. A set of reference codes and a scale grating were designed, and experimental results show that the primary precision achieved by the design is 1 μm. The period of the incremental signal is 1 μm, and a precision of 1000/N nm can be achieved by subdividing the incremental signal by a factor of N.
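A brief sketch of the subdivision step: given quadrature incremental signals (sine and cosine with a 1 μm spatial period), the position within a period can be interpolated from the arctangent of their ratio, so an N-fold electronic subdivision yields roughly 1000/N nm resolution. The signal names and the assumption of ideal quadrature are illustrative, not details from the paper.

    import numpy as np

    PERIOD_UM = 1.0          # incremental grating signal period, 1 um (as stated above)

    def interpolate_position(sin_signal, cos_signal, coarse_count):
        """coarse_count: integer number of whole periods already counted by the electronics;
        returns the position in micrometres including the sub-period fraction."""
        phase = np.arctan2(sin_signal, cos_signal)       # phase within the period, [-pi, pi)
        fraction = (phase / (2 * np.pi)) % 1.0           # fraction of one 1 um period
        return (coarse_count + fraction) * PERIOD_UM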
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S. L.
1998-08-25
Fluid Catalytic Cracking (FCC) technology is the most important process used by the refinery industry to convert crude oil to valuable lighter products such as gasoline. Process development is generally very time consuming, especially when a small pilot unit is being scaled up to a large commercial unit, because of the lack of information to aid in the design of scaled-up units. Such information can now be obtained by analysis based on the pilot-scale measurements and computer simulation that includes the controlling physics of the FCC system. A computational fluid dynamics (CFD) code, ICRKFLO, has been developed at Argonne National Laboratory (ANL) and has been successfully applied to the simulation of catalytic petroleum cracking risers. It employs hybrid hydrodynamic-chemical kinetic coupling techniques, enabling the analysis of an FCC unit with complex chemical reaction sets containing tens or hundreds of subspecies. The code has been continuously validated against pilot-scale experimental data. It is now being used to investigate the effects of scale-up on FCC units. Among FCC operating conditions, the feed injection conditions are found to have a strong impact on the product yields of scaled-up FCC units. The feed injection conditions appear to affect flow and heat transfer patterns, and the interaction of hydrodynamics and cracking kinetics causes the product yields to change accordingly.
NASA Astrophysics Data System (ADS)
Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-03-01
We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10⁶ atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
GBS: Global 3D simulation of tokamak edge region
NASA Astrophysics Data System (ADS)
Zhu, Ben; Fisher, Dustin; Rogers, Barrett; Ricci, Paolo
2012-10-01
A 3D two-fluid global code, the Global Braginskii Solver (GBS), is being developed to explore the physics of turbulent transport, confinement, self-consistent profile formation, pedestal scaling and related phenomena in the edge region of tokamaks. Aimed at solving the drift-reduced Braginskii equations [1] in complex magnetic geometry, GBS is used for turbulence simulation in the SOL region. In the recent upgrade, the simulation domain has been expanded into the closed-flux region with twist-shift boundary conditions. Hence, the new GBS code is able to explore global transport physics in an annular full-torus domain from the top of the pedestal into the far SOL. We are in the process of identifying and analyzing the linear and nonlinear instabilities in the system using the new GBS code. Preliminary results will be presented and compared with other codes where possible. [1] A. Zeiler, J. F. Drake and B. Rogers, Phys. Plasmas 4, 2134 (1997)
Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard
2011-07-01
There are significant changes in the Abbreviated Injury Scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm, the "Harborview AIS Mapping Program (HAMP)", to automatically convert AIS 2005 injury codes to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP-derived and manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
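For reference, a minimal sketch of Cohen's kappa, the agreement statistic cited above, for two equal-length sequences of category labels. This is the generic statistic, not the HAMP code, and the toy labels are invented for illustration.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two coders of the same cases."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        pa, pb = Counter(labels_a), Counter(labels_b)
        expected = sum(pa[c] * pb[c] for c in pa) / (n * n)   # agreement expected by chance
        return (observed - expected) / (1 - expected)

    # Toy example with four double-coded ISS values:
    print(cohens_kappa([9, 16, 25, 9], [9, 16, 16, 9]))       # 0.6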
A real-time simulator of a turbofan engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Delaat, John C.; Merrill, Walter C.
1989-01-01
A real-time digital simulator of a Pratt and Whitney F100 engine has been developed for real-time code verification and for actuator diagnosis during full-scale engine testing. This self-contained unit can operate in an open-loop stand-alone mode or as part of a closed-loop control system. It can also be used for control system design and development. Tests conducted in conjunction with the NASA Advanced Detection, Isolation, and Accommodation program show that the simulator is a valuable tool for real-time code verification and as a real-time actuator simulator for actuator fault diagnosis. Although currently a small perturbation model, advances in microprocessor hardware should allow the simulator to evolve into a real-time, full-envelope, full engine simulation.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony
2014-02-01
GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
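For context on the Riemann solvers compared above, here is the textbook HLL flux in a hedged sketch (HLLC adds a restored contact wave on top of this). U_l/U_r are left/right conserved-state vectors, F_l/F_r the corresponding physical fluxes, and s_l/s_r estimated fastest left/right signal speeds; this is the generic form, not GenASiS source code.

    import numpy as np

    def hll_flux(U_l, U_r, F_l, F_r, s_l, s_r):
        """Single-interface HLL approximate Riemann solver flux."""
        if s_l >= 0.0:
            return F_l                      # all waves move to the right
        if s_r <= 0.0:
            return F_r                      # all waves move to the left
        # intermediate state: average of the two states weighted by the wave speeds
        return (s_r * F_l - s_l * F_r + s_l * s_r * (U_r - U_l)) / (s_r - s_l)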
Comparison of the CENTRM resonance processor to the NITAWL resonance processor in SCALE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenbach, D.F.; Petrie, L.M.
1998-01-01
This report compares the NITAWL and CENTRM resonance processors in the SCALE code system. The cases examined consist of the International OECD/NEA Criticality Working Group Benchmark 20 problem. These cases represent fuel pellets partially dissolved in a borated solution. The assumptions inherent to the Nordheim Integral Treatment, used in NITAWL, are not valid for these problems. CENTRM resolves this limitation by explicitly calculating a problem-dependent point flux from point cross sections, which is then used to create group cross sections.
Matney, Susan; Bakken, Suzanne; Huff, Stanley M
2003-01-01
In recent years, the Logical Observation Identifiers, Names, and Codes (LOINC) Database has been expanded to include assessment items of relevance to nursing and in 2002 met the criteria for "recognition" by the American Nurses Association. Assessment measures in LOINC include those related to vital signs, obstetric measurements, clinical assessment scales, assessments from standardized nursing terminologies, and research instruments. In order for LOINC to be of greater use in implementing information systems that support nursing practice, additional content is needed. Moreover, those implementing systems for nursing practice must be aware of the manner in which LOINC codes for assessments can be appropriately linked with other aspects of the nursing process such as diagnoses and interventions. Such linkages are necessary to document nursing contributions to healthcare outcomes within the context of a multidisciplinary care environment and to facilitate building of nursing knowledge from clinical practice. The purposes of this paper are to provide an overview of the LOINC database, to describe examples of assessments of relevance to nursing contained in LOINC, and to illustrate linkages of LOINC assessments with other nursing concepts.
Post-Newtonian Dynamical Modeling of Supermassive Black Holes in Galactic-scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rantala, Antti; Pihajoki, Pauli; Johansson, Peter H.
We present KETJU, a new extension of the widely used smoothed particle hydrodynamics simulation code GADGET-3. The key feature of the code is the inclusion of algorithmically regularized regions around every supermassive black hole (SMBH). This allows for simultaneously following global galactic-scale dynamical and astrophysical processes, while solving the dynamics of SMBHs, SMBH binaries, and surrounding stellar systems at subparsec scales. The KETJU code includes post-Newtonian terms in the equations of motion of the SMBHs, which enables a new SMBH merger criterion based on the gravitational wave coalescence timescale, pushing the merger separation of SMBHs down to ∼0.005 pc. We test the performance of our code by comparison to NBODY7 and rVINE. We set up dynamically stable multicomponent merger progenitor galaxies to study the SMBH binary evolution during galaxy mergers. In our simulation sample the SMBH binaries do not suffer from the final-parsec problem, which we attribute to the nonspherical shape of the merger remnants. For bulge-only models, the hardening rate decreases with increasing resolution, whereas for models that in addition include massive dark matter halos, the SMBH binary hardening rate becomes practically independent of the mass resolution of the stellar bulge. The SMBHs coalesce on average 200 Myr after the formation of the SMBH binary. However, small differences in the initial SMBH binary eccentricities can result in large differences in the SMBH coalescence times. Finally, we discuss the future prospects of KETJU, which allows for a straightforward inclusion of gas physics in the simulations.
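A hedged sketch of a gravitational-wave coalescence-time merger criterion of the kind described above: the Peters (1964) timescale for a circular SMBH binary of masses m1, m2 and separation a, with the binary merged once t_gw drops below a chosen threshold. The constants, units, and example numbers are generic, not KETJU's actual implementation.

    G = 6.674e-11          # m^3 kg^-1 s^-2
    C = 2.998e8            # speed of light, m/s
    M_SUN = 1.989e30       # kg
    PC = 3.086e16          # m
    YR = 3.156e7           # s

    def peters_coalescence_time(m1_msun, m2_msun, a_pc):
        """Peters (1964) GW coalescence time for a circular binary, in seconds."""
        m1, m2, a = m1_msun * M_SUN, m2_msun * M_SUN, a_pc * PC
        return (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))

    # Two 1e8 solar-mass SMBHs at 0.005 pc separation:
    print(peters_coalescence_time(1e8, 1e8, 0.005) / YR, "yr")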
Application of ARC/INFO to regional scale hydrogeologic modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurstner, S.K.; McWethy, G.; Devary, J.L.
1993-05-01
Geographic Information Systems (GIS) can be a useful tool in data preparation for groundwater flow modeling, especially when studying large regional systems. ARC/INFO is being used in conjunction with GRASS to support data preparation for input to the CFEST (Coupled Fluid, Energy, and Solute Transport) groundwater modeling code. Simulations will be performed with CFEST to model three-dimensional, regional, groundwater flow in the West Siberian Basin.
2015-06-01
abstract constraints along six dimen- sions for expansion: user, actions, data , business rules, interfaces, and quality attributes [Gottesdiener 2010...relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with...Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional “ Big Up-Front
Development of WRF-CO2 4DVAR Data Assimilation System
NASA Astrophysics Data System (ADS)
Zheng, T.; French, N. H. F.
2016-12-01
Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on the Weather Research and Forecasting (WRF) model, its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Unlike WRFDA, WRF-CO2 4DVAR does not optimize the meteorological initial condition; instead, it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factor) calculated by the tangent linear and adjoint model agrees well with that calculated by finite difference, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model generated pseudo-observation data in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10⁻⁴ of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations will provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
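A minimal sketch of the variational core referred to above: minimize the Bayesian cost function J(s) = 1/2 (s - s_b)^T B^-1 (s - s_b) + 1/2 (H s - y)^T R^-1 (H s - y) for emission scaling factors s, where H is the tangent-linear model mapping scaling factors to observed CO2 and the adjoint supplies the gradient. The matrices here are small toy arrays, not WRF-CO2 operators, and the quadratic problem is solved directly rather than by the iterative incremental algorithm.

    import numpy as np

    def four_dvar_solve(s_b, y, H, B, R):
        """Analytic minimiser of the quadratic cost (what the incremental
        iteration converges to for a linear observation operator H)."""
        Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
        hessian = H.T @ Rinv @ H + Binv              # Hessian of J
        rhs = Binv @ s_b + H.T @ Rinv @ y
        return np.linalg.solve(hessian, rhs)

    # Toy example: two scaling factors, three observations.
    H = np.array([[1.0, 0.2], [0.5, 1.0], [0.3, 0.7]])
    s_true = np.array([1.0, 1.0])
    y = H @ s_true                                   # pseudo-observations
    s_b = np.array([1.5, 0.5])                       # 50% over/under first guess
    B, R = 0.25 * np.eye(2), 0.01 * np.eye(3)
    print(four_dvar_solve(s_b, y, H, B, R))          # recovers approximately [1, 1]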
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Supinski, B R; Miller, B P; Liblit, B
2011-09-13
Petascale platforms with O(10⁵) and O(10⁶) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques are applied. We have extended CBI to large-scale systems and developed new compiler-based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application scientist productivity by reducing the time to determine which package or module contains the root cause of a problem that arises at all scales of our high end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. While STAT provides scalable debugging assistance for incorrect application runs, we could apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently as we expect systems with millions of threads in the next year.
We are exploring new CBI techniques that can assess the likelihood that execution deviations from past behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress on these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vance, J.N.; Holderness, J.H.; James, D.W.
1992-12-01
Waste stream scaling factors based on sampling programs are vulnerable to one or more of the following factors: sample representativeness, analytic accuracy, and measurement sensitivity. As an alternative to sample analyses or as a verification of the sampling results, this project proposes the use of the RADSOURCE code, which accounts for the release of fuel-source radionuclides. Once the release rates of these nuclides from fuel are known, the code develops scaling factors for waste streams based on easily measured Cobalt-60 (Co-60) and Cesium-137 (Cs-137). The project team developed mathematical models to account for the appearance rate of 10CFR61 radionuclides in reactor coolant. They based these models on the chemistry and nuclear physics of the radionuclides involved. Next, they incorporated the models into a computer code that calculates plant waste stream scaling factors based on reactor coolant gamma-isotopic data. Finally, the team performed special sampling at 17 reactors to validate the models in the RADSOURCE code.
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Papadakis, Michael
2005-01-01
Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.
CASMO5/TSUNAMI-3D spent nuclear fuel reactivity uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrer, R.; Rhodes, J.; Smith, K.
2012-07-01
The CASMO5 lattice physics code is used in conjunction with the TSUNAMI-3D sequence in ORNL's SCALE 6 code system to estimate the uncertainties in hot-to-cold reactivity changes due to cross-section uncertainty for PWR assemblies at various burnup points. The goal of the analysis is to establish the multiplication factor uncertainty similarity between various fuel assemblies at different conditions in a quantifiable manner and to obtain a bound on the hot-to-cold reactivity uncertainty over the various assembly types and burnup attributed to fundamental cross-section data uncertainty. (authors)
An Ice Protection and Detection Systems Manufacturer's Perspective
NASA Technical Reports Server (NTRS)
Sweet, Dave
2009-01-01
Accomplishments include: World Class Aircraft Icing Research Center and Facility. Primary Sponsor/Partner - Aircraft Icing Consortia/Meetings. Icing Research Tunnel. Icing Test Aircraft. Icing Codes - LEWICE/Scaling, et al. Development of New Technologies (SBIR, STTR, et al). Example: Look Ahead Ice Detection. Pilot Training Materials. Full Cooperation with Academia, Government and Industry.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, James F.; Ho, Hing W.
1991-01-01
This report summarizes the development of: (1) correlation fields; (2) applications to the liquid oxygen post; (3) models for pressure fluctuations and vibration load fluctuations; (4) additions to expert systems; and (5) scaling criteria. Implementation in computer code is also described. Demonstration sample cases are included, with additional applications to an engine duct and pipe bend.
The Functional Analytic Psychotherapy Rating Scale (FAPRS): A Behavioral Psychotherapy Coding System
ERIC Educational Resources Information Center
Callaghan, Glenn M.; Follette, William C.; Ruckstuhl, L. E., Jr.; Linnerooth, Peter J. N.
2008-01-01
Many researchers and clinicians believe that the therapeutic relationship is essential in bringing about clinical change. Empirical research to support this contention is scarce in part due to the difficulty of specifying and measuring theoretically derived mechanisms of change and the important dimensions of the client-therapist relationship.…
CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.
Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H
2016-11-14
The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need of complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences and confirm the functionality of CORALINA generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large scale genomic and epigenomic screens.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
Dynamic Performance of High Bypass Ratio Turbine Engines With Water Ingestion
NASA Technical Reports Server (NTRS)
Murthy, S. N. B.
1996-01-01
The research on dynamic performance of high bypass turbofan engines includes studies on inlets, turbomachinery and the total engine system operating with air-water mixture; the water may be in vapor, droplet, or film form, and their combinations. Prediction codes (WISGS, WINCOF, WINCOF-1, WINCLR, and Transient Engine Performance Code) for performance changes, as well as changes in blade-casing clearance, have been established and demonstrated in application to actual, generic engines. In view of the continuous changes in water distribution in turbomachinery, the performance of both components and the total engine system must be determined in a time-dependent mode; hence, the determination of clearance changes also requires a time-dependent approach. In general, the performance and clearances changes cannot be scaled either with respect to operating or ingestion conditions. Removal of water prior to phase change is the most effective means of avoiding ingestion effects. Sufficient background has been established to perform definitive, full scale tests on a set of components and a complete engine to establish engine control and operability with various air-water vapor-water mixtures.
PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.
Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul
2018-02-01
Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell (a physics-based multicellular simulator) is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10⁵-10⁶ cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net.
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
High Performance Radiation Transport Simulations on TITAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M
2012-01-01
In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10--20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
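For reference, the steady-state, linear Boltzmann transport equation that Denovo discretizes can be written in standard notation for the angular flux ψ, total and scattering cross sections Σ_t and Σ_s, and external source q as

\[
\hat\Omega\cdot\nabla\psi(\mathbf r,\hat\Omega,E)
+ \Sigma_t(\mathbf r,E)\,\psi(\mathbf r,\hat\Omega,E)
= \int_0^\infty\!\!\int_{4\pi}\Sigma_s(\mathbf r,E'\!\to\!E,\hat\Omega'\!\cdot\!\hat\Omega)\,\psi(\mathbf r,\hat\Omega',E')\,d\hat\Omega'\,dE'
+ q(\mathbf r,\hat\Omega,E),
\]

with three spatial, two angular, and one energy variable accounting for the six phase-space dimensions; the KBA sweep inverts the streaming-plus-collision operator on the left-hand side.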
NASA Astrophysics Data System (ADS)
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex
2018-04-01
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software designed to perform classical molecular dynamics simulations using GPU accelerations. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with another popular software, LAMMPS, running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
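As background (the standard embedded-atom-method form, not a statement of this particular implementation's internals), the EAM total energy combines a pairwise term with an embedding functional of the local electron density:

\[
E = \sum_i F_i\!\Big(\sum_{j\neq i}\rho_j(r_{ij})\Big)
+ \frac12\sum_i\sum_{j\neq i}\phi_{ij}(r_{ij}),
\]

where F_i is the embedding energy, ρ_j the density contribution of neighbor j, and φ_ij the pair potential; Finnis-Sinclair (FS) potentials share the same structure with a different choice of embedding and density functions.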
Neutronic calculation of fast reactors by the EUCLID/V1 integrated code
NASA Astrophysics Data System (ADS)
Koltashev, D. A.; Stakhanova, A. A.
2017-01-01
This article considers the neutronic calculation of the fast-neutron lead-cooled reactor BREST-OD-300 by the EUCLID/V1 integrated code. The main goal of the development and application of integrated codes is nuclear power plant safety justification. EUCLID/V1 is an integrated code designed for coupled neutronic, thermomechanical and thermohydraulic fast reactor calculations under normal and abnormal operating conditions. The EUCLID/V1 code is being developed at the Nuclear Safety Institute of the Russian Academy of Sciences. The integrated code has a modular structure and consists of three main modules: the thermohydraulic module HYDRA-IBRAE/LM/V1, the thermomechanical module BERKUT and the neutronic module DN3D. In addition, the integrated code includes databases with fuel, coolant and structural material properties. The neutronic module DN3D provides full-scale simulation of neutronic processes in fast reactors. Heat source distribution, control rod movement, reactivity changes and other processes can be simulated. The neutron transport equation is solved in the multigroup diffusion approximation. This paper contains some calculations performed as part of EUCLID/V1 code validation. A transient simulation of the fast-neutron lead-cooled reactor BREST-OD-300 (fuel assembly floating, decompression of a passive feedback system channel) and cross-validation against MCU-FR code results are presented in this paper. The calculations demonstrate the application of the EUCLID/V1 code to BREST-OD-300 simulation and safety justification.
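For context, the multigroup diffusion approximation mentioned above solves, for each energy group g, a balance equation of the standard form (written here generically; the DN3D-specific discretization is not given in the abstract):

\[
-\nabla\cdot\big(D_g\nabla\phi_g\big) + \Sigma_{r,g}\,\phi_g
= \sum_{g'\neq g}\Sigma_{s,g'\to g}\,\phi_{g'}
+ \frac{\chi_g}{k_{\mathrm{eff}}}\sum_{g'}\nu\Sigma_{f,g'}\,\phi_{g'},
\]

with D_g the diffusion coefficient, Σ_{r,g} the removal cross section, Σ_{s,g'→g} the group-to-group scattering cross section, and the fission source scaled by the eigenvalue k_eff.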
BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs
NASA Astrophysics Data System (ADS)
Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes
2017-06-01
Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, an industrial-scale simulation consisting of tens of millions of particles can take months to complete on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open source GPU based DEM code, BlazeDEM3D-GPU, that can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
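As a reminder of what a DEM time step computes (quoting one common linear spring-dashpot contact model; the abstract does not state which contact law BlazeDEM3D-GPU actually uses), the normal contact force between two touching particles with overlap δ_n is typically of the form

\[
\mathbf F_n = \big(k_n\,\delta_n - \gamma_n\,\dot\delta_n\big)\,\hat{\mathbf n},
\qquad
|\mathbf F_t| \le \mu\,|\mathbf F_n|,
\]

where k_n and γ_n are stiffness and damping coefficients and the tangential force F_t is capped by Coulomb friction; evaluating these pairwise forces for every contact at every step is what makes GPU parallelism attractive.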
Benchmark Simulation of Natural Circulation Cooling System with Salt Working Fluid Using SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, K. K.; Scarlat, R. O.; Hu, R.
Liquid salt-cooled reactors, such as the Fluoride Salt-Cooled High-Temperature Reactor (FHR), offer passive decay heat removal through natural circulation using Direct Reactor Auxiliary Cooling System (DRACS) loops. The behavior of such systems should be well-understood through performance analysis. The advanced system thermal-hydraulics tool System Analysis Module (SAM) from Argonne National Laboratory has been selected for this purpose. The work presented here is part of a larger study in which SAM modeling capabilities are being enhanced for the system analyses of FHR or Molten Salt Reactors (MSR). Liquid salt thermophysical properties have been implemented in SAM, as well as properties of Dowtherm A, which is used as a simulant fluid for scaled experiments, for future code validation studies. Additional physics modules to represent phenomena specific to salt-cooled reactors, such as freezing of coolant, are being implemented in SAM. This study presents a useful first benchmark for the applicability of SAM to liquid salt-cooled reactors: it provides steady-state and transient comparisons for a salt reactor system. A RELAP5-3D model of the Mark-1 Pebble-Bed FHR (Mk1 PB-FHR), and in particular its DRACS loop for emergency heat removal, provides steady state and transient results for flow rates and temperatures in the system that are used here for code-to-code comparison with SAM. The transient studied is a loss of forced circulation with SCRAM event. To the knowledge of the authors, this is the first application of SAM to FHR or any other molten salt reactors. While building these models in SAM, any gaps in the code’s capability to simulate such systems are identified and addressed immediately, or listed as future improvements to the code.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
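For orientation, the conventional (serial) rejection-free KMC step that SPKMC parallelizes looks like the following sketch; the event list and rates are placeholders, and this is not the authors' SPKMC or dual linked-list cell implementation.

    #include <cmath>
    #include <random>
    #include <vector>

    // One conventional KMC step: pick event k with probability r_k / R_total,
    // then advance time by an exponentially distributed increment with mean
    // 1/R_total. Because dt ~ 1/R_total and R_total grows with system size,
    // simulated time per step shrinks as the system grows -- the scalability
    // problem that synchronous parallel KMC (SPKMC) addresses.
    int kmc_step(const std::vector<double>& rates, double& time, std::mt19937& rng) {
        double R = 0.0;
        for (double r : rates) R += r;
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double target = u(rng) * R;
        double cum = 0.0;
        int k = 0;
        for (; k < (int)rates.size(); ++k) {
            cum += rates[k];
            if (cum >= target) break;
        }
        time += -std::log(1.0 - u(rng)) / R;  // exponential waiting time
        return k;                             // index of the selected event
    }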
(U) Ristra Next Generation Code Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hungerford, Aimee L.; Daniel, David John
LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive and mutually risk-mitigating paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales for computational speed, without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
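Schematically (and only as a reminder of the method, not a statement of COLAcode's exact discretization), the COLA approach evolves the residual displacement δx relative to the LPT trajectory x_LPT rather than the full particle trajectory:

\[
\mathbf x(t) = \mathbf x_{\mathrm{LPT}}(t) + \delta\mathbf x(t),
\qquad
\frac{d^2\,\delta\mathbf x}{dt^2} = -\nabla\Phi - \frac{d^2\,\mathbf x_{\mathrm{LPT}}}{dt^2},
\]

so large time steps stay accurate on large scales, where LPT already carries most of the displacement, at the cost of accuracy on small scales.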
Samorì, Bruno; Zuccheri, Giampaolo
2005-02-11
The nanometer scale is a special place where all sciences meet and develop a particularly strong interdisciplinarity. While biology is a source of inspiration for nanoscientists, chemistry has a central role in turning inspirations and methods from biological systems to nanotechnological use. DNA is the biological molecule that most fascinates nanoscience and nanotechnology. Nature uses DNA not only as a repository of genetic information, but also as a controller of the expression of the genes it contains. Thus, there are codes embedded in the DNA sequence that serve to control recognition processes on the atomic scale, such as base pairing, and others that control processes taking place on the nanoscale. From the chemical point of view, DNA is the supramolecular building block with the highest informational content. Nanoscience therefore has the opportunity to use DNA molecules to increase the level of complexity and efficiency in self-assembling and self-directing processes.
Simulation Studies of Mechanical Properties of Novel Silica Nano-structures
NASA Astrophysics Data System (ADS)
Muralidharan, Krishna; Torras Costa, Joan; Trickey, Samuel B.
2006-03-01
Advances in nanotechnology and the importance of silica as a technological material continue to stimulate computational study of the properties of possible novel silica nanostructures. Thus we have done classical molecular dynamics (MD) and multi-scale quantum mechanical (QM/MD) simulation studies of the mechanical properties of single-wall and multi-wall silica nano-rods of varying dimensions. Such nano-rods have been predicted by Mallik et al. to be unusually strong in tensile failure. Here we compare failure mechanisms of such nano-rods under tension, compression, and bending. The concurrent multi-scale QM/MD studies use the general PUPIL system (Torras et al.). In this case, PUPIL provides automated interoperation of the MNDO Transfer Hamiltonian QM code (Taylor et al.) and a locally written MD code. Embedding of the QM-forces domain is via the scheme of Mallik et al. Work supported by NSF ITR award DMR-0325553.
Modeling Hemispheric Detonation Experiments in 2-Dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, W M; Fried, L E; Vitello, P A
2006-06-22
Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (approximately a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
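For reference, the (unmodified) Murnaghan form referred to above relates pressure to compression through the bulk modulus K_0 and its pressure derivative K_0'; the specific modification used for the unreacted explosive is not given in the abstract:

\[
P(V) = \frac{K_0}{K_0'}\left[\left(\frac{V_0}{V}\right)^{K_0'} - 1\right].
\]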
Validation of TGLF in C-Mod and DIII-D using machine learning and integrated modeling tools
NASA Astrophysics Data System (ADS)
Rodriguez-Fernandez, P.; White, Ae; Cao, Nm; Creely, Aj; Greenwald, Mj; Grierson, Ba; Howard, Nt; Meneghini, O.; Petty, Cc; Rice, Je; Sciortino, F.; Yuan, X.
2017-10-01
Predictive models for steady-state and perturbative transport are necessary to support burning plasma operations. A combination of machine learning algorithms and integrated modeling tools is used to validate TGLF in C-Mod and DIII-D. First, a new code suite, VITALS, is used to compare SAT1 and SAT0 models in C-Mod. VITALS exploits machine learning and optimization algorithms for the validation of transport codes. Unlike SAT0, the SAT1 saturation rule contains a model to capture cross-scale turbulence coupling. Results show that SAT1 agrees better with experiments, further confirming that multi-scale effects are needed to model heat transport in C-Mod L-modes. VITALS will next be used to analyze past data from DIII-D: L-mode "Shortfall" plasma and ECH swing experiments. A second code suite, PRIMA, allows for integrated modeling of the plasma response to Laser Blow-Off cold pulses. Preliminary results show that SAT1 qualitatively reproduces the propagation of cold pulses after LBO injections and SAT0 does not, indicating that cross-scale coupling effects play a role in the plasma response. PRIMA will be used to "predict-first" cold pulse experiments using the new LBO system at DIII-D, and analyze existing ECH heat pulse data. Work supported by DE-FC02-99ER54512, DE-FC02-04ER54698.
Samuel A. Cushman; Kevin McGarigal
2004-01-01
Multi-scale investigations of species/environment relationships are an important tool in ecological research. The scale at which independent and dependent variables are measured, and how they are coded for analysis, can strongly influence the relationships that are discovered. However, little is known about how the coding of the dependent variable set influences...
Comparison of Turkish Injury Scale (TIS) with the Abbreviated Injury Scale (AIS).
Günay, Yasemin; Yavuz, M Fatih; Eşiyok, Burcu
2003-03-12
According to the Turkish Penal Code, Section 456, an assailant is punished in correlation to the severity of the victim's injury. In this study, the injury scale used in Turkey on the basis of Section 456 is compared with the Abbreviated Injury Scale (AIS). For this aim, a total of 984 cases out of those reported at the Traumatology Section of the Turkish Council for Forensic Medicine were randomly selected and evaluated retrospectively. In all, 40.7% of injuries were caused by blunt trauma, whereas 59.3% were caused by penetrating trauma. According to the Turkish Injury Scale (TIS), 40.3% of the cases were scored as first degree of injury, 15.6% as second degree and 44.1% as third degree. When compared, the score points 3, 4 and 5 in the AIS were found to be nearly equivalent to the third degree of the TIS. From this point of view, in the modified AIS 91.1% of first-degree injuries, 51.2% of second-degree and 97.2% of third-degree injuries are harmonious with the TIS. Generally, 83.2% of the cases are harmonious with the AIS system. The purpose of this study is to determine the source of these differences and to focus on particular traumatic lesions in order to suggest a possible rearrangement of the Turkish Injury Scale.
Color and Grey Scale in Sonar Displays
NASA Technical Reports Server (NTRS)
Kraiss, K. F.; Kuettelwesch, K. H.
1984-01-01
In spite of numerous publications, it is still rather unclear whether color is of any help in sonar displays. The work presented here deals with a particular type of sonar data, i.e., LOFAR-grams (low frequency analysing and recording) where acoustic sensor data are continuously written as a time-frequency plot. The question to be answered quantitatively is whether color coding improves target detection when compared with a grey-scale code. The data show significant differences in receiver-operating characteristics performance for the selected codes. In addition, it turned out that the background noise level affects the performance dramatically for some color codes, while others remain stable or even improve. Generally valid rules are presented on how to generate useful color scales for this particular application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinath Vadlamani; Scott Kruger; Travis Austin
Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc from the DOE SciDAC TOPS center, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We implemented the multigrid solvers on the fusion test problem that allows for real matrix systems with success, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
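To make the preconditioning setup concrete, the sketch below shows how a HYPRE BoomerAMG preconditioner is typically attached to a PETSc Krylov solver. It is a generic illustration (a 1-D Laplacian stand-in matrix, and a PETSc build configured with hypre is assumed), not the NIMROD integration itself.

    #include <petscksp.h>

    int main(int argc, char** argv) {
        PetscInitialize(&argc, &argv, NULL, NULL);

        const PetscInt n = 100;          // small 1-D Laplacian as a stand-in system
        Mat A; Vec x, b;
        MatCreate(PETSC_COMM_WORLD, &A);
        MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
        MatSetFromOptions(A);
        MatSetUp(A);
        PetscInt i0, i1;
        MatGetOwnershipRange(A, &i0, &i1);
        for (PetscInt i = i0; i < i1; ++i) {
            if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
            if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
            MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        }
        MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
        MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

        MatCreateVecs(A, &x, &b);
        VecSet(b, 1.0);

        KSP ksp; PC pc;
        KSPCreate(PETSC_COMM_WORLD, &ksp);
        KSPSetOperators(ksp, A, A);
        KSPGetPC(ksp, &pc);
        PCSetType(pc, PCHYPRE);               // requires PETSc configured with hypre
        PCHYPRESetType(pc, "boomeramg");      // algebraic multigrid preconditioner
        KSPSetFromOptions(ksp);
        KSPSolve(ksp, b, x);

        KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
        PetscFinalize();
        return 0;
    }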
Fractal Viscous Fingering in Fracture Networks
NASA Astrophysics Data System (ADS)
Boyle, E.; Sams, W.; Ferer, M.; Smith, D. H.
2007-12-01
We have used two very different physical models and computer codes to study miscible injection of a low-viscosity fluid into a simple fracture network, where it displaces a much more viscous "defending" fluid through "rock" that is otherwise impermeable. One code (NETfLow) is a standard pore-level model, originally intended to treat laboratory-scale experiments; it assumes negligible mixing of the two fluids. The other code (NFFLOW) was written to treat reservoir-scale engineering problems; it explicitly treats the flow through the fractures and allows for significant mixing of the fluids at the interface. Both codes treat the fractures as parallel plates, of different effective apertures. Results are presented for the composition profiles from both codes. Independent of the degree of fluid mixing, the profiles from both models have a functional form identical to that for fractal viscous fingering (i.e., diffusion limited aggregation, DLA). The two codes that solve the equations for different models gave similar results; together they suggest that the injection of a low-viscosity fluid into large-scale fracture networks may be much more significantly affected by fractal fingering than previously illustrated.
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzi, Silvio; Hereld, Mark; Insley, Joseph
In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for covisualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and to increase our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the ‘Center’) will be a resource for industry, DOE programs, and academic validation efforts.
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
Various global and regional methods have been proposed for the modeling of ionospheric vertical total electron content (VTEC). The global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, a non-parametric modeling technique that makes use of compactly supported B-spline basis functions generated automatically from the observations. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for the best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE-distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
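The geometry-free combination mentioned above is, up to sign and unit conventions (stated here generically, not as this paper's exact parameterization), the difference of the two code pseudoranges, which cancels geometry and clock terms and leaves the ionospheric term plus the differential code biases:

\[
P_4 \equiv P_1 - P_2
= 40.3\left(\frac{1}{f_1^{2}} - \frac{1}{f_2^{2}}\right)\mathrm{STEC}
+ c\,\big(\mathrm{DCB}_r + \mathrm{DCB}^{s}\big),
\]

where STEC is the slant TEC along the ray path (mapped to VTEC with an elevation-dependent mapping function), and DCB_r and DCB^s are the receiver and satellite biases estimated together with the VTEC model coefficients.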
NASA Astrophysics Data System (ADS)
Grunloh, Timothy P.
The objective of this dissertation is to develop a 3-D domain-overlapping coupling method that leverages the superior flow field resolution of the Computational Fluid Dynamics (CFD) code STAR-CCM+ and the fast execution of the System Thermal Hydraulic (STH) code TRACE to efficiently and accurately model thermal hydraulic transport properties in nuclear power plants under complex conditions of regulatory and economic importance. The primary contribution is the novel Stabilized Inertial Domain Overlapping (SIDO) coupling method, which allows for on-the-fly correction of TRACE solutions for local pressures and velocity profiles inside multi-dimensional regions based on the results of the CFD simulation. The method is found to outperform the more frequently-used domain decomposition coupling methods. An STH code such as TRACE is designed to simulate large, diverse component networks, requiring simplifications to the fluid flow equations for reasonable execution times. Empirical correlations are therefore required for many sub-grid processes. The coarse grids used by TRACE diminish sensitivity to small scale geometric details such as Reactor Pressure Vessel (RPV) internals. A CFD code such as STAR-CCM+ uses much finer computational meshes that are sensitive to the geometric details of reactor internals. In turbulent flows, it is infeasible to fully resolve the flow solution, but the correlations used to model turbulence are at a low level. The CFD code can therefore resolve smaller scale flow processes. The development of a 3-D coupling method was carried out with the intention of improving predictive capabilities of transport properties in the downcomer and lower plenum regions of an RPV in reactor safety calculations. These regions are responsible for the multi-dimensional mixing effects that determine the distribution at the core inlet of quantities with reactivity implications, such as fluid temperature and dissolved neutron absorber concentration.
NASA Technical Reports Server (NTRS)
Sang, Janche
2003-01-01
Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop a large-scale, detailed simulation for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling (the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at an appropriate level of fidelity) require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's use of modeling and simulation in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.
Webber, C J
2001-05-01
This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
Android Protection Mechanism: A Signed Code Security Mechanism for Smartphone Applications
2011-03-01
status registers, exceptions, endian support, unaligned access support, synchronization primitives, the Jazelle Extension, and saturated integer...supports comprehensive non-blocking shared-memory synchronization primitives that scale for multiple-processor system designs. This is an improvement... synchronization. Memory semaphores can be loaded and altered without interruption because the load and store operations are atomic. Processor
Code Properties from Holographic Geometries
NASA Astrophysics Data System (ADS)
Pastawski, Fernando; Preskill, John
2017-04-01
Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163., 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS /CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.
The NASA Neutron Star Grand Challenge: The coalescences of Neutron Star Binary System
NASA Astrophysics Data System (ADS)
Suen, Wai-Mo
1998-04-01
NASA funded a Grand Challenge Project (9/1996-1999) for the development of a multi-purpose numerical treatment for relativistic astrophysics and gravitational wave astronomy. The coalescence of binary neutron stars is chosen as the model problem for the code development. The institutes involved are Argonne National Laboratory, Lawrence Livermore National Laboratory, the Max Planck Institute at Potsdam, Stony Brook, the University of Illinois, and Washington University. We have recently succeeded in constructing a highly optimized parallel code which is capable of solving the full Einstein equations coupled with relativistic hydrodynamics, running at over 50 GFLOPS on a T3E (the second milestone point of the project). We are presently working on the head-on collision of two neutron stars, and the inclusion of realistic equations of state into the code. The code will be released to the relativity and astrophysics community in April of 1998. With the full dynamics of the spacetime, relativistic hydro and microphysics all combined into a unified 3D code for the first time, many interesting large-scale calculations in general relativistic astrophysics can now be carried out on massively parallel computers.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
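For reference, the Vlasov-Poisson system that LOKI discretizes on a 2+2-dimensional (x, v) phase-space grid is, in standard electrostatic, unmagnetized form,

\[
\frac{\partial f_s}{\partial t}
+ \mathbf v\cdot\nabla_{\mathbf x} f_s
+ \frac{q_s}{m_s}\,\mathbf E\cdot\nabla_{\mathbf v} f_s = 0,
\qquad
\nabla\cdot\mathbf E = \frac{1}{\varepsilon_0}\sum_s q_s\!\int f_s\,d\mathbf v,
\]

where f_s(x, v, t) is the distribution function of species s; two configuration-space and two velocity-space dimensions give the 2+2-dimensional phase space.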
A Fault-Oblivious Extreme-Scale Execution Environment (FOX)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hensbergen, Eric; Speight, William; Xenidis, Jimi
IBM Research’s contribution to the Fault Oblivious Extreme-scale Execution Environment (FOX) revolved around three core research deliverables: • collaboration with Boston University around the Kittyhawk cloud infrastructure which both enabled a development and deployment platform for the project team and provided a fault-injection testbed to evaluate prototypes • operating systems research focused on exploring role-based operating system technologies through collaboration with Sandia National Labs on the NIX research operating system and collaboration with the broader IBM Research community around a hybrid operating system model which became known as FusedOS • IBM Research also participated in an advisory capacity with the Boston University SESA project, the core of which was derived from the K42 operating system research project funded in part by DARPA’s HPCS program. Both of these contributions were built on a foundation of previous operating systems research funding by the Department of Energy’s FastOS Program. Through the course of the X-stack funding we were able to develop prototypes, deploy them on production clusters at scale, and make them available to other researchers. As newer hardware, in the form of BlueGene/Q, came online, we were able to port the prototypes to the new hardware and release the source code for the resulting prototypes as open source to the community. In addition to the open source code for the Kittyhawk and NIX prototypes, we were able to bring the BlueGene/Q Linux patches up to a more recent kernel and contribute them for inclusion by the broader Linux community. The lasting impact of the IBM Research work on FOX can be seen in its effect on the shift of IBM’s approach to HPC operating systems from Linux and Compute Node Kernels to role-based approaches as prototyped by the NIX and FusedOS work. This impact can be seen beyond IBM in follow-on ideas being incorporated into the proposals for the Exascale Operating Systems/Runtime program.
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
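To illustrate the three-dimensional domain decomposition idea in general terms (this is a generic halo-exchange skeleton, not the BOPfox implementation), each rank owns one sub-box of the simulation cell and exchanges boundary-atom data with its face neighbours every step:

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int nranks, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Build a 3-D Cartesian decomposition of the simulation box.
        int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
        MPI_Dims_create(nranks, 3, dims);
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

        // Placeholder halo buffers: data for atoms near each domain face.
        std::vector<double> send(300, 0.0), recv(300, 0.0);

        // Exchange halos with the two neighbours along each dimension.
        for (int dim = 0; dim < 3; ++dim) {
            int lo, hi;
            MPI_Cart_shift(cart, dim, 1, &lo, &hi);
            MPI_Sendrecv(send.data(), (int)send.size(), MPI_DOUBLE, hi, 0,
                         recv.data(), (int)recv.size(), MPI_DOUBLE, lo, 0,
                         cart, MPI_STATUS_IGNORE);
            // ...local bond-order evaluation would use owned + halo atoms here...
        }

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }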
Carroll, Christopher P.; Cochran, Joseph A.; Price, Janet P.; Guse, Clare E.; Wang, Marjorie C.
2010-01-01
The Abbreviated Injury Scale (AIS) is commonly used to score injury severity and describe types of injuries. In 2005, the AIS-Head section was revised to capture more detailed information about head injuries and to better reflect their clinical severity, but the impact of these changes is largely unknown. The purpose of this study was to compare AIS-1998 and AIS-2005 coding of traumatic brain injuries (TBI) using medical records at a single Level I trauma center. We included patients with severe TBI (Glasgow Coma Scale 3–8) after blunt injury, excluding those who were missing medical records. Detailed descriptions of injuries were collected, then manually coded into AIS-1998 and AIS-2005 by the same Certified AIS Specialist. Compared to AIS-1998, AIS-2005 coded the same injuries with lower severity scores [p<0.01] and with decreased mean and maximum AIS-Head scores [p<0.01]. Of the types of traumatic brain injuries, most of the changes occurred among cerebellar and cerebral injuries. Traumatic hypoxic brain injury secondary to systemic dysfunction was captured by AIS-2005 but not by AIS-1998. However, AIS-2005 captured fewer loss of consciousness cases due to changes in criteria for coding concussive injury. In conclusion, changes from AIS-1998 to AIS-2005 result in significant differences in severity scores and types of injuries captured. This may complicate future TBI research by precluding direct comparison to datasets using AIS-1998. TBIs should be coded into the same AIS-version for comparison or evaluation of trends, and specify which AIS-version is used. PMID:21050606
Subgrid Scale Modeling in Solar Convection Simulations using the ASH Code
NASA Technical Reports Server (NTRS)
Young, Y.-N.; Miesch, M.; Mansour, N. N.
2003-01-01
The turbulent solar convection zone has remained one of the most challenging and important subjects in physics. Understanding the complex dynamics in the solar convection zone is crucial for gaining insight into the solar dynamo problem. Many solar observatories have generated revealing data showing in great detail the large-scale motions in the solar convection zone. For example, a strong differential rotation is observed: the angular rotation is faster at the equator than near the poles, not only near the solar surface but also deep in the convection zone. On the other hand, due to the wide range of dynamical scales of turbulence in the solar convection zone, both theory and simulation have limited success. Thus, cutting edge solar models and numerical simulations of the solar convection zone have focused more narrowly on a few key features of the solar convection zone, such as the time-averaged differential rotation. For example, Brun & Toomre (2002) report computational findings of differential rotation in an anelastic model for solar convection. A critical shortcoming in this model is that the viscous dissipation is based on application of mixing length theory to stellar dynamics with some ad hoc parameter tuning. The goal of our work is to implement the subgrid scale model developed at CTR into the solar simulation code and examine how the differential rotation will be affected as a result. Specifically, we implement a Smagorinsky-Lilly subgrid scale model into the ASH (anelastic spherical harmonic) code developed over the years by various authors. This paper is organized as follows. In §2 we briefly formulate the anelastic system that describes the solar convection. In §3 we formulate the Smagorinsky-Lilly subgrid scale model for unstably stratified convection. We then present some preliminary results in §4, where we also provide some conclusions and future directions.
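In the standard Smagorinsky-Lilly closure (quoted here in its usual form; the stratification-dependent modifications developed in the paper are not reproduced), the subgrid eddy viscosity is tied to the resolved strain rate and the grid scale Δ:

\[
\nu_t = (C_s\,\Delta)^2\,|\bar S|,
\qquad
|\bar S| = \sqrt{2\,\bar S_{ij}\bar S_{ij}},
\qquad
\bar S_{ij} = \tfrac12\!\left(\frac{\partial \bar u_i}{\partial x_j} + \frac{\partial \bar u_j}{\partial x_i}\right),
\]

with C_s the Smagorinsky constant; this ν_t replaces the ad hoc mixing-length viscosity mentioned above.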
The Programming Language Python In Earth System Simulations
NASA Astrophysics Data System (ADS)
Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.
2004-12-01
Mathematical models in earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault-system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language where the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We will show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
NASA Astrophysics Data System (ADS)
Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.
2017-12-01
In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh Code to the Argonne Leadership Computing Facility's Mira BG/Q system. In this poster, we present some of our first results, showing simulations of 1) convection in the solar convection zone; 2) dynamo action in Earth's core; and 3) convection in the jovian deep atmosphere. These simulations have made efficient use of 131 thousand cores, 131 thousand cores and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.
Computer aided system for parametric design of combination die
NASA Astrophysics Data System (ADS)
Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.
2017-09-01
In this paper, a computer aided system for parametric design of combination dies is presented. The system is developed using the knowledge-based system technique of artificial intelligence. The system is capable of designing combination dies for the production of sheet metal parts requiring punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. The low cost of the proposed system will help die designers in small and medium-scale sheet metal industries design combination dies for similar types of products. The proposed system can reduce the design time and effort of die designers for combination dies.
ab initio MD simulations of geomaterials with ~1000 atoms
NASA Astrophysics Data System (ADS)
Martin, G. B.; Kirtman, B.; Spera, F. J.
2009-12-01
In the last two decades, ab initio studies of materials using Density Functional Theory (DFT) have increased exponentially in popularity. DFT codes are now used routinely to simulate properties of geomaterials--mainly silicates and geochemically important metals such as Fe. These materials are ubiquitous in the Earth’s mantle and core and in terrestrial exoplanets. Because of computational limitations, most First Principles Molecular Dynamics (FPMD) calculations are done on systems of only ~100 atoms for a few picoseconds. While this approach can be useful for calculating physical quantities related to crystal structure, vibrational frequency, and other lattice-scale properties (especially in crystals), it is statistically marginal for duplicating physical properties of the liquid state like transport and structure. In MD simulations in the NVE ensemble, temperature (T) and pressure (P) fluctuations scale as N^(-1/2); small particle number (N) systems are therefore characterized by greater statistical state point location uncertainty than large N systems. Previous studies have used codes such as VASP where CPU time increases with N^2, making calculations with N much greater than 100 impractical. SIESTA (Soler et al. 2002) is a DFT code that enables electronic structure and MD computations on larger systems (N~10^3) by making some approximations, such as localized numerical orbitals, that would be useful in modeling some properties of geomaterials. Here we test the applicability of SIESTA to simulate geosilicates, both hydrous and anhydrous, in the solid and liquid state. We have used SIESTA for lattice calculations of brucite, Mg(OH)2, that compare very well to experiment and calculations using CRYSTAL, another DFT code. Good agreement between more classical DFT calculations and SIESTA is needed to justify study of geosilicates using SIESTA across a range of pressures and temperatures relevant to the Earth’s interior. Thus, it is useful to adjust parameters in SIESTA in accordance with calculations from CRYSTAL as a check on feasibility. Results are reported here that suggest SIESTA may indeed be useful to model silicate liquids at very high T and P.
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity
NASA Astrophysics Data System (ADS)
Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration
2016-03-01
Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions, will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems are not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe an algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code is developed from our previous single-GPU version. In multi-GPU runs, one GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size attainable on the same devices. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles on a workstation. Good scaling across many cluster nodes is presented for two-patch particles.
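The MPI domain decomposition described above follows a common halo-exchange pattern: each rank (standing in here for one GPU) owns one domain and trades boundary data with its neighbours every step. A minimal 1-D sketch with mpi4py, purely illustrative and not GALAMOST's actual communication code:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# One domain per rank along x; periodic boundaries, as in a typical MD box.
left, right = (rank - 1) % size, (rank + 1) % size

# Ghost/halo data to send to each neighbour (e.g. positions near the domain edge).
send_left = np.full(4, rank, dtype=np.float64)
send_right = np.full(4, rank, dtype=np.float64)

# Exchange halos with both neighbours; sendrecv avoids deadlock.
recv_from_right = comm.sendrecv(send_left, dest=left, source=right)
recv_from_left = comm.sendrecv(send_right, dest=right, source=left)

print(f"rank {rank}: ghost data received from ranks {left} and {right}:",
      recv_from_left[0], recv_from_right[0])
```

Run with, e.g., mpiexec -n 4 python halo.py; each rank prints the data it received from its two neighbours.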
(U) Status of Trinity and Crossroads Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Billy Joe; Lujan, James Westley; Hemmert, K. S.
2017-01-10
(U) This paper provides a general overview of current and future plans for the Advanced Simulation and Computing (ASC) Advanced Technology (AT) systems fielded by the New Mexico Alliance for Computing at Extreme Scale (ACES), a collaboration between Los Alamos National Laboratory and Sandia National Laboratories. Additionally, this paper touches on research into technology beyond traditional CMOS. The status of Trinity, ASC's first AT system, and Crossroads, anticipated to succeed Trinity as the third AT system in 2020, will be presented, along with initial performance studies of the Intel Knights Landing Xeon Phi processors introduced on Trinity. The challenges and opportunities for our production simulation codes on AT systems will also be discussed. Trinity and Crossroads are a joint procurement by ACES and Lawrence Berkeley National Laboratory as part of the Alliance for application Performance at EXtreme scale (APEX), http://apex.lanl.gov.
Exclusively Visual Analysis of Classroom Group Interactions
ERIC Educational Resources Information Center
Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric
2016-01-01
Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data…
A Secure and Robust Object-Based Video Authentication System
NASA Astrophysics Data System (ADS)
He, Dajun; Sun, Qibin; Tian, Qi
2004-12-01
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
Neural code alterations and abnormal time patterns in Parkinson’s disease
NASA Astrophysics Data System (ADS)
Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo
2015-04-01
Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At low scales, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at middle scales, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
Evaluation of ground motion scaling methods for analysis of structural systems
O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.
2011-01-01
Ground motion selection and scaling undoubtedly comprises the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, this is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has been primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations with no experimental data available for the validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
An interactive web application for the dissemination of human systems immunology data.
Speake, Cate; Presnell, Scott; Domico, Kelly; Zeitner, Brad; Bjork, Anna; Anderson, David; Mason, Michael J; Whalen, Elizabeth; Vargas, Olivia; Popov, Dimitry; Rinchai, Darawan; Jourde-Chiche, Noemie; Chiche, Laurent; Quinn, Charlie; Chaussabel, Damien
2015-06-19
Systems immunology approaches have proven invaluable in translational research settings. The current rate at which large-scale datasets are generated presents unique challenges and opportunities. Mining aggregates of these datasets could accelerate the pace of discovery, but new solutions are needed to integrate the heterogeneous data types with the contextual information that is necessary for interpretation. In addition, enabling tools and technologies facilitating investigators' interaction with large-scale datasets must be developed in order to promote insight and foster knowledge discovery. State of the art application programming was employed to develop an interactive web application for browsing and visualizing large and complex datasets. A collection of human immune transcriptome datasets were loaded alongside contextual information about the samples. We provide a resource enabling interactive query and navigation of transcriptome datasets relevant to human immunology research. Detailed information about studies and samples are displayed dynamically; if desired the associated data can be downloaded. Custom interactive visualizations of the data can be shared via email or social media. This application can be used to browse context-rich systems-scale data within and across systems immunology studies. This resource is publicly available online at [Gene Expression Browser Landing Page ( https://gxb.benaroyaresearch.org/dm3/landing.gsp )]. The source code is also available openly [Gene Expression Browser Source Code ( https://github.com/BenaroyaResearch/gxbrowser )]. We have developed a data browsing and visualization application capable of navigating increasingly large and complex datasets generated in the context of immunological studies. This intuitive tool ensures that, whether taken individually or as a whole, such datasets generated at great effort and expense remain interpretable and a ready source of insight for years to come.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
A Large Scale Code Resolution Service Network in the Internet of Things
Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan
2012-01-01
In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large-scale application scenarios, a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently, with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretical properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
A parallel-vector algorithm for rapid structural analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1990-01-01
A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the 'loop unrolling' technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
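The variable-band idea, skipping all work above each row's first nonzero entry, can be sketched in a few lines; this is a generic profile (skyline) Cholesky in NumPy, not the parallel-vector Fortran implementation described in the paper:

```python
import numpy as np

def profile_cholesky(A, first_col):
    """Cholesky A = L L^T that skips entries left of first_col[i] in row i.
    first_col[i] is the column of the first nonzero in row i of A; because
    Cholesky fill-in stays inside this profile, those entries can be ignored."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        mj = first_col[j]
        for i in range(mj, j):                    # only columns inside the profile
            k0 = max(first_col[i], mj)
            s = L[i, k0:i] @ L[j, k0:i]
            L[j, i] = (A[j, i] - s) / L[i, i]
        s = L[j, mj:j] @ L[j, mj:j]
        L[j, j] = np.sqrt(A[j, j] - s)
    return L

# Tiny check against the original matrix.
A = np.array([[4., 1., 0., 0.],
              [1., 5., 2., 0.],
              [0., 2., 6., 1.],
              [0., 0., 1., 3.]])
first_col = [0, 0, 1, 2]            # column of the first nonzero in each row
L = profile_cholesky(A, first_col)
assert np.allclose(L @ L.T, A)
```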
The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Shao, H. M.; Deutsch, L. J.
1987-01-01
The very large-scale integration (VLSI) architecture of a single chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system so that the decoder will recognize a failure to decode instead of introducing additional errors. This could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm which needs about 100,000 transistors. However, the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.
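For orientation, the code parameters can be checked with a few lines of arithmetic: a (255, 223) Reed-Solomon code over GF(256) has n - k = 32 parity symbols, so it can correct any combination of e symbol errors and f erasures with 2e + f <= 32 (up to 16 errors when there are no erasures), and a decoder should flag a failure beyond that. A small sketch of the bound (generic coding theory, not the VLSI design itself):

```python
N, K = 255, 223
PARITY = N - K            # 32 redundant symbols (8-bit symbols over GF(256))

def decodable(errors, erasures, parity=PARITY):
    # errors-and-erasures bound for Reed-Solomon codes
    return 2 * errors + erasures <= parity

print(PARITY)             # 32
print(decodable(16, 0))   # True: up to 16 errors with no erasures
print(decodable(10, 12))  # True: 2*10 + 12 = 32
print(decodable(17, 0))   # False: the decoder should report a decoding failure
```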
The bioelectric code: An ancient computational medium for dynamic control of growth and form.
Levin, Michael; Martyniuk, Christopher J
2018-02-01
What determines large-scale anatomy? DNA does not directly specify geometrical arrangements of tissues and organs, and a process of encoding and decoding for morphogenesis is required. Moreover, many species can regenerate and remodel their structure despite drastic injury. The ability to obtain the correct target morphology from a diversity of initial conditions reveals that the morphogenetic code implements a rich system of pattern-homeostatic processes. Here, we describe an important mechanism by which cellular networks implement pattern regulation and plasticity: bioelectricity. All cells, not only nerves and muscles, produce and sense electrical signals; in vivo, these processes form bioelectric circuits that harness individual cell behaviors toward specific anatomical endpoints. We review emerging progress in reading and re-writing anatomical information encoded in bioelectrical states, and discuss the approaches to this problem from the perspectives of information theory, dynamical systems, and computational neuroscience. Cracking the bioelectric code will enable much-improved control over biological patterning, advancing basic evolutionary developmental biology as well as enabling numerous applications in regenerative medicine and synthetic bioengineering. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bartlett, E. P.; Morse, H. L.; Tong, H.
1971-01-01
Procedures and methods for predicting aerothermodynamic heating to delta orbiter shuttle vehicles were reviewed. A number of approximate methods were found to be adequate for large-scale parameter studies, but are considered inadequate for final design calculations. It is recommended that final design calculations be based on a computer code which accounts for nonequilibrium chemistry, streamline spreading, entropy swallowing, and turbulence. It is further recommended that this code be developed with the intent that it can be directly coupled with an exact inviscid flow field calculation when the latter becomes available. A nonsimilar, equilibrium chemistry computer code (BLIMP) was used to evaluate the effects of entropy swallowing, turbulence, and various three-dimensional approximations. These solutions were compared with available wind tunnel data. It was found that, for wind tunnel conditions, the effects of entropy swallowing and three-dimensionality are small for laminar boundary layers, but entropy swallowing causes a significant increase in turbulent heat transfer. However, it is noted that even small effects (say, 10-20%) may be important for the shuttle reusability concept.
Development of a Watershed Boundary Dataset for Mississippi
Van Wilson, K.; Clair, Michael G.; Turnipseed, D. Phil; Rebich, Richard A.
2009-01-01
The U.S. Geological Survey, in cooperation with the Mississippi Department of Environmental Quality, U.S. Department of Agriculture-Natural Resources Conservation Service, Mississippi Department of Transportation, U.S. Department of Agriculture-Forest Service, and the Mississippi Automated Resource Information System, developed a 1:24,000-scale Watershed Boundary Dataset for Mississippi including watershed and subwatershed boundaries, codes, names, and drainage areas. The Watershed Boundary Dataset for Mississippi provides a standard geographical framework for water-resources and selected land-resources planning. The original 8-digit subbasins (hydrologic unit codes) were further subdivided into 10-digit watersheds and 12-digit subwatersheds - the exceptions are the Lower Mississippi River Alluvial Plain (known locally as the Delta) and the Mississippi River inside levees, which were only subdivided into 10-digit watersheds. Also, large water bodies in the Mississippi Sound along the coast were not delineated as small as a typical 12-digit subwatershed. All of the data - including watershed and subwatershed boundaries, hydrologic unit codes and names, and drainage-area data - are stored in a Geographic Information System database.
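Hydrologic unit codes nest by pairs of digits, so the 8-, 10-, and 12-digit units mentioned above can all be read off a single 12-digit code. A minimal sketch; the example code below is invented for illustration:

```python
def huc_levels(huc12):
    """Split a 12-digit hydrologic unit code into its nested units."""
    assert len(huc12) == 12 and huc12.isdigit()
    return {
        "region":       huc12[:2],
        "subregion":    huc12[:4],
        "basin":        huc12[:6],
        "subbasin":     huc12[:8],   # the original 8-digit units
        "watershed":    huc12[:10],  # 10-digit units
        "subwatershed": huc12,       # 12-digit units
    }

print(huc_levels("031700091104"))    # hypothetical 12-digit code, for illustration only
```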
Vaccarino, Anthony L; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M; Stuss, Donald T; Theriault, Elizabeth; Evans, Kenneth R
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute's "Brain-CODE" is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.
Automated UMLS-Based Comparison of Medical Forms
Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard
2013-01-01
Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes, there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far – to our knowledge – an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and, especially, items with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical if item name, concept code and value domain are the same. Two items are called matching if only concept code and value domain are the same. Two items are called similar if their concept codes are the same, but the value domains are different. Based on these definitions, an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms provide a view of clusters of similar forms. The approach is scalable for a large set of real medical forms. PMID:23861827
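The identical / matching / similar definitions above translate directly into a comparison routine. A minimal sketch; the field names and example items are assumptions, not the actual compareODM data model:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str          # item name as shown on the form
    concept: str       # UMLS concept unique identifier (CUI)
    value_domain: str  # e.g. data type plus code list

def compare(a: Item, b: Item) -> str:
    if a.concept != b.concept:
        return "different"
    if a.value_domain != b.value_domain:
        return "similar"     # same concept, different value domain
    if a.name == b.name:
        return "identical"   # name, concept and value domain all agree
    return "matching"        # concept and value domain agree, names differ

x = Item("Body weight", "C0005910", "float;kg")
y = Item("Weight",      "C0005910", "float;kg")
print(compare(x, y))         # matching
```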
Gao, Y Nina
2018-04-06
The Resource-Based Relative Value Scale Update Committee (RUC) submits recommended reimbursement values for physician work (wRVUs) under Medicare Part B. The RUC includes rotating representatives from medical specialties. To identify changes in physician reimbursements associated with RUC rotating seat representation. Relative Value Scale Update Committee members 1994-2013; Medicare Part B Relative Value Scale 1994-2013; Physician/Supplier Procedure Summary Master File 2007; Part B National Summary Data File 2000-2011. I match service and procedure codes to specialties using 2007 Medicare billing data. Subsequently, I model wRVUs as a function of RUC rotating committee representation and level of code specialization. An annual RUC rotating seat membership is associated with a statistically significant 3-5 percent increase in Medicare expenditures for codes billed to that specialty. For codes that are performed by a small number of physicians, the association between reimbursement and rotating subspecialty representation is positive, 0.177 (SE = 0.024). For codes that are performed by a large number of physicians, the association is negative, -0.183 (SE = 0.026). Rotating representation on the RUC is correlated with overall reimbursement rates. The resulting differential changes may exacerbate existing reimbursement discrepancies between generalist and specialist practitioners. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Rehman, Khalil Ur; Malik, Aneeqa Ashfaq; Malik, M. Y.; Tahir, M.; Zehra, Iffat
2018-03-01
This short communication offers a set of scaling group transformations for Prandtl-Eyring fluid flow generated by a stretching flat porous surface. The flow regime includes both heat and mass transfer characteristics. To seek a solution of the flow problem, a set of scaling group transformations is proposed by adopting the Lie approach. These transformations are used to reduce the partial differential equations to ordinary differential equations. The reduced system is solved by a numerical method known as the shooting method, using a self-coded algorithm. The obtained results are presented by means of figures and tables.
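The shooting method named above turns a boundary-value problem into repeated initial-value solves while adjusting an unknown initial slope. As a generic illustration, the classical Blasius boundary-layer equation (not the Prandtl-Eyring system of the paper) can be solved this way:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius equation f''' + 0.5 * f * f'' = 0 with f(0) = f'(0) = 0, f'(inf) = 1.
def rhs(eta, y):                      # y = [f, f', f'']
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def shoot(fpp0, eta_max=10.0):
    # Integrate with a guessed wall value f''(0) and return the far-field mismatch.
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0

# Adjust the unknown f''(0) until the far-field boundary condition is satisfied.
fpp0 = brentq(shoot, 0.1, 1.0)
print(fpp0)                           # approx. 0.332, the classical Blasius value
```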
NASA Astrophysics Data System (ADS)
Cui, Tie Jun; Wu, Rui Yuan; Wu, Wei; Shi, Chuan Bo; Li, Yun Bo
2017-10-01
We propose fast and accurate designs for large-scale and low-profile transmission-type anisotropic coding metasurfaces with multiple functions at millimeter-wave frequencies based on the antenna-array method. The numerical simulation of an anisotropic coding metasurface with a size of 30λ × 30λ takes only 20 min with the proposed method, whereas it cannot be performed by commercial software due to the huge memory usage on personal computers. To inspect the performance of coding metasurfaces in the millimeter-wave band, the working frequency is chosen as 60 GHz. Based on convolution operations and holographic theory, the proposed multifunctional anisotropic coding metasurface exhibits different effects when excited by y-polarized and x-polarized incidences. This study extends the frequency range of coding metasurfaces, filling the gap between the microwave and terahertz bands, and implying promising applications in millimeter-wave communication and imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamek, Julian; Daverio, David; Durrer, Ruth
We present a new N-body code, gevolution, for the evolution of large-scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large-scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and to go beyond the usually adopted quasi-static approximation. Our code is publicly available.
Compiled MPI: Cost-Effective Exascale Applications Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Quinlan, D; Lumsdaine, A
2012-04-10
The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application. We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) New set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.
Air Force Geophysics Laboratory portable PCM ground station
NASA Astrophysics Data System (ADS)
Shaw, H.; Lawrence, F. A.
The present paper is concerned with the development of a portable Pulse-Code Modulation (PCM) telemetry station for the Air Force Geophysics Laboratory (AFGL). A system description is provided, taking into account the system equipment, the interface, the decommutator (DECOM) section of the interface, the direct memory access (DMA) section, and system specifications and capabilities. In the context of selecting between two conflicting philosophies regarding software, it was decided to favor a small-scale, specialized approach. Attention is given to the operating system, aspects of setting up the software, the application software, and questions of portability.
HACC: Extreme Scaling and Performance Across Diverse Architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin
2013-11-01
Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.
Biodegradation of paint stripper solvents in a modified gas lift loop bioreactor.
Vanderberg-Twary, L; Steenhoudt, K; Travis, B J; Hanners, J L; Foreman, T M; Brainard, J R
1997-07-05
Paint stripping wastes generated during the decontamination and decommissioning of former nuclear facilities contain paint stripping organics (dichloromethane, 2-propanol, and methanol) and bulk materials containing paint pigments. It is desirable to degrade the organic residues as part of an integrated chemical-biological treatment system. We have developed a modified gas lift loop bioreactor employing a defined consortium of Rhodococcus rhodochrous strain OFS and Hyphomicrobium sp. DM-2 that degrades paint stripper organics. Mass transfer coefficients and kinetic constants for biodegradation in the system were determined. It was found that transfer of organic substrates from surrogate waste into the air and further into the liquid medium in the bioreactor were rapid processes, occurring within minutes. Monod kinetics was employed to model the biodegradation of paint stripping organics. Analysis of the bioreactor process was accomplished with BIOLAB, a mathematical code that simulates coupled mass transfer and biodegradation processes. This code was used to fit experimental data to Monod kinetics and to determine kinetic parameters. The BIOLAB code was also employed to compare activities in the bioreactor of individual microbial cultures to the activities of combined cultures in the bioreactor. This code is of benefit for further optimization and scale-up of the bioreactor for treatment of paint stripping and other volatile organic wastes in bulk materials.
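Monod kinetics couples substrate consumption to biomass growth through a saturating rate law. A minimal sketch of the model form; the parameter values are arbitrary placeholders, not those fitted with BIOLAB:

```python
from scipy.integrate import solve_ivp

mu_max, Ks, Y = 0.5, 10.0, 0.4   # 1/h, mg/L, g biomass per g substrate (placeholders)

def monod(t, y):
    X, S = y                                  # biomass and substrate concentrations
    mu = mu_max * S / (Ks + S)                # specific growth rate
    return [mu * X, -mu * X / Y]              # dX/dt, dS/dt

sol = solve_ivp(monod, (0.0, 48.0), [0.05, 100.0])
X_end, S_end = sol.y[:, -1]
print(round(X_end, 2), round(S_end, 2))       # biomass grows as substrate is consumed
```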
A large scale software system for simulation and design optimization of mechanical systems
NASA Technical Reports Server (NTRS)
Dopker, Bernhard; Haug, Edward J.
1989-01-01
The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.
Automating annotation of information-giving for analysis of clinical conversation.
Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn
2014-02-01
Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
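As a sketch of the pipeline described above, a bag-of-words logistic regression can label utterances, after which the information-giving ratio is a simple count ratio per encounter. The toy utterances and labels below are invented for illustration and are not the GMIAS data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: 1 = information-giving, 0 = information-requesting.
utterances = ["your viral load is undetectable", "take one tablet every morning",
              "how have you been sleeping", "any side effects since the last visit"]
labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(utterances), labels)

# Classify one (invented) encounter and form the information-giving ratio.
visit = ["how is the new medication", "your kidney function looks stable",
         "we will repeat the labs in three months"]
pred = clf.predict(vec.transform(visit))
giving, requesting = int(pred.sum()), int((pred == 0).sum())
ratio = giving / max(requesting, 1)
print(giving, requesting, ratio)
```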
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning the compact binary codes to preserve semantic information given by labels. The overwhelming majority of these methods are similarity preserving approaches which approximate pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in hash learning process, which results in hash codes from different classes undistinguished, and therefore reduces the accuracy and robustness for the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates the hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. And then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval task.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, T; de Supinski, B R; Schulz, M
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
Vehicle-scale investigation of a fluorine jet-pump liquid hydrogen tank pressurization system
NASA Technical Reports Server (NTRS)
Cady, E. C.; Kendle, D. W.
1972-01-01
A comprehensive analytical and experimental program was performed to evaluate the performance of a fluorine-hydrogen jet-pump injector for main tank injection (MTI) pressurization of a liquid hydrogen (LH2) tank. The injector performance during pressurization and LH2 expulsion was determined by a series of seven tests of a full-scale injector and MTI pressure control system in a 28.3 cu m (1000 cu ft) flight-weight LH2 tank. Although the injector did not effectively jet-pump LH2 continuously, it showed improved pressurization performance compared to straight-pipe injectors tested under the same conditions in a previous program. The MTI computer code was modified to allow performance prediction for the jet-pump injector.
Palmer, Cameron S; Franklyn, Melanie
2011-01-07
Trauma systems should consistently monitor a given trauma population over a period of time. The Abbreviated Injury Scale (AIS) and derived scores such as the Injury Severity Score (ISS) are commonly used to quantify injury severities in trauma registries. To reflect contemporary trauma management and treatment, the most recent version of the AIS (AIS08) contains many codes which differ in severity from their equivalents in the earlier 1998 version (AIS98). Consequently, the adoption of AIS08 may impede comparisons between data coded using different AIS versions. It may also affect the number of patients classified as major trauma. The entire AIS98-coded injury dataset of a large population-based trauma registry was retrieved and mapped to AIS08 using the currently available AIS98-AIS08 dictionary map. The percentage of codes which had increased or decreased in severity, or could not be mapped, was examined in conjunction with the effect of these changes on the calculated ISS. The potential for free text information accompanying AIS coding to improve the quality of AIS mapping was explored. A total of 128280 AIS98-coded injuries were evaluated in 32134 patients, of whom 15471 were classified as major trauma. Although only 4.5% of dictionary codes decreased in severity from AIS98 to AIS08, this represented almost 13% of injuries in the registry. In 4.9% of patients, no injuries could be mapped. ISS was potentially unreliable in one-third of patients, as they had at least one AIS98 code which could not be mapped. Using AIS08, the number of patients classified as major trauma decreased by between 17.3% and 30.3%. Evaluation of free text descriptions for some injuries demonstrated the potential to improve mapping between AIS versions. Converting AIS98-coded data to AIS08 results in a significant decrease in the number of patients classified as major trauma. Many AIS98 codes are missing from the existing AIS map, and the AIS08 estimates it produces across a trauma population are of insufficient quality to be used in practice. However, it may be possible to improve AIS98 to AIS08 mapping to the point where it is useful to established registries.
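For readers unfamiliar with how AIS severities roll up into the ISS mentioned above: the ISS is the sum of squares of the highest AIS severity in each of the three most severely injured body regions, capped at 75 when any injury has severity 6. A minimal sketch with invented example injuries:

```python
def injury_severity_score(injuries):
    """injuries: list of (body_region, ais_severity) with severity 1-6.
    ISS = sum of squares of the three highest region maxima; any severity-6
    injury sets ISS to the maximum value of 75 by convention."""
    if any(sev == 6 for _, sev in injuries):
        return 75
    worst_per_region = {}
    for region, sev in injuries:
        worst_per_region[region] = max(worst_per_region.get(region, 0), sev)
    top3 = sorted(worst_per_region.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

# Invented example: head 4, chest 3, lower limb 2  ->  16 + 9 + 4 = 29
print(injury_severity_score([("head", 4), ("chest", 3), ("chest", 2), ("lower limb", 2)]))
```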
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k_eff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
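The propagation step described above is commonly written as the "sandwich rule": with a sensitivity vector S of the response to the cross-section parameters and their relative covariance matrix C, the relative variance of the response is S^T C S. A minimal numerical sketch with made-up values:

```python
import numpy as np

# Sensitivity of k_eff to three (hypothetical) nuclide-reaction cross sections,
# expressed as (dk/k)/(dsigma/sigma).
S = np.array([0.35, -0.12, 0.08])

# Relative covariance matrix of those cross sections (made-up values).
C = np.array([[2.5e-4, 1.0e-5, 0.0],
              [1.0e-5, 4.0e-4, 2.0e-5],
              [0.0,    2.0e-5, 9.0e-4]])

rel_var = S @ C @ S                      # sandwich rule: S^T C S
print(np.sqrt(rel_var) * 100, "% relative uncertainty in k_eff")
```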
Mod3DMT and EMTF: Free Software for MT Data Processing and Inversion
NASA Astrophysics Data System (ADS)
Egbert, G. D.; Kelbert, A.; Meqbel, N. M.
2017-12-01
"ModEM" was developed at Oregon State University as a modular system for inversion of electromagnetic (EM) geophysical data (Egbert and Kelbert, 2012; Kelbert et al., 2014). Although designed for more general (frequency domain) EM applications, and originally intended as a testbed for exploring inversion search and regularization strategies, our own initial uses of ModEM were for 3-D imaging of the deep crust and upper mantle at large scales. Since 2013 we have offered a version of the source code suitable for 3D magnetotelluric (MT) inversion on an "as is, user beware" basis for free for non-commercial applications. This version, which we refer to as Mod3DMT, has since been widely used by the international MT community. Over 250 users have registered to download the source code, and at least 50 MT studies in the refereed literature, covering locations around the globe at a range of spatial scales, cite use of ModEM for 3D inversion. For over 30 years I have also made MT processing software available for free use. In this presentation, I will discuss my experience with these freely available (but perhaps not truly open-source) computer codes. Although users are allowed to make modifications to the codes (on conditions that they provide a copy of the modified version) only a handful of users have tried to make any modification, and only rarely are modifications even reported, much less provided back to the developers.
NASA Technical Reports Server (NTRS)
Plante, I; Wu, H
2014-01-01
The code RITRACKS (Relativistic Ion Tracks) has been developed over the last few years at the NASA Johnson Space Center to simulate the effects of ionizing radiation at the microscopic scale, to understand the effects of space radiation at the biological level. The fundamental part of this code is the stochastic simulation of radiation track structure of heavy ions, an important component of space radiation. The code can calculate many relevant quantities such as the radial dose and voxel dose, and may also be used to calculate the dose in spherical and cylindrical targets of various sizes. Recently, we have incorporated DNA structure and damage simulations at the molecular scale in RITRACKS. The direct effect of radiation is simulated by introducing a slight modification of the existing particle transport algorithms, using the Binary-Encounter-Bethe model of ionization cross sections for each molecular orbital of DNA. The simulation of radiation chemistry is done by a step-by-step diffusion-reaction program based on the Green's functions of the diffusion equation. This approach is also used to simulate the indirect effect of ionizing radiation on DNA. The software can be installed independently on PCs and tablets using the Windows operating system and does not require any coding from the user. It includes a Graphic User Interface (GUI) and a 3D OpenGL visualization interface. The calculations are executed simultaneously (in parallel) on multiple CPUs. The main features of the software will be presented.
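As a concrete illustration of the step-by-step diffusion idea mentioned above, the sketch below moves a population of radiolytic species by sampling the free-diffusion Green's function, i.e. Gaussian displacements with standard deviation sqrt(2*D*dt) per component. It is a toy example with an assumed diffusion coefficient, not code from RITRACKS.
    import numpy as np
    rng = np.random.default_rng(0)
    def diffuse(positions, D, dt):
        """One diffusion step: Gaussian displacements with std sqrt(2*D*dt) per axis."""
        return positions + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=positions.shape)
    pos = np.zeros((1000, 3))            # 1000 radical species starting at the origin
    for _ in range(100):                 # 100 steps of 1 ps; D in nm^2/ns (assumed value)
        pos = diffuse(pos, D=2.8, dt=0.001)
    print("RMS displacement (nm): %.2f" % np.sqrt((pos**2).sum(axis=1).mean()))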
Parallel interference cancellation for CDMA applications
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Simon, Marvin K. (Inventor); Raphaeli, Dan (Inventor)
1997-01-01
The present invention provides a method of decoding a spread spectrum composite signal, the composite signal comprising plural user signals that have been spread with plural respective codes, wherein each coded signal is despread, averaged to produce a signal value, analyzed to produce a tentative decision, respread, summed with other respread signals to produce combined interference signals, the method comprising scaling the combined interference signals with a weighting factor to produce a scaled combined interference signal, scaling the composite signal with the weighting factor to produce a scaled composite signal, scaling the signal value by the complement of the weighting factor to produce a leakage signal, combining the scaled composite signal, the scaled combined interference signal and the leakage signal to produce an estimate of a respective user signal.
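The claim describes, in effect, a weighted (partial) interference cancellation stage: the composite signal and the reconstructed multi-user interference are both scaled by a weighting factor, and the complement of the weight retains part of the previous tentative value as a "leakage" term. The numpy sketch below is one hedged reading of that combination; the function name and the despreading normalization are illustrative, not taken from the patent.
    import numpy as np
    def pic_stage(r, codes, prev, w):
        """r: composite signal (L,); codes: (K, L) spreading codes (+/-1); prev: (K,) tentative values."""
        respread = prev[:, None] * codes                      # respread tentative decisions
        total = respread.sum(axis=0)
        out = np.empty_like(prev)
        for k in range(codes.shape[0]):
            interference = total - respread[k]                # combined interference seen by user k
            cleaned = w * r - w * interference                # scaled composite minus scaled interference
            out[k] = codes[k] @ cleaned / len(r) + (1.0 - w) * prev[k]   # despread, then add leakage
        return out
In multistage use one would typically feed the output back in as the tentative values for the next stage, often with the weighting factor increased toward one at later stages.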
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2017-03-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
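For readers unfamiliar with the baseline being compared against, the sketch below shows the general shape of an L1-penalized ("lasso") logistic regression predicting a session-level code from transcript text and scoring it with AUC, using scikit-learn. The toy transcripts and labels are invented, and the preprocessing is far simpler than in the study.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    texts = ["I feel hopeless and tired", "we talked about work stress",
             "sleep has been very poor", "things are going well lately"]
    y = np.array([1, 0, 1, 0])  # hypothetical presence/absence of a depression-related code
    X = CountVectorizer().fit_transform(texts)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
    print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))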
NASA Astrophysics Data System (ADS)
Rundle, P. B.; Rundle, J. B.; Morein, G.; Donnellan, A.; Turcotte, D.; Klein, W.
2004-12-01
The research community is rapidly moving towards the development of an earthquake forecast technology based on the use of complex, system-level earthquake fault system simulations. Using these topologically and dynamically realistic simulations, it is possible to develop ensemble forecasting methods similar to those used in weather and climate research. To effectively carry out such a program, one needs 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults in California, from the Mexico-California border to the Mendocino Triple Junction. Virtual California is a "backslip model", meaning that the long term rate of slip on each fault segment in the model is matched to the observed rate. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of 650 fault segments (degrees of freedom) in the model. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a Beowulf cluster consisting of >10 CPUs. We will also report results from implementing the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems. We report recent results on use of Virtual California for probabilistic earthquake forecasting for several sub-groups of major faults in California. These methods have the advantage that system-level fault interactions are explicitly included, as well as laboratory-based friction laws.
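One of the statistics mentioned above, the magnitude-frequency relation, is usually summarized by the Gutenberg-Richter b-value. The sketch below estimates it from a synthetic catalogue to show the kind of comparison made between simulated and observed seismicity; the catalogue is randomly generated for illustration, not Virtual California output.
    import numpy as np
    rng = np.random.default_rng(1)
    # Synthetic catalogue with a true b-value of 1: magnitudes above M=6 are
    # exponentially distributed with scale 1 / (b * ln 10).
    mags = 6.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=5000)
    bins = np.arange(6.0, mags.max(), 0.1)
    cum_counts = np.array([(mags >= m).sum() for m in bins])   # cumulative counts N(>=M)
    slope, intercept = np.polyfit(bins, np.log10(cum_counts), 1)
    print("estimated b-value: %.2f" % -slope)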
PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems
Ghaffarizadeh, Ahmadreza; Mumenthaler, Shannon M.
2018-01-01
Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. PMID:29474446
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
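To make the "discrete event core" mentioned above concrete, the sketch below implements the bare mechanism, a priority queue of timestamped events executed in order, in a few lines. It illustrates the simulation style only and assumes nothing about SST's actual interfaces.
    import heapq
    class Simulator:
        def __init__(self):
            self.now = 0.0
            self._queue = []
        def schedule(self, delay, action):
            # id(action) breaks timestamp ties so callables are never compared directly
            heapq.heappush(self._queue, (self.now + delay, id(action), action))
        def run(self):
            while self._queue:
                self.now, _, action = heapq.heappop(self._queue)
                action()
    sim = Simulator()
    def send(rank):
        print("t=%.1f ns: rank %d sends a message" % (sim.now, rank))
        if rank < 3:
            sim.schedule(100.0, lambda: send(rank + 1))   # model 100 ns network latency
    sim.schedule(0.0, lambda: send(0))
    sim.run()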
Ethical Guidelines for Structural Interventions to Small-Scale Historic Stone Masonry Buildings.
Hurol, Yonca; Yüceer, Hülya; Başarır, Hacer
2015-12-01
Structural interventions to historic stone masonry buildings require that both structural and heritage values be considered simultaneously. The absence of one of these value systems in implementation can be regarded as an unethical professional action. The research objective of this article is to prepare a guideline for ensuring ethical structural interventions to small-scale stone historic masonry buildings in the conservation areas of Northern Cyprus. The methodology covers an analysis of internationally accepted conservation documents and national laws related to the conservation of historic buildings, an analysis of building codes, especially Turkish building codes, which have been used in Northern Cyprus, and an analysis of the structural interventions introduced to a significant historic building in a semi-intact state in the walled city of Famagusta. This guideline covers issues related to whether buildings are intact or ruined, the presence of earthquake risk, the types of structural decisions in an architectural conservation project, and the values to consider during the decision making phase.
ECCD-induced tearing mode stabilization in coupled IPS/NIMROD/GENRAY HPC simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.
2012-03-01
We summarize ongoing developments toward an integrated, predictive model for determining optimal ECCD-based NTM stabilization strategies in ITER. We demonstrate the capability of the SWIM Project's Integrated Plasma Simulator (IPS) framework to choreograph multiple executions of, and data exchanges between, physics codes modeling various spatiotemporal scales of this coupled RF/MHD problem on several thousand HPC processors. As NIMROD evolves fluid equations to model bulk plasma behavior, self-consistent propagation/deposition of RF power in the ensuing plasma profiles is calculated by GENRAY. Data from both codes is then processed by computational geometry packages to construct the RF-induced quasilinear diffusion tensor; moments of this tensor (entering as additional terms in NIMROD's fluid equations due to the disparity in RF/MHD spatiotemporal scales) influence the dynamics of current, momentum, and energy evolution as well as the MHD closures. Initial results are shown to correctly capture the physics of magnetic island stabilization; we also discuss the development of a numerical plasma control system for active feedback stabilization of tearing modes.
NASA Astrophysics Data System (ADS)
Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.
2017-08-01
We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.
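For orientation, the algorithmic core of a lattice-Boltzmann code is a collide-and-stream update of discrete velocity distributions. The toy single-phase D2Q9 sketch below uses the single-relaxation-time (BGK) collision on a periodic grid; LB3D's multicomponent and amphiphilic models, boundary handling, and parallel decomposition go far beyond this illustration.
    import numpy as np
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    nx, ny, tau = 64, 64, 0.8
    def equilibrium(rho, u):
        cu = np.einsum('qd,xyd->xyq', c, u)
        usq = (u**2).sum(axis=-1)[..., None]
        return rho[..., None] * w * (1.0 + 3.0*cu + 4.5*cu**2 - 1.5*usq)
    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
    f[nx//2, ny//2] *= 1.1                         # small density perturbation
    mass0 = f.sum()
    for _ in range(200):
        rho = f.sum(axis=-1)
        u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
        f += -(f - equilibrium(rho, u)) / tau      # BGK single-relaxation-time collision
        for q in range(9):                         # streaming on a periodic lattice
            f[..., q] = np.roll(f[..., q], shift=c[q], axis=(0, 1))
    print("mass conserved:", np.isclose(f.sum(), mass0))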
Parallel software for lattice N = 4 supersymmetric Yang-Mills theory
NASA Astrophysics Data System (ADS)
Schaich, David; DeGrand, Thomas
2015-05-01
We present new parallel software, SUSY LATTICE, for lattice studies of four-dimensional N = 4 supersymmetric Yang-Mills theory with gauge group SU(N). The lattice action is constructed to exactly preserve a single supersymmetry charge at non-zero lattice spacing, up to additional potential terms included to stabilize numerical simulations. The software evolved from the MILC code for lattice QCD, and retains a similar large-scale framework despite the different target theory. Many routines are adapted from an existing serial code (Catterall and Joseph, 2012), which SUSY LATTICE supersedes. This paper provides an overview of the new parallel software, summarizing the lattice system, describing the applications that are currently provided and explaining their basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance of the code, and highlight some notable aspects of the documentation for those interested in contributing to its future development.
ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems.
Niethammer, Christoph; Becker, Stefan; Bernreuther, Martin; Buchholz, Martin; Eckhardt, Wolfgang; Heinecke, Alexander; Werth, Stephan; Bungartz, Hans-Joachim; Glass, Colin W; Hasse, Hans; Vrabec, Jadran; Horsch, Martin
2014-10-14
The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
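As a minimal illustration of the pair-potential evaluation underlying such molecular dynamics codes, the sketch below computes a truncated Lennard-Jones 12-6 energy with the minimum-image convention in reduced units. It is an O(N^2) toy with none of the linked-cell or load-balancing machinery that makes ls1 mardyn scale.
    import numpy as np
    def lj_energy(positions, box, epsilon=1.0, sigma=1.0, rcut=2.5):
        n, energy = len(positions), 0.0
        for i in range(n - 1):
            d = positions[i+1:] - positions[i]
            d -= box * np.round(d / box)               # minimum-image convention
            r2 = (d**2).sum(axis=1)
            r2 = r2[r2 < rcut**2]                      # apply the cutoff
            sr6 = (sigma**2 / r2) ** 3
            energy += (4.0 * epsilon * (sr6**2 - sr6)).sum()
        return energy
    rng = np.random.default_rng(2)
    box = np.array([10.0, 10.0, 10.0])
    print("U =", lj_energy(rng.uniform(0, 10, size=(200, 3)), box))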
A Computational Chemistry Database for Semiconductor Processing
NASA Technical Reports Server (NTRS)
Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)
1998-01-01
The concept of 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts, The virtual prototyping effort would go nowhere if codes do not come with a reliable database of chemical and physical properties of gases involved in semiconductor processing. Commercial code vendors have no capabilities to generate such a database, rather leave the task to the user of finding whatever is needed. While individual investigations of interesting chemical systems continue at Universities, there has not been any large scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: 1. Thermal CVD reaction mechanism and rate constants. 2. Thermochemical properties. 3. Transport properties.4. Electron-molecule collision cross sections. and 5. Gas-surface interactions.
Leach, R; McNally, Donal; Bashir, Mohamad; Sastry, Priya; Cuerden, Richard; Richens, David; Field, Mark
2012-10-01
The severity and location of injuries resulting from vehicular collisions are normally recorded in Abbreviated Injury Scale (AIS) code; we propose a system to link AIS code to a description of acute aortic syndrome (AAS), thus allowing the hypothesis that aortic injury is progressive with collision kinematics to be tested. Standard AIS codes were matched with a clinical description of AAS. A total of 199 collisions that resulted in aortic injury were extracted from a national automotive collision database and the outcomes mapped onto AAS descriptions. The severity of aortic injury (AIS severity score) and stage of AAS progression were compared with collision kinematics and occupant demographics. Post hoc power analyses were used to estimate maximum effect size. The general demographic distribution of the sample represented that of the UK population in regard to sex and age. No significant relationship was observed between estimated test speed, collision direction, occupant location or seat belt use and clinical progression of aortic injury (once initiated). Power analysis confirmed that a suitable sample size was used to observe a medium effect in most of the cases. Similarly, no association was observed between injury severity and collision kinematics. There is sufficient information on AIS severity and location codes to map onto the clinical AAS spectrum. It was not possible, with this data set, to consider the influence of collision kinematics on aortic injury initiation. However, it was demonstrated that after initiation, further progression along the AAS pathway was not influenced by collision kinematics. This might be because the injury is not progressive, because the vehicle kinematics studied do not fully represent the kinematics of the occupants, or because an unknown factor, such as stage of cardiac cycle, dominates. Epidemiologic/prognostic study, level IV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessee, Matthew Anderson
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features.
New capabilities include:
• ENDF/B-VII.1 nuclear data libraries CE and MG with enhanced group structures,
• Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
• Covariance data for fission product yields and decay constants,
• Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
• Parallel calculations with KENO,
• Problem-dependent temperature corrections for CE calculations,
• CE shielding and criticality accident alarm system analysis with MAVRIC,
• CE depletion with TRITON (T5-DEPL/T6-DEPL),
• CE sensitivity/uncertainty analysis with TSUNAMI-3D,
• Simplified and efficient LWR lattice physics with Polaris,
• Large scale detailed spent fuel characterization with ORIGAMI and ORIGAMI Automator,
• Advanced fission source convergence acceleration capabilities with Sourcerer,
• Nuclear data library generation with AMPX, and
• Integrated user interface with Fulcrum.
Enhanced capabilities include:
• Accurate and efficient CE Monte Carlo methods for eigenvalue and fixed source calculations,
• Improved MG resonance self-shielding methodologies and data,
• Resonance self-shielding with modernized and efficient XSProc integrated into most sequences,
• Accelerated calculations with TRITON/NEWT (generally 4x faster than SCALE 6.1),
• Spent fuel characterization with 1470 new reactor-specific libraries for ORIGEN,
• Modernization of ORIGEN (Chebyshev Rational Approximation Method [CRAM] solver, API for high-performance depletion, new keyword input format),
• Extension of the maximum mixture number to values well beyond the previous limit of 2147 to ~2 billion,
• Nuclear data formats enabling the use of more than 999 energy groups,
• Updated standard composition library to provide more accurate use of natural abundances, and
• Numerous other enhancements for improved usability and stability.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.
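To illustrate the particle-in-cell "scatter" phase whose irregular memory accesses drive the locality versus synchronization trade-offs analyzed above, the sketch below performs 1-D charge deposition with linear (cloud-in-cell) weighting on a periodic grid. It is a generic textbook illustration, not GTC code.
    import numpy as np
    def deposit(x, q, n_cells, dx):
        """Scatter particle charge q at positions x onto a periodic 1-D grid."""
        rho = np.zeros(n_cells)
        cell = np.floor(x / dx).astype(int) % n_cells
        frac = x / dx - np.floor(x / dx)
        np.add.at(rho, cell, q * (1.0 - frac))              # left grid point
        np.add.at(rho, (cell + 1) % n_cells, q * frac)      # right grid point (periodic wrap)
        return rho / dx
    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, 1.0, size=100000)
    rho = deposit(x, q=1.0 / len(x), n_cells=64, dx=1.0 / 64)
    print("mean charge density:", rho.mean())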
NASA Astrophysics Data System (ADS)
Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya
2017-05-01
Two-dimensional code marks are often used for production management. In particular, in the production lines of liquid-crystal-display panels and others, data on fabrication processes such as production number and process conditions are written on each substrate or device in detail, and they are used for quality management. For this reason, a lithography system specialized in code mark printing has been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, development of a low-cost exposure system using light emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly expected. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Based on these advances, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. Up to 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120°C for 7 min. Fiber sizes were homogeneous within 496 ± 4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays necessary for printing code marks will be obtained by stacking the newly fabricated linear arrays.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Grizzly Usage and Theory Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, B. W.; Backman, M.; Chakraborty, P.
2016-03-01
Grizzly is a multiphysics simulation code for characterizing the behavior of nuclear power plant (NPP) structures, systems and components (SSCs) subjected to a variety of age-related degradation mechanisms. Grizzly simulates both the progression of aging processes and the capacity of aged components to perform safely. This initial beta release of Grizzly includes capabilities for engineering-scale thermo-mechanical analysis of reactor pressure vessels (RPVs). Grizzly will ultimately include capabilities for a wide range of components and materials. Grizzly is in a state of constant development, and future releases will broaden the capabilities of this code for RPV analysis, as well as expand it to address degradation in other critical NPP components.
Development of photovoltaic array and module safety requirements
NASA Technical Reports Server (NTRS)
1982-01-01
Safety requirements for photovoltaic module and panel designs and configurations likely to be used in residential, intermediate, and large-scale applications were identified and developed. The National Electrical Code and Building Codes were reviewed with respect to present provisions which may be considered to affect the design of photovoltaic modules. Limited testing, primarily in the field of roof fire resistance, was conducted. Additional studies and further investigations led to the development of a proposed standard for safety for flat-plate photovoltaic modules and panels. Additional work covered the initial investigation of conceptual approaches and temporary deployment, for concept verification purposes, of a differential dc ground-fault detection circuit suitable as a part of a photovoltaic array safety system.
Digital 8-DPSK Modem For Trellis-Coded Communication
NASA Technical Reports Server (NTRS)
Jedrey, T. C.; Lay, N. E.; Rafferty, W.
1989-01-01
Digital real-time modem processes octuple differential-phase-shift-keyed trellis-coded modulation. Intended for use in communicating data at rates up to 4.8 kb/s in land-mobile satellite channel (Rician fading) of 5-kHz bandwidth at carrier frequency of 1 to 2 GHz. Modulator and demodulator contain digital signal processors performing modem functions. Design flexible in that functions altered via software. Modem successfully tested and evaluated in both laboratory and field experiments, including recent full-scale satellite experiment. In all cases, modem performed within 1 dB of theory. Other communication systems benefitting from this type of modem include land mobile (without satellites), paging, digitized voice, and frequency-modulation subcarrier data broadcasting.
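The essential trick of differential PSK, carrying data in the phase change between successive symbols rather than in absolute phase, can be shown in a few lines. The sketch below modulates and demodulates plain (uncoded) 8-DPSK symbols at baseband; the trellis coding, pulse shaping, and fading-channel tracking of the actual modem are omitted.
    import numpy as np
    def dpsk8_modulate(symbols):
        """symbols: integers 0..7; returns complex baseband samples, one per symbol."""
        phase = np.cumsum(symbols * (2 * np.pi / 8))      # accumulate phase increments
        return np.exp(1j * phase)
    def dpsk8_demodulate(rx):
        dphi = np.angle(rx[1:] * np.conj(rx[:-1]))        # phase difference between symbols
        return np.round((dphi % (2 * np.pi)) / (2 * np.pi / 8)).astype(int) % 8
    tx = np.array([3, 0, 7, 5, 1])
    samples = dpsk8_modulate(tx)
    print(dpsk8_demodulate(np.concatenate(([1.0 + 0j], samples))))  # recovers 3 0 7 5 1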
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Howard, Kipp E.
1991-01-01
A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.
[Representation of knowledge in respiratory medicine: ontology should help the coding process].
Blanc, F-X; Baneyx, A; Charlet, J; Housset, B
2010-09-01
Access to medical knowledge is a major issue for health professionals and requires the development of terminologies. The objective of the reported work was to construct an ontology of respiratory medicine, i.e. an organized and formalized terminology composed of domain-specific knowledge. The purpose is to help the medico-economical coding process and to represent the relevant knowledge about the patient. Our research covers the whole life cycle of an ontology, from the development of a methodology, to building the ontology from texts, to its use in an operational system. A computerized tool, based on the ontology, allows both medico-economical coding and graphical medical coding; the latter will be used to index hospital reports. Our ontology comprises 1913 concepts and contains all the knowledge included in the PMSI part of the SPLF thesaurus. Our tool has been evaluated and showed a recall of 80% and an accuracy of 85% regarding the medico-economical coding. The work presented in this paper justifies the approach that has been used. It must be continued on a large scale to validate our coding principles and the possibility of querying patient reports for clinical research. Copyright © 2010. Published by Elsevier Masson SAS.
Dynamic fisheye grids for binary black hole simulations
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Noble, Scott C.
2014-03-01
We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system, used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system—in which the equations of motion are solved—to the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system’s implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. Like we have done with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.
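A one-dimensional caricature of the mapping idea may help: the equations are solved on a uniform coordinate, and a monotonic map with a small stretch factor near a chosen centre places physical grid points densely there and sparsely far away. The blending function below is purely illustrative and is not the mapping used in Harm3d.
    import numpy as np
    def fisheye(x, center=0.0, inner_dx=0.1, outer_dx=1.0, width=2.0):
        """Monotonic map whose stretch factor dr/dx is ~inner_dx near `center` and ~outer_dx far away."""
        s = x - center
        blend = np.tanh(np.abs(s) / width)                   # 0 near the centre, -> 1 far away
        return center + s * (inner_dx + (outer_dx - inner_dx) * blend)
    x_uniform = np.linspace(-10, 10, 41)                     # uniform computational coordinate
    r = fisheye(x_uniform)                                   # warped physical coordinate
    print("spacing near centre: %.3f, far away: %.3f" % (r[21] - r[20], r[-1] - r[-2]))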
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
NASA Astrophysics Data System (ADS)
Porter, Ian Edward
A nuclear reactor systems code has the ability to model the system response in an accident scenario based on known initial conditions at the onset of the transient. However, there has been a tendency for these codes to lack the detailed thermo-mechanical fuel rod response models needed for accurate prediction of fuel rod failure. This proposed work will couple today's most widely used steady-state (FRAPCON) and transient (FRAPTRAN) fuel rod models with the systems code TRACE for best-estimate modeling of system response in accident scenarios such as a loss of coolant accident (LOCA). In doing so, code modifications will be made to model gamma heating in LWRs during steady-state and accident conditions and to improve fuel rod thermal/mechanical analysis by allowing axial nodalization of burnup-dependent phenomena such as swelling, cladding creep and oxidation. With the ability to model both burnup-dependent parameters and transient fuel rod response, a fuel dispersal study will be conducted using a hypothetical accident scenario under both PWR and BWR conditions to determine the amount of fuel dispersed under varying conditions. Because the fuel fragmentation size and internal rod pressure are both dependent on burnup, this analysis will be conducted at beginning, middle and end of cycle to examine the effects that cycle time can have on fuel rod failure and dispersal. Current fuel rod and system codes used by the Nuclear Regulatory Commission (NRC) are compilations of legacy codes with only commonly used light water reactor materials, uranium dioxide (UO2), mixed oxide (U/PuO2) and zirconium alloys. However, the events at Fukushima Daiichi and the Three Mile Island accident have shown the need for exploration into advanced materials possessing improved accident tolerance. This work looks to further modify the NRC codes to include silicon carbide (SiC), an advanced cladding material proposed by current DOE-funded research on accident tolerant fuels (ATF). Several additional fuels will also be analyzed, including uranium nitride (UN), uranium carbide (UC) and uranium silicide (U3Si2). Focusing on the system response in an accident scenario, an emphasis is placed on the fracture mechanics of the ceramic cladding by designing the fuel rods to eliminate pellet-cladding mechanical interaction (PCMI). The time to failure and how much of the fuel in the reactor fails with an advanced fuel design will be analyzed and compared to the current UO2/Zircaloy design using a full-scale reactor model.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Clifford; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030; Ji, Weixiao
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets.
Highlights:
• We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet.
• The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation.
• Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles.
• The testbed involves a polymeric system of oligopyrroles in the condensed phase.
• The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
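For readers less familiar with the method itself, the Metropolis acceptance rule at the heart of MMC fits in a few lines; the CPU sketch below applies it to a toy harmonic potential. The GPU-resident data layout and parallel energy evaluation described above are not reproduced here.
    import numpy as np
    rng = np.random.default_rng(4)
    def metropolis_step(x, energy, beta, step=0.1):
        trial = x + rng.uniform(-step, step, size=x.shape)
        dE = energy(trial) - energy(x)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            return trial, True            # accept the move
        return x, False                   # reject and keep the old configuration
    harmonic = lambda x: 0.5 * (x**2).sum()     # toy potential standing in for the molecular engine
    x = np.zeros(3)
    accepted = 0
    for _ in range(10000):
        x, ok = metropolis_step(x, harmonic, beta=1.0)
        accepted += ok
    print("acceptance ratio: %.2f" % (accepted / 10000))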
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai; Liu, Xianglin
2017-12-01
LSMS is a first principles, Density Functional theory based, electronic structure code targeted mainly at materials applications. LSMS calculates the local spin density approximation to the diagonal part of the electron Green's function. The electron/spin density and energy are easily determined once the Green's function is known. Linear scaling with system size is achieved in the LSMS by using several unique properties of the real space multiple scattering approach to the Green's function.
2017-02-01
scale blade servers (Dell PowerEdge) [20]. It must be recognized however, that the findings are distributed over this collection of architectures not...current operating system designs run into millions of lines of code. Moreover, they compound the opportunity for compromise by granting device drivers...properties (e.g. IP & MAC address) so as to invalidate an adversary’s surveillance data. The current running and bootstrapping instances of the micro
Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers
Wang, Bei; Ethier, Stephane; Tang, William; ...
2017-06-29
The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.A. Bamberger; L.M. Liljegren; P.S. Lowery
This document presents an analysis of the mechanisms influencing mixing within double-shell slurry tanks. A research program to characterize mixing of slurries within tanks has been proposed. The research program presents a combined experimental and computational approach to produce correlations describing the tank slurry concentration profile (and therefore uniformity) as a function of mixer pump operating conditions. The TEMPEST computer code was used to simulate both a full-scale (prototype) and scaled (model) double-shell waste tank to predict flow patterns resulting from a stationary jet centered in the tank. The simulation results were used to evaluate flow patterns in the tank and to determine whether flow patterns are similar between the full-scale prototype and an existing 1/12-scale model tank. The flow patterns were sufficiently similar to recommend conducting scoping experiments at 1/12-scale. Also, TEMPEST-modeled velocity profiles of the near-floor jet were compared to experimental measurements of the near-floor jet with good agreement. Reported values of physical properties of double-shell tank slurries were analyzed to evaluate the range of properties appropriate for conducting scaled experiments. One-twelfth scale scoping experiments are recommended to confirm the prioritization of the dimensionless groups (gravitational settling, Froude, and Reynolds numbers) that affect slurry suspension in the tank. Two of the proposed 1/12-scale test conditions were modeled using the TEMPEST computer code to observe the anticipated flow fields. This information will be used to guide selection of sampling probe locations. Additional computer modeling is being conducted to model a particulate laden, rotating jet centered in the tank. The results of this modeling effort will be compared to the scaled experimental data to quantify the agreement between the code and the 1/12-scale experiment. The scoping experiment results will guide selection of parameters to be varied in the follow-on experiments. Data from the follow-on experiments will be used to develop correlations to describe slurry concentration profile as a function of mixing pump operating conditions. This data will also be used to further evaluate the computer model applications. If the agreement between the experimental data and the code predictions is good, the computer code will be recommended for use to predict slurry uniformity in the tanks under various operating conditions. If the agreement between the code predictions and experimental results is not good, the experimental data correlations will be used to predict slurry uniformity in the tanks within the range of correlation applicability.
NASA Astrophysics Data System (ADS)
Pandey, S.; Rajaram, H.
2015-12-01
This work investigates hydrologic and geochemical interactions in the Critical Zone (CZ) using high-resolution reactive transport modeling. Reactive transport models can be used to predict the response of geochemical weathering and solute fluxes in the CZ to changes in a dynamic environment, such as those pertaining to human activities and climate change in recent years. The scales of hydrology and geochemistry in the CZ range from days to eons in time and centimeters to kilometers in space. Here, we present results of a multi-dimensional, multi-scale hydro-geochemical model to investigate the role of subsurface heterogeneity on the formation of mineral weathering fronts in the CZ, which requires consideration of many of these spatio-temporal scales. The model is implemented using the reactive transport code PFLOTRAN, an open source subsurface flow and reactive transport code that utilizes parallelization over multiple processing nodes and provides a strong framework for simulating weathering in the CZ. The model is set up to simulate weathering dynamics in the mountainous catchments representative of the Colorado Front Range. Model parameters were constrained based on hydrologic, geochemical, and geophysical observations from the Boulder Creek Critical Zone Observatory (BcCZO). Simulations were performed in fractured rock systems and compared with systems of heterogeneous and homogeneous permeability fields. Tracer simulations revealed that the mean residence time of solutes was drastically accelerated as fracture density increased. In simulations that include mineral reactions, distinct signatures of transport limitations on weathering arose when discrete flow paths were included. This transport limitation was related to both advective and diffusive processes in the highly heterogeneous systems (i.e. fractured media and correlated random permeability fields with σ_lnk > 3). The well-known time-dependence of mineral weathering rates was found to be the most pronounced in the fractured systems, with a departure from the maximum system-averaged dissolution rate occurring after ~100 kyr followed by a gradual decrease in the reaction rate with time that persists beyond 10^4 kyr.
Fast neutron counting in a mobile, trailer-based search platform
NASA Astrophysics Data System (ADS)
Hayward, Jason P.; Sparger, John; Fabris, Lorenzo; Newby, Robert J.
2017-12-01
Trailer-based search platforms for detection of radiological and nuclear threats are often based upon coded aperture gamma-ray imaging, because this method can be rendered insensitive to local variations in gamma background while still localizing the source well. Since gamma source emissions are rather easily shielded, in this work we consider the addition of fast neutron counting to a mobile platform for detection of sources containing Pu. A proof-of-concept system capable of combined gamma and neutron coded-aperture imaging was built inside of a trailer and used to detect a 252Cf source while driving along a roadway. Neutron detector types employed included EJ-309 in a detector plane and EJ-299-33 in a front mask plane. While the 252Cf gamma emissions were not readily detectable while driving by at 16.9 m standoff, the neutron emissions can be detected while moving. Mobile detection performance for this system and a scaled-up system design are presented, along with implications for threat sensing.
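Coded-aperture imaging of the kind described above recovers the source position by correlating the detector-plane shadowgram with the aperture pattern. The following Python/NumPy sketch illustrates that correlation step only; the random mask, array sizes, and single point-source scene are invented for the example and do not represent the trailer system's actual mask or processing chain.

import numpy as np

def decode_coded_aperture(shadowgram, mask):
    """Correlate a detector shadowgram with a balanced decoding array.

    A minimal illustration of mask-correlation decoding (assumed approach,
    not the actual system's algorithm). The balanced array (2*mask - 1)
    suppresses the flat background for a roughly 50%-open mask.
    """
    decoding = 2.0 * mask - 1.0          # +1 for open, -1 for opaque elements
    ny, nx = shadowgram.shape
    image = np.zeros_like(shadowgram, dtype=float)
    # Circular cross-correlation over all cyclic shifts of the decoding array
    for dy in range(ny):
        for dx in range(nx):
            image[dy, dx] = np.sum(shadowgram * np.roll(decoding, (dy, dx), axis=(0, 1)))
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = rng.integers(0, 2, size=(11, 11)).astype(float)   # hypothetical random mask
    source = np.zeros((11, 11))
    source[3, 7] = 100.0                                      # single point source
    shadowgram = np.zeros((11, 11))
    # Forward model: shadowgram is the source correlated with the mask (cyclic)
    for dy in range(11):
        for dx in range(11):
            shadowgram[dy, dx] = np.sum(source * np.roll(mask, (dy, dx), axis=(0, 1)))
    peak = np.unravel_index(np.argmax(decode_coded_aperture(shadowgram, mask)), (11, 11))
    print("reconstructed source position:", peak)             # expect (3, 7)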
NASA Astrophysics Data System (ADS)
Cacace, Mauro; Jacquey, Antoine B.
2017-09-01
Theory and numerical implementation describing groundwater flow and the transport of heat and solute mass in fully saturated fractured rocks with elasto-plastic mechanical feedbacks are developed. In our formulation, fractures are considered as being of lower dimension than the hosting deformable porous rock and we consider their hydraulic and mechanical apertures as scaling parameters to ensure continuous exchange of fluid mass and energy within the fracture-solid matrix system. The coupled system of equations is implemented in a new simulator code that makes use of a Galerkin finite-element technique. The code builds on a flexible, object-oriented numerical framework (MOOSE, Multiphysics Object Oriented Simulation Environment) which provides an extensive scalable parallel and implicit coupling to solve for the multiphysics problem. The governing equations of groundwater flow, heat and mass transport, and rock deformation are solved in a weak sense (either by classical Newton-Raphson or by free Jacobian inexact Newton-Krylov schemes) on an underlying unstructured mesh. Nonlinear feedbacks among the active processes are enforced by considering evolving fluid and rock properties depending on the thermo-hydro-mechanical state of the system and the local structure, i.e. degree of connectivity, of the fracture system. A suite of applications is presented to illustrate the flexibility and capability of the new simulator to address problems of increasing complexity and occurring at different spatial (from centimetres to tens of kilometres) and temporal scales (from minutes to hundreds of years).
Application of Gaussian Process Modeling to Analysis of Functional Unreliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Youngblood
2014-06-01
This paper applies Gaussian Process (GP) modeling to analysis of the functional unreliability of a “passive system.” GPs have been used widely in many ways [1]. The present application uses a GP for emulation of a system simulation code. Such an emulator can be applied in several distinct ways, discussed below. All applications illustrated in this paper have precedents in the literature; the present paper is an application of GP technology to a problem that was originally analyzed [2] using neural networks (NN), and later [3, 4] by a method called “Alternating Conditional Expectations” (ACE). This exercise enables a multifaceted comparison of both the processes and the results. Given knowledge of the range of possible values of key system variables, one could, in principle, quantify functional unreliability by sampling from their joint probability distribution, and performing a system simulation for each sample to determine whether the function succeeded for that particular setting of the variables. Using previously available system simulation codes, such an approach is generally impractical for a plant-scale problem. It has long been recognized, however, that a well-trained code emulator or surrogate could be used in a sampling process to quantify certain performance metrics, even for plant-scale problems. “Response surfaces” were used for this many years ago. But response surfaces are at their best for smoothly varying functions; in regions of parameter space where key system performance metrics may behave in complex ways, or even exhibit discontinuities, response surfaces are not the best available tool. This consideration was one of several that drove the work in [2]. In the present paper, (1) the original quantification of functional unreliability using NN [2], and later ACE [3], is reprised using GP; (2) additional information provided by the GP about uncertainty in the limit surface, generally unavailable in other representations, is discussed; (3) a simple forensic exercise is performed, analogous to the inverse problem of code calibration, but with an accident management spin: given an observation about containment pressure, what can we say about the system variables? References 1. For an introduction to GPs, see (for example) Gaussian Processes for Machine Learning, C. E. Rasmussen and C. K. I. Williams (MIT, 2006). 2. Reliability Quantification of Advanced Reactor Passive Safety Systems, J. J. Vandenkieboom, PhD Thesis (University of Michigan, 1996). 3. Z. Cui, J. C. Lee, J. J. Vandenkieboom, and R. W. Youngblood, “Unreliability Quantification of a Containment Cooling System through ACE and ANN Algorithms,” Trans. Am. Nucl. Soc. 85, 178 (2001). 4. Risk and Safety Analysis of Nuclear Systems, J. C. Lee and N. J. McCormick (Wiley, 2011). See especially §11.2.4.
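As a rough illustration of the emulator-plus-sampling idea described above, the sketch below trains a Gaussian Process surrogate on a small number of runs of a stand-in "simulator" and then estimates a failure probability by Monte Carlo sampling of the surrogate. The toy simulator, its two input variables, and the failure threshold are invented for the example; this is not the passive-system model, code, or data of the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(42)

def simulator(x):
    """Hypothetical stand-in for an expensive system code: a peak 'containment
    pressure' as a function of two normalized system variables."""
    return 1.0 + 0.8 * x[:, 0] ** 2 + 0.5 * np.sin(3.0 * x[:, 1])

# 1) Train the emulator on a small design of simulator runs.
x_train = rng.uniform(-1.0, 1.0, size=(40, 2))
y_train = simulator(x_train)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True)
gp.fit(x_train, y_train)

# 2) Sample the input distribution and query the cheap emulator, not the
#    simulator, to estimate functional unreliability P(pressure > limit).
limit = 1.9                                  # assumed failure threshold
x_mc = rng.uniform(-1.0, 1.0, size=(100_000, 2))
mean, std = gp.predict(x_mc, return_std=True)
p_fail = np.mean(mean > limit)

# 3) The GP's predictive std is the extra information about limit-surface
#    uncertainty: flag samples whose +/- 2 sigma band straddles the threshold.
ambiguous = np.mean((mean + 2 * std > limit) & (mean - 2 * std < limit))
print(f"estimated unreliability: {p_fail:.4f}, ambiguous fraction: {ambiguous:.4f}")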
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms widely used for video coding standards using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is achieved before the optimization, which locates code leaks, and afterward, creates a custom instruction set, which is then added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of each complete design is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks and shows the best combination between on-chip memory and SDRAM for the Nios II processor.
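Matching-based motion estimation of the kind being accelerated here reduces, in its reference software form, to an exhaustive block search that minimizes a sum-of-absolute-differences (SAD) cost. The Python sketch below shows that reference behaviour only, with illustrative block and search-window sizes; the actual work maps this kind of kernel onto Nios II custom instructions and on-chip memory rather than running it in software.

import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the matching cost typically moved to hardware."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def full_search(ref, cur, y0, x0, block=16, search=8):
    """Full-search block matching for the block of `cur` at (y0, x0).

    Returns the motion vector (dy, dx) minimizing SAD against `ref`.
    Block size and search range are illustrative values.
    """
    target = cur[y0:y0 + block, x0:x0 + block]
    best, best_cost = (0, 0), np.inf
    h, w = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            cost = sad(target, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # synthetic global motion
    print(full_search(ref, cur, 16, 16))             # expect a vector of (-2, 3)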
μπ: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
User's manual for the BNW-I optimization code for dry-cooled power plants. Volume I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Daniel, D.J.; De Mier, W.V.
1977-01-01
This User's Manual provides information on the use and operation of three versions of BNW-I, a computer code developed by Battelle, Pacific Northwest Laboratory (PNL) as a part of its activities under the ERDA Dry Cooling Tower Program. These three versions of BNW-I were used as reported elsewhere to obtain comparative incremental costs of electrical power production by two advanced concepts (one using plastic heat exchangers and one using ammonia as an intermediate heat transfer fluid) and a state-of-the-art system. The computer program offers a comprehensive method of evaluating the cost savings potential of dry-cooled heat rejection systems and components for power plants. This method goes beyond simple "figure-of-merit" optimization of the cooling tower and includes such items as the cost of replacement capacity needed on an annual basis and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence, the BNW-I code is a useful tool for determining potential cost savings of new heat transfer surfaces, new piping or other components as part of an optimized system for a dry-cooled power plant.
National Hydrography Dataset (NHD)
,
2001-01-01
The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system. NHD data was originally developed at 1:100,000 scale and exists at that scale for the whole country. High resolution NHD adds detail to the original 1:100,000-scale NHD. (Data for Alaska, Puerto Rico and the Virgin Islands was developed at high-resolution, not 1:100,000 scale.) Like the 1:100,000-scale NHD, high resolution NHD contains reach codes for networked features and isolated lakes, flow direction, names, stream level, and centerline representations for areal water bodies. Reaches are also defined to represent waterbodies and the approximate shorelines of the Great Lakes, the Atlantic and Pacific Oceans and the Gulf of Mexico. The NHD also incorporates the National Spatial Data Infrastructure framework criteria set out by the Federal Geographic Data Committee.
Posttest analysis of the 1:6-scale reinforced concrete containment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.
A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL along with nine other organizations performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test for the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper. 3 refs., 10 figs.
The life cycles of intense cyclonic and anticyclonic circulation systems observed over oceans
NASA Technical Reports Server (NTRS)
Smith, Phillip J.
1993-01-01
Full attention was now directed to the blocking case studies mentioned in previous reports. Coding and initial computational tests were completed for a North Atlantic blocking case that occurred in late October/early November 1985 and an upstream cyclone that developed rapidly 24 hours before block onset. This work is the subject of two papers accepted for presentation at the International Symposium on the Lifecycles of Extratropical Cyclones in Bergen, Norway, 27 June - 1 July 1994. This effort is currently highlighted by two features. The first is the extension of the Zwack-Okossi equation, originally formulated for the diagnosis of surface wave development, for application at any pressure level. The second is the separation of the basic large-scale analysis fields into synoptic-scale and planetary-scale components, using a two-dimensional Shapiro filter, and the corresponding partitioning of the Zwack-Okossi equation into synoptic-scale, planetary-scale, and synoptic/planetary-scale interaction terms. Preliminary tests suggest substantial contribution from the synoptic-scale and interaction terms.
The MOLDY short-range molecular dynamics package
NASA Astrophysics Data System (ADS)
Ackland, G. J.; D'Mellow, K.; Daraszewicz, S. L.; Hepburn, D. J.; Uhrin, M.; Stratford, K.
2011-12-01
We describe a parallelised version of the MOLDY molecular dynamics program. This Fortran code is aimed at systems which may be described by short-range potentials and specifically those which may be addressed with the embedded atom method. This includes a wide range of transition metals and alloys. MOLDY provides a range of options in terms of the molecular dynamics ensemble used and the boundary conditions which may be applied. A number of standard potentials are provided, and the modular structure of the code allows new potentials to be added easily. The code is parallelised using OpenMP and can therefore be run on shared memory systems, including modern multicore processors. Particular attention is paid to the updates required in the main force loop, where synchronisation is often required in OpenMP implementations of molecular dynamics. We examine the performance of the parallel code in detail and give some examples of applications to realistic problems, including the dynamic compression of copper and carbon migration in an iron-carbon alloy.
Program summary
Program title: MOLDY
Catalogue identifier: AEJU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 2
No. of lines in distributed program, including test data, etc.: 382 881
No. of bytes in distributed program, including test data, etc.: 6 705 242
Distribution format: tar.gz
Programming language: Fortran 95/OpenMP
Computer: Any
Operating system: Any
Has the code been vectorised or parallelized?: Yes. OpenMP is required for parallel execution
RAM: 100 MB or more
Classification: 7.7
Nature of problem: MOLDY addresses the problem of many atoms (of order 10^6) interacting via a classical interatomic potential on a timescale of microseconds. It is designed for problems where statistics must be gathered over a number of equivalent runs, such as measuring thermodynamic properties, diffusion, radiation damage, fracture, twinning deformation, nucleation and growth of phase transitions, sputtering, etc. In the vast majority of materials, the interactions are non-pairwise, and the code must be able to deal with many-body forces.
Solution method: Molecular dynamics involves integrating Newton's equations of motion. MOLDY uses Verlet (for good energy conservation) or predictor-corrector (for accurate trajectories) algorithms. It is parallelised using OpenMP. It also includes a static minimisation routine to find the lowest energy structure. Boundary conditions for surfaces, clusters, grain boundaries, thermostat (Nose), barostat (Parrinello-Rahman), and externally applied strain are provided. The initial configuration can be either a repeated unit cell or have all atoms given explicitly. Initial velocities are generated internally, but it is also possible to specify the velocity of a particular atom. A wide range of interatomic force models are implemented, including embedded atom, Morse or Lennard-Jones. Thus the program is especially well suited to calculations of metals.
Restrictions: The code is designed for short-ranged potentials, and there is no Ewald sum. Thus for long-range interactions where all particles interact with all others, the order-N scaling will fail. Different interatomic potential forms require recompilation of the code.
Additional comments: There is a set of associated open-source analysis software for postprocessing and visualisation. This includes local crystal structure recognition and identification of topological defects.
Running time: A set of test modules for running time are provided. The code scales as order N. The parallelisation shows near-linear scaling with number of processors in a shared memory environment. A typical run of a few tens of nanometers for a few nanoseconds will run on a timescale of days on a multiprocessor desktop.
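MOLDY itself is Fortran 95/OpenMP and uses many-body EAM forces, but the Verlet integration step it relies on for energy conservation can be sketched compactly. The Python example below is a generic velocity-Verlet loop with a Lennard-Jones pair potential standing in for the EAM forces; the box size, particle count, and time step are illustrative assumptions, and none of this is MOLDY source code.

import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0, rcut=2.5):
    """Pairwise Lennard-Jones forces with the minimum-image convention.

    A pair potential stands in for MOLDY's many-body EAM forces purely to
    make the integrator sketch self-contained.
    """
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                 # minimum image
        r2 = (d * d).sum(axis=1)
        mask = r2 < rcut ** 2
        inv6 = (sigma ** 2 / r2[mask]) ** 3
        fij = (24 * eps * inv6 * (2 * inv6 - 1) / r2[mask])[:, None] * d[mask]
        f[i] -= fij.sum(axis=0)                      # Newton's third law
        f[i + 1:][mask] += fij
    return f

def velocity_verlet(pos, vel, box, dt=0.005, steps=100, mass=1.0):
    """Energy-conserving velocity-Verlet integration of Newton's equations."""
    f = lj_forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos = (pos + dt * vel) % box                 # periodic boundaries
        f = lj_forces(pos, box)
        vel += 0.5 * dt * f / mass
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    box = 6.0
    g = np.arange(4)                                 # simple cubic starting lattice
    pos = (np.array(np.meshgrid(g, g, g)).reshape(3, -1).T[:32] * (box / 4) + 0.5).astype(float)
    vel = rng.normal(0.0, 0.5, size=(32, 3))
    pos, vel = velocity_verlet(pos, vel, box)
    print("kinetic energy after run:", 0.5 * (vel ** 2).sum())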
Democratisation of AAC Symbol Choices Using Technology.
Draffan, E A; Wald, Mike; Zeinoun, Nadine; Banes, David
2017-01-01
An online voting system has been developed to enable democratic choices of newly designed symbols that support speech, language and literacy skills in a localisation context. The system works for those using and supporting Augmentative and Alternative Communication (AAC) symbols on electronic systems through the provision of simplified acceptance scales and adapted grids. The methodology and results highlighted the importance of user participation at the outset and concrete examples of symbol adaptations that were found necessary to ensure higher levels of user satisfaction. Design changes included appropriate local dress codes, linguistic nuances, social settings, the built environment and religious sensitivities.
Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models
NASA Technical Reports Server (NTRS)
Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.
2018-01-01
The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05 -NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) have been conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
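For readers unfamiliar with the inner codes referred to above, a feed-forward convolutional encoder is a small shift-register circuit. The Python sketch below implements a generic rate-1/2, constraint-length-3 encoder (generator polynomials 7 and 5 octal) as an illustration; these particular parameters are a common textbook choice and are not the unit-memory code or the Goddard-standard codes studied in this work.

def conv_encode(bits, generators=(0b111, 0b101), k=3):
    """Rate-1/n feed-forward convolutional encoder.

    `generators` are the tap polynomials (MSB = current input bit); the default
    pair (7, 5 octal) is a standard textbook rate-1/2 code used here only as an
    illustration. Each input bit produces len(generators) coded bits.
    """
    state = 0                               # shift register holds the k-1 past bits
    out = []
    for b in bits:
        reg = (b << (k - 1)) | state        # current bit plus history
        for g in generators:
            out.append(bin(reg & g).count("1") % 2)   # parity of the tapped bits
        state = reg >> 1                    # advance the shift register
    return out

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1]
    print(conv_encode(msg))                 # two coded bits per message bit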
On fully three-dimensional resistive wall mode and feedback stabilization computations
NASA Astrophysics Data System (ADS)
Strumberger, E.; Merkel, P.; Sempf, M.; Günter, S.
2008-05-01
Resistive walls, located close to the plasma boundary, reduce the growth rates of external kink modes to resistive time scales. For such slowly growing resistive wall modes, the stabilization by an active feedback system becomes feasible. The fully three-dimensional stability code STARWALL, and the feedback optimization code OPTIM have been developed [P. Merkel and M. Sempf, 21st IAEA Fusion Energy Conference 2006, Chengdu, China (International Atomic Energy Agency, Vienna, 2006, paper TH/P3-8] to compute the growth rates of resistive wall modes in the presence of nonaxisymmetric, multiply connected wall structures and to model the active feedback stabilization of these modes. In order to demonstrate the capabilities of the codes and to study the effect of the toroidal mode coupling caused by multiply connected wall structures, the codes are applied to test equilibria using the resistive wall structures currently under debate for ITER [M. Shimada et al., Nucl. Fusion 47, S1 (2007)] and ASDEX Upgrade [W. Köppendörfer et al., Proceedings of the 16th Symposium on Fusion Technology, London, 1990 (Elsevier, Amsterdam, 1991), Vol. 1, p. 208].
On fully three-dimensional resistive wall mode and feedback stabilization computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strumberger, E.; Merkel, P.; Sempf, M.
2008-05-15
Resistive walls, located close to the plasma boundary, reduce the growth rates of external kink modes to resistive time scales. For such slowly growing resistive wall modes, the stabilization by an active feedback system becomes feasible. The fully three-dimensional stability code STARWALL, and the feedback optimization code OPTIM have been developed [P. Merkel and M. Sempf, 21st IAEA Fusion Energy Conference 2006, Chengdu, China (International Atomic Energy Agency, Vienna, 2006, paper TH/P3-8] to compute the growth rates of resistive wall modes in the presence of nonaxisymmetric, multiply connected wall structures and to model the active feedback stabilization of these modes. In order to demonstrate the capabilities of the codes and to study the effect of the toroidal mode coupling caused by multiply connected wall structures, the codes are applied to test equilibria using the resistive wall structures currently under debate for ITER [M. Shimada et al., Nucl. Fusion 47, S1 (2007)] and ASDEX Upgrade [W. Koeppendoerfer et al., Proceedings of the 16th Symposium on Fusion Technology, London, 1990 (Elsevier, Amsterdam, 1991), Vol. 1, p. 208].
Some practical universal noiseless coding techniques, part 3, module PSl14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than the larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
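The adaptive strategy described above, choosing the most efficient variable-length option per short block of preprocessed samples, can be illustrated with a simple Rice/Golomb coder. In the Python sketch below, signed prediction errors are first mapped to non-negative integers and the parameter k that minimizes the coded length of a 16-sample block is then selected; this is a generic illustration of the idea, not the actual option set or bit syntax of the module documented here.

def zigzag(e):
    """Map a signed prediction error to a non-negative integer (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    return (e << 1) if e >= 0 else ((-e << 1) - 1)

def rice_encode(n, k):
    """Rice code: unary quotient terminated by '0', then k low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def encode_block(errors, k_options=range(0, 8)):
    """Pick the Rice parameter k that minimizes the coded length of one block."""
    symbols = [zigzag(e) for e in errors]
    best_k = min(k_options, key=lambda k: sum(len(rice_encode(s, k)) for s in symbols))
    return best_k, "".join(rice_encode(s, best_k) for s in symbols)

if __name__ == "__main__":
    block = [0, -1, 2, 1, 0, 0, -3, 1, 0, 2, -1, 0, 1, 0, 0, -2]   # 16 prediction errors
    k, bitstream = encode_block(block)
    print(f"k = {k}, {len(bitstream)} coded bits for {len(block)} samples")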
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
NASA Astrophysics Data System (ADS)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
NASA Astrophysics Data System (ADS)
Mirvis, E.; Iredell, M.
2015-12-01
The operational (OPS) NOAA National Centers for Environmental Prediction (NCEP) suite traditionally consists of a large set of multi-scale HPC models, workflows, scripts, tools and utilities, which depend heavily on a variety of additional components. Namely, this suite utilizes a unique collection of more than 20 in-house developed shared libraries (NCEPLIBS), certain versions of third-party libraries (like netcdf, HDF, ESMF, jasper, xml etc.), and an HPC workflow tool, all within a dedicated (sometimes vendor-customized) homogeneous HPC system environment. This domain- and site-specific setup, combined with NCEP's product-driven, large-scale, real-time data operations, complicates NCEP collaborative development tremendously by reducing the chances of replicating the OPS environment anywhere else. The mission of NOAA/NCEP's Environmental Modeling Center (EMC) is to develop and improve numerical weather, climate, hydrological and ocean prediction through partnership with the research community. Recognizing these difficulties, EMC has recently taken an innovative approach to improving the flexibility of the HPC environment by building the elements and foundation of an NCEP OPS functionally equivalent environment (FEE), which can also be used to ease external interface constructs. Aiming to reduce the turnaround time of community code enhancements through the Research-to-Operations (R2O) cycle, EMC has developed and deployed several project subset standards that have already paved the road to NCEP OPS implementation standards. In this presentation we will discuss the EMC FEE for O2R requirements and approaches to collaborative standardization, including the NCEPLIBS FEE and model code version control paired with the models' derived customized HPC modules and FEE footprints. We will share NCEP/EMC experience and potential in the refactoring of EMC development processes and legacy codes, and in securing model source code quality standards by using a combination of the Eclipse IDE integrated with reverse-engineering tools/APIs. We will also report on collaborative efforts in the restructuring of the NOAA Environmental Modeling System (NEMS), the multi-model coupling framework, and on transitioning the FEE verification methodology.
Development of a 14-digit Hydrologic Unit Code Numbering System for South Carolina
Bower, David E.; Lowry, Claude; Lowery, Mark A.; Hurley, Noel M.
1999-01-01
A Hydrologic Unit Map showing the cataloging units, watersheds, and subwatersheds of South Carolina has been developed by the U.S. Geological Survey in cooperation with the South Carolina Department of Health and Environmental Control, funded through a U.S. Environmental Protection Agency 319 Grant, and the U.S. Department of Agriculture, Natural Resources Conservation Service. These delineations represent 8-, 11-, and 14-digit Hydrologic Unit Codes, respectively. This map presents information on drainage, hydrography, and hydrologic boundaries of the water-resources regions, subregions, accounting units, cataloging units, watersheds, and subwatersheds. The source maps for the basin delineations are 1:24,000-scale 7.5-minute series topographic maps and the base maps are from 1:100,000-scale Digital Line Graphs; however, the data are published at a scale of 1:500,000. In addition, an electronic version of the data is provided on a compact disc. Of the 1,022 subwatersheds delineated for this project, 1,004 range in size from 3,000 to 40,000 acres (4.69 to 62.5 square miles). Seventeen subwatersheds are smaller than 3,000 acres and one subwatershed, located on St. Helena Island, is larger than 40,000 acres. This map and its associated codes provide a standardized base for use by water-resource managers and planners in locating, storing, retrieving, and exchanging hydrologic data. In addition, the map can be used for cataloging water-data acquisition activities, geographically organizing hydrologic data, and planning and describing water-use and related land-use activities.
A scaling relationship for impact-induced melt volume
NASA Astrophysics Data System (ADS)
Nakajima, M.; Rubie, D. C.; Melosh, H., IV; Jacobson, S. A.; Golabek, G.; Nimmo, F.; Morbidelli, A.
2016-12-01
During the late stages of planetary accretion, protoplanets experience a number of giant impacts and extensive mantle melting. The impactor's core sinks through the molten part of the target mantle (magma ocean) and experiences metal-silicate partitioning (e.g., Stevenson, 1990). For understanding the chemical evolution of the planetary mantle and core, we need to determine the impact-induced melt volume because the partitioning strongly depends on the ranges of the pressures and temperatures within the magma ocean. Previous studies have investigated the effects of small impacts (i.e. impact cratering) on melt volume, but those for giant impacts are not well understood yet. Here, we perform giant impact simulations to derive a scaling law for melt volume as a function of impact velocity, impact angle, and impactor-to-target mass ratio. We use two different numerical codes, namely a smoothed particle hydrodynamics (SPH) code we developed (a particle method) and the code iSALE (a grid-based method), to compare their outcomes. Our simulations show that these two codes generally agree as long as the same equation of state is used. We also find that some of the previous studies developed for small impacts (e.g., Abramov et al., 2012) overestimate giant impact melt volume by orders of magnitude, partly because these models do not consider self-gravity of the impacting bodies. Therefore, these models may not be extrapolated to large impacts. Our simulations also show that melt volume can be scaled by the total mass of the system. In this presentation, we further discuss geochemical implications for giant impacts on planets, including Earth and Mars.
Swan, D; Hannigan, A; Higgins, S; McDonnell, R; Meagher, D; Cullen, W
2017-02-01
In Ireland, as in many other healthcare systems, mental health service provision is being reconfigured with a move toward more care in the community, and particularly primary care. Recording and surveillance systems for mental health information and activities in primary care are needed for service planning and quality improvement. We describe the development and initial implementation of a software tool ('mental health finder') within a widely used primary care electronic medical record system (EMR) in Ireland to enable large-scale data collection on the epidemiology and management of mental health and substance use problems among patients attending general practice. In collaboration with the Irish Primary Care Research Network (IPCRN), we developed the 'Mental Health Finder' as a software plug-in to a commonly used primary care EMR system to facilitate data collection on mental health diagnoses and pharmacological treatments among patients. The finder searches for and identifies patients based on diagnostic coding and/or prescribed medicines. It was initially implemented among a convenience sample of six GP practices. Prevalence of mental health and substance use problems across the six practices, as identified by the finder, was 9.4% (range 6.9-12.7%). 61.9% of identified patients were female; 25.8% were private patients. One-third (33.4%) of identified patients were prescribed more than one class of psychotropic medication. Of the patients identified by the finder, 89.9% were identifiable via prescribing data, 23.7% via diagnostic coding. The finder is a feasible and promising methodology for large-scale data collection on mental health problems in primary care.
NASA Astrophysics Data System (ADS)
Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.; Dominic, David F.; Freedman, Vicky L.; Scheibe, Timothy D.; Lunt, Ian A.
2010-04-01
A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the kilometer scale to the centimeter scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in part 1 of this paper. In part 2 (Guin et al., 2010), models generated by the code are presented and evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.
A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the km scale to the cm scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in Part 1 of this series. In Part 2, models generated by the code are presented and evaluated.
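The two records above describe a code whose inputs are univariate statistics: characteristic-length statistics for the unit types at each hierarchical level and the mean and variance of log-permeability for the lowest-level unit types. As a greatly simplified, hypothetical illustration of how such statistics translate into a gridded permeability field, the Python sketch below fills a small 3D lattice with two made-up unit types and draws cell permeabilities from each type's lognormal distribution; it collapses the multi-level hierarchy to a single level and is not the published algorithm.

import numpy as np

rng = np.random.default_rng(7)

# Lattice dimensions (cells) and two hypothetical lowest-level unit types,
# each defined only by the mean and variance of ln-permeability (values invented).
nx, ny, nz = 40, 40, 10
unit_stats = {0: (-27.0, 1.0),    # e.g. an open-framework gravel (illustrative)
              1: (-30.0, 0.5)}    # e.g. a sandy matrix (illustrative)

# Assign unit types in horizontal bands of random thickness, a crude stand-in
# for the geometric channel-belt architecture generated by the actual code.
unit = np.zeros((nz, ny, nx), dtype=int)
z = 0
while z < nz:
    thickness = rng.integers(1, 4)
    unit[z:z + thickness] = rng.integers(0, 2)
    z += thickness

# Draw log-permeability cell by cell from each unit type's distribution.
ln_k = np.empty(unit.shape, dtype=float)
for u, (mean, var) in unit_stats.items():
    mask = unit == u
    ln_k[mask] = rng.normal(mean, np.sqrt(var), size=mask.sum())

perm = np.exp(ln_k)               # permeability field ready for a flow simulator
print("geometric mean permeability:", np.exp(ln_k.mean()))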
A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation
NASA Astrophysics Data System (ADS)
Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.
2018-03-01
We present a new hybrid algorithm implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid) for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the Particle-in-Cell (PiC) method, and electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most of the other hybrid codes with an inertial electron fluid. This kind of code is appropriate to model a large variety of quasineutral plasma phenomena where the electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, as well as six test problems to validate our numerical approach. Our chosen test problems, where the electron inertia and ion kinetic effects play the essential role, are: 0) Excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicular propagating electrostatic waves (ion Bernstein modes), 3) ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) ion firehose instability, and 6) 2D oblique ion firehose instability. Our results reproduce successfully the predictions of linear and non-linear theory for all these problems, validating our code. All properties of this hybrid code make it ideal to study multi-scale phenomena between electron and ion scales such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.
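In a hybrid scheme of this kind the kinetic ions are advanced as particles in the electromagnetic fields supplied by the field and electron-fluid solve. As a minimal illustration of just the particle-push ingredient, the Python sketch below applies the standard Boris rotation to a single ion in prescribed uniform fields; the field values and units are arbitrary, and the inertial electron fluid equations that distinguish CHIEF are not represented here.

import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step for a kinetic ion in given E and B fields.

    Illustrates only the particle (ion) half of a hybrid scheme; the
    electron-fluid field solve it would couple to is omitted.
    """
    v_minus = v + 0.5 * q_m * dt * E
    t = 0.5 * q_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * dt * E
    return x + dt * v_new, v_new

if __name__ == "__main__":
    B = np.array([0.0, 0.0, 1.0])           # uniform magnetic field, arbitrary units
    E = np.zeros(3)
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    for _ in range(100):
        x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.1)
    print("ion speed after 100 steps:", np.linalg.norm(v))   # ~1.0, pure gyration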
Fast transform decoding of nonsystematic Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Truong, T. K.; Cheung, K.-M.; Reed, I. S.; Shiozaki, A.
1989-01-01
A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very large scale integration chips.
A full scale hydrodynamic simulation of pyrotechnic combustion
NASA Astrophysics Data System (ADS)
Kim, Bohoon; Jang, Seung-Gyo; Yoh, Jack
2017-06-01
A full scale hydrodynamic simulation that requires an accurate reproduction of shock-induced detonation was conducted for design of an energetic component system. A series of small scale gap tests and detailed hydrodynamic simulations were used to validate the reactive flow model for predicting the shock propagation in a train configuration and to quantify the shock sensitivity of the energetic materials. The energetic component system is composed of four main components, namely a donor unit (HNS + HMX), a bulkhead (STS), an acceptor explosive (RDX), and a propellant (BKNO3) for gas generation. The pressurized gases generated from the burning propellant were purged into a 10 cc release chamber for study of the inherent oscillatory flow induced by the interferences between shock and rarefaction waves. The pressure fluctuations measured from experiment and calculation were investigated to further validate the peculiar peak at specific characteristic frequency (ωc = 8.3 kHz). In this paper, a step-by-step numerical description of detonation of high explosive components, deflagration of propellant component, and deformation of metal component is given in order to facilitate the proper implementation of the outlined formulation into a shock physics code for a full scale hydrodynamic simulation of the energetic component system.
Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun
2018-09-01
Toxicity of heavy metals from industrialization poses a critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub regions can provide more instructive information for protecting specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various sources of pollution to each sub region (larger grid) and to assess the health risks posed by each source for each sub region. The results of the case study show that, for children (a sensitive population whose major regions of activity are school and residential areas), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emission and agricultural activity. The new models and results of this research provide effective spatial information and a useful model for quantifying the hazards of source categories and the associated human health risks at complex industrial systems in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
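The weak- and strong-scaling efficiencies quoted above follow the usual definitions: for weak scaling the problem grows with the core count and efficiency is the single-core time divided by the P-core time, while for strong scaling the problem is fixed and the ideal runtime falls as 1/P. The small Python helper below computes both; the timing numbers in the example are made up and are not the VSC-2 measurements.

def strong_scaling_efficiency(t1, tp, p):
    """Strong scaling: fixed total problem size, efficiency = t1 / (p * tp)."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    """Weak scaling: problem size grows with p, efficiency = t1 / tp."""
    return t1 / tp

if __name__ == "__main__":
    # Hypothetical weak-scaling timings in seconds (cores -> runtime); not measured data.
    timings = {1: 100.0, 64: 105.0, 512: 118.0, 8192: 124.0}
    for p, tp in timings.items():
        print(p, "cores, weak efficiency:", round(weak_scaling_efficiency(timings[1], tp), 3))
    # Hypothetical strong-scaling point: 100 s on 1 core, 0.25 s on 512 cores.
    print("strong efficiency at 512 cores:", round(strong_scaling_efficiency(100.0, 0.25, 512), 3))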
Vaccarino, Anthony L.; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R.; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G.; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F. Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M.; Stuss, Donald T.; Theriault, Elizabeth; Evans, Kenneth R.
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute’s “Brain-CODE” is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care. PMID:29875648
Skaga, Nils O; Eken, Torsten; Hestnes, Morten; Jones, J Mary; Steen, Petter A
2007-01-01
Although several changes were implemented in the 1998 update of the abbreviated injury scale (AIS 98) versus the previous AIS 90, both are still used worldwide for coding of anatomic injury in trauma. This could possibly invalidate comparisons between systems using different AIS versions. Our aim was to evaluate whether the use of different coding dictionaries affected estimation of Injury Severity Score (ISS), New Injury Severity Score (NISS) and probability of survival (Ps) according to TRISS in a hospital-based trauma registry. In a prospective study including 1654 patients from Ulleval University Hospital, a Norwegian trauma referral centre, patients were coded according to both AIS 98 and AIS 90. Agreement between the classifications of ISS, NISS and Ps according to TRISS methodology was estimated using intraclass correlation coefficients (ICC) with 95% CI. ISS changed for 378 of 1654 patients analysed (22.9%). One hundred and forty seven (8.9%) were coded differently due to different injury descriptions and 369 patients (22.3%) had a change in ISS value in one or more regions due to the different scoring algorithm for skin injuries introduced in AIS 98. This gave a minimal change in mean ISS (14.74 versus 14.54). An ICC value of 0.997 (95% CI 0.9968-0.9974) for ISS indicates excellent agreement between the scoring systems. There were no significant changes in NISS and Ps. There was excellent agreement for the overall population between ISS, NISS and Ps values obtained using AIS 90 and AIS 98 for injury coding. Injury descriptions for hypothermia were re-introduced in the recently published AIS 2005. We support this change as coding differences due to hypothermia were encountered in 4.3% of patients in the present study.
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique and made to embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Kibble-Zurek scaling and string-net coarsening in topologically ordered systems.
Chandran, Anushya; Burnell, F J; Khemani, Vedika; Sondhi, S L
2013-10-09
We consider the non-equilibrium dynamics of topologically ordered systems driven across a continuous phase transition into proximate phases with no, or reduced, topological order. This dynamics exhibits scaling in the spirit of Kibble and Zurek but now without the presence of symmetry breaking and a local order parameter. The late stages of the process are seen to exhibit a slow, coarsening dynamics for the string-net that underlies the physics of the topological phase, a potentially interesting signature of topological order. We illustrate these phenomena in the context of particular phase transitions out of the Abelian Z2 topologically ordered phase of the toric code/Z2 gauge theory, and the non-Abelian SU(2)k ordered phases of the relevant Levin-Wen models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.
2005-09-15
The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require high turn-around time for processing different science scenarios where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.
Analysis of Photogrammetry Data from ISIM Mockup, June 1, 2007
NASA Technical Reports Server (NTRS)
Nowak, Maria; Hill, Mike
2007-01-01
During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a Photogrammetry Measurement System for cryogenic calibration of specific target points on the ISIM composite structure, the Science Instrument optical benches, and other GSE equipment. This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close-range photogrammetry is a three-dimensional metrology technique that uses triangulation to locate custom targets in three coordinates from a collection of digital photographs taken from various locations and orientations. The photographs are tied together using coded targets (special targets that the software recognizes and uses to correlate the images into a three-dimensional map of the targets), and the solution is scaled via well-calibrated scale bars. Photogrammetry solves for the camera locations and the coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software.
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
VizieR Online Data Catalog: Transiting planet WASP-50b (Tregloan-Reed+, 2013)
NASA Astrophysics Data System (ADS)
Tregloan-Reed, J.; Southworth, J.
2018-05-01
Two high-quality light curves of transits of the extrasolar planetary system WASP-50 are presented. They were obtained using the 3.6m NTT at ESO La Silla, Chile, in the Gunn r passband. The errorbars for each transit have been scaled so the best-fitting model (obtained using the JKTEBOP code) has a reduced chi-squared value of 1.0. (1 data file).
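Rescaling error bars so that the best-fitting model yields a reduced chi-squared of 1.0 amounts to multiplying every uncertainty by the square root of the original reduced chi-squared. The Python sketch below shows that arithmetic on a synthetic light curve; the data, model, and number of fitted parameters are placeholders, and the JKTEBOP fit itself is not reproduced.

import numpy as np

def rescale_errors(obs, model, err, n_free_params):
    """Scale error bars so that the best-fit model gives reduced chi^2 = 1."""
    dof = len(obs) - n_free_params
    chi2_red = np.sum(((obs - model) / err) ** 2) / dof
    return err * np.sqrt(chi2_red), chi2_red

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    model = np.ones(200)                        # placeholder best-fit transit model flux
    obs = model + rng.normal(0.0, 0.002, 200)   # synthetic light curve
    err = np.full(200, 0.0015)                  # deliberately underestimated error bars
    new_err, chi2_red = rescale_errors(obs, model, err, n_free_params=7)
    print("original reduced chi^2:", round(chi2_red, 2), " rescaled error:", round(new_err[0], 5))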
Security Considerations For Network-Centric Weapon Systems
2009-09-01
who exploits some weakness in these protective measures and impersonates a trusted network member can gain the ability to reprogram the victim node to...permitting the attacker to conduct a small-scale reprogramming and insert malicious code such as viruses or worms (McClure, Scambray, and Kurtz 2005, 218...people requesting this information because of the subconscious assumption that everything will end well because it has in the past. Defense
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combine kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), a MHD code (MACH), and a x-ray transport code (ITS) can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.
Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties
Ilas, Germina; Liljenfeldt, Henrik
2017-05-19
Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.
NASA Astrophysics Data System (ADS)
Cao, Chao
2009-03-01
Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that electronic and transport properties of functionalized carbon nanotubes are sensitive to adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at nano-scale, we start off by investigating, via first-principles method, the electronic structure of Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green’s function techniques combined with density functional theory. Once our results are fully analyzed they can be used to interpret and understand experimental data, with a few difficult issues to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.
NASA Technical Reports Server (NTRS)
Aiken, Alexander
2001-01-01
The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analyzing software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu; Kim, Jong Oh
2016-05-15
Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of electronic portal imaging device (EPID) based on its effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code by density scaling of EPID structures, was modified by additionally considering the effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations in the model with measurements in EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurement in the plane of EPID. Results: In the EPID plane, calculated dose to water by the code showed agreement with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by proximity of the maximum points to penumbra and MC noise. The EPID model showed agreement with measured EPID images within 1.3%. The maximum point difference was 1.9%. The difference dropped from the higher value of the code by employing the calibration that is dependent on field sizes and thicknesses for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend of the calibration factors, unlike those of the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, an MC model of EPID has been developed, and its performance has been rigorously investigated for transit dosimetry.
Southern, Danielle A; Burnand, Bernard; Droesler, Saskia E; Flemons, Ward; Forster, Alan J; Gurevich, Yana; Harrison, James; Quan, Hude; Pincus, Harold A; Romano, Patrick S; Sundararajan, Vijaya; Kostanjsek, Nenad; Ghali, William A
2017-03-01
Existing administrative data patient safety indicators (PSIs) have been limited by uncertainty around the timing of onset of included diagnoses. We undertook de novo PSI development through a data-driven approach that drew upon "diagnosis timing" information available in some countries' administrative hospital data. Administrative database analysis and modified Delphi rating process. All hospitalized adults in Canada in 2009. We queried all hospitalizations for ICD-10-CA diagnosis codes arising during hospital stay. We then undertook a modified Delphi panel process to rate the extent to which each of the identified diagnoses has a potential link to suboptimal quality of care. We grouped the identified quality/safety-related diagnoses into relevant clinical categories. Lastly, we queried Alberta hospital discharge data to assess the frequency of the newly defined PSI events. Among 2,416,413 national hospitalizations, we found 2590 unique ICD-10-CA codes flagged as having arisen after admission. Seven panelists evaluated these in a 2-round review process, and identified a listing of 640 ICD-10-CA diagnosis codes judged to be linked to suboptimal quality of care and thus appropriate for inclusion in PSIs. These were then grouped by patient safety experts into 18 clinically relevant PSI categories. We then analyzed data on 2,381,652 Alberta hospital discharges from 2005 through 2012, and found that 134,299 (5.2%) hospitalizations had at least 1 PSI diagnosis. The resulting work creates a foundation for a new set of PSIs for routine large-scale surveillance of hospital and health system performance.
Some Practical Universal Noiseless Coding Techniques
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1994-01-01
Report discusses noiseless data-compression-coding algorithms, performance characteristics and practical considerations in implementation of algorithms in coding modules composed of very-large-scale integrated circuits. Report also has value as tutorial document on data-compression-coding concepts. Coding techniques and concepts in question "universal" in sense that, in principle, applicable to streams of data from variety of sources. However, discussion oriented toward compression of high-rate data generated by spaceborne sensors for lower-rate transmission back to earth.
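The report itself is the reference for these algorithms; purely as an illustration, a toy Rice (Golomb power-of-two) encoder for non-negative integers, the kind of building block such coding modules typically implement, might look like the following. The bit-string representation is for readability only.

```python
def rice_encode(value, k):
    """Encode a non-negative integer with Rice parameter k (toy illustration)."""
    q, r = divmod(value, 1 << k)
    unary = "1" * q + "0"                       # quotient in unary, terminated by 0
    binary = format(r, f"0{k}b") if k else ""   # remainder in k bits
    return unary + binary

def rice_decode(bits, k):
    """Decode a single Rice codeword from a bit string."""
    q = bits.index("0")                         # count leading ones
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return q * (1 << k) + r

for v in (0, 3, 9, 20):
    code = rice_encode(v, k=2)
    assert rice_decode(code, k=2) == v
    print(v, "->", code)
```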
Characterizing Mathematics Classroom Practice: Impact of Observation and Coding Choices
ERIC Educational Resources Information Center
Ing, Marsha; Webb, Noreen M.
2012-01-01
Large-scale observational measures of classroom practice increasingly focus on opportunities for student participation as an indicator of instructional quality. Each observational measure necessitates making design and coding choices on how to best measure student participation. This study investigated variations of coding approaches that may be…
JACOB: an enterprise framework for computational chemistry.
Waller, Mark P; Dresselhaus, Thomas; Yang, Jack
2013-06-15
Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Design and construction of functional AAV vectors.
Gray, John T; Zolotukhin, Serge
2011-01-01
Using the basic principles of molecular biology and laboratory techniques presented in this chapter, researchers should be able to create a wide variety of AAV vectors for both clinical and basic research applications. Basic vector design concepts are covered for both protein coding gene expression and small non-coding RNA gene expression cassettes. AAV plasmid vector backbones (available via AddGene) are described, along with critical sequence details for a variety of modular expression components that can be inserted as needed for specific applications. Protocols are provided for assembling the various DNA components into AAV vector plasmids in Escherichia coli, as well as for transferring these vector sequences into baculovirus genomes for large-scale production of AAV in the insect cell production system.
CoreTSAR: Core Task-Size Adapting Runtime
Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...
2014-10-27
Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Lastly, our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
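The abstract does not describe CoreTSAR's scheduler internals; as a generic sketch of the adaptive-worksharing idea (not CoreTSAR's actual policy), a throughput-proportional split of loop iterations across devices could look like this, with the device list and throughput figures purely hypothetical.

```python
def split_iterations(n_iters, throughputs):
    """Split a parallel loop across devices in proportion to measured throughput.

    Devices that finished their previous chunk faster receive a larger share
    of the next chunk; re-measuring after each chunk makes the split adaptive.
    """
    total = sum(throughputs)
    shares = [int(n_iters * t / total) for t in throughputs]
    shares[-1] += n_iters - sum(shares)      # hand any rounding remainder to the last device
    return shares

# e.g. one CPU socket and two GPUs, with the GPUs roughly 4x faster on this loop
print(split_iterations(10000, throughputs=[1.0, 4.2, 3.8]))
```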
Koydemir, Selda; Demir, Ayhan
2007-06-01
The purpose of the study was to report initial data on the psychometric properties of the Brief Fear of Negative Evaluation Scale. The scale was applied to a nonclinical sample of 250 (137 women, 113 men) Turkish undergraduate students selected randomly from Middle East Technical University. Their mean age was 20.4 yr. (SD= 1.9). The factor structure of the Turkish version, its criterion validity, and internal reliability coefficients were assessed. Although maximum likelihood factor analysis initially indicated that the scale had only one factor, a forced two-factor solution accounted for more variance (61%) in scale scores than a single factor. The straightforward items loaded on the first factor, and the reverse-coded items loaded on the second factor. The total score was significantly positively correlated with scores on the Revised Cheek and Buss Shyness Scale and significantly negatively correlated with scores on the Rosenberg Self-Esteem Scale. Factor 1 (straightforward items) correlated more highly with both Shyness and Self-esteem than Factor 2 (reverse-coded items). Internal consistency estimate was .94 for the Total scores, .91 for the Factor 1 (straightforward items), and .87 for the Factor 2 (reverse-coded items). No sex differences were evident for Fear of Negative Evaluation.
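For readers unfamiliar with reverse-coded items, the scoring step the abstract refers to can be sketched as follows. The item count, reverse-keyed positions, and scale range here are illustrative and are not the actual Brief Fear of Negative Evaluation key.

```python
import numpy as np

def score_scale(responses, reverse_items, scale_min=1, scale_max=5):
    """Reverse-code the flagged items, then return per-respondent total scores.

    responses: (n_respondents, n_items) array of Likert ratings.
    reverse_items: column indices whose wording is reverse-keyed.
    """
    scored = responses.astype(float).copy()
    scored[:, reverse_items] = (scale_max + scale_min) - scored[:, reverse_items]
    return scored.sum(axis=1)

responses = np.array([[4, 2, 5, 1],
                      [3, 3, 4, 2]])
print(score_scale(responses, reverse_items=[1, 3]))   # totals after reverse-coding
```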
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
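The hypergraph-product (Tillich-Zémor) construction that the abstract takes as a limiting case can be written down compactly. The sketch below builds the two CSS parity-check matrices from two small classical seed codes (chosen arbitrarily for illustration) and verifies that the X and Z stabilizers commute; it does not reproduce the hyperbicycle ansatz itself.

```python
import numpy as np

def hypergraph_product(h1, h2):
    """CSS parity-check matrices of the hypergraph-product code of h1 and h2."""
    r1, n1 = h1.shape
    r2, n2 = h2.shape
    hx = np.hstack([np.kron(h1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), h2.T)])
    hz = np.hstack([np.kron(np.eye(n1, dtype=int), h2),
                    np.kron(h1.T, np.eye(r2, dtype=int))])
    return hx % 2, hz % 2

# Small classical seed codes (repetition-code style checks)
h1 = np.array([[1, 1, 0], [0, 1, 1]])
h2 = np.array([[1, 1]])
hx, hz = hypergraph_product(h1, h2)
assert not np.any((hx @ hz.T) % 2), "X and Z stabilizers must commute"
print(hx.shape, hz.shape)   # checks acting on n1*n2 + r1*r2 qubits
```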
Computer programs for smoothing and scaling airfoil coordinates
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program utilizes both least-squares polynomial and least-squares cubic spline techniques to iteratively smooth the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system which unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness, which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.
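The scaling step described at the end amounts to a linear rescaling of the thickness distribution before recombining it with the camber line; a minimal sketch of that idea (array names and the toy camber/thickness shapes are illustrative, not the report's code) is:

```python
import numpy as np

def scale_airfoil(camber, thickness, target_max_thickness):
    """Scale a thickness distribution to a target maximum; camber is unchanged."""
    factor = target_max_thickness / thickness.max()
    t_scaled = thickness * factor
    y_upper = camber + 0.5 * t_scaled       # recombine into upper/lower surfaces
    y_lower = camber - 0.5 * t_scaled
    return y_upper, y_lower

x = np.linspace(0.0, 1.0, 101)
camber = 0.04 * np.sin(np.pi * x)                       # toy camber line
thickness = 0.24 * np.sqrt(np.clip(x * (1 - x), 0, None))  # toy thickness distribution
y_up, y_lo = scale_airfoil(camber, thickness, target_max_thickness=0.10)
print((y_up - y_lo).max())   # ~0.10 chord
```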
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-05-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-01-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
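The exact bin widths, dead zones, and subband structure are specified in the WSQ standard itself; as a generic illustration only, the uniform scalar quantization step applied to each wavelet subband before entropy coding can be sketched as:

```python
import numpy as np

def quantize(subband, bin_width):
    """Uniform scalar quantization of one wavelet subband (generic sketch,
    not the WSQ specification's exact dead-zone quantizer)."""
    return np.round(subband / bin_width).astype(int)

def dequantize(indices, bin_width):
    return indices * bin_width

rng = np.random.default_rng(1)
subband = rng.normal(0.0, 4.0, size=(8, 8))     # stand-in wavelet coefficients
q = quantize(subband, bin_width=1.5)            # these indices would be Huffman coded
print(np.abs(dequantize(q, 1.5) - subband).max())  # error bounded by half the bin width
```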
NASA Technical Reports Server (NTRS)
Corsiglia, V. R.; Olson, L. E.; Falarski, M. D.
1984-01-01
The design and testing of vane sets and an air-exchange inlet for the 40 x 80/80 x 120-ft wind tunnel at NASA Ames are reported. Boundary-layer analysis and 2D and 3D inviscid panel codes are employed in computer models of the system, and a 1/10-scale 2D facility and a 1/50-scale 3D model of the entire wind tunnel are used in experimental testing of the vane sets. The results, presented in graphs, photographs, drawings, and diagrams, are discussed. Generally good agreement is found between the predicted and measured performance.
Rolex: Resilience-oriented language extensions for extreme-scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert F.; Hukerikar, Saurabh
Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half-a-century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely to the number of components in the system and therefore faults and resultant system level failures will increase, as systems scale in terms of the number of processor cores and memory modules used. However every error detected need not cause catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large scale 3D systems using the physical mass ratio for Hydrogen. Most notably one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D," Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow. However, subgrid-scale parameterizations are for an estimation of small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). Those have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) that unifies turbulence and moist convection components produces a better result than the other PBL schemes. For that reason, the TEMF scheme is chosen as the PBL scheme we optimized for Intel Many Integrated Core (MIC), which ushers in a new era of supercomputing speed, performance, and compatibility. It allows the developers to run code at trillions of calculations per second using the familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. Those optimizations included vectorization of the code to utilize vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
BISON and MARMOT Development for Modeling Fast Reactor Fuel Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Williamson, Richard L.; Schwen, Daniel
2015-09-01
BISON and MARMOT are two codes under development at the Idaho National Laboratory for engineering scale and lower length scale fuel performance modeling. It is desired to add capabilities for fast reactor applications to these codes. The fast reactor fuel types under consideration are metal (U-Pu-Zr) and oxide (MOX). The cladding types of interest include 316SS, D9, and HT9. The purpose of this report is to outline the proposed plans for code development and provide an overview of the models added to the BISON and MARMOT codes for fast reactor fuel behavior. A brief overview of preliminary discussions on the formation of a bilateral agreement between the Idaho National Laboratory and the National Nuclear Laboratory in the United Kingdom is presented.
Quick Response codes for surgical safety: a prospective pilot study.
Dixon, Jennifer L; Smythe, William Roy; Momsen, Lara S; Jupiter, Daniel; Papaconstantinou, Harry T
2013-09-01
Surgical safety programs have been shown to reduce patient harm; however, there is variable compliance. The purpose of this study is to determine if innovative technology such as Quick Response (QR) codes can facilitate surgical safety initiatives. We prospectively evaluated the use of QR codes during the surgical time-out for 40 operations. Feasibility and accuracy were assessed. Perceptions of the current time-out process and the QR code application were evaluated through surveys using a 5-point Likert scale and binomial yes or no questions. At baseline (n = 53), survey results from the surgical team agreed or strongly agreed that the current time-out process was efficient (64%), easy to use (77%), and provided clear information (89%). However, 65% of surgeons felt that process improvements were needed. Thirty-seven of 40 (92.5%) QR codes scanned successfully, of which 100% were accurate. Three scan failures resulted from excessive curvature or wrinkling of the QR code label on the body. Follow-up survey results (n = 33) showed that the surgical team agreed or strongly agreed that the QR program was clearer (70%), easier to use (57%), and more accurate (84%). Seventy-four percent preferred the QR system to the current time-out process. QR codes accurately transmit patient information during the time-out procedure and are preferred to the current process by surgical team members. The novel application of this technology may improve compliance, accuracy, and outcomes. Copyright © 2013 Elsevier Inc. All rights reserved.
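As an aside, generating such labels is straightforward with widely used tooling. Assuming the third-party `qrcode` Python package (not something the study names), a minimal sketch is shown below; the payload fields are hypothetical, and a real deployment would follow the institution's data-minimization and privacy rules.

```python
import qrcode

# Hypothetical patient/procedure payload for a time-out label.
payload = "MRN:12345|Procedure:Lap appendectomy|Surgeon:Dr. X"

img = qrcode.make(payload)        # returns a PIL image of the QR code
img.save("timeout_label.png")     # print and affix per local workflow
```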
Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.
2016-01-01
A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
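The default time integrator mentioned, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme, has a standard (Shu-Osher) form. A minimal sketch for an arbitrary semi-discrete residual `rhs(u)` follows; the toy decay problem is only there to exercise the step function and is not from the paper.

```python
import numpy as np

def ssp_rk3_step(u, rhs, dt):
    """One step of the Shu-Osher 3-stage, 3rd-order SSP Runge-Kutta scheme."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Toy check on du/dt = -u, whose exact solution at t = 1 is exp(-1)
u = np.array([1.0])
for _ in range(100):
    u = ssp_rk3_step(u, lambda v: -v, dt=0.01)
print(u, np.exp(-1.0))
```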
How to review 4 million lines of ATLAS code
NASA Astrophysics Data System (ADS)
Stewart, Graeme A.; Lampl, Walter;
2017-10-01
As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run 3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide-ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties of even logistically organising such reviews in an already busy community, of examining areas in sufficient depth to identify key areas in need of upgrade, and of finishing the reviews in a timely fashion. We show how the reviews were organised and how the outputs were captured in a way that the sub-system communities could then tackle the problems uncovered on a realistic timeline. Further, we discuss how the review has influenced the overall planning for the Run 3 ATLAS offline code.
Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam
2012-02-01
To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.
Studying the genetic basis of speciation in high gene flow marine invertebrates
2016-01-01
A growing number of genes responsible for reproductive incompatibilities between species (barrier loci) exhibit the signals of positive selection. However, the possibility that genes experiencing positive selection diverge early in speciation and commonly cause reproductive incompatibilities has not been systematically investigated on a genome-wide scale. Here, I outline a research program for studying the genetic basis of speciation in broadcast spawning marine invertebrates that uses a priori genome-wide information on a large, unbiased sample of genes tested for positive selection. A targeted sequence capture approach is proposed that scores single-nucleotide polymorphisms (SNPs) in widely separated species populations at an early stage of allopatric divergence. The targeted capture of both coding and non-coding sequences enables SNPs to be characterized at known locations across the genome and at genes with known selective or neutral histories. The neutral coding and non-coding SNPs provide robust background distributions for identifying FST-outliers within genes that can, in principle, identify specific mutations experiencing diversifying selection. If natural hybridization occurs between species, the neutral coding and non-coding SNPs can provide a neutral admixture model for genomic clines analyses aimed at finding genes exhibiting strong blocks to introgression. Strongylocentrotid sea urchins are used as a model system to outline the approach but it can be used for any group that has a complete reference genome available. PMID:29491951
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansong, Charles; Tolic, Nikola; Purvine, Samuel O.
Complete and accurate genome annotation is crucial for comprehensive and systematic studies of biological systems. For example, systems biology-oriented genome scale modeling efforts greatly benefit from accurate annotation of protein-coding genes to develop properly functioning models. However, determining protein-coding genes for most new genomes is almost completely performed by inference, using computational predictions with significant documented error rates (> 15%). Furthermore, gene prediction programs provide no information on biologically important post-translational processing events critical for protein function. With the ability to directly measure peptides arising from expressed proteins, mass spectrometry-based proteomics approaches can be used to augment and verify coding regions of a genomic sequence and importantly detect post-translational processing events. In this study we utilized “shotgun” proteomics to guide accurate primary genome annotation of the bacterial pathogen Salmonella Typhimurium 14028 to facilitate a systems-level understanding of Salmonella biology. The data provides protein-level experimental confirmation for 44% of predicted protein-coding genes, suggests revisions to 48 genes assigned incorrect translational start sites, and uncovers 13 non-annotated genes missed by gene prediction programs. We also present a comprehensive analysis of post-translational processing events in Salmonella, revealing a wide range of complex chemical modifications (70 distinct modifications) and confirming more than 130 signal peptide and N-terminal methionine cleavage events in Salmonella. This study highlights several ways in which proteomics data applied during the primary stages of annotation can improve the quality of genome annotations, especially with regards to the annotation of mature protein products.
Coded continuous wave meteor radar
NASA Astrophysics Data System (ADS)
Chau, J. L.; Vierinen, J.; Pfeffer, N.; Clahsen, M.; Stober, G.
2016-12-01
The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products, such as wind fields. This type of a radar would also be useful for over-the-horizon radar, ionosondes, and observations of field-aligned-irregularities.
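The pulse-compression step at the heart of this technique is a correlation of the received signal with the known transmit code. A toy sketch with a random binary phase code is given below; the code length, echo delay, amplitude, and noise level are illustrative parameters, not values from the campaign described.

```python
import numpy as np

rng = np.random.default_rng(42)
code = rng.choice([-1.0, 1.0], size=1000)        # pseudorandom BPSK transmit code

delay, amplitude = 137, 0.1                      # hypothetical meteor echo parameters
rx = np.zeros(2000)
rx[delay:delay + code.size] += amplitude * code  # delayed, attenuated copy of the code
rx += rng.normal(0.0, 0.5, rx.size)              # noise well above the per-sample echo level

# Matched filtering (pulse compression): correlate against the known code;
# the ~30 dB compression gain lifts the echo peak out of the noise.
compressed = np.correlate(rx, code, mode="valid")
print("estimated delay:", np.argmax(np.abs(compressed)))   # ~137
```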
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro
2016-11-09
The ISPH code is developed to solve multi-physics meso-scale flow problems using an implicit SPH method. In particular, the code provides solutions for incompressible, multiphase flows and electro-kinetic flows.
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
Stochastic many-body problems in ecology, evolution, neuroscience, and systems biology
NASA Astrophysics Data System (ADS)
Butler, Thomas C.
Using the tools of many-body theory, I analyze problems in four different areas of biology dominated by strong fluctuations: The evolutionary history of the genetic code, spatiotemporal pattern formation in ecology, spatiotemporal pattern formation in neuroscience and the robustness of a model circadian rhythm circuit in systems biology. In the first two research chapters, I demonstrate that the genetic code is extremely optimal (in the sense that it manages the effects of point mutations or mistranslations efficiently), more than an order of magnitude beyond what was previously thought. I further show that the structure of the genetic code implies that early proteins were probably only loosely defined. Both the nature of early proteins and the extreme optimality of the genetic code are interpreted in light of recent theory [1] as evidence that the evolution of the genetic code was driven by evolutionary dynamics that were dominated by horizontal gene transfer. I then explore the optimality of a proposed precursor to the genetic code. The results show that the precursor code has only limited optimality, which is interpreted as evidence that the precursor emerged prior to translation, or else never existed. In the next part of the dissertation, I introduce a many-body formalism for reaction-diffusion systems described at the mesoscopic scale with master equations. I first apply this formalism to spatially-extended predator-prey ecosystems, resulting in the prediction that many-body correlations and fluctuations drive population cycles in time, called quasicycles. Most of these results were previously known, but were derived using the system size expansion [2, 3]. I next apply the analytical techniques developed in the study of quasi-cycles to a simple model of Turing patterns in a predator-prey ecosystem. This analysis shows that fluctuations drive the formation of a new kind of spatiotemporal pattern formation that I name "quasi-patterns." These quasi-patterns exist over a much larger range of physically accessible parameters than the patterns predicted in mean field theory and therefore account for the apparent observations in ecology of patterns in regimes where Turing patterns do not occur. I further show that quasi-patterns have statistical properties that allow them to be distinguished empirically from mean field Turing patterns. I next analyze a model of visual cortex in the brain that has striking similarities to the activator-inhibitor model of ecosystem quasi-pattern formation. Through analysis of the resulting phase diagram, I show that the architecture of the neural network in the visual cortex is configured to make the visual cortex robust to unwanted internally generated spatial structure that interferes with normal visual function. I also predict that some geometric visual hallucinations are quasi-patterns and that the visual cortex supports a new phase of spatially scale invariant behavior present far from criticality. In the final chapter, I explore the effects of fluctuations on cycles in systems biology, specifically the pervasive phenomenon of circadian rhythms. By exploring the behavior of a generic stochastic model of circadian rhythms, I show that the circadian rhythm circuit exploits leaky mRNA production to safeguard the cycle from failure. I also show that this safeguard mechanism is highly robust to changes in the rate of leaky mRNA production. 
Finally, I explore the failure of the deterministic model in two different contexts, one where the deterministic model predicts cycles where they do not exist, and another context in which cycles are not predicted by the deterministic model.
NASA Astrophysics Data System (ADS)
Gökgöz, Türkay; Ozulu, Murat; Erdoǧan, Mustafa; Seyrek, Kemal
2016-04-01
From the view of integrated river basin management, basin/sub-basin boundaries should be determined and encoded systematically with sufficient accuracy and precision. Today basin/sub-basin boundaries are mostly derived from digital elevation models (DEM) in geographic information systems (GIS). The accuracy and precision of the basin/sub-basin boundaries depend primarily on the accuracy and resolution of the DEMs. In this regard, in Turkey, a survey was made for the first time within the scope of this project to identify the current situation, problems, and needs in the General Directorates of State Hydraulic Works, Water Management, Forestry, Meteorology, and Combating Desertification and Erosion, which are the major institutions with responsibility and authority. Another factor that determines the accuracy and precision of basin/sub-basin boundaries is the flow accumulation threshold value to be determined at a certain stage, according to a specific methodology, in deriving the basin/sub-basin boundaries from the DEM. Generally, in Turkey, either the default value given by the GIS tool is used directly without any geomorphological, hydrological, and cartographic basis, or it is determined by trial and error. Although there is a system of catchments and rivers network at 1:250,000 scale and a proper method has already been developed for systematic coding of the basins by the General Directorate of State Hydraulic Works, it is stated that a new system of catchments, rivers network and coding at larger scale (i.e. 1:25,000) is needed. In short, the basin/sub-basin boundaries and codes are not currently available at the required accuracy and precision for the fulfilment of the obligations described in the European Union (EU) Water Framework Directive (WFD). In this case, it is clear that there is not yet any methodology to obtain such products. However, a series of projects should be completed for which the basin/sub-basin boundaries and codes are the fundamental data infrastructure. This task must be accomplished by the end of the negotiation process with the EU. For these reasons, this subject was chosen as the primary goal of this project, which aims to develop an original methodology for determining the boundaries and codes of drainage basins/sub-basins at the accuracy and precision required for the fulfilment of the obligations described in the WFD. The most accurate and reliable elevation and hydrography data existing in Turkey will be used for the first time in this project. Along with the widely known and used flow accumulation threshold approaches, the approach developed by Gökgöz et al. (2006) will be used as well. The practicability and suitability of the encoding method developed by the General Directorate of State Hydraulic Works and the Infrastructure for Spatial Information in Europe will be verified respectively. The resulting drainage network, basin/sub-basin boundaries and codes will be compared to CCM2 (Catchment Characterisation and Modelling), ECRINS1.5 (European Catchments and Rivers Network System), and the Catchments and Rivers Network System of the General Directorate of State Hydraulic Works. This project is supported by The Scientific and Technological Research Council of Turkey under project number TUBITAK-115Y411.
Knoblauch, Andreas; Palm, Günther
2002-09-01
To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation-selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with an enlarged synchronization range (fast state). When presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast states, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (designated as tower, castle, and hill peaks). On the fast time scale (tower peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings, whereas standard phase-coding models would predict shifted peaks in the case of different objects.
Liwo, Adam; Ołdziej, Stanisław; Czaplewski, Cezary; Kleinerman, Dana S.; Blood, Philip; Scheraga, Harold A.
2010-01-01
We report the implementation of our united-residue UNRES force field for simulations of protein structure and dynamics with massively parallel architectures. In addition to coarse-grained parallelism already implemented in our previous work, in which each conformation was treated by a different task, we introduce a fine-grained level in which energy and gradient evaluation are split between several tasks. The Message Passing Interface (MPI) libraries have been utilized to construct the parallel code. The parallel performance of the code has been tested on a professional Beowulf cluster (Xeon Quad Core), a Cray XT3 supercomputer, and two IBM BlueGene/P supercomputers with canonical and replica-exchange molecular dynamics. With IBM BlueGene/P, about 50 % efficiency and 120-fold speed-up of the fine-grained part was achieved for a single trajectory of a 767-residue protein with use of 256 processors/trajectory. Because of averaging over the fast degrees of freedom, UNRES provides an effective 1000-fold speed-up compared to the experimental time scale and, therefore, enables us to effectively carry out millisecond-scale simulations of proteins with 500 and more amino-acid residues in days of wall-clock time. PMID:20305729
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
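As a concrete instance of the group (parity-check) codes the book covers, the classic Hamming(7,4) single-error-correcting code can be sketched in a few lines; this standard example is offered for orientation only and is not drawn from the book's own listings.

```python
import numpy as np

# Generator and parity-check matrices of Hamming(7,4) in systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2

received = codeword.copy()
received[5] ^= 1                                 # flip one bit to simulate channel noise
syndrome = H @ received % 2                      # non-zero syndrome flags the error
error_pos = np.where((H.T == syndrome).all(axis=1))[0][0]
received[error_pos] ^= 1                         # correct the single-bit error
print(np.array_equal(received, codeword))        # True
```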
Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor
NASA Technical Reports Server (NTRS)
Butler, C.; Albright, D.
2007-01-01
Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor, which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code. However, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model and improvement of the first order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
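The point-core kinetics model referred to is the standard point reactor kinetics system (neutron density coupled to delayed-neutron precursor groups). A one-group sketch is shown below; the constants and reactivity step are illustrative placeholders, not QCALC1 or FPCR values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One delayed-neutron group; constants are illustrative, not QCALC1 values.
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay constant, generation time
rho = 0.002                                # step reactivity insertion (below prompt critical)

def point_kinetics(t, y):
    n, c = y                               # neutron density, precursor concentration
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (lam * Lambda)]          # equilibrium precursor level at n = 1
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, max_step=0.01)
print(sol.y[0, -1])                        # relative power after 10 s
```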
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Wong, M.K.; Boucheron, E.A.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
Mendes-Junior, C T; Castelli, E C; Meyer, D; Simões, A L; Donadi, E A
2013-12-01
HLA-G has an important role in the modulation of the maternal immune system during pregnancy, and evidence that balancing selection acts in the promoter and 3'UTR regions has been previously reported. To determine whether selection acts on the HLA-G coding region in the Amazon Rainforest, exons 2, 3 and 4 were analyzed in a sample of 142 Amerindians from nine villages of five isolated tribes that inhabit the Central Amazon. Six previously described single-nucleotide polymorphisms (SNPs) were identified and the Expectation-Maximization (EM) and PHASE algorithms were used to computationally reconstruct SNP haplotypes (HLA-G alleles). A new HLA-G allele, which originated in Amerindian populations by a crossing-over event between two widespread HLA-G alleles, was identified in 18 individuals. Neutrality tests provided evidence that natural selection plays a complex part in the HLA-G coding region. Although balancing selection is the type of selection that shapes variability at the local level (Native American populations), we have also shown that purifying selection may occur on a worldwide scale. Moreover, balancing selection does not seem to act on the coding region as strongly as it acts on the flanking regulatory regions, and this coding-region signature may actually reflect a hitchhiking effect.
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
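The self-scheduling idea, handing the next serial case to whichever processor becomes free, can be mimicked on a single machine with a process pool; the sketch below uses a placeholder linear solve per angle of attack and is not the paper's MPI implementation.

```python
import numpy as np
from multiprocessing import Pool

def solve_case(angle_of_attack):
    """Stand-in for one serial run (e.g. a panel-code solve at one angle of attack)."""
    A = np.random.default_rng(0).random((200, 200)) + 200 * np.eye(200)
    b = np.full(200, angle_of_attack)
    return angle_of_attack, np.linalg.solve(A, b)[0]

if __name__ == "__main__":
    cases = np.linspace(0.0, 10.0, 41)          # many independent serial jobs
    with Pool() as pool:
        # imap_unordered hands the next case to whichever worker is free:
        # a simple self-scheduling (dynamic load-balancing) pattern.
        results = dict(pool.imap_unordered(solve_case, cases))
    print(f"solved {len(results)} cases")
```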
Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S
2016-04-01
Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy, a key therapy quality index, from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.
Xiao, Bo; Huang, Chewei; Imel, Zac E.; Atkins, David C.; Georgiou, Panayiotis; Narayanan, Shrikanth S.
2016-01-01
Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy—a key therapy quality index—from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training. PMID:28286867
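A Maximum Entropy classifier over lexical features is mathematically a multinomial logistic regression; the sketch below shows that idea on toy utterances with scikit-learn, and is not the study's feature set, lexicon, or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy therapist utterances with empathy labels (illustrative only).
utterances = [
    "it sounds like this has been really hard for you",
    "tell me more about how that felt",
    "you just need to stop drinking",
    "that was a bad decision",
]
labels = ["high", "high", "low", "low"]

# A Maximum Entropy (multinomial logistic regression) model over unigram/bigram counts.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(utterances, labels)

# Per-utterance probabilities; a session-level code could average these over all utterances.
probs = model.predict_proba(["i hear how frustrating that must be"])
print(dict(zip(model.classes_, probs[0].round(3))))
```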
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem both under simulation and with direct experimentation on parallel systems. Our three year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
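The frame change that gives COLA its speed can be summarized in two equations; the form below follows the standard presentation of the method (sign and time-variable conventions are an assumption here).

```latex
% Positions are split into an analytic LPT trajectory plus a residual, and only the
% residual is integrated by the N-body solver:
\begin{align}
  x(t) &= x_{\rm LPT}(t) + \delta x(t), \\
  \partial_t^2\, \delta x &= -\nabla \Phi\bigl(x\bigr) - \partial_t^2\, x_{\rm LPT},
\end{align}
% so the large scales (captured analytically by LPT) remain accurate even with very few
% timesteps, while the N-body force term corrects the small scales.
```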
2012-10-01
using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial... parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we... Abbreviations: IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS.
A surface code quantum computer in silicon
Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.
2015-01-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
A surface code quantum computer in silicon.
Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L
2015-10-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.
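Surface code error correction rests on repeatedly measuring stabilizer parities; the classical toy below extracts Z-plaquette syndromes from a random X-error pattern on a toric-code-style lattice. It illustrates the parity-check idea only and does not model the donor architecture or its shared-control scheme.

```python
import numpy as np

L = 4  # lattice size: data qubits sit on the horizontal and vertical edges of an L x L torus
rng = np.random.default_rng(1)

# X-error pattern on the data qubits (1 = bit-flip error), toy noise for illustration.
h_err = (rng.random((L, L)) < 0.05).astype(int)   # horizontal-edge qubits
v_err = (rng.random((L, L)) < 0.05).astype(int)   # vertical-edge qubits

def z_syndrome(h_err, v_err):
    """Parity of X errors on the four edges around each plaquette (Z-stabilizer outcomes).
    A '1' marks a defect that a decoder would have to pair up and correct."""
    L = h_err.shape[0]
    syn = np.zeros((L, L), dtype=int)
    for i in range(L):
        for j in range(L):
            syn[i, j] = (h_err[i, j] + h_err[(i + 1) % L, j] +
                         v_err[i, j] + v_err[i, (j + 1) % L]) % 2
    return syn

print(z_syndrome(h_err, v_err))
```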
Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis
NASA Astrophysics Data System (ADS)
Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg
2017-03-01
We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.
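The scaling reduction hinges on the Laplace transform of the orbital-energy denominator; in the usual notation (closed-shell case, an assumption here) the relevant relations are:

```latex
% Closed-shell MP2 correlation energy with the orbital-energy denominator:
\begin{equation}
  E^{(2)} = \sum_{ij}^{\rm occ}\sum_{ab}^{\rm virt}
  \frac{(ia|jb)\,\bigl[\,2\,(ia|jb) - (ib|ja)\,\bigr]}
       {\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b}.
\end{equation}
% Laplace transform of the denominator, with
% \Delta = \varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j > 0:
\begin{equation}
  \frac{1}{\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b}
  = -\int_0^\infty e^{-\Delta\,\tau}\, d\tau
  \;\approx\; -\sum_{q} w_q\, e^{-\Delta\,\tau_q},
\end{equation}
% which factorizes the i, j, a, b sums and lets the virtual-orbital summations be
% eliminated in a plane-wave / FFT formulation, as described above.
```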
Parallel Index and Query for Large Scale Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
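FastBit-style bitmap indexes reduce range queries to bitwise operations; the sketch below mimics that behavior with uncompressed NumPy boolean masks on toy particle data and does not use the FastQuery or FastBit APIs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy particle dataset, one record per particle (the real use case was a 50 TB accelerator dataset).
energy = rng.lognormal(mean=1.0, sigma=0.8, size=1_000_000)
x_pos = rng.uniform(-1.0, 1.0, size=1_000_000)

# Build simple (uncompressed) bitmap indexes: one boolean vector per value bin.
edges = np.quantile(energy, np.linspace(0, 1, 17))
edges[-1] = np.inf                         # make the top bin closed on the right
bitmaps = [(energy >= lo) & (energy < hi) for lo, hi in zip(edges[:-1], edges[1:])]

# The query "energy in the top bin AND |x| < 0.1" becomes a bitwise AND of masks.
hits = bitmaps[-1] & (np.abs(x_pos) < 0.1)
print(f"selected {hits.sum()} of {hits.size} particles")
```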
Scale invariance in chaotic time series: Classical and quantum examples
NASA Astrophysics Data System (ADS)
Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro
Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is indeed a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples: the one-dimensional map module 1 and the spectrum of 48Ca. A Detrended Fluctuation Analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
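For readers unfamiliar with DFA, a minimal implementation of the method (first-order, i.e. linear, detrending) is sketched below on synthetic white noise; it is not the analysis code used in the study.

```python
import numpy as np

def dfa(series, scales):
    """Detrended Fluctuation Analysis: returns F(s) for each window size s.
    The slope of log F(s) vs log s is the scaling exponent alpha (alpha ~ 1 for 1/f noise)."""
    profile = np.cumsum(series - np.mean(series))        # integrated, mean-subtracted series
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        rms = []
        for w in range(n_windows):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)                        # white noise: expect alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(signal, scales)), 1)[0]
print(f"estimated alpha = {alpha:.2f}")
```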
Lerna, Anna; Esposito, Dalila; Conson, Massimiliano; Russo, Luigi; Massagli, Angelo
2012-01-01
The Picture Exchange Communication System (PECS) is a common treatment choice for non-verbal children with autism. However, little empirical evidence is available on the usefulness of PECS in treating social-communication impairments in autism. The aim was to test the effects of PECS on social-communicative skills in children with autism, concurrently taking into account standardized psychometric data, standardized functional assessment of adaptive behaviour, and information on social-communicative variables coded in an unstructured setting. Eighteen preschool children (mean age = 38.78 months) were assigned to two intervention approaches, i.e. PECS and Conventional Language Therapy (CLT). Both PECS (Phases I-IV) and CLT were delivered three times per week, in 30-min sessions, for 6 months. Outcome measures were the following: Autism Diagnostic Observation Schedule (ADOS) domain scores for Communication and Reciprocal Social Interaction; Language and Personal-Social subscales of the Griffiths' Mental Developmental Scales (GMDS); Communication and Social Abilities domains of the Vineland Adaptive Behavior Scales (VABS); and several social-communicative variables coded in an unstructured setting. Results demonstrated that the two groups did not differ at Time 1 (pre-treatment assessment), whereas at Time 2 (post-test) the PECS group showed a significant improvement with respect to the CLT group on the VABS social domain score and on almost all the social-communicative abilities coded in the unstructured setting (i.e. joint attention, request, initiation, cooperative play, but not eye contact). These findings showed that PECS intervention (Phases I-IV) can improve social-communicative skills in children with autism. This improvement is especially evident in standardized measures of adaptive behaviour and measures derived from the observation of children in an unstructured setting. © 2012 Royal College of Speech and Language Therapists.
NASA Astrophysics Data System (ADS)
Nijssen, B.; Hamman, J.; Bohn, T. J.
2015-12-01
The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public Github repository to encourage participation by the model development community-at-large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.
Palmer, Cameron S; Niggemeyer, Louise E; Charman, Debra
2010-09-01
The 2005 version of the Abbreviated Injury Scale (AIS05) potentially represents a significant change in injury spectrum classification, due to a substantial increase in the codeset size and alterations to the agreed severity of many injuries compared to the previous version (AIS98). Whilst many trauma registries around the world are moving to adopt AIS05 or its 2008 update (AIS08), its effect on patient classification in existing registries, and the optimum method of comparing existing data collections with new AIS05 collections are unknown. The present study aimed to assess the potential impact of adopting the AIS05 codeset in an established trauma system, and to identify issues associated with this change. A current subset of consecutive major trauma patients admitted to two large hospitals in the Australian state of Victoria were double-coded in AIS98 and AIS05. Assigned codesets were also mapped to the other AIS version using code lists supplied in the AIS05 manual, giving up to four AIS codes per injury sustained. Resulting codesets were assessed for agreement in codes used, injury severity and calculated severity scores. 602 injuries sustained by 109 patients were compared. Adopting AIS05 would lead to a decrease in the number of designated major trauma patients in Victoria, estimated at 22% (95% confidence interval, 15-31%). Differences in AIS level between versions were significantly more likely to occur amongst head and chest injuries. Data mapped to a different codeset performed better in paired comparisons than raw AIS98 and AIS05 codesets, with data mapping of AIS05 codes back to AIS98 giving significantly higher levels of agreement in AIS level, ISS and NISS than other potential comparisons, and resulting in significantly fewer conversion problems than attempting to map AIS98 codes to AIS05. This study provides new insights into AIS codeset change impact. Adoption of AIS05 or AIS08 in established registries will decrease major trauma patient numbers. Code mapping between AIS versions can improve comparisons between datasets in different AIS versions, although the injury profile of a trauma population will affect the degree of comparability. At present, mapping AIS05 data back to AIS98 is recommended. 2009 Elsevier Ltd. All rights reserved.
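The severity scores compared in the study follow standard definitions: ISS sums the squares of the worst AIS severity in each of the three most severely injured body regions, while NISS uses the three worst severities regardless of region. The sketch below implements those definitions; the code-mapping dictionary is a made-up fragment, not the AIS05 manual's map.

```python
from collections import defaultdict

# Hypothetical AIS98 -> AIS05 mapping fragment (illustrative only; the real map is in the AIS05 manual).
AIS98_TO_AIS05 = {"140629.4": "140629.3", "450203.3": "450203.2"}

def iss(injuries):
    """injuries: list of (body_region, ais_severity). ISS = sum of squares of the worst
    severity in the three most severely injured regions (AIS 6 special-casing omitted)."""
    worst = defaultdict(int)
    for region, sev in injuries:
        worst[region] = max(worst[region], sev)
    top3 = sorted(worst.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

def niss(injuries):
    """NISS = sum of squares of the three highest severities, regardless of region."""
    top3 = sorted((sev for _, sev in injuries), reverse=True)[:3]
    return sum(s * s for s in top3)

case = [("head", 4), ("head", 3), ("chest", 3), ("extremity", 2)]
print(iss(case), niss(case))   # ISS = 29, NISS = 34
```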
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
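The inner convolutional code in such a concatenated system is generated by a short shift register; the sketch below shows a textbook rate-1/2, constraint-length-3 encoder (generators 7 and 5 in octal), which is not the byte-oriented unit-memory code proposed in these reports.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k and generator polynomials
    g1, g2 (the textbook (7,5)_octal code, shown for illustration only).
    Returns the interleaved output bit stream."""
    state = 0
    out = []
    for b in bits + [0] * (k - 1):          # flush with zeros to terminate the trellis
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))
```

In the concatenated arrangement described above, the Reed-Solomon outer code would encode message bytes first, and the resulting symbols would then be passed (typically through an interleaver) to an inner encoder of this general kind.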
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1993-01-01
A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
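Coarse coding represents a position in a large input field by its cell indices in several offset, overlapping coarse grids; a generic sketch follows. The field and grid sizes are illustrative and are not the parameters used with the HONN above.

```python
def coarse_code(x, y, field_size=4096, n_fields=4, coarse_size=128):
    """Represent a pixel position in a large input field by its cell indices in several
    offset, overlapping coarse fields (a generic coarse-coding scheme)."""
    cell = field_size // coarse_size                     # pixels per coarse cell
    codes = []
    for f in range(n_fields):
        offset = f * cell // n_fields                    # each coarse field is shifted slightly
        codes.append(((x + offset) // cell, (y + offset) // cell))
    return codes

# Nearby points share most of their coarse cells; distant points share none.
print(coarse_code(100, 200))
print(coarse_code(101, 201))
```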
A distribution model for the aerial application of granular agricultural particles
NASA Technical Reports Server (NTRS)
Fernandes, S. T.; Ormsbee, A. I.
1978-01-01
A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry and aircraft wake. General requirements for maintaining similarity of the distribution in scale-model tests are derived and applied to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size and drag coefficient. A listing of the computer code is included.
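The core of such a model is the ballistic fall of a single granule under gravity and aerodynamic drag; a minimal sketch with a quadratic drag law follows. The particle properties, release conditions, and drag law are illustrative assumptions, and the aircraft-wake and distributor-geometry effects of the paper are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative granule and air properties (not the paper's drag law or wake model).
rho_air, g = 1.225, 9.81                  # air density (kg/m^3), gravity (m/s^2)
d, rho_p, Cd = 2e-3, 1300.0, 0.44         # particle diameter (m), density (kg/m^3), drag coefficient
m = rho_p * np.pi * d**3 / 6.0            # particle mass
A = np.pi * d**2 / 4.0                    # frontal area

def fall(t, y):
    """State y = [x, z, vx, vz]: position and velocity of one released granule."""
    x, z, vx, vz = y
    k = 0.5 * rho_air * Cd * A * np.hypot(vx, vz) / m   # quadratic drag, per unit mass
    return [vx, vz, -k * vx, -g - k * vz]

def hit_ground(t, y):                     # terminate when the granule reaches z = 0
    return y[1]
hit_ground.terminal = True

# Released 3 m above ground at the aircraft's forward speed of 50 m/s.
sol = solve_ivp(fall, (0.0, 5.0), [0.0, 3.0, 50.0, 0.0], max_step=1e-3, events=hit_ground)
print(f"downrange travel before impact: {sol.y[0, -1]:.2f} m")
```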
Navigating the Neural Space in Search of the Neural Code.
Jazayeri, Mehrdad; Afraz, Arash
2017-03-08
The advent of powerful perturbation tools, such as optogenetics, has created new frontiers for probing causal dependencies in neural and behavioral states. These approaches have significantly enhanced the ability to characterize the contribution of different cells and circuits to neural function in health and disease. They have shifted the emphasis of research toward causal interrogations and increased the demand for more precise and powerful tools to control and manipulate neural activity. Here, we clarify the conditions under which measurements and perturbations support causal inferences. We note that the brain functions at multiple scales and that causal dependencies may be best inferred with perturbation tools that interface with the system at the appropriate scale. Finally, we develop a geometric framework to facilitate the interpretation of causal experiments when brain perturbations do or do not respect the intrinsic patterns of brain activity. We describe the challenges and opportunities of applying perturbations in the presence of dynamics, and we close with a general perspective on navigating the activity space of neurons in the search for neural codes. Copyright © 2017 Elsevier Inc. All rights reserved.
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (Open Multi-Processing/Message Passing Interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm project the code as a potentially effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.
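Fast-switching annihilation methods recover a free-energy difference from the distribution of nonequilibrium work accumulated over many independent trajectories; the generic relation below (Jarzynski's equality) conveys the idea, though how ORAC 6.0 actually post-processes the FS-DAM work values is not stated in the abstract.

```latex
% Jarzynski's equality relates the free-energy change of the switching process to the
% distribution of nonequilibrium work W collected over N independent fast-switching
% trajectories (e.g. one per MPI instance in a weak-scaling run):
\begin{equation}
  e^{-\beta \Delta F} \;=\; \bigl\langle e^{-\beta W} \bigr\rangle
  \quad\Longrightarrow\quad
  \Delta F \;\approx\; -\,k_B T \,\ln \frac{1}{N}\sum_{i=1}^{N} e^{-W_i / k_B T},
  \qquad \beta = \frac{1}{k_B T}.
\end{equation}
```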
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Pei, Jing; Covell, Peter F.; Favaregh, Noah M.; Gumbert, Clyde R.; Hanke, Jeremy L.
2011-01-01
NASA Langley Research Center, in partnership with NASA Marshall Space Flight Center and NASA Ames Research Center, was involved in the aerodynamic analyses, testing, and database development for the Ares I A106 crew launch vehicle in support of the Ares Design and Analysis Cycle. This paper discusses the development of lift-off/transition and ascent databases. The lift-off/transition database was developed using data from tests on a 1.75% scale model of the A106 configuration in the NASA Langley 14x22 Subsonic Wind Tunnel. The power-off ascent database was developed using test data on a 1% A106 scale model from two different facilities, the Boeing Polysonic Wind Tunnel and the NASA Langley Unitary Plan Wind Tunnel. The ascent database was adjusted for differences in wind tunnel and flight Reynolds numbers using USM3D CFD code. The aerodynamic jet interaction effects due to first stage roll control system were modeled using USM3D and OVERFLOW CFD codes.
ECCD-induced tearing mode stabilization in coupled IPS/NIMROD/GENRAY HPC simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.; SWIM Project Team
2011-10-01
We present developments toward an integrated, predictive model for determining optimal ECCD-based NTM stabilization strategies in ITER. We demonstrate the capability of the SWIM Project's Integrated Plasma Simulator (IPS) framework to choreograph multiple executions of, and data exchanges between, physics codes modeling various spatiotemporal scales of this coupled RF/MHD problem on several thousand HPC processors. As NIMROD evolves fluid equations to model bulk plasma behavior, self-consistent propagation/deposition of RF power in the ensuing plasma profiles is calculated by GENRAY. A third code (QLCALC) then interfaces with computational geometry packages to construct the RF-induced quasilinear diffusion tensor from NIMROD/GENRAY data, and the moments of this tensor (entering as additional terms in NIMROD's fluid equations due to the disparity in RF/MHD spatiotemporal scales) influence the dynamics of current, momentum, and energy evolution. Initial results are shown to correctly capture the physics of magnetic island stabilization [Jenkins et al., PoP 17, 012502 (2010)]; we also discuss the development of a numerical plasma control system for active feedback stabilization of tearing modes. Funded by USDoE SciDAC.
Williams, M. L.; Wiarda, D.; Ilas, G.; ...
2014-06-15
Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.
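Propagating a covariance library to an integral response is conventionally done with the first-order sandwich rule; the generic form is given below, with the caveat that the abstract does not spell out the exact formulation used in the SCALE sequences.

```latex
% First-order ("sandwich") propagation of nuclear-data covariances to a computed
% response R (e.g. k_eff or assembly decay heat): S is the vector of relative
% sensitivities and C the relative covariance matrix of the data.
\begin{equation}
  \left(\frac{\Delta R}{R}\right)^{2} \;=\; \mathbf{S}^{\mathsf T}\,\mathbf{C}\,\mathbf{S},
  \qquad
  S_j \;=\; \frac{\sigma_j}{R}\,\frac{\partial R}{\partial \sigma_j}.
\end{equation}
```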
System for Initial Assessment, Management, and Physiologic Monitoring of Battlefield Casualties
1986-03-01
largest volume of closed head injury patients of any single hospital in the United States, the ABER study has become a screening procedure of choice...
ABBREVIATED INJURY SCALE (Severity Category / Injury Description -- Severity Code)
GENERAL:
1 -- Aches all over -- Minor lacerations, contusions, and ... surface 2° or 3° burns
5 -- Over 50% body surface 2° or 3° burns
HEAD AND NECK:
1 -- Cerebral injury with headache; dizziness; no loss of consciousness