Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is automated software development-estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
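For orientation, below is a minimal sketch of the kind of parametric effort-and-schedule model such estimation tools implement. The coefficients are the published basic-COCOMO values for an "organic" project and are illustrative only; they are not COSTMODL's actual calibration or interface.

```python
# Minimal sketch of a parametric effort/schedule estimate of the kind
# COSTMODL-style tools compute. Coefficients are the published basic-COCOMO
# "organic" values and are illustrative only; COSTMODL's calibration differs.
def estimate(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort_pm = a * kloc ** b          # effort in person-months
    schedule_mo = c * effort_pm ** d   # calendar schedule in months
    staff = effort_pm / schedule_mo    # average staffing level
    return effort_pm, schedule_mo, staff

effort, schedule, staff = estimate(32.0)   # hypothetical 32 KSLOC product
print(f"{effort:.1f} person-months over {schedule:.1f} months, ~{staff:.1f} staff")
```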
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward
2005-06-01
The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
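As a concrete illustration of the protocol under discussion, the toy sketch below encrypts and decrypts with deliberately tiny primes and then recovers the private key by brute-force factoring. The brute-force step stands in for the efficient period finding that Shor's algorithm provides; real RSA moduli are thousands of bits long.

```python
# Toy RSA with deliberately tiny primes (illustrative only). The brute-force
# factoring at the end stands in for the period-finding step that Shor's
# algorithm performs efficiently on a quantum computer.
p, q = 61, 53                 # secret primes
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (modular inverse)

m = 1234                      # message, encoded as an integer < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption recovers m

# An eavesdropper who can factor n recovers the private key:
f = next(k for k in range(2, n) if n % k == 0)   # trivial here, hard for large n
assert pow(c, pow(e, -1, (f - 1) * (n // f - 1)), n) == m
```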
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
Interactive Electronic Storybooks for Kindergartners to Promote Vocabulary Growth
ERIC Educational Resources Information Center
Smeets, Daisy J. H.; Bus, Adriana G.
2012-01-01
The goals of this study were to examine (a) whether extratextual vocabulary instructions embedded in electronic storybooks facilitated word learning over reading alone and (b) whether instructional formats that required children to invest more effort were more effective than formats that required less effort. A computer-based "assistant" was added…
Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions
ERIC Educational Resources Information Center
Sessoms, John; Finney, Sara J.
2015-01-01
Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…
Assignment Choice: Do Students Choose Briefer Assignments or Finishing What They Started?
ERIC Educational Resources Information Center
Hawthorn-Embree, Meredith L.; Skinner, Christopher H.; Parkhurst, John; O'Neil, Michael; Conley, Elisha
2010-01-01
Academic skill development requires engagement in effortful academic behaviors. Although students may be more likely to choose to engage in behaviors that require less effort, they also may be motivated to complete assignments that they have already begun. Seventh-grade students (N = 88) began a mathematics computation worksheet, but were stopped…
NASA Astrophysics Data System (ADS)
Yoon, S.
2016-12-01
To define the geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data need to be reprocessed regularly. Reprocessing GPS data collected by up to 2,000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and a second reprocessing is currently underway. The first reprocessing effort used in-house computing resources; the current second effort uses an outsourced cloud computing platform. In this presentation, the data processing strategy at NGS is outlined, along with the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by the cloud computing approach are also discussed.
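The reprocessing workload is naturally parallel across daily network solutions, which is what makes renting many cloud instances attractive. The sketch below illustrates only that structure; `process_day` is a hypothetical stand-in for the actual NGS processing chain, not its real interface.

```python
# Sketch of the embarrassingly parallel structure exploited when reprocessing
# daily GPS network solutions across many workers or cloud instances.
from concurrent.futures import ProcessPoolExecutor
from datetime import date, timedelta

def process_day(day):
    # Hypothetical stand-in: fetch the day's RINEX files, run the network
    # adjustment, and return the daily station-coordinate solution.
    return f"solution-{day.isoformat()}"

if __name__ == "__main__":
    start = date(1996, 1, 1)
    days = [start + timedelta(days=i) for i in range(7300)]   # ~two decades
    with ProcessPoolExecutor() as pool:        # one worker per core/instance
        solutions = list(pool.map(process_day, days, chunksize=50))
    print(len(solutions), "daily solutions")
```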
[Earth Science Technology Office's Computational Technologies Project
NASA Technical Reports Server (NTRS)
Fischer, James (Technical Monitor); Merkey, Phillip
2005-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
Future Computer Requirements for Computational Aerodynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.
Automated Boundary Conditions for Wind Tunnel Simulations
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2018-01-01
Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for CFD validation efforts. Considerable effort is required to properly characterize the physical geometry of the wind tunnel and to recreate the correct flow conditions inside it. The typical trial-and-error approach to determining the boundary condition values for a particular tunnel configuration is time and computer-resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative (PID) controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
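A minimal, self-contained sketch of a discrete PID update of the kind described, driving a back-pressure ratio toward a target Mach number, is shown below. The gains and the toy tunnel response are made up for illustration and are not the values or solver interface used in the paper.

```python
# Discrete PID update driving a back-pressure ratio toward a target Mach
# number. The toy plant and gains are illustrative only.
def pid_step(error, state, kp=0.4, ki=0.2, kd=0.05, dt=1.0):
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

def toy_tunnel_mach(pressure_ratio):
    # Crude surrogate for the flow solver: lowering the back-pressure ratio
    # raises the test-section Mach number (illustrative response only).
    return 1.0 - pressure_ratio

target_mach, ratio = 0.20, 0.95            # back pressure / total pressure
state = {"integral": 0.0, "prev_error": 0.0}
for _ in range(60):                        # outer control/CFD iterations
    error = target_mach - toy_tunnel_mach(ratio)
    ratio -= pid_step(error, state)        # need more Mach -> lower back pressure
print(round(toy_tunnel_mach(ratio), 3))    # settles near the 0.20 target
```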
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux versus energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
Overview 1993: Computational applications
NASA Technical Reports Server (NTRS)
Benek, John A.
1993-01-01
Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.
Psychological Issues in Online Adaptive Task Allocation
NASA Technical Reports Server (NTRS)
Morris, N. M.; Rouse, W. B.; Ward, S. L.; Frey, P. R.
1984-01-01
Adaptive aiding is an idea that offers potential for improvement over many current approaches to aiding in human-computer systems. The expected return of tailoring the system to fit the user could be in the form of improved system performance and/or increased user satisfaction. Issues such as the manner in which information is shared between human and computer, the appropriate division of labor between them, and the level of autonomy of the aid are explored. A simulated visual search task was developed. Subjects are required to identify targets in a moving display while performing a compensatory sub-critical tracking task. By manipulating characteristics of the situation such as imposed task-related workload and effort required to communicate with the computer, it is possible to create conditions in which interaction with the computer would be more or less desirable. The results of preliminary research using this experimental scenario are presented, and future directions for this research effort are discussed.
[Earth and Space Sciences Project Services for NASA HPCC
NASA Technical Reports Server (NTRS)
Merkey, Phillip
2002-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
Conjugate Gradient Algorithms For Manipulator Simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1991-01-01
Report discusses applicability of conjugate-gradient algorithms to computation of forward dynamics of robotic manipulators. Rapid computation of forward dynamics essential to teleoperation and other advanced robotic applications. Part of continuing effort to find algorithms meeting requirements for increased computational efficiency and speed. Method used for iterative solution of systems of linear equations.
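For reference, a textbook conjugate-gradient iteration for a symmetric positive-definite system (such as the joint-space inertia matrix solve in forward dynamics, M*qddot = tau - bias) is sketched below. It illustrates only the generic method, not the authors' manipulator-specific formulation.

```python
# Generic conjugate-gradient solver for a symmetric positive-definite system
# A x = b. Illustrative of the iteration class only, not the report's method.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in SPD "inertia" matrix
tau = np.array([1.0, 2.0])
print(conjugate_gradient(M, tau))         # ~[0.0909, 0.6364]
```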
Computational analysis of high resolution unsteady airloads for rotor aeroacoustics
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.
1994-01-01
The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.
Evolution of a standard microprocessor-based space computer
NASA Technical Reports Server (NTRS)
Fernandez, M.
1980-01-01
An existing in-inventory computer hardware/software package (B-1 RFS/ECM) was repackaged and applied to multiple missile/space programs. Concurrent with the application efforts, low-risk modifications were made to the computer from program to program to take advantage of newer, advanced technology and to meet increasingly demanding requirements (computational and memory capabilities, longer life, and fault-tolerant autonomy). It is concluded that microprocessors hold promise in a number of critical areas for future space computer applications. However, the benefits of the DoD VHSIC Program are required, and the old proliferation problem must be revisited.
On the evaluation of derivatives of Gaussian integrals
NASA Technical Reports Server (NTRS)
Helgaker, Trygve; Taylor, Peter R.
1992-01-01
We show that by a suitable change of variables, the derivatives of molecular integrals over Gaussian-type functions required for analytic energy derivatives can be evaluated with significantly less computational effort than current formulations. The reduction in effort increases with the order of differentiation.
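For orientation, the standard one-dimensional relation that makes such derivatives tractable is shown below: differentiating an unnormalized Cartesian Gaussian with respect to a center coordinate yields a fixed combination of functions with the angular momentum raised and lowered by one. This is textbook background, not the specific change of variables introduced in the paper.

```latex
% Standard relation (illustrative background, not the paper's change of
% variables): the center derivative of a Cartesian Gaussian raises and lowers
% the angular momentum quantum number by one.
\[
  \chi_l(x;\alpha,A_x) = (x-A_x)^{l}\, e^{-\alpha (x-A_x)^2},
  \qquad
  \frac{\partial \chi_l}{\partial A_x}
  = 2\alpha\,\chi_{l+1} - l\,\chi_{l-1}.
\]
```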
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear-combination-of-atomic-orbitals (LCAO) scheme. We demonstrate the power of this method using several examples, and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior, to accelerate process development and certification, to integrate new materials more efficiently into existing NASA projects, and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.
Limits on fundamental limits to computation.
Markov, Igor L
2014-08-14
An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
Flight program language requirements. Volume 2: Requirements and evaluations
NASA Technical Reports Server (NTRS)
1972-01-01
The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of applications data sets, the requirement for high performance, large capacity, reliable and secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on an ever-increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complete this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be extended this year and next to server-class and mid-range storage systems.
Modifications Of Hydrostatic-Bearing Computer Program
NASA Technical Reports Server (NTRS)
Hibbs, Robert I., Jr.; Beatty, Robert F.
1991-01-01
Several modifications made to enhance utility of HBEAR, computer program for analysis and design of hydrostatic bearings. Modifications make program applicable to more realistic cases and reduce time and effort necessary to arrive at a suitable design. Uses search technique to iterate on size of orifice to obtain required pressure ratio.
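The orifice-size search can be pictured as a one-dimensional root find. The sketch below bisects on orifice diameter against a crude, made-up pressure-ratio model; it only illustrates the idea, not HBEAR's actual bearing equations or search technique.

```python
# Generic bisection on orifice diameter to hit a required recess-to-supply
# pressure ratio. `pressure_ratio` is a crude stand-in model (ratio rises with
# orifice size); HBEAR's actual bearing model is more elaborate.
def pressure_ratio(d_orifice, d_ref=1.0e-3):
    return d_orifice**2 / (d_orifice**2 + d_ref**2)   # illustrative, monotone in d

def solve_orifice(target=0.5, lo=1.0e-4, hi=5.0e-3, tol=1.0e-9):
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pressure_ratio(mid) < target:
            lo = mid        # ratio too low -> enlarge orifice
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(solve_orifice(0.5))   # ~1.0e-3 m for this toy model
```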
UNIX Micros for Students Majoring in Computer Science and Personal Information Retrieval.
ERIC Educational Resources Information Center
Fox, Edward A.; Birch, Sandra
1986-01-01
Traces the history of Virginia Tech's requirement that incoming freshmen majoring in computer science each acquire a microcomputer running the UNIX operating system; explores rationale for the decision; explains system's key features; and describes program implementation and research and development efforts to provide personal information…
The Effort Paradox: Effort Is Both Costly and Valued.
Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y
2018-04-01
According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.
Large Scale Computing and Storage Requirements for High Energy Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard A.; Wasserman, Harvey
2010-11-24
The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating user needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.
Online Secondary Research in the Advertising Research Class: A Friendly Introduction to Computing.
ERIC Educational Resources Information Center
Adler, Keith
In an effort to promote computer literacy among advertising students, an assignment was devised that required the use of online database search techniques to find secondary research materials. The search program, chosen for economical reasons, was "Classroom Instruction Program" offered by Dialog Information Services. Available for a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanna H Oxstrand; Katya L Le Blanc
The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures as well as what features should not be incorporated. The model also provides a means to identify what new features not present in paper-based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require tens of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J
2016-12-08
Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM) that fit breeding values explicitly and marker effects models (MEM) that express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality on both the original MEM and the hybrid model using real data with 6,179,960 animals in the pedigree with 4,934,101 phenotypes and 31,453 animals genotyped at 40,214 informative loci. To complete a single-trait analysis on a desktop computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes. Its practicality and considerable reduction in computing effort were demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.
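Schematically, and in standard single-step notation (simplified, ignoring imputation residuals), the two model classes contrasted here can be written as below, with the hybrid model applying the marker-effects form to genotyped animals and the breeding-value form to the rest.

```latex
% Schematic forms of the two model classes compared in the abstract
% (standard notation, deliberately simplified).
\begin{align*}
  \text{BVM:}\quad & \mathbf{y} = \mathbf{X}\boldsymbol{\beta}
      + \mathbf{Z}\mathbf{a} + \mathbf{e},
      \qquad \mathbf{a} \sim N(\mathbf{0}, \mathbf{H}\sigma_a^2),\\
  \text{MEM:}\quad & \mathbf{y} = \mathbf{X}\boldsymbol{\beta}
      + \mathbf{Z}\mathbf{M}\boldsymbol{\alpha} + \mathbf{e},
      \qquad \mathbf{a} = \mathbf{M}\boldsymbol{\alpha},
\end{align*}
```

Here M holds the observed or imputed marker covariates, so the MEM replaces the dense relationship structure of the BVM with explicit marker effects.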
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
Elimination sequence optimization for SPAR
NASA Technical Reports Server (NTRS)
Hogan, Harry A.
1986-01-01
SPAR is a large-scale computer program for finite element structural analysis. The program allows user specification of the order in which the joints of a structure are to be eliminated since this order can have significant influence over solution performance, in terms of both storage requirements and computer time. An efficient elimination sequence can improve performance by over 50% for some problems. Obtaining such sequences, however, requires the expertise of an experienced user and can take hours of tedious effort to effect. Thus, an automatic elimination sequence optimizer would enhance productivity by reducing the analysts' problem definition time and by lowering computer costs. Two possible methods for automating the elimination sequence specification were examined. Several algorithms based on the graph theory representations of sparse matrices were studied with mixed results. Significant improvement in the program performance was achieved, but sequencing by an experienced user still yields substantially better results. The initial results provide encouraging evidence that the potential benefits of such an automatic sequencer would be well worth the effort.
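As a flavor of the graph-based heuristics examined, the sketch below implements a greedy minimum-degree ordering: repeatedly eliminate the joint (graph node) of smallest current degree and connect its remaining neighbors to model fill-in. It illustrates the family of algorithms studied, not SPAR's sequencer or the specific variants tested.

```python
# Greedy minimum-degree ordering on the adjacency graph of a structure.
# Eliminating a node adds "fill-in" edges among its remaining neighbours;
# picking low-degree nodes first tends to limit that fill-in.
from itertools import combinations

def minimum_degree_order(adj):
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))        # lowest-degree joint
        nbrs = adj.pop(v)
        for a, b in combinations(nbrs, 2):             # add fill-in edges
            adj[a].add(b)
            adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
        order.append(v)
    return order

# Small hypothetical frame: joints 1-5 with member connectivity
graph = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4, 5}, 4: {2, 3, 5}, 5: {3, 4}}
print(minimum_degree_order(graph))
```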
Learning Teamwork Skills in University Programming Courses
ERIC Educational Resources Information Center
Sancho-Thomas, Pilar; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar
2009-01-01
University courses about computer programming usually seek to provide students not only with technical knowledge, but also with the skills required to work in real-life software projects. Nowadays, the development of software applications requires the coordinated efforts of the members of one or more teams. Therefore, it is important for software…
Paper simulation techniques in user requirements analysis for interactive computer systems
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1979-01-01
This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.
Shielding requirements for the Space Station habitability modules
NASA Technical Reports Server (NTRS)
Avans, Sherman L.; Horn, Jennifer R.; Williamsen, Joel E.
1990-01-01
The design, analysis, development, and tests of the total meteoroid/debris protection system for the Space Station Freedom habitability modules, such as the habitation module, the laboratory module, and the node structures, are described. Design requirements are discussed along with development efforts, including a combination of hypervelocity testing and analyses. Computer hydrocode analysis of hypervelocity impact phenomena associated with Space Station habitability structures is covered, and the use of optimization techniques, engineering models, and parametric analyses is assessed. Explosive rail gun development efforts and the protective capability and damage tolerance of multilayer insulation under meteoroid/debris impact are considered. It is concluded that anticipated changes in the debris environment definition and requirements will require rescoping the tests and analyses needed to develop a protection system.
Computer simulation modeling of recreation use: Current status, case studies, and future directions
David N. Cole
2005-01-01
This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...
Troubleshooting Microcomputers. A Technical Guide for Polk County Schools.
ERIC Educational Resources Information Center
Black, B. R.; And Others
This guide was started in 1986 as an effort to pull together a collection of several computer guides that had been written over the previous several years to assist schools in making simple computer repairs. The first of six sections contains general tips and hints, including sections on tool requirements, strobe disk speed adjustment, static…
Expanding Computer Science Education in Schools: Understanding Teacher Experiences and Challenges
ERIC Educational Resources Information Center
Yadav, Aman; Gretter, Sarah; Hambrusch, Susanne; Sands, Phil
2017-01-01
The increased push for teaching computer science (CS) in schools in the United States requires training a large number of new K-12 teachers. The current efforts to increase the number of CS teachers have predominantly focused on training teachers from other content areas. In order to support these beginning CS teachers, we need to better…
Using a cloud to replenish parched groundwater modeling efforts.
Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B
2010-01-01
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
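At their core, both frameworks rest on learning predictions and responding to prediction errors. The toy delta-rule update below illustrates only that principle; it omits the vector-valued response-outcome predictions, temporal structure, and hierarchy of the actual PRO and HER models.

```python
# Minimal delta-rule sketch of the prediction / prediction-error principle at
# the core of the PRO and HER frameworks (vastly simplified illustration).
def update_prediction(v, outcome, alpha=0.2):
    error = outcome - v          # surprise: discrepancy from expectation
    return v + alpha * error, error

v = 0.0                          # expected outcome of an effortful option
for outcome in [1.0, 1.0, 0.0, 1.0, 1.0]:   # experienced outcomes
    v, delta = update_prediction(v, outcome)
print(round(v, 3))               # expectation drifts toward the mean outcome
```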
Climate Science Performance, Data and Productivity on Titan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L
2015-01-01
Climate science models are flagship codes for the largest high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application-level timers and examined through a sizeable run archive. Performance and variability of compute, queue time, and ancillary services are examined. As climate science advances in its use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected data-in-motion and data-at-rest usage, are detailed. The combination of these two topics motivates a description of future systems requirements for DOE climate modeling efforts, focusing on the growth of data storage and the network and disk bandwidth required to handle data at an acceptable rate.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
2007-06-08
Lightwave VDE /200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High- Performance Computing facility, many advanced features are necessary. TARDEC-HPC’s SBIR with IP Video Systems
Experimental Evaluation and Workload Characterization for High-Performance Computer Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.
1995-01-01
This research is conducted in the context of the Joint NSF/NASA Initiative on Evaluation (JNNIE). JNNIE is an inter-agency research program that goes beyond typical benchmarking to provide in-depth evaluations and an understanding of the factors that limit the scalability of high-performance computing systems. Many NSF and NASA centers have participated in the effort. Our research effort was an integral part of implementing JNNIE in the NASA ESS grand challenge applications context. Our research work under this program was composed of three distinct but related activities: the evaluation of NASA ESS high-performance computing testbeds using the wavelet decomposition application; the evaluation of NASA ESS testbeds using astrophysical simulation applications; and the development of an experimental model for workload characterization for understanding workload requirements. In this report, we provide a summary of findings that covers all three parts, a list of the publications that resulted from this effort, and three appendices with the details of each of the studies using a key publication developed under the respective work.
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
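Dropping the inequality constraints, the trade-off described reduces to a ridge-regularized least-squares problem, minimizing ||Bu − c||² + λ||u||² over the redundant actuator commands u. The closed-form solution of that simplified problem is sketched below with made-up matrices; the actuator limits and the iterative algorithm of the flight code are not reproduced.

```python
# Unconstrained version of the allocation trade-off: minimize the control
# residual ||B u - c||^2 plus a control-effort penalty lam * ||u||^2.
# Omits the inequality constraints (actuator limits) handled by the real code.
import numpy as np

def allocate(B, c, lam=1e-2):
    n = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ c)

B = np.array([[1.0, 0.5, 0.0],      # effectiveness of 3 redundant actuators
              [0.0, 0.5, 1.0]])     # on 2 controlled axes (illustrative)
c = np.array([1.0, -0.5])           # commanded moments
u = allocate(B, c)
print(u, B @ u)                      # actuator commands and achieved moments
```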
ERIC Educational Resources Information Center
Stevens, Mary Elizabeth
The series, of which this is the initial report, is intended to give a selective overview of research and development efforts and requirements in the computer and information sciences. The operations of information acquisition, sensing, and input to information processing systems are considered in generalized terms. Specific topics include but are…
New Mexico district work-effort analysis computer program
Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.
1972-01-01
The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.
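The proration rule described, overhead apportioned in proportion to each project's share of direct hours, amounts to the small calculation sketched below; the numbers are made up for illustration.

```python
# Toy illustration of the overhead proration rule described in the abstract:
# overhead hours are apportioned by each project's share of direct hours.
direct = {"proj_A": 120.0, "proj_B": 60.0, "proj_C": 20.0}   # direct hours
overhead = 40.0                                              # unassigned hours
total = sum(direct.values())
charged = {p: h + overhead * h / total for p, h in direct.items()}
print(charged)   # {'proj_A': 144.0, 'proj_B': 72.0, 'proj_C': 24.0}
```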
Preliminary development of digital signal processing in microwave radiometers
NASA Technical Reports Server (NTRS)
Stanley, W. D.
1980-01-01
Topics covered involve a number of closely related tasks, including: the development of several control-loop and dynamic noise model computer programs for simulating microwave radiometer measurements; computer modeling of an existing stepped-frequency radiometer in an effort to determine its optimum operational characteristics; investigation of the classical second-order analog control loop to determine its ability to reduce the estimation error in a microwave radiometer; investigation of several digital signal processing unit designs; initiation of efforts to develop the required hardware and software for implementation of the digital signal processing unit; and investigation of the general characteristics and peculiarities of digital processing of noiselike microwave radiometer signals.
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Developing a Science Commons for Geosciences
NASA Astrophysics Data System (ADS)
Lenhardt, W. C.; Lander, H.
2016-12-01
Many scientific communities, recognizing the research possibilities inherent in data sets, have created domain specific archives such as the Incorporated Research Institutions for Seismology (iris.edu) and ClinicalTrials.gov. Though this is an important step forward, most scientists, including geoscientists, also use a variety of software tools and at least some amount of computation to conduct their research. While the archives make it simpler for scientists to locate the required data, provisioning disk space, compute resources, and network bandwidth can still require significant efforts. This challenge exists despite the wealth of resources available to researchers, namely lab IT resources, institutional IT resources, national compute resources (XSEDE, OSG), private clouds, public clouds, and the development of cyberinfrastructure technologies meant to facilitate use of those resources. Further tasks include obtaining and installing required tools for analysis and visualization. If the research effort is a collaboration or involves certain types of data, then the partners may well have additional non-scientific tasks such as securing the data and developing secure sharing methods for the data. These requirements motivate our investigations into the "Science Commons". This paper will present a working definition of a science commons, compare and contrast examples of existing science commons, and describe a project based at RENCI to implement a science commons for risk analytics. We will then explore what a similar tool might look like for the geosciences.
1987-02-01
landmark set, and for computing a plan as an ordered list of recursively executable sub-goals. The key to the search is to use the landmark database... Directed Object Extraction Using a Combined Region and Line Representation, Proc. of the Workshop on Computer Vision: Representation and Con... computational capability as well, such as the floating-point calculations required in this application. One such PE design which made an effort to meet these
Simulation Models for the Electric Power Requirements in a Guideway Transit System
DOT National Transportation Integrated Search
1980-04-01
This report describes a computer simulation model developed at the Transportation Systems Center to study the electrical power distribution characteristics of Automated Guideway Transit (AGT) systems. The objective of this simulation effort is to pro...
Computational protein design-the next generation tool to expand synthetic biology applications.
Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel
2018-05-02
One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.
Artificial Intelligence In Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1991-01-01
Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.
Neurocomputational mechanisms underlying subjective valuation of effort costs
Giehl, Kathrin; Sillence, Annie
2017-01-01
In everyday life, we have to decide whether it is worth exerting effort to obtain rewards. Effort can be experienced in different domains, with some tasks requiring significant cognitive demand and others being more physically effortful. The motivation to exert effort for reward is highly subjective and varies considerably across the different domains of behaviour. However, very little is known about the computational or neural basis of how different effort costs are subjectively weighed against rewards. Is there a common, domain-general system of brain areas that evaluates all costs and benefits? Here, we used computational modelling and functional magnetic resonance imaging (fMRI) to examine the mechanisms underlying value processing in both the cognitive and physical domains. Participants were trained on two novel tasks that parametrically varied either cognitive or physical effort. During fMRI, participants indicated their preferences between a fixed low-effort/low-reward option and a variable higher-effort/higher-reward offer for each effort domain. Critically, reward devaluation by both cognitive and physical effort was subserved by a common network of areas, including the dorsomedial and dorsolateral prefrontal cortex, the intraparietal sulcus, and the anterior insula. Activity within these domain-general areas also covaried negatively with reward and positively with effort, suggesting an integration of these parameters within these areas. Additionally, the amygdala appeared to play a unique, domain-specific role in processing the value of rewards associated with cognitive effort. These results are the first to reveal the neurocomputational mechanisms underlying subjective cost–benefit valuation across different domains of effort and provide insight into the multidimensional nature of motivation. PMID:28234892
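The abstract does not reproduce the model equations; a common choice in this literature, assumed purely for illustration here, is a parabolic effort discount combined with a softmax choice rule. The sketch below shows that assumed form; parameter values are arbitrary.

```python
# Illustrative effort-discounting choice model (parabolic cost, softmax choice).
# The functional form and parameter names are assumptions for illustration,
# not the equations used in the study.
import numpy as np

def subjective_value(reward, effort, k):
    """Reward devalued by effort: V = R - k * E^2."""
    return reward - k * effort**2

def p_accept_offer(reward, effort, baseline_value, k, beta):
    """Softmax probability of choosing the higher-effort/higher-reward offer."""
    v_offer = subjective_value(reward, effort, k)
    return 1.0 / (1.0 + np.exp(-beta * (v_offer - baseline_value)))

# Example: offer of 10 credits at effort level 0.8 vs. a fixed low-effort, 1-credit option
print(p_accept_offer(reward=10, effort=0.8, baseline_value=1.0, k=8.0, beta=1.5))
```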
Toward a VPH/Physiome ToolKit.
Garny, Alan; Cooper, Jonathan; Hunter, Peter J
2010-01-01
The Physiome Project was officially launched in 1997 and has since brought together teams from around the world to work on the development of a computational framework for the modeling of the human body. At the European level, this effort is focused around patient-specific solutions and is known as the Virtual Physiological Human (VPH) Initiative. Such modeling is both multiscale (in space and time) and multiphysics. This, therefore, requires careful interaction and collaboration between the teams involved in the VPH/Physiome effort, if we are to produce computer models that are not only quantitative, but also integrative and predictive. In that context, several technologies and solutions are already available, developed both by groups involved in the VPH/Physiome effort, and by others. They address areas such as data handling/fusion, markup languages, model repositories, ontologies, tools (for simulation, imaging, data fitting, etc.), as well as grid, middleware, and workflow. Here, we provide an overview of resources that should be considered for inclusion in the VPH/Physiome ToolKit (i.e., the set of tools that addresses the needs and requirements of the Physiome Project and VPH Initiative) and discuss some of the challenges that we are still facing.
Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Keren
Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The Sandia-led "Data Movement Dominates" project aimed to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing with manageable power budgets. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures for Exascale computing systems.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of the stream-flow peak). An automated calibration process that allows real-time updating of data/models, freeing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
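The calibration runs described are independent of one another, so they map naturally onto a process pool. The sketch below illustrates that pattern with a placeholder model and an RMSE summary statistic; it is not the prototype software itself.

```python
# Minimal sketch of the kind of embarrassingly parallel calibration sweep the
# abstract describes: each run needs only its input parameters and returns a
# small summary statistic. The model and statistic here are placeholders.
import numpy as np
from multiprocessing import Pool

def run_model(params):
    k, c = params                      # placeholder model parameters
    t = np.linspace(0.0, 10.0, 200)
    simulated = k * np.exp(-c * t)     # stand-in for a stream-flow model run
    observed = 2.0 * np.exp(-0.3 * t)  # stand-in for observed data
    return params, np.sqrt(np.mean((simulated - observed) ** 2))  # RMSE summary

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = [(k, c) for k, c in rng.uniform([0.5, 0.05], [5.0, 1.0], size=(10_000, 2))]
    with Pool() as pool:
        results = pool.map(run_model, candidates)
    best = min(results, key=lambda r: r[1])
    print("best parameters:", best[0], "RMSE:", best[1])
```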
Regional Sustainability: The San Luis Basin Metrics Project
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...
Development of a Multidisciplinary Approach to Assess Sustainability
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relevan...
Reducing obesity will require involvement of all sectors of society.
Hill, James O; Peters, John C; Blair, Steven N
2015-02-01
We need all sectors of society involved in reducing obesity. The food industry's effort to reduce energy intake as part of the Healthy Weight Commitment Foundation is a significant step in the right direction and should be recognized as such by the public health community. We also need to get organizations that promote physical inactivity, such as computer, automobile, and entertainment industries, to become engaged in efforts to reduce obesity. © 2014 The Obesity Society.
CFD Based Computations of Flexible Helicopter Blades for Stability Analysis
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2011-01-01
As a collaborative effort among government aerospace research laboratories, an advanced version of a widely used computational fluid dynamics code, OVERFLOW, was recently released. This latest version includes additions to model multiple flexible rotating blades. In this paper, the OVERFLOW code is applied to improve the accuracy of airload computations from the linear lifting-line theory that uses displacements from a beam model. Data transfers required at every revolution are managed through a Unix-based script that runs jobs on large super-cluster computers. Results are demonstrated for the 4-bladed UH-60A helicopter. Deviations of computed data from flight data are evaluated. Fourier analysis post-processing suitable for aeroelastic stability computations is performed.
Monitoring and control technologies for bioregenerative life support systems/CELSS
NASA Technical Reports Server (NTRS)
Knott, William M.; Sager, John C.
1991-01-01
The development of a Controlled Ecological Life Support System (CELSS) will require NASA to develop innovative monitoring and control technologies to operate the different components of the system. Primary effort over the past three to four years has been directed toward the development of technologies to operate a biomass production module. Computer hardware and software required to operate, collect, and summarize environmental data for a large plant growth chamber facility were developed and refined. Sensors and controls required to collect information on such physical parameters as relative humidity, temperature, irradiance, pressure, and gases in the atmosphere; and pH, dissolved oxygen, fluid flow rates, and electrical conductivity in the nutrient solutions are being developed and tested. Technologies required to produce high artificial irradiance for plant growth and those required to collect and transport natural light into a plant growth chamber are also being evaluated. Significant effort was directed towards the development and testing of a membrane nutrient delivery system. Technologies required to manipulate, seed, and harvest crops, and to determine plant health before stress impacts plant productivity, are also being researched. Tissue culture technologies are being developed for use in management and propagation of crop plants. Though previous efforts have focused on development of technologies required to operate a biomass production module for a CELSS, current efforts are expanding to include technologies required to operate modules such as food preparation, biomass processing, and resource (waste) recovery, which are integral parts of the CELSS.
Air Force construction automation/robotics
NASA Technical Reports Server (NTRS)
Nease, AL; Dusseault, Christopher
1994-01-01
The Air Force has several unique requirements that are being met through the development of construction robotic technology. The missions associated with these requirements place construction/repair equipment operators in potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible and that more stringent construction repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL to develop robotic teleoperation, telerobotics, robotic vehicle communications, automated damage assessment, vehicle navigation, mission/vehicle task control architecture, and associated computing environment. The ultimate goal is the fielding of robotic repair capability operating at the level of supervised autonomy. The authors of this paper will discuss current and planned efforts in construction/repair, explosive ordnance disposal, hazardous waste cleanup, fire fighting, and space construction.
NASA Technical Reports Server (NTRS)
Harp, J. L., Jr.; Oatway, T. P.
1975-01-01
A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time-step requires updating of a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme each zone is updated as often as required by the least stable zone of the finite difference mesh. Because of the less frequent update of program variables it was expected that the non-uniform time-step would result in a reduction of execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
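A rough way to see where the expected factor of five to ten comes from is to compare update counts: under the uniform scheme every zone advances with the smallest stable step, while under the non-uniform scheme each zone advances only as often as its own criterion demands. The sketch below makes that comparison with invented per-zone stable steps and ignores the extra updates forced by neighboring zones.

```python
# Back-of-envelope comparison of update counts for uniform vs. non-uniform
# time stepping. Per-zone stable time steps are illustrative, not taken from STOKES.
import numpy as np

rng = np.random.default_rng(1)
dt_stable = rng.uniform(1e-4, 1e-2, size=1000)   # per-zone stable time step
t_final = 1.0

# Uniform scheme: every zone advances with the smallest (least stable) step.
updates_uniform = dt_stable.size * int(np.ceil(t_final / dt_stable.min()))

# Non-uniform scheme: each zone advances only as often as its own criterion requires
# (extra updates forced by neighbor coupling are ignored in this rough estimate).
updates_nonuniform = int(np.ceil(t_final / dt_stable).sum())

print("uniform:", updates_uniform, "non-uniform:", updates_nonuniform,
      "speed-up factor:", updates_uniform / updates_nonuniform)
```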
Hands-Free, Heads-Up Control System for Unmanned Ground Vehicles
2011-08-10
interface evaluation: Industry evaluated two commercial-off-the-shelf (COTS) brain-computer interfaces from two companies – Neurosky and Emotiv ... useless, resulting in very low command recognition accuracy. In addition, latency issues plagued the system. [Figure 6: Emotiv Headset] The ... Emotiv system, unlike the Neurosky, required great effort to use and calibrate. It requires 16 foam tips to be wet with saline solution and then
Parallelizing a peanut butter sandwich
NASA Astrophysics Data System (ADS)
Quenette, S. M.
2005-12-01
This poster aims to demonstrate, in a novel way, why contemporary computational code development is seemingly hard for a geodynamics modeler (i.e. a non-computer-scientist). For example, to utilise contemporary computer hardware, parallelisation is required. But why do we choose the explicit approach (MPI) over an implicit (OpenMP) one? How does this relate to typical geodynamics codes? And do we face the same style of problems in everyday life? We aim to demonstrate that the little bit of complexity, forethought and effort is worth its while.
Generic approach to access barriers in dehydrogenation reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank
The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately defines the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.
Generic approach to access barriers in dehydrogenation reactions
Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank
2018-03-08
The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately defines the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.
Formal Representations of Eligibility Criteria: A Literature Review
Weng, Chunhua; Tu, Samson W.; Sim, Ida; Richesson, Rachel
2010-01-01
Standards-based, computable knowledge representations for eligibility criteria are increasingly needed to provide computer-based decision support for automated research participant screening, clinical evidence application, and clinical research knowledge management. We surveyed the literature and identified five aspects of eligibility criteria knowledge representations that contribute to the various research and clinical applications: the intended use of computable eligibility criteria, the classification of eligibility criteria, the expression language for representing eligibility rules, the encoding of eligibility concepts, and the modeling of patient data. We consider three of them (expression language, codification of eligibility concepts, and patient data modeling), to be essential constructs of a formal knowledge representation for eligibility criteria. The requirements for each of the three knowledge constructs vary for different use cases, which therefore should inform the development and choice of the constructs toward cost-effective knowledge representation efforts. We discuss the implications of our findings for standardization efforts toward sharable knowledge representation of eligibility criteria. PMID:20034594
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
Open source tools for large-scale neuroscience.
Freeman, Jeremy
2015-06-01
New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience. Copyright © 2015 The Author. Published by Elsevier Ltd.. All rights reserved.
Regional sustainable environmental management: sustainability metrics research for decision makers
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...
Development of a multidisciplinary approach to assess regional sustainability
There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relev...
Air Force construction automation/robotics
NASA Technical Reports Server (NTRS)
Nease, A. D.; Alexander, E. F.
1993-01-01
The Air Force has several missions which generate unique requirements that are being met through the development of construction robotic technology. One especially important mission will be the conduct of Department of Defense (DOD) space activities. Space operations and other missions place construction/repair equipment operators in dangerous environments and potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible, and more stringent construction repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL, to develop robotic construction/repair equipment. This development effort involves the following technologies: teleoperation, telerobotics, construction operations (excavation, grading, leveling, tool change), robotic vehicle communications, vehicle navigation, mission/vehicle task control architecture, and associated computing environment. The ultimate goal is the fielding of a robotic repair capability operating at the level of supervised autonomy. This paper will discuss current and planned efforts in space construction/repair, explosive ordnance disposal, hazardous waste cleanup, and fire fighting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.
Background Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
Mirror neurons and imitation: a computationally guided review.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael
2006-04-01
Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
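As one concrete example of the kind of high-frequency calculation involved (the paper's own algorithms are not reproduced here), the sketch below computes structure functions of a scalar series over a range of lags, a standard ingredient of surface renewal analysis, using a vectorized inner difference rather than a sample-by-sample loop.

```python
# Sketch of one computationally heavy SR ingredient: structure functions of a
# high-frequency scalar series over many lags. Vectorizing the inner difference
# avoids a Python loop over samples; the orders and lags here are illustrative.
import numpy as np

def structure_functions(x, lags, orders=(2, 3, 5)):
    """Return S_n(lag) = mean((x[t] - x[t-lag])**n) for each order n."""
    out = {}
    for lag in lags:
        d = x[lag:] - x[:-lag]
        out[lag] = {n: np.mean(d**n) for n in orders}
    return out

# 30 minutes of 10 Hz temperature-like data (synthetic stand-in)
x = np.cumsum(np.random.default_rng(2).normal(size=18_000)) * 0.01
print(structure_functions(x, lags=range(1, 11))[1])
```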
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies are tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
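A minimal illustration of the imbalance and one static remedy, not the strategies tested in this work: if interface cells are assigned a larger work weight, block boundaries on a structured 1-D decomposition can be placed so that each rank receives roughly equal weighted work. All weights and sizes below are invented.

```python
# Sketch of weighted 1-D block partitioning for a structured grid: cells that
# contain the gas-liquid interface are given extra weight, and block boundaries
# are placed so each rank gets roughly equal total work. Weights are illustrative.
import numpy as np

n_cells, n_ranks, interface_cost = 1_000, 8, 5.0
weights = np.ones(n_cells)
weights[450:470] = interface_cost          # cells near the interface cost more

cum = np.cumsum(weights)
targets = np.linspace(0, cum[-1], n_ranks + 1)[1:-1]
cuts = np.searchsorted(cum, targets)       # block boundaries balancing the work
loads = np.add.reduceat(weights, np.r_[0, cuts])
print("block boundaries:", cuts, "per-rank load:", loads)
```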
Space mapping method for the design of passive shields
NASA Astrophysics Data System (ADS)
Sergeant, Peter; Dupré, Luc; Melkebeek, Jan
2006-04-01
The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
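The sketch below shows the generic shape of such a two-model loop: a cheap coarse model is corrected so that its value and slope match the expensive fine model at the current iterate, and the corrected surrogate is then optimized. The two one-dimensional model functions are simple stand-ins, not the finite element and coupled-coil models of the paper, and the correction scheme is a generic first-order surrogate rather than the authors' space-mapping formulation.

```python
# Generic space-mapping style loop: optimize a cheap coarse model that is
# re-aligned to the expensive fine model after each fine evaluation. The two
# model functions below are simple stand-ins, not the shielding models themselves.
import numpy as np
from scipy.optimize import minimize_scalar

def fine_model(x):     # expensive, accurate objective (stand-in for the FE model)
    return (x - 2.0) ** 2 + 0.2 * (x - 2.0) ** 3

def coarse_model(x):   # cheap analytical approximation (stand-in for the coil model)
    return (x - 1.6) ** 2

def d(f, x, h=1e-6):   # finite-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.0
for k in range(6):
    # First-order corrected surrogate: matches the fine model's value and slope at x.
    a = fine_model(x) - coarse_model(x)
    b = d(fine_model, x) - d(coarse_model, x)
    surrogate = lambda z, x0=x, a=a, b=b: coarse_model(z) + a + b * (z - x0)
    x = minimize_scalar(surrogate, bounds=(0.0, 4.0), method="bounded").x
    print(f"iteration {k}: x = {x:.4f}, fine objective = {fine_model(x):.5f}")
```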
Prosocial apathy for helping others when effort is required
Lockwood, Patricia L.; Hamonet, Mathilde; Zhang, Samuel H.; Ratnavel, Anya; Salmony, Florentine U.; Husain, Masud; Apps, Matthew A. J.
2017-01-01
Summary Prosocial acts – those that are costly to ourselves but benefit others – are a central component of human co-existence [1–3]. While the financial and moral costs of prosocial behaviours are well understood [4–6], everyday prosocial acts do not typically come at such costs. Instead, they require effort. Here, using computational modelling of an effort-based task, we show that people are prosocially apathetic. They are less willing to choose to initiate highly effortful acts that benefit others compared to benefitting themselves. Moreover, even when choosing to initiate effortful prosocial acts, people show superficiality, exerting less force into actions that benefit others than themselves. These findings replicated, were present when the other was anonymous or not, and when choices were made to earn rewards or avoid losses. Importantly, the least prosocially motivated people had higher subclinical levels of psychopathy and social apathy. Thus, although people sometimes ‘help out’, they are less motivated to benefit others and sometimes ‘superficially prosocial’, which may characterise everyday prosociality and its disruption in social disorders. PMID:28819649
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
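The underlying idea can be shown with the standard incompressible log law (the thesis uses a more elaborate defect formulation): the friction velocity, and hence the wall shear stress, is obtained by solving the log law at the first grid point instead of resolving the viscous sublayer. The constants are the usual log-law values; the flow conditions below are illustrative.

```python
# Basic wall-function idea: recover the friction velocity (and hence wall shear
# stress) from the log law of the wall, so the first grid point can sit in the
# log layer instead of resolving the viscous sublayer. This is the standard
# incompressible log law, not the compressible defect formulation of the thesis.
import numpy as np
from scipy.optimize import brentq

kappa, B = 0.41, 5.0           # standard log-law constants
rho, nu = 1.2, 1.5e-5          # air density [kg/m^3] and kinematic viscosity [m^2/s]
u1, y1 = 35.0, 2e-3            # velocity [m/s] and wall distance [m] of first grid point

def log_law_residual(u_tau):
    y_plus = u_tau * y1 / nu
    return u1 / u_tau - (np.log(y_plus) / kappa + B)

u_tau = brentq(log_law_residual, 1e-3, 10.0)     # solve for the friction velocity
tau_wall = rho * u_tau**2
print(f"u_tau = {u_tau:.3f} m/s, wall shear stress = {tau_wall:.3f} Pa")
```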
Software handlers for process interfaces
NASA Technical Reports Server (NTRS)
Bercaw, R. W.
1976-01-01
Process interfaces are developed in an effort to reduce the time, effort, and money required to install computer systems. Probably the chief obstacle to the achievement of these goals lies in the problem of developing software handlers having the same degree of generality and modularity as the hardware. The problem of combining the advantages of modular instrumentation with those of modern multitask operating systems has not been completely solved, but there are a number of promising developments. The essential principles involved are considered.
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take into account sources of randomness and uncertainty. These design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. There is therefore an imperative need to exploit the capabilities of computing resources in order to deal with this kind of problem. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves nearly linear speedup factors (close to 100% parallel efficiency) with reference to the sequential procedure.
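A minimal sketch of dynamic scheduling for such a workload, not the algorithm proposed in the paper: analyses of very unequal cost are submitted to a process pool, and idle workers pick up the next pending design rather than receiving a fixed share up front. The "analysis" is a placeholder.

```python
# Sketch of dynamic load balancing for repeated structural analyses of very
# unequal cost: idle workers pull the next pending design from a shared pool
# instead of receiving a fixed chunk up front. The "analysis" is a placeholder.
import time
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def analyse(design_id):
    cost = random.uniform(0.01, 0.2)   # stand-in for a finite element analysis
    time.sleep(cost)
    return design_id, cost

if __name__ == "__main__":
    designs = range(200)
    start = time.time()
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(analyse, d) for d in designs]
        results = [f.result() for f in as_completed(futures)]
    print(f"{len(results)} analyses finished in {time.time() - start:.2f} s")
```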
Cloud-Based Virtual Laboratory for Network Security Education
ERIC Educational Resources Information Center
Xu, Le; Huang, Dijiang; Tsai, Wei-Tek
2014-01-01
Hands-on experiments are essential for computer network security education. Existing laboratory solutions usually require significant effort to build, configure, and maintain and often do not support reconfigurability, flexibility, and scalability. This paper presents a cloud-based virtual laboratory education platform called V-Lab that provides a…
Knowledge management: an application to wildfire prevention planning
Daniel L Schmoldt
1989-01-01
Residential encroachment into wildland areas places an additional burden on fire management activities. Prevention programs, fuel management efforts, and suppression strategies, previously employed in wildland areas, require modification for protection of increased values at risk in this interface area. Knowledge-based computer systems are being investigated as...
Automatic Dynamic Aircraft Modeler (ADAM) for the Computer Program NASTRAN
NASA Technical Reports Server (NTRS)
Griffis, H.
1985-01-01
Large general purpose finite element programs require users to develop large quantities of input data. General purpose pre-processors are used to decrease the effort required to develop structural models. Further reduction of effort can be achieved by specific application pre-processors. Automatic Dynamic Aircraft Modeler (ADAM) is one such application specific pre-processor. General purpose pre-processors use points, lines and surfaces to describe geometric shapes. Specifying that ADAM is used only for aircraft structures allows generic structural sections, wing boxes and bodies, to be pre-defined. Hence with only gross dimensions, thicknesses, material properties and pre-defined boundary conditions a complete model of an aircraft can be created.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostlund, Neil
This research showed the feasibility of applying the concepts of the Semantic Web to Computational Chemistry. We have created the first web portal (www.chemsem.com) that allows data created in the calculations of quantum chemistry, and other such chemistry calculations, to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web Portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary). This ontology is used by the computer in understanding the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field of computational chemistry. We refer to this ontology as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only a start of what must be a long multi-partner effort to define computational chemistry. In conjunction with the above efforts we have defined a new potential file standard (Common Standard for eXchange – CSX – for computational chemistry data). This CSX file is the precursor of data in the Resource Description Framework (RDF) form that the semantic web requires. Our portal translates CSX files (as well as other computational chemistry data files) into RDF files that are part of the graph database that the semantic web employs. We propose a CSX file as a convenient way to encapsulate computational chemistry data.
Onboard processor technology review
NASA Technical Reports Server (NTRS)
Benz, Harry F.
1990-01-01
The general need and requirements for the onboard embedded processors necessary to control and manipulate data in spacecraft systems are discussed. The current known requirements are reviewed from a user perspective, based on current practices in the spacecraft development process. The current capabilities of available processor technologies are then discussed, and these are projected to the generation of spacecraft computers currently under identified, funded development. An appraisal is provided for the current national developmental effort.
Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus
NASA Technical Reports Server (NTRS)
Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle
1999-01-01
This paper describes an experiment in which a large-scale scientific application development for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Space Station Simulation Computer System (SCS) study for NASA/MSFC. Phased development plan
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Space Station Simulation Computer System (SCS) study for NASA/MSFC. Operations concept report
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
NASA Astrophysics Data System (ADS)
Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero
2014-10-01
Supply chain models (SCM) are potentially capable of integrating different aspects to support decision making in enterprise management tasks. The aim of the paper is to propose a hybrid mathematical programming model for optimization of production requirements resources planning. The preliminary model was conceived bottom-up from a real industrial case that was analysed, and it is oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimisation produced good results for the objective function.
NASTRAN users' experience of Avco Aerostructures Division
NASA Technical Reports Server (NTRS)
Blackburn, C. L.; Wilhelm, C. A.
1973-01-01
The NASTRAN experiences of a major structural design and fabrication subcontractor that has fewer engineering personnel and computer facilities than those available to large prime contractors are discussed. Efforts to obtain sufficient computer capacity and the development and implementation of auxiliary programs to reduce manpower requirements are described. Applications of the NASTRAN program for training users, checking out auxiliary programs, performing in-house research and development, and structurally analyzing an Avco-designed and -manufactured missile case are presented.
High resolution frequency analysis techniques with application to the redshift experiment
NASA Technical Reports Server (NTRS)
Decher, R.; Teuber, D.
1975-01-01
High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of .00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
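Two of the building blocks listed above can be combined into a simple high-resolution estimate: an FFT locates the coarse spectral peak, and a bounded one-dimensional search (golden-section style) refines the frequency by maximizing the DFT magnitude between neighboring bins. The sketch below uses invented signal parameters and is not the experiment's actual processing chain.

```python
# Two of the building blocks named above, combined into a simple high-resolution
# frequency estimate: an FFT locates the coarse peak, then a bounded 1-D search
# refines the frequency by maximizing the DFT magnitude between neighboring bins.
# Signal parameters are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

fs, n = 10.0, 4096                      # 10 Hz sampling, ~410 s record
t = np.arange(n) / fs
f_true = 1.000123
x = np.sin(2 * np.pi * f_true * t) + 0.1 * np.random.default_rng(3).normal(size=n)

spectrum = np.abs(np.fft.rfft(x))
k = np.argmax(spectrum)                 # coarse peak bin
df = fs / n                             # FFT bin spacing

def neg_dtft_power(f):                  # continuous-frequency DFT magnitude
    return -np.abs(np.exp(-2j * np.pi * f * t) @ x)

res = minimize_scalar(neg_dtft_power, bounds=((k - 1) * df, (k + 1) * df),
                      method="bounded") # bounded 1-D search (golden-section style)
print(f"coarse estimate: {k * df:.5f} Hz, refined estimate: {res.x:.6f} Hz")
```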
Dynamic optimization of chemical processes using ant colony framework.
Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D
2001-11-01
Ant colony framework is illustrated by considering dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively low computational effort. The examples with varying degrees of complexity, analyzed here, illustrate its potential for solving a large class of process optimization problems in chemical engineering.
NASA Technical Reports Server (NTRS)
Semenov, Boris V.; Acton, Charles H., Jr.; Bachman, Nathaniel J.; Elson, Lee S.; Wright, Edward D.
2005-01-01
The SPICE system of navigation and ancillary data possesses a number of traits that make its use in modern space missions of all types highly cost efficient. The core of the system is a software library providing API interfaces for storing and retrieving such data as trajectories, orientations, time conversions, and instrument geometry parameters. Applications used at any stage of a mission life cycle can call SPICE APIs to access this data and compute geometric quantities required for observation planning, engineering assessment and science data analysis. SPICE is implemented in three different languages, supported on 20+ computer environments, and distributed with complete source code and documentation. It includes capabilities that are extensively tested by everyday use in many active projects and are applicable to all types of space missions - flyby, orbiters, observatories, landers and rovers. While a customer's initial SPICE adaptation for the first mission or experiment requires a modest effort, this initial effort pays off because adaptation for subsequent missions/experiments is just a small fraction of the initial investment, with the majority of tools based on SPICE requiring no or very minor changes.
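A minimal sketch of the SPICE call pattern, using the community Python wrapper spiceypy, is shown below; the meta-kernel file name is a placeholder, and real mission kernels must be furnished before the geometry call will return data.

```python
# Minimal illustration of the SPICE call pattern via the spiceypy wrapper.
# The kernel file name below is a placeholder; real kernels (LSK/SPK/PCK)
# for the mission of interest must be furnished before the calls will work.
import spiceypy as spice

spice.furnsh("generic.tm")                     # hypothetical meta-kernel listing the kernels

et = spice.str2et("2005 JUN 15 12:00:00")      # UTC string -> ephemeris time (TDB seconds)
state, light_time = spice.spkezr("MARS", et, "J2000", "LT+S", "EARTH")
print("Mars position wrt Earth (km):", state[:3])
print("One-way light time (s):", light_time)

spice.kclear()                                 # unload kernels when done
```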
A structure adapted multipole method for electrostatic interactions in protein dynamics
NASA Astrophysics Data System (ADS)
Niedermeier, Christoph; Tavan, Paul
1994-07-01
We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of the order O(N²) for a system of N particles. Truncation methods, which try to avoid that effort, entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if they are augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to the structural and dynamical features of the particular protein considered rather than chosen rigidly as a cubic grid. Compared to truncation methods, we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
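The sketch below illustrates the low-order idea behind the method on a single group of charges: the potential at a distant point is approximated from the group's total charge and dipole moment instead of a sum over every particle. The charges and geometry are invented, and no hierarchical decomposition is shown.

```python
# Sketch of the low-order far-field idea: approximate the potential of a
# distant group of point charges by the group's total charge (monopole) and
# dipole moment about its center, instead of summing over every charge.
# Charges and geometry are invented for the illustration (Gaussian-style units).
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(size=(50, 3)) * 0.5           # charge positions (a compact group)
q = rng.normal(size=50)                        # partial charges

center = pos.mean(axis=0)
Q = q.sum()                                    # monopole moment
p = ((pos - center) * q[:, None]).sum(axis=0)  # dipole moment about the center

def exact_potential(r):
    d = np.linalg.norm(r - pos, axis=1)
    return np.sum(q / d)

def multipole_potential(r):
    dr = r - center
    R = np.linalg.norm(dr)
    return Q / R + np.dot(p, dr) / R**3        # monopole + dipole terms

r_far = np.array([10.0, 4.0, -6.0])
print(exact_potential(r_far), multipole_potential(r_far))
```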
Coulombic Models in Chemical Bonding.
ERIC Educational Resources Information Center
Sacks, Lawrence J.
1986-01-01
Describes a bonding theory which provides a framework for the description of a wide range of substances and provides quantitative information of remarkable accuracy with far less computational effort than that required of other approaches. Includes applications, such as calculation of bond energies of two binary hydrides (methane and diborane).…
MONSOON Image Acquisition System | CTIO
MONSOON is a flexible solution for the acquisition of pixel data from scientific CCD and IR detectors. The architecture requirements for both IR and CCD large focal planes that NOAO developed for instrumentation efforts...
SURFACE WATER FLOW IN LANDSCAPE MODELS: 1. EVERGLADES CASE STUDY. (R824766)
Many landscape models require extensive computational effort, using a large array of grid cells to represent the landscape. The number of spatial cells may be in the thousands to millions, while the ecological component is run in each of the cells to account for landscape dynamics...
NASA Astrophysics Data System (ADS)
Landgrebe, Anton J.
1987-03-01
An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.
NASA Technical Reports Server (NTRS)
Landgrebe, Anton J.
1987-01-01
An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.
Optimizing R with SparkR on a commodity cluster for biomedical research.
Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan
2016-12-01
Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibility of building a private, flexible cluster for processing scripts in R based on the commodity, non-dedicated hardware of our department. For this, a GWAS calculation in R was compared on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and supports big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Towards the formal verification of the requirements and design of a processor interface unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
The formal verification of the design and partial requirements for a Processor Interface Unit (PIU) using the Higher Order Logic (HOL) theorem-proving system is described. The processor interface unit is a single-chip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. It provides the opportunity to investigate the specification and verification of a real-world subsystem within a commercially developed fault-tolerant computer. An overview of the PIU verification effort is given. The actual HOL listings from the verification effort are documented in a companion NASA contractor report entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings', which includes the general-purpose HOL theories and definitions that support the PIU verification as well as the tactics used in the proofs.
Technology advances and market forces: Their impact on high performance architectures
NASA Technical Reports Server (NTRS)
Best, D. R.
1978-01-01
Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.
Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee
NASA Technical Reports Server (NTRS)
Gallagher, D. L. (Editor)
1993-01-01
The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.
Toward a Rational and Mechanistic Account of Mental Effort.
Shenhav, Amitai; Musslick, Sebastian; Lieder, Falk; Kool, Wouter; Griffiths, Thomas L; Cohen, Jonathan D; Botvinick, Matthew M
2017-07-25
In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.
Multidisciplinary propulsion simulation using NPSS
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Evans, Austin L.; Follen, Gregory J.
1992-01-01
The current status of the Numerical Propulsion System Simulation (NPSS) program, a cooperative effort of NASA, industry, and universities to reduce the cost and time of advanced technology propulsion system development, is reviewed. The technologies required for this program include (1) interdisciplinary analysis to couple the relevant disciplines, such as aerodynamics, structures, heat transfer, combustion, acoustics, controls, and materials; (2) integrated systems analysis; (3) a high-performance computing platform, including massively parallel processing; and (4) a simulation environment providing a user-friendly interface. Several research efforts to develop these technologies are discussed.
NASA Astrophysics Data System (ADS)
Bouchpan-Lerust-Juéry, L.
2007-08-01
Current and next-generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy...) which must meet high standards of Reliability and Predictability as well as Safety. Meeting all these requirements demands a considerable amount of effort and cost from the space software industry. In its first part, this paper presents a free Open Source integrated solution for developing RTAI applications from analysis, design and simulation through direct implementation using code generation based on Open Source components; in its second part it summarises the suggested approach, its results and the conclusions for further work.
Initial dynamic load estimates during configuration design
NASA Technical Reports Server (NTRS)
Schiff, Daniel
1987-01-01
This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.
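The kind of back-of-the-envelope estimate the report advocates can be illustrated with a single-degree-of-freedom calculation: a natural frequency from effective mass and stiffness, followed by Miles' equation for the response to a flat random-vibration input. All numbers below are invented for the example and are not taken from the report.

```python
# Back-of-the-envelope single-degree-of-freedom estimate of the kind the
# abstract advocates: natural frequency from mass and stiffness, then Miles'
# equation for the RMS acceleration response to a flat random-vibration input.
# All numbers are invented for the illustration.
import math

m = 2.0            # kg, effective mass of the component
k = 1.0e6          # N/m, effective stiffness of its mount
Q = 10.0           # amplification factor (1 / (2 * damping ratio))
asd = 0.04         # g^2/Hz, input acceleration spectral density near resonance

fn = math.sqrt(k / m) / (2.0 * math.pi)            # natural frequency, Hz
g_rms = math.sqrt(math.pi / 2.0 * fn * Q * asd)    # Miles' equation, g RMS
peak_g = 3.0 * g_rms                               # common 3-sigma design level

print(f"fn = {fn:.1f} Hz, response = {g_rms:.1f} g RMS, 3-sigma = {peak_g:.1f} g")
```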
Computational Control Workstation: Users' perspectives
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Straube, Timothy M.; Tave, Jeffrey S.
1993-01-01
A Workstation has been designed and constructed for rapidly simulating motions of rigid and elastic multibody systems. We examine the Workstation from the point of view of analysts who use the machine in an industrial setting. Two aspects of the device distinguish it from other simulation programs. First, one uses a series of windows and menus on a computer terminal, together with a keyboard and mouse, to provide a mathematical and geometrical description of the system under consideration. The second hallmark is a facility for animating simulation results. An assessment of the amount of effort required to numerically describe a system to the Workstation is made by comparing the process to that used with other multibody software. The apparatus for displaying results as a motion picture is critiqued as well. In an effort to establish confidence in the algorithms that derive, encode, and solve equations of motion, simulation results from the Workstation are compared to answers obtained with other multibody programs. Our study includes measurements of computational speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steich, D J; Brugger, S T; Kallman, J S
2000-02-01
This final report describes our efforts on the Three-Dimensional Massively Parallel CEM Technologies LDRD project (97-ERD-009). Significant need exists for more advanced time domain computational electromagnetics modeling. Bookkeeping details and modifying inflexible software constitute a vast majority of the effort required to address such needs. The required effort escalates rapidly as problem complexity increases, for example with hybrid meshes requiring hybrid numerics on massively parallel platforms (MPPs). This project attempts to alleviate the above limitations by investigating flexible abstractions for these numerical algorithms on MPPs using object-oriented methods, providing a programming environment that insulates physics from bookkeeping. The three major design iterations during the project, known as TIGER-I to TIGER-III, are discussed. Each version of TIGER is briefly discussed along with lessons learned during development and implementation. An Application Programming Interface (API) of the object-oriented interface for TIGER-III is included in three appendices. The three appendices contain the Utilities, Entity-Attribute, and Mesh libraries developed during the project. The API libraries represent a snapshot of our latest attempt at insulating the physics from the bookkeeping.
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
2013-01-01
Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
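A toy illustration of the stepping-stone estimator is sketched below for a conjugate normal model, where the true marginal likelihood is known for comparison; it assumes numpy and scipy and is not the phylogenetic implementation discussed in the abstract.

```python
# Toy stepping-stone estimate of a (log) marginal likelihood, using a conjugate
# normal model so the exact answer is available; a sketch of the idea, not the
# phylogenetic implementation. Data and step schedule are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, size=20)          # data; likelihood N(mu, 1), prior mu ~ N(0, 1)
n, ybar = len(y), y.mean()

def loglik(mu):
    mu = np.atleast_1d(mu)
    return stats.norm.logpdf(y[:, None], loc=mu[None, :], scale=1.0).sum(axis=0)

def sample_power_posterior(beta, size):
    # Power posterior at inverse temperature beta is conjugate normal:
    #   mu | y, beta ~ N(beta*n*ybar / (beta*n + 1), 1 / (beta*n + 1))
    var = 1.0 / (beta * n + 1.0)
    return rng.normal(beta * n * ybar * var, np.sqrt(var), size=size)

betas = np.linspace(0.0, 1.0, 33) ** 3.0   # steps packed toward the prior
log_Z = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    mus = sample_power_posterior(b0, 5000)
    logw = (b1 - b0) * loglik(mus)
    log_Z += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()

# Analytic marginal likelihood for comparison (y is jointly normal).
true_log_Z = stats.multivariate_normal.logpdf(y, mean=np.zeros(n),
                                              cov=np.eye(n) + np.ones((n, n)))
print(log_Z, true_log_Z)
```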
Heterogeneous computing architecture for fast detection of SNP-SNP interactions.
Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros
2014-06-25
The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
Heterogeneous computing architecture for fast detection of SNP-SNP interactions
2014-01-01
Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802
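A plain CPU sketch of the exhaustive pairwise scan that the GPU/MIC module accelerates is shown below; the data sizes are tiny and the mutual-information score is an illustrative choice, not necessarily the scoring used in SNPsyn.

```python
# CPU sketch of an exhaustive pairwise SNP-SNP scan: for every SNP pair, score
# the association between the combined genotype (9 categories) and a binary
# phenotype by mutual information. Data sizes and the score are illustrative.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n_snps, n_samples = 100, 500
geno = rng.integers(0, 3, size=(n_snps, n_samples))   # genotypes coded 0/1/2
pheno = rng.integers(0, 2, size=n_samples)            # case/control labels

def mutual_information(joint_counts):
    """Mutual information (nats) from a contingency table of counts."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

scores = {}
for i, j in combinations(range(n_snps), 2):
    combo = geno[i] * 3 + geno[j]                      # 9 combined genotype classes
    table = np.zeros((9, 2))
    np.add.at(table, (combo, pheno), 1)                # contingency table vs phenotype
    scores[(i, j)] = mutual_information(table)

best = max(scores, key=scores.get)
print("top-scoring pair:", best, scores[best])
```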
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.
OER Approach for Specific Student Groups in Hardware-Based Courses
ERIC Educational Resources Information Center
Ackovska, Nevena; Ristov, Sasko
2014-01-01
Hardware-based courses in computer science studies require much effort from both students and teachers. The most important part of students' learning is attending in person and actively working on laboratory exercises on hardware equipment. This paper deals with a specific group of students, those who are marginalized by not being able to…
An Authoring System for Creating Computer-Based Role-Performance Trainers.
ERIC Educational Resources Information Center
Guralnick, David; Kass, Alex
This paper describes a multimedia authoring system called MOPed-II. Like other authoring systems, MOPed-II reduces the time and expense of producing end-user applications by eliminating much of the programming effort they require. However, MOPed-II reflects an approach to authoring tools for educational multimedia which is different from most…
Code of Federal Regulations, 2014 CFR
2014-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Large-scale detection of repetitions
Smyth, W. F.
2014-01-01
Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
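A brute-force baseline that makes the notion of a repetition concrete is sketched below: it reports every square (a block repeated twice in a row) in a short string. This is the naive quadratic approach, not the linear-time runs computation the abstract describes.

```python
# Brute-force baseline: report every square (a block repeated at least twice
# in a row) in the input string. Quadratic work, for illustration only; the
# abstract concerns the far more efficient runs-based computation.
def squares(x):
    n = len(x)
    found = []
    for period in range(1, n // 2 + 1):
        for start in range(n - 2 * period + 1):
            if x[start:start + period] == x[start + period:start + 2 * period]:
                found.append((start, period, x[start:start + 2 * period]))
    return found

for start, period, rep in squares("ACGACGATT"):
    print(f"square of period {period} at position {start}: {rep}")
```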
Rapid Prototyping of Hydrologic Model Interfaces with IPython
NASA Astrophysics Data System (ADS)
Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.
2014-12-01
A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site and/or process specific. As a result, we end up with many, customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site and application-specific user interfaces. We provide specific examples from applications in near-shore environments as well as levee analysis. We discuss our design decisions and methodology for developing customized interfaces, strategies for delivery of the interfaces to users in various computing environments, as well as implications for the design/implementation of simulation models.
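A minimal example of the notebook-interface pattern described above might look like the following, assuming ipywidgets and matplotlib are available; the synthetic hydrograph model and its two parameters are stand-ins, not taken from the authors' applications.

```python
# Minimal notebook-interface pattern of the kind described: expose only the
# two parameters a particular site needs as sliders, and redraw the result
# whenever they change. The model below is a synthetic stand-in, not one of
# the hydrologic codes from the abstract. Run inside an IPython/Jupyter notebook.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

def stage_hydrograph(peak_flow=100.0, recession=0.2):
    """Plot a simple synthetic hydrograph for the chosen parameters."""
    t = np.linspace(0, 48, 200)                       # hours
    q = peak_flow * (t / 6.0) * np.exp(1 - t / 6.0) * np.exp(-recession * t / 48)
    plt.plot(t, q)
    plt.xlabel("time (h)")
    plt.ylabel("discharge (m$^3$/s)")
    plt.show()

interact(stage_hydrograph,
         peak_flow=FloatSlider(min=10, max=500, step=10, value=100),
         recession=FloatSlider(min=0.0, max=1.0, step=0.05, value=0.2))
```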
The Computational Infrastructure for Geodynamics as a Community of Practice
NASA Astrophysics Data System (ADS)
Hwang, L.; Kellogg, L. H.
2016-12-01
Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individuals or small groups of researchers to develop scientifically sound software are impossible to sustain, duplicate effort, and make it difficult for scientists to adopt state-of-the-art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. Members of the group interact regularly to learn from each other and better their practices, formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.
Aircraft integrated design and analysis: A classroom experience
NASA Technical Reports Server (NTRS)
1988-01-01
AAE 451 is the capstone course required of all senior undergraduates in the School of Aeronautics and Astronautics at Purdue University. During the past year the first steps of a long evolutionary process were taken to change the content and expectations of this course. These changes are the result of the availability of advanced computational capabilities and sophisticated electronic media at Purdue. This presentation will describe both the long range objectives and this year's experience using the High Speed Commercial Transport (HSCT) design, the AIAA Long Duration Aircraft design and a Remotely Piloted Vehicle (RPV) design proposal as project objectives. The central goal of these efforts was to provide a user-friendly, computer-software-based environment to supplement traditional design course methodology. The Purdue University Computer Center (PUCC), the Engineering Computer Network (ECN), and stand-alone PC's were used for this development. This year's accomplishments centered primarily on aerodynamics software obtained from the NASA Langley Research Center and its integration into the classroom. Word processor capability for oral and written work and computer graphics were also blended into the course. A total of 10 HSCT designs were generated, ranging from twin-fuselage and forward-swept wing aircraft, to the more traditional delta and double-delta wing aircraft. Four Long Duration Aircraft designs were submitted, together with one RPV design tailored for photographic surveillance. Supporting these activities were three video satellite lectures beamed from NASA/Langley to Purdue. These lectures covered diverse areas such as an overview of HSCT design, supersonic-aircraft stability and control, and optimization of aircraft performance. Plans for next year's effort will be reviewed, including dedicated computer workstation utilization, remote satellite lectures, and university/industrial cooperative efforts.
Simulation of floods caused by overloaded sewer systems: extensions of shallow-water equations
NASA Astrophysics Data System (ADS)
Hilden, Michael
2005-03-01
The outflow of water from a manhole onto a street is a typical flow problem within the simulation of floods in urban areas that are caused by overloaded sewer systems in the event of heavy rains. The reliable assessment of the flood risk for the connected houses requires accurate simulations of the water flow processes in the sewer system and in the street. The Navier-Stokes equations (NSEs) describe the free surface flow of the fluid water accurately, but since their numerical solution requires high CPU times and much memory, their application is not practical. However, their solutions for selected flow problems are applied as reference states to assess the results of other model approaches. The classical shallow-water equations (SWEs) require only fractions (factor 1/100) of the NSEs' computational effort. They assume hydrostatic pressure distribution, depth-averaged horizontal velocities and neglect vertical velocities. These shallow-water assumptions are not fulfilled for the outflow of water from a manhole onto the street. Accordingly, calculations show differences between NSEs and SWEs solutions. The SWEs are extended in order to assess the flood risks in urban areas reliably within applicable computational efforts. Separating vortex regions from the main flow and approximating vertical velocities to involve their contributions into a pressure correction yield suitable results.
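For reference, a minimal one-dimensional shallow-water update of the classical kind the paper extends is sketched below (a Lax-Friedrichs step on a small dam-break problem); the manhole source term and the proposed vortex and pressure corrections are not included, and the grid and initial state are invented.

```python
# Minimal 1D shallow-water update (Lax-Friedrichs) to show the model class the
# paper extends; no manhole source term or pressure correction is included.
# Grid, time step and initial state are invented for the illustration.
import numpy as np

g, dx, dt = 9.81, 0.5, 0.01
x = np.arange(0.0, 50.0, dx)
h = np.where(x < 25.0, 1.2, 1.0)           # water depth with a small step (dam break)
hu = np.zeros_like(h)                      # depth-averaged momentum h*u

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

U = np.array([h, hu])
for _ in range(500):
    F = flux(U[0], U[1])
    # Lax-Friedrichs: average the neighbours and correct with the flux difference.
    U[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    U[:, 0], U[:, -1] = U[:, 1], U[:, -2]  # simple outflow boundaries

print("max depth:", U[0].max(), "min depth:", U[0].min())
```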
Viscous Design of TCA Configuration
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Bauer, Steven X. S.; Campbell, Richard L.
1999-01-01
The goal in this effort is to redesign the baseline TCA configuration for improved performance at both supersonic and transonic cruise. Viscous analyses are conducted with OVERFLOW, a Navier-Stokes code for overset grids, using PEGSUS to compute the interpolations between overset grids. Viscous designs are conducted with OVERDISC, a script which couples OVERFLOW with the Constrained Direct Iterative Surface Curvature (CDISC) inverse design method. The successful execution of any computational fluid dynamics (CFD) based aerodynamic design method for complex configurations requires an efficient method for regenerating the computational grids to account for modifications to the configuration shape. The first section of this presentation deals with the automated regridding procedure used to generate overset grids for the fuselage/wing/diverter/nacelle configurations analysed in this effort. The second section outlines the procedures utilized to conduct OVERDISC inverse designs. The third section briefly covers the work conducted by Dick Campbell, in which a dual-point design at Mach 2.4 and 0.9 was attempted using OVERDISC; the initial configuration from which this design effort was started is an early version of the optimized shape for the TCA configuration developed by the Boeing Commercial Airplane Group (BCAG), which eventually evolved into the NCV design. The final section presents results from application of the Natural Flow Wing design philosophy to the TCA configuration.
Multimedia architectures: from desktop systems to portable appliances
NASA Astrophysics Data System (ADS)
Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.
1997-01-01
Future desktop and portable computing systems will have as their core an integrated multimedia system. Such a system will seamlessly combine digital video, digital audio, computer animation, text, and graphics. Furthermore, such a system will allow for mixed-media creation, dissemination, and interactive access in real time. Multimedia architectures that need to support these functions have traditionally required special display and processing units for the different media types. This approach tends to be expensive and is inefficient in its use of silicon. Furthermore, such media-specific processing units are unable to cope with the fluid nature of the multimedia market wherein the needs and standards are changing and system manufacturers may demand a single component media engine across a range of products. This constraint has led to a shift towards providing a single-component multimedia specific computing engine that can be integrated easily within desktop systems, tethered consumer appliances, or portable appliances. In this paper, we review some of the recent architectural efforts in developing integrated media systems. We primarily focus on two efforts, namely the evolution of multimedia-capable general purpose processors and a more recent effort in developing single component mixed media co-processors. Design considerations that could facilitate the migration of these technologies to a portable integrated media system also are presented.
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns, however heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
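The computation being parallelized can be sketched as follows: a space-time kernel density evaluated over chunks of grid points in separate processes. The even chunking below merely stands in for the paper's adaptive space-time domain decomposition, and the events and bandwidths are synthetic.

```python
# Sketch of the computation being parallelized: a space-time kernel density
# evaluated over chunks of grid points in separate processes. Simple even
# chunking stands in for the adaptive domain decomposition; data are synthetic.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(4)
events = np.column_stack([rng.uniform(0, 10, 2000),    # x (km)
                          rng.uniform(0, 10, 2000),    # y (km)
                          rng.uniform(0, 365, 2000)])  # t (days)
hs, ht = 1.0, 14.0                                      # spatial / temporal bandwidths

def density_chunk(grid_points):
    """Epanechnikov-style product kernel summed over all events, for one chunk."""
    out = np.zeros(len(grid_points))
    for i, (gx, gy, gt) in enumerate(grid_points):
        ds2 = ((events[:, 0] - gx) ** 2 + (events[:, 1] - gy) ** 2) / hs ** 2
        dt2 = ((events[:, 2] - gt) / ht) ** 2
        w = np.maximum(1 - ds2, 0) * np.maximum(1 - dt2, 0)
        out[i] = w.sum()
    return out

if __name__ == "__main__":
    gx, gy, gt = np.meshgrid(np.linspace(0, 10, 20),
                             np.linspace(0, 10, 20),
                             np.linspace(0, 365, 30))
    grid = np.column_stack([gx.ravel(), gy.ravel(), gt.ravel()])
    with Pool(4) as pool:
        parts = pool.map(density_chunk, np.array_split(grid, 4))
    density = np.concatenate(parts)
    print(density.shape, density.max())
```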
Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort in migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTools' ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, amount of user interaction required, limitations and portability. Based on these results, a discussion on the feasibility of computer aided parallelization of aerospace applications is presented along with suggestions for future work.
Applications in Data-Intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.
2010-04-01
This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.
Automated Design of Complex Dynamic Systems
Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni
2014-01-01
Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology of smart and highly complex physical systems. PMID:24497969
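A small instance of the general idea, with scipy's optimizer standing in for the machine-learning training procedure used in the paper, is sketched below: the physical parameters of a second-order system are tuned so that its simulated trajectory settles to a target state at low cost.

```python
# Small instance of the general idea: tune the physical parameters of a
# dynamical system so that its simulated trajectory performs a desired task
# (here, settle to a target state). scipy's optimizer stands in for the
# machine-learning training procedure used in the paper; all values are invented.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

target = 1.0

def simulate(params):
    k, c = params                                    # stiffness-like and damping-like parameters
    rhs = lambda t, y: [y[1], k * (target - y[0]) - c * y[1]]
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=np.linspace(0, 10, 200))
    return sol.t, sol.y

def cost(params):
    t, y = simulate(params)
    tracking_error = np.trapz((y[0] - target) ** 2, t)   # integrated squared error
    effort = 0.01 * np.sum(np.square(params))            # mild penalty on parameter size
    return tracking_error + effort

result = minimize(cost, x0=[1.0, 1.0], bounds=[(0.1, 50.0), (0.1, 50.0)])
print("optimized parameters:", result.x, "cost:", result.fun)
```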
1992-06-18
developed by Fukushima. The system has potential use for SDI target/decoy discrimination. For testing purposes, simulated angle-angle and range-Doppler... properties and computational requirements of the Neocognitron, a pattern recognition neural network developed by Fukushima. The RADONN effort builds upon... and Information Processing, 17-21 June 1991, Plymouth State College, Plymouth, New Hampshire.)
ERIC Educational Resources Information Center
Suydan, Ava Birgitte
2014-01-01
Despite university efforts to decrease the number of students dropping out of college, attrition of online students occurs at an annual rate of 50% or more (Wang & Wu, 2004). Educational leaders understand the increased demand for online programs and courses because of students' requirements of convenience and flexibility (Kuo, Walker,…
A comparative study on different methods of automatic mesh generation of human femurs.
Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A
1998-01-01
The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The AMG methods considered included: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and it allows a tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
Beuer, Florian; Schweiger, Josef; Huber, Martin; Engels, Jörg; Stimmelmayr, Michael
2014-06-01
Various treatment concepts have been presented for the edentulous mandible. Manufacturing tension-free and precisely fitting bars on dental implants was previously a great challenge in prosthetic dentistry and required great effort. Modern computer aided design/computer aided manufacturing technology in combination with some clinical modifications of the established workflow enables the clinician to achieve precise results in a very efficient way. The innovative five-step concept is presented in a clinical case. © 2014 by the American College of Prosthodontists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
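The underlying principle can be illustrated with a classic analogue, mixed-precision iterative refinement of a linear solve, in which the cheap solves run in single precision while residuals and corrections are accumulated in double precision; this is not the SDC or S3D implementation itself.

```python
# Illustration of the underlying principle with a classic analogue: iterative
# refinement of a linear solve where the expensive solves run in single
# precision while residuals and corrections are kept in double precision.
# This mirrors the mixed-precision correction-sweep idea, not the SDC/S3D code.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
b = rng.normal(size=200)

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)   # cheap first solve

for sweep in range(5):
    r = b - A @ x                                      # residual in double precision
    dx = np.linalg.solve(A32, r.astype(np.float32))    # correction in single precision
    x += dx.astype(np.float64)
    print(f"sweep {sweep}: residual norm = {np.linalg.norm(r):.3e}")

print("final error vs. full double-precision solve:",
      np.linalg.norm(x - np.linalg.solve(A, b)))
```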
Using artificial intelligence to control fluid flow computations
NASA Technical Reports Server (NTRS)
Gelsey, Andrew
1992-01-01
Computational simulation is an essential tool for the prediction of fluid flow. Many powerful simulation programs exist today. However, using these programs to reliably analyze fluid flow and other physical situations requires considerable human effort and expertise to set up a simulation, determine whether the output makes sense, and repeatedly run the simulation with different inputs until a satisfactory result is achieved. Automating this process is not only of considerable practical importance but will also significantly advance basic artificial intelligence (AI) research in reasoning about the physical world.
Determining Training Device Requirements in Army Aviation Systems
NASA Technical Reports Server (NTRS)
Poumade, M. L.
1984-01-01
A decision making methodology which applies the systems approach to the training problem is discussed. Training is viewed as a total system instead of a collection of individual devices and unrelated techniques. The core of the methodology is the use of optimization techniques such as the transportation algorithm and multiobjective goal programming with training task and training device specific data. The role of computers, especially automated data bases and computer simulation models, in the development of training programs is also discussed. The approach can provide significant training enhancement and cost savings over the more traditional, intuitive form of training development and device requirements process. While given from an aviation perspective, the methodology is equally applicable to other training development efforts.
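A toy instance of the transportation formulation mentioned above is sketched below, assigning simulator hours at three hypothetical training sites to the demands of four units at minimum cost; the costs, supplies and demands are invented.

```python
# Toy instance of the transportation formulation the methodology uses: assign
# simulator hours available at three training sites to the demands of four
# units at minimum total cost. Costs, supplies and demands are invented.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0, 5.0],       # cost per hour, site i -> unit j
                 [7.0, 3.0, 4.0, 8.0],
                 [6.0, 5.0, 2.0, 7.0]])
supply = np.array([120.0, 80.0, 100.0])       # hours available at each site
demand = np.array([70.0, 90.0, 60.0, 80.0])   # hours required by each unit

n_sites, n_units = cost.shape
# Equality constraints: each unit's demand is met exactly.
A_eq = np.zeros((n_units, n_sites * n_units))
for j in range(n_units):
    A_eq[j, j::n_units] = 1.0
# Inequality constraints: shipments from a site cannot exceed its supply.
A_ub = np.zeros((n_sites, n_sites * n_units))
for i in range(n_sites):
    A_ub[i, i * n_units:(i + 1) * n_units] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
print(res.x.reshape(n_sites, n_units))        # optimal hours from each site to each unit
print("minimum total cost:", res.fun)
```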
Fan noise prediction assessment
NASA Technical Reports Server (NTRS)
Bent, Paul H.
1995-01-01
This report is an evaluation of two techniques for predicting the fan noise radiation from engine nacelles. The first is a relatively computationally intensive finite element technique. The code is named ARC, an abbreviation of Acoustic Radiation Code, and was developed by Eversman. This is actually a suite of software that first generates a grid around the nacelle, then solves for the potential flowfield, and finally solves the acoustic radiation problem. The second approach is an analytical technique requiring minimal computational effort. This is termed the cutoff ratio technique and was developed by Rice. Details of the duct geometry, such as the hub-to-tip ratio and Mach number of the flow in the duct, and the modal content of the duct noise are required for proper prediction.
NASA Astrophysics Data System (ADS)
Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.
2009-12-01
As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results, and codes from individual efforts with others who are not in directly funded collaboration can also be a challenge with respect to time, cost, and expertise. We propose a modeling, data, and knowledge center that houses NASA satellite data, climate data, and ancillary data, where a focused community may come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis, and compute environments that are customizable, archivable, and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by relieving users of redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also makes it possible for users to receive indirect assistance from experts through prefabricated compute environments, potentially reducing study “ramp up” times.
Communicating quality improvement through a hospital newsletter.
Tietz, A; Tabor, R
1995-01-01
Healthcare organizations across the United States are embracing the tenets of continuous quality improvement. The challenge is to disseminate information about this quality activity throughout the organization. A monthly newsletter serves two vital purposes: to share the improvements and to generate more enthusiasm and participation by staff members. This article gives practical suggestions for promoting a monthly newsletter. Preparation of an informative newsletter requires a significant investment of time and effort. However, the positive results of providing facilitywide communications can make it worth the effort. The current availability of relatively inexpensive desktop publishing computer software programs has made the process much easier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.
In a computing landscape with a plethora of different hardware architectures and supporting software systems, ranging from compilers to operating systems, there is an obvious and strong need for a philosophy of software development that lends itself to the design and construction of portable code systems. The current efforts to standardize software bear witness to this need. SABrE is an effort to implement a software development environment which is itself portable and promotes the design and construction of portable applications. SABrE does not include such important tools as editors and compilers; well-built tools of that kind are readily available across virtually all computer platforms. The areas that SABrE addresses are at a higher level, involving issues such as data portability, portable inter-process communication, and graphics. These blocks of functionality have particular significance to the kind of code development done at LLNL, which is partly why the general computing community has not supplied us with these tools already. This is another key feature of the software development environments which we must recognize: the general computing community cannot and should not be expected to produce all of the tools which we require.
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designers' effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobel, R.
TRUMP is a general finite difference computer program for the solution of transient and steady state heat transfer problems. It is a very general program capable of solving heat transfer problems in one, two, or three dimensions for plane, cylindrical, or spherical geometry. Because of the variety of possible geometries, the effort required to describe the geometry can be large. GIFT was written to minimize this effort for one-dimensional heat flow problems. After describing the inner and outer boundaries of a region made of a single material, along with the modes of heat transfer which thermally connect different regions, GIFT will calculate all the geometric data (BLOCK 04) and thermal network data (BLOCK 05) required by TRUMP for one-dimensional problems. The heat transfer between layers (or shells) of a material may be by conduction or radiation; also, an interface resistance between layers can be specified. Convection between layers can be accounted for by use of an effective thermal conductivity in which the convection effect is included or by a thermal conductance coefficient. GIFT was written for the Sigma 7 computer, a small digital computer with a versatile graphic display system. This system makes it possible to input the desired data in a question-and-answer mode and to see both the input and the output displayed on a screen in front of the user at all times. (auth)
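A minimal sketch of the kind of one-dimensional network data GIFT generates is shown below for concentric cylindrical shells of a single material: nodal capacitances from shell volumes and conduction conductances between adjacent node centers. The interface-resistance and radiation options described above are omitted, and the function name, inputs, and numbers are illustrative, not GIFT's actual input blocks.

```python
# Hedged sketch: 1-D radial thermal network for concentric cylindrical shells.
# Conduction conductance between adjacent shells: G = 2*pi*k*L / ln(r2/r1),
# evaluated between shell mid-radii; capacitance C = rho*cp*V per shell.
import math

def cylindrical_network(r_in, r_out, n_shells, k, rho, cp, length=1.0):
    radii = [r_in + (r_out - r_in) * i / n_shells for i in range(n_shells + 1)]
    mid = [(radii[i] + radii[i + 1]) / 2.0 for i in range(n_shells)]
    capacitances, conductances = [], []
    for i in range(n_shells):
        volume = math.pi * (radii[i + 1]**2 - radii[i]**2) * length
        capacitances.append(rho * cp * volume)           # J/K per node
    for i in range(n_shells - 1):
        g = 2.0 * math.pi * k * length / math.log(mid[i + 1] / mid[i])
        conductances.append(g)                           # W/K between nodes i and i+1
    return capacitances, conductances

if __name__ == "__main__":
    C, G = cylindrical_network(r_in=0.05, r_out=0.10, n_shells=5,
                               k=15.0, rho=8000.0, cp=500.0)
    print(C, G)
```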
Strategic directions of computing at Fermilab
NASA Astrophysics Data System (ADS)
Wolbers, Stephen
1998-05-01
Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.
NASA HPCC Technology for Aerospace Analysis and Design
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H.
1999-01-01
The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community, thus providing that community with key tools necessary to reduce design cycle times and increase fidelity in order to improve the safety, efficiency, and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community to the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1, respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.
Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T
2012-01-01
This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.
Aerodynamic shape optimization using preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1993-01-01
In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
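The Bezier-Bernstein representation used as the design parameterization can be sketched briefly: control-point ordinates act as design variables, and perturbing one reshapes the curve smoothly. The control-point count and the sample perturbation below are illustrative assumptions, not the study's actual design variables.

```python
# Hedged sketch: Bezier-Bernstein curve as a shape-design parameterization.
# The control-point ordinates act as design variables; perturbing one reshapes
# the whole surface smoothly.
import numpy as np
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_curve(control_points, n_samples=101):
    """Evaluate a Bezier curve defined by (x, y) control points."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    basis = np.array([bernstein(i, n, t) for i in range(n + 1)])  # (n+1, n_samples)
    return basis.T @ pts                                          # (n_samples, 2)

if __name__ == "__main__":
    # Illustrative control polygon for the upper surface of an airfoil-like shape.
    cps = [(0.0, 0.0), (0.0, 0.04), (0.3, 0.08), (0.7, 0.05), (1.0, 0.0)]
    baseline = bezier_curve(cps)
    cps[2] = (0.3, 0.10)                 # perturb one design variable
    perturbed = bezier_curve(cps)
    print(np.max(np.abs(perturbed[:, 1] - baseline[:, 1])))
```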
Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J. S.; Pollicove, H.
The project combined the research and development efforts of LLNL and the University of Rochester Center for Optics Manufacturing (COM) to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high quality surfaces and minimizes non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.
Computational analysis of the SSME fuel preburner flow
NASA Technical Reports Server (NTRS)
Wang, T. S.; Farmer, R. C.
1986-01-01
A computational fluid dynamics model which simulates the steady state operation of the SSME fuel preburner is developed. Specifically, the model will be used to quantify the flow factors which cause local hot spots in the fuel preburner in order to recommend experiments whereby the control of undesirable flow features can be demonstrated. The results of a two-year effort to model the preburner are presented. In this effort to investigate the fuel preburner flowfield, the appropriate transport equations were numerically solved for both an axisymmetric and a three-dimensional configuration. Continuum's VAST (Variational Solution of the Transport equations) code, in conjunction with the CM-1000 Engineering Analysis Workstation and the NASA/Ames CYBER 205, was used to perform the required calculations. It is concluded that the preburner operational anomalies are not due to steady state phenomena and must, therefore, be related to transient operational procedures.
CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability
NASA Technical Reports Server (NTRS)
Claus, Russell; Weitzer, Ilan
2002-01-01
Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meet these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link enabling 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.
STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya
The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that needs to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes, and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of “streaming and steering” as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data and National Strategic Computing Initiative (NCSI) [1, 2]. The discussions from the workshop are captured as the topic areas covered in this report's sections. The report discusses four research directions driven by current and future application requirements, reflecting the areas identified as important by STREAM2016: (i) Algorithms; (ii) Programming Models, Languages and Runtime Systems; (iii) Human-in-the-loop and Steering in Scientific Workflows; and (iv) Facilities.
Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.
de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf
2014-01-11
With computational resources becoming more efficient, more powerful, and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be done routinely but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to reproduce experimental findings, e.g., spectroscopic parameters and rate constants, accurately or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.
Aeroelasticity of wing and wing-body configurations on parallel computers
NASA Technical Reports Server (NTRS)
Byun, Chansup
1995-01-01
The objective of this research is to develop computationally efficient methods for solving aeroelasticity problems on parallel computers. Both uncoupled and coupled methods are studied in this research. For the uncoupled approach, the conventional U-g method is used to determine the flutter boundary. The generalized aerodynamic forces required are obtained by the pulse transfer-function analysis method. For the coupled approach, the fluid-structure interaction is obtained by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures. This capability will significantly impact many aerospace projects of national importance such as Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical at the transonic region. This research effort will have direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.
Turbomachinery CFD on parallel computers
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.
1992-01-01
The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.
Kussmann, Jörg; Ochsenfeld, Christian
2007-08-07
Details of a new density matrix-based formulation for calculating nuclear magnetic resonance chemical shifts at both Hartree-Fock and density functional theory levels are presented. For systems with a nonvanishing highest occupied molecular orbital-lowest unoccupied molecular orbital gap, the method allows us to reduce the asymptotic scaling order of the computational effort from cubic to linear, so that molecular systems with 1000 and more atoms can be tackled with today's computers. The key feature is a reformulation of the coupled-perturbed self-consistent field (CPSCF) theory in terms of the one-particle density matrix (D-CPSCF), which avoids entirely the use of canonical MOs. By means of a direct solution for the required perturbed density matrices and the adaptation of linear-scaling integral contraction schemes, the overall scaling of the computational effort is reduced to linear. A particular focus of our formulation is to ensure numerical stability when sparse-algebra routines are used to obtain an overall linear-scaling behavior.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H.
2016-01-01
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. PMID:26792536
Survey of the supporting research and technology for the thermal protection of the Galileo Probe
NASA Technical Reports Server (NTRS)
Howe, J. T.; Pitts, W. C.; Lundell, J. H.
1981-01-01
The Galileo Probe, which is scheduled to be launched in 1985 and to enter the hydrogen-helium atmosphere of Jupiter up to 1,475 days later, presents thermal protection problems that are far more difficult than those experienced in previous planetary entry missions. The high entry speed of the Probe will cause forebody heating rates orders of magnitude greater than those encountered in the Apollo and Pioneer Venus missions, severe afterbody heating from base-flow radiation, and thermochemical ablation rates for carbon phenolic that rival the free-stream mass flux. This paper presents a comprehensive survey of the experimental work and computational research that provide technological support for the Probe's heat-shield design effort. The survey includes atmospheric modeling; both approximate and first-principle computations of flow fields and heat-shield material response; base heating; turbulence modelling; new computational techniques; experimental heating and materials studies; code validation efforts; and a set of 'consensus' first-principle flow-field solutions through the entry maneuver, with predictions of the corresponding thermal protection requirements.
Education of MIS Users: One Hospital's Experience
Jacobs, Patt
1982-01-01
Dr. Stanley Jacobs has identified at least five factors which impact the amount of effort required to implement a hospital based computer system. One of these factors is clearly the users of the system. This paper focuses upon the implementation process at St. Vincent Hospital and Medical Center (SVH&MC) highlighting the user education program developed by the hospital DP staff.
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/sec today to 1 TB/sec, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.
Design considerations for computationally constrained two-way real-time video communication
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.
2009-08-01
Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.
Mattoon, C. M.; Beck, B. R.
2015-12-24
An international effort is underway to design a new structure for storing and using nuclear reaction data, with the goal of eventually replacing the current standard, ENDF-6. This effort, organized by the Working Party for Evaluation Cooperation, was initiated in 2012 and has resulted in a list of requirements and specifications for how the proposed new structure shall perform. The new structure will take advantage of new developments in computational tools, using a nested hierarchy to store data. Here, the structure can be stored in text form (such as an XML file) for human readability and data sharing, or it can be stored in binary to optimize data access. In this paper, we present the progress towards completing the requirements, specifications, and implementation of the new structure.
Computing, Information and Communications Technology (CICT) Website
NASA Technical Reports Server (NTRS)
Hardman, John; Tu, Eugene (Technical Monitor)
2002-01-01
The Computing, Information and Communications Technology Program (CICT) was established in 2001 to ensure NASA's continuing leadership in emerging technologies. It is a coordinated, Agency-wide effort to develop and deploy key enabling technologies for a broad range of mission-critical tasks. The NASA CICT program is designed to address Agency-specific computing, information, and communications technology requirements beyond the projected capabilities of commercially available solutions. The areas of technical focus have been chosen for their impact on NASA's missions, their national importance, and the technical challenge they provide to the Program. In order to meet its objectives, the CICT Program is organized into the following four technology-focused projects: 1) Computing, Networking and Information Systems (CNIS); 2) Intelligent Systems (IS); 3) Space Communications (SC); 4) Information Technology Strategic Research (ITSR).
Overview of Risk Mitigation for Safety-Critical Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report presents a high-level overview of a general strategy to mitigate the risks from threats to safety-critical computer-based systems. In this context, a safety threat is a process or phenomenon that can cause operational safety hazards in the form of computational system failures. This report is intended to provide insight into the safety-risk mitigation problem and the characteristics of potential solutions. The limitations of the general risk mitigation strategy are discussed and some options to overcome these limitations are provided. This work is part of an ongoing effort to enable well-founded assurance of safety-related properties of complex safety-critical computer-based aircraft systems by developing an effective capability to model and reason about the safety implications of system requirements and design.
Visual Computing Environment Workshop
NASA Technical Reports Server (NTRS)
Lawrence, Charles (Compiler)
1998-01-01
The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.
Computational strategies for tire monitoring and analysis
NASA Technical Reports Server (NTRS)
Danielson, Kent T.; Noor, Ahmed K.; Green, James S.
1995-01-01
Computational strategies are presented for the modeling and analysis of tires in contact with pavement. A procedure is introduced for simple and accurate determination of tire cross-sectional geometric characteristics from a digitally scanned image. Three new strategies for reducing the computational effort in the finite element solution of tire-pavement contact are also presented. These strategies take advantage of the observation that footprint loads do not usually stimulate a significant tire response away from the pavement contact region. The finite element strategies differ in their level of approximation and required amount of computer resources. The effectiveness of the strategies is demonstrated by numerical examples of frictionless and frictional contact of the space shuttle Orbiter nose-gear tire. Both an in-house research code and a commercial finite element code are used in the numerical studies.
Remote Earth Sciences data collection using ACTS
NASA Technical Reports Server (NTRS)
Evans, Robert H.
1992-01-01
Given the focus on global change and the attendant scope of such research, we anticipate significant growth of requirements for investigator interaction, processing system capabilities, and availability of data sets. The increased complexity of global processes requires interdisciplinary teams to address them; the investigators will need to interact on a regular basis; however, it is unlikely that a single institution will house sufficient investigators with the required breadth of skills. The complexity of the computations may also require resources beyond those located within a single institution; this lack of sufficient computational resources leads to a distributed system located at geographically dispersed institutions. Finally, the combination of long-term data sets like the Pathfinder datasets and the data to be gathered by new generations of satellites such as SeaWiFS and MODIS-N yields extraordinarily large amounts of data. All of these factors combine to increase demands on the communications facilities available; the demands are generating requirements for highly flexible, high-capacity networks. We have been examining the applicability of the Advanced Communications Technology Satellite (ACTS) to address the scientific, computational, and, primarily, communications questions resulting from global change research. As part of this effort, three scenarios for oceanographic use of ACTS have been developed; a full discussion is contained in Appendix B.
Gigerenzer, Gerd
2009-01-01
In their comment on Marewski et al. (good judgments do not require complex cognition, 2009) Evans and Over (heuristic thinking and human intelligence: a commentary on Marewski, Gaissmaier and Gigerenzer, 2009) conjectured that heuristics can often lead to biases and are not error free. This is a most surprising critique. The computational models of heuristics we have tested allow for quantitative predictions of how many errors a given heuristic will make, and we and others have measured the amount of error by analysis, computer simulation, and experiment. This is clear progress over simply giving heuristics labels, such as availability, that do not allow for quantitative comparisons of errors. Evans and Over argue that the reason people rely on heuristics is the accuracy-effort trade-off. However, the comparison between heuristics and more effortful strategies, such as multiple regression, has shown that there are many situations in which a heuristic is more accurate with less effort. Finally, we do not see how the fast and frugal heuristics program could benefit from a dual-process framework unless the dual-process framework is made more precise. Instead, the dual-process framework could benefit if its two “black boxes” (Type 1 and Type 2 processes) were substituted by computational models of both heuristics and other processes. PMID:19784854
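As one concrete example of the computational models of heuristics referred to here, the sketch below implements a lexicographic take-the-best comparison: cues are consulted in order of validity and the first discriminating cue decides. The cue names, values, and ordering are invented for illustration, and this is only one member of the fast-and-frugal family.

```python
# Hedged sketch: take-the-best heuristic for paired comparison.
# Cues are consulted in descending order of validity; the first cue that
# discriminates between the two objects decides, and the rest are ignored.
def take_the_best(obj_a, obj_b, cue_order):
    """Return 'A', 'B', or 'guess' for which object scores higher on the criterion."""
    for cue in cue_order:                 # cue_order: cue names sorted by validity
        a, b = obj_a[cue], obj_b[cue]
        if a != b:                        # first discriminating cue decides
            return "A" if a > b else "B"
    return "guess"                        # no cue discriminates

if __name__ == "__main__":
    # Invented binary cues, already sorted by (assumed) validity.
    cue_order = ["cue_1", "cue_2", "cue_3"]
    city_a = {"cue_1": 1, "cue_2": 0, "cue_3": 1}
    city_b = {"cue_1": 1, "cue_2": 1, "cue_3": 0}
    print(take_the_best(city_a, city_b, cue_order))   # cue_2 decides -> 'B'
```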
Barber, Larissa K; Smit, Brandon W
2014-01-01
This study replicated ego-depletion predictions from the self-control literature in a computer simulation task that requires ongoing decision-making in relation to constantly changing environmental information: the Network Fire Chief (NFC). Ego-depletion led to decreased self-regulatory effort, but not performance, on the NFC task. These effects were also buffered by task enjoyment, so that individuals who enjoyed the dynamic decision-making task did not experience ego-depletion effects. These findings confirm that past ego-depletion effects on decision-making are not limited to static or isolated decision-making tasks and extend to the dynamic decision-making processes more common in naturalistic settings. Furthermore, the NFC simulation provides a methodological mechanism for independently measuring effort and performance when studying ego-depletion.
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
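A minimal sketch of the photo-weighting step, under the assumption that it can be posed as a non-negative least-squares problem: each basis photo is a column of a matrix, and the solver finds non-negative weights whose sum best matches the target sketch. The use of scipy's generic NNLS routine and the random stand-in images are assumptions, not the authors' actual optimizer or sketching interface.

```python
# Hedged sketch: choose non-negative weights for a set of basis photos so that
# their weighted sum approximates a user-defined target image (all flattened).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_pixels, n_photos = 64 * 64, 20
photos = rng.random((n_pixels, n_photos))        # stand-in for the scanned light positions
target = photos[:, [2, 7, 11]] @ np.array([0.5, 0.3, 0.2])   # target built from 3 photos

weights, residual = nnls(photos, target)
keep = np.argsort(weights)[::-1][:5]             # a small set of the most useful photos
print(keep, weights[keep], residual)
```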
3D Multi-Level Non-LTE Radiative Transfer for the CO Molecule
NASA Astrophysics Data System (ADS)
Berkner, A.; Schweitzer, A.; Hauschildt, P. H.
2015-01-01
The photospheres of cool stars are both rich in molecules and an environment where the assumption of LTE cannot be upheld under all circumstances. Unfortunately, detailed 3D non-LTE calculations involving molecules are hardly feasible with current computers. For this reason, we present our implementation of the super level technique, in which molecular levels are combined into super levels to reduce the number of unknowns in the rate equations and, thus, the computational effort and memory requirements involved, and show the results of our first tests against the 1D implementation of the same method.
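A minimal sketch of the super-level bookkeeping, under the assumption that levels within a super level are Boltzmann-distributed at a reference temperature: each super level then carries an effective statistical weight and a weighted mean energy. The grouping, temperature, and level data below are illustrative and do not reproduce the CO binning used in the paper.

```python
# Hedged sketch: collapse individual molecular levels into super levels.
# Within a super level, members are assumed Boltzmann-distributed at T_ref, so the
# super level carries an effective statistical weight and a weighted mean energy.
import numpy as np

K_B = 8.617333262e-5          # Boltzmann constant in eV/K

def build_super_levels(energies_ev, weights, groups, t_ref=3000.0):
    """groups: list of index arrays, one per super level."""
    supers = []
    for idx in groups:
        e = np.asarray(energies_ev)[idx]
        g = np.asarray(weights)[idx]
        boltz = g * np.exp(-(e - e.min()) / (K_B * t_ref))   # relative populations
        g_super = boltz.sum()                                # effective statistical weight
        e_super = np.sum(boltz * e) / boltz.sum()            # population-weighted mean energy
        supers.append((e_super, g_super))
    return supers

if __name__ == "__main__":
    energies = [0.00, 0.05, 0.11, 0.95, 1.02, 1.10]          # illustrative level energies (eV)
    weights = [1, 3, 5, 1, 3, 5]
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]      # two super levels
    print(build_super_levels(energies, weights, groups))
```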
NASA Technical Reports Server (NTRS)
Foss, W. E., Jr.
1981-01-01
A computer technique to determine the mission radius and maneuverability characteristics of combat aircraft was developed. The technique was used to determine critical operational requirements and the areas in which research programs would be expected to yield the most beneficial results. In turn, the results of research efforts were evaluated in terms of aircraft performance on selected mission segments and for complete mission profiles. Extensive use of the technique in evaluation studies indicates that the calculated performance is essentially the same as that obtained by the proprietary programs in use throughout the aircraft industry.
SCA Waveform Development for Space Telemetry
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Kifle, Multi; Hall, C. Steve; Quinn, Todd M.
2004-01-01
The NASA Glenn Research Center is investigating and developing suitable reconfigurable radio architectures for future NASA missions. This effort is examining software-based open architectures for space-based transceivers, as well as common hardware platform architectures. The Joint Tactical Radio System's (JTRS) Software Communications Architecture (SCA) is a candidate for the software approach, but may need modifications or adaptations for use in space. An in-house SCA-compliant waveform development focuses on increasing understanding of software defined radio architectures and, more specifically, the JTRS SCA. Space requirements put a premium on size, mass, and power. This waveform development effort is key to evaluating tradeoffs with the SCA for space applications. Existing NASA telemetry links, as well as Space Exploration Initiative scenarios, are the basis for defining the waveform requirements. Modeling and simulations are being developed to determine the signal processing requirements associated with a waveform and the mission-specific computational burden. Implementation of the waveform on a laboratory software defined radio platform is proceeding in an iterative fashion. Parallel top-down and bottom-up design approaches are employed.
Personal computer security: part 1. Firewalls, antivirus software, and Internet security suites.
Caruso, Ronald D
2003-01-01
Personal computer (PC) security in the era of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) involves two interrelated elements: safeguarding the basic computer system itself and protecting the information it contains and transmits, including personal files. HIPAA regulations have toughened the requirements for securing patient information, requiring every radiologist with such data to take further precautions. Security starts with physically securing the computer. Account passwords and a password-protected screen saver should also be set up. A modern antivirus program can easily be installed and configured. File scanning and updating of virus definitions are simple processes that can largely be automated and should be performed at least weekly. A software firewall is also essential for protection from outside intrusion, and an inexpensive hardware firewall can provide yet another layer of protection. An Internet security suite yields additional safety. Regular updating of the security features of installed programs is important. Obtaining a moderate degree of PC safety and security is somewhat inconvenient but is necessary and well worth the effort. Copyright RSNA, 2003
NASA Technical Reports Server (NTRS)
1986-01-01
The Johnson Space Center Management Information System (JSCMIS) is an interface to computer data bases at NASA Johnson which allows an authorized user to browse and retrieve information from a variety of sources with minimum effort. This issue gives requirements definition and design specifications for versions 2.1 and 2.1.1, along with documented test scenario environments, and security object design and specifications.
NASA Technical Reports Server (NTRS)
McComas, David C.; Strege, Susanne L.; Carpenter, Paul B.; Hartman, Randy
2015-01-01
The core Flight System (cFS) is a flight software (FSW) product line developed by the Flight Software Systems Branch (FSSB) at NASA's Goddard Space Flight Center (GSFC). The cFS uses compile-time configuration parameters to implement variable requirements to enable portability across embedded computing platforms and to implement different end-user functional needs. The verification and validation of these requirements is proving to be a significant challenge. This paper describes the challenges facing the cFS and the results of a pilot effort to apply EXB Solution's testing approach to the cFS applications.
A Study of Gaps in Network Knowledge Synthesis
2015-10-18
several authorizations is present. PPSI has an additional nm computational overhead beyond the complexity of PSI itself, where n is the maximum number of... [Flattened table fragment; recoverable gap entries with H/M/L ratings:] Data Collection — devices are black boxes (M/L); sensors require collection across multiple layers (M/L); collection at line speed is very hard (H/H); requires manual effort to specify what data to collect (M/L); cannot work on encoded/compressed data (M/L). Data Filtering — trade-off between...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baart, T. A.; Vandersypen, L. M. K.; Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 5046, 2600 GA Delft
We report the computer-automated tuning of gate-defined semiconductor double quantum dots in GaAs heterostructures. We benchmark the algorithm by creating three double quantum dots inside a linear array of four quantum dots. The algorithm sets the correct gate voltages for all the gates to tune the double quantum dots into the single-electron regime. The algorithm only requires (1) prior knowledge of the gate design and (2) the pinch-off value of the single gate T that is shared by all the quantum dots. This work significantly alleviates the user effort required to tune multiple quantum dot devices.
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1990-01-01
The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps, leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
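For reference, the method of artificial compressibility augments the incompressible continuity equation with a pseudo-time pressure derivative so that the coupled system is hyperbolic in pseudo-time and can be marched with the upwind scheme; in a commonly used form (notation ours, pressure scaled by density),

```latex
\frac{\partial p}{\partial \tau} + \beta\, \nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial \tau} + \frac{\partial \mathbf{u}}{\partial t}
 + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p + \nu\, \nabla^{2}\mathbf{u},
```

where \beta is the artificial compressibility parameter and \tau the pseudo-time; subiterating in \tau to convergence at each physical time step drives \nabla \cdot \mathbf{u} toward zero, recovering the time-accurate incompressible solution.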
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory's considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm's performance and ability to process 'flight-like' imagery formats with a 'flight-like' trajectory, positioning ourselves to easily process flight data from the upcoming 'ISS Selfie' activity and then compare the algorithm's quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we determine whether the system, as designed, meets the required
System Engineering Concept Demonstration, Effort Summary. Volume 1
1992-12-01
involve only the system software, user frameworks and user tools...analysis, synthesis, optimization, conceptual design of Catalyst. The paper discusses the definition, design, test, and evaluation; operational concept...This approach will allow system engineering practitioners to recognize and tailor the model. The conceptual requirements for the Process Model...
Natural Resource Information System, design analysis
NASA Technical Reports Server (NTRS)
1972-01-01
The computer-based system stores, processes, and displays map data relating to natural resources. The system was designed on the basis of requirements established in a user survey and an analysis of decision flow. The design analysis effort is described, and the rationale behind major design decisions, including map processing, cell vs. polygon, choice of classification systems, mapping accuracy, system hardware, and software language is summarized.
The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)
2001-01-01
A new class of turbulence model is described for wall-bounded, high-Reynolds-number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models, do not require specification of wall distance, and have similar or reduced computational effort compared with these models.
3D automatic Cartesian grid generation for Euler flows
NASA Technical Reports Server (NTRS)
Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.
1993-01-01
We describe a Cartesian grid strategy for the study of three dimensional inviscid flows about arbitrary geometries that uses both conventional and CAD/CAM surface geometry databases. Initial applications of the technique are presented. The elimination of the body-fitted constraint allows the grid generation process to be automated, significantly reducing the time and effort required to develop suitable computational grids for inviscid flowfield simulations.
Fault Detection of Rotating Machinery using the Spectral Distribution Function
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1997-01-01
The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, but requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
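The abstract above does not spell out the estimator, but the underlying idea is simple: the spectral distribution function is the running integral of the power spectral density, and a fault indicator compares its shape against a healthy baseline. The sketch below is a hypothetical illustration only, not the author's implementation; the synthetic signal, the seeded 180 Hz tone, and the Welch parameters are invented for the example.

```python
import numpy as np
from scipy.signal import welch

def spectral_distribution(x, fs, nperseg=1024):
    """Return frequencies and the normalized cumulative (integrated) PSD.

    The spectral distribution function is the running integral of the power
    spectral density, normalized to end at 1.0; changes in its shape relative
    to a healthy baseline can flag developing faults.
    """
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    cdf = np.cumsum(pxx)
    return f, cdf / cdf[-1]

# Example: a synthetic vibration signal with a seeded tone at 180 Hz.
fs = 5000.0
t = np.arange(0, 2.0, 1.0 / fs)
healthy = np.random.randn(t.size)
faulty = healthy + 0.8 * np.sin(2 * np.pi * 180.0 * t)

f, F_healthy = spectral_distribution(healthy, fs)
_, F_faulty = spectral_distribution(faulty, fs)
print("max deviation between distributions:", np.max(np.abs(F_faulty - F_healthy)))
```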
Potential implementation of reservoir computing models based on magnetic skyrmions
NASA Astrophysics Data System (ADS)
Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin
2018-05-01
Reservoir Computing is a type of recurrent neural network commonly used for recognizing and predicting spatio-temporal events, relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most prior efforts to implement reservoir computing have focused on utilizing memristor techniques to implement recurrent neural networks. This paper examines the potential of magnetic skyrmion fabrics and the complex current patterns which form in them as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for effective and energy-efficient nonlinear processing of spatio-temporal events with the aim of event recognition and prediction.
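As a point of reference for the claim that training needs no knowledge of the reservoir's internals, the following minimal software echo-state sketch (hypothetical, and unrelated to the skyrmion hardware discussed above) keeps the recurrent weights fixed and random and fits only a linear readout by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, untrained "reservoir": random recurrent weights, never modified.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Task: predict the next sample of a noisy sine wave.
t = np.linspace(0, 60, 3000)
u = np.sin(t) + 0.05 * rng.normal(size=t.size)
X = run_reservoir(u[:-1])
y = u[1:]

# Only the linear readout is trained (ridge regression); the reservoir is untouched.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

In the skyrmion proposal, the random recurrent network is replaced by the physical current response of the magnetic fabric, but the readout-only training step is the same in spirit.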
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
Automated segmentation and dose-volume analysis with DICOMautomaton
NASA Astrophysics Data System (ADS)
Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.
2014-03-01
Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
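For readers unfamiliar with the quantities being validated against Eclipse™, the sketch below computes a cumulative dose-volume histogram from a voxelized dose grid and an organ mask. It is a generic, hypothetical illustration: DICOMautomaton itself is contour-centric rather than voxel-centric, and the dose field, structure, and bin width here are invented.

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative dose-volume histogram for the voxels selected by `mask`.

    Returns dose bin edges (Gy) and the fraction of the structure volume
    receiving at least that dose. A voxel-based sketch for illustration only.
    """
    d = dose[mask]
    edges = np.arange(0.0, d.max() + bin_width, bin_width)
    frac = np.array([(d >= e).mean() for e in edges])
    return edges, frac

# Toy example: a 3D dose grid and a spherical "organ" sub-segment.
z, y, x = np.mgrid[0:40, 0:40, 0:40]
dose = 60.0 * np.exp(-((x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2) / 400.0)
organ = (x - 26) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 64

edges, frac = cumulative_dvh(dose, organ)
print("mean dose: %.1f Gy, V20: %.0f%%" % (dose[organ].mean(), 100 * frac[edges <= 20.0][-1]))
```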
Numerical simulation of helicopter engine plume in forward flight
NASA Technical Reports Server (NTRS)
Dimanlig, Arsenio C. B.; Vandam, Cornelis P.; Duque, Earl P. N.
1994-01-01
Flowfields around helicopters contain complex flow features such as large separated flow regions, vortices, shear layers, blown and suction surfaces, and an inherently unsteady flow imposed by the rotor system. Another complicated feature of helicopters is their infrared signature. Typically, the aircraft's exhaust plume interacts with the rotor downwash, the fuselage's complicated flowfield, and the fuselage itself, giving each aircraft a unique IR signature at given flight conditions. The goal of this project was to compute the flow about a realistic helicopter fuselage including the interaction of the engine air intakes and exhaust plume. The computations solve the Thin-Layer Navier-Stokes equations using overset-type grids and in particular use the OVERFLOW code by Buning of NASA Ames. During this three-month effort, an existing grid system of the Comanche helicopter was to be modified to include the engine inlet and the hot engine exhaust. The engine exhaust was to be modeled as hot air exhaust. However, considerable changes in the fuselage geometry required a complete regridding of the surface and volume grids. The engine plume computations have been delayed to future efforts. The results of the current work consist of a complete regeneration of the surface and volume grids of the most recent Comanche fuselage along with a flowfield computation.
Interoperability of Neuroscience Modeling Software
Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik
2009-01-01
Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374
Parallel Architectures for Planetary Exploration Requirements (PAPER)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet; Sen, Ranjan K.
1989-01-01
The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is essentially research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration with particular reference to NASA/LaRC's (NASA Langley Research Center) research needs for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, except that more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for architectural design of computing systems meant for planetary missions were also identified.
Minimal-effort planning of active alignment processes for beam-shaping optics
NASA Astrophysics Data System (ADS)
Haag, Sebastian; Schranner, Matthias; Müller, Tobias; Zontar, Daniel; Schlette, Christian; Losch, Daniel; Brecher, Christian; Roßmann, Jürgen
2015-03-01
In science and industry, the alignment of beam-shaping optics is usually a manual procedure. Many industrial applications utilizing beam-shaping optical systems require more scalable production solutions and therefore effort has been invested in research regarding the automation of optics assembly. In previous works, the authors and other researchers have proven the feasibility of automated alignment of beam-shaping optics such as collimation lenses or homogenization optics. Nevertheless, the planning efforts as well as additional knowledge from the fields of automation and control required for such alignment processes are immense. This paper presents a novel approach of planning active alignment processes of beam-shaping optics with the focus of minimizing the planning efforts for active alignment. The approach utilizes optical simulation and the genetic programming paradigm from computer science for automatically extracting features from a simulated data basis with a high correlation coefficient regarding the individual degrees of freedom of alignment. The strategy is capable of finding active alignment strategies that can be executed by an automated assembly system. The paper presents a tool making the algorithm available to end-users and it discusses the results of planning the active alignment of the well-known assembly of a fast-axis collimator. The paper concludes with an outlook on the transferability to other use cases such as application specific intensity distributions which will benefit from reduced planning efforts.
NASA Astrophysics Data System (ADS)
Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried
2017-02-01
We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
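The paper's one-dimensional radiosity formulation is not reproduced here, but its general structure can be sketched: discretize the feature depth into segments, give each segment a direct (line-of-sight) flux, and let segments exchange re-emitted flux with probability (1 - sticking), which reduces the flux calculation to a single linear solve per geometry. In the sketch below the direct-flux profile and the exchange kernel are crude placeholders, not the view factors derived in the paper.

```python
import numpy as np

def neutral_flux_1d(n_seg=200, aspect_ratio=20.0, sticking=0.1):
    """Generic 1D radiosity solve for neutral flux along a trench sidewall.

    The direct-flux profile and the segment-to-segment exchange kernel are
    placeholders; the point is only the structure of the radiosity system
    (I - (1 - s) F) B = B_direct, solved once per geometry.
    """
    depth = np.linspace(0.0, aspect_ratio, n_seg)        # depth in units of width
    b_direct = 1.0 / (1.0 + depth) ** 2                  # placeholder line-of-sight flux
    # Placeholder exchange kernel: nearby segments exchange more re-emitted flux.
    dz = np.abs(depth[:, None] - depth[None, :])
    F = np.exp(-dz)
    np.fill_diagonal(F, 0.0)
    F /= F.sum(axis=1, keepdims=True)                    # rows sum to 1 (flux balance)
    B = np.linalg.solve(np.eye(n_seg) - (1.0 - sticking) * F, b_direct)
    return depth, B

depth, flux = neutral_flux_1d()
print("flux at bottom relative to opening: %.3f" % (flux[-1] / flux[0]))
```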
The use of automatic programming techniques for fault tolerant computing systems
NASA Technical Reports Server (NTRS)
Wild, C.
1985-01-01
It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.
An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Little, M. M.
2013-12-01
NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS desires to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the Langley Science Directorate needs to be evaluated by integrating it with real-world operational needs across NASA, along with the associated maturity that would come with that. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Science Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and the Earth's Radiant Energy System (CERES) project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Felicia Angelica; Waymire, Russell L.
2013-10-01
Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.
Kazakis, Georgios; Kanellopoulos, Ioannis; Sotiropoulos, Stefanos; Lagaros, Nikos D
2017-10-01
The construction industry has a major impact on the environment in which we spend most of our lives. Therefore, it is important that the outcome of architectural intuition performs well and complies with the design requirements. Architects usually describe as "optimal design" their choice among a rather limited set of design alternatives, dictated by their experience and intuition. However, modern design of structures requires accounting for a great number of criteria derived from multiple disciplines, often of conflicting nature. Such criteria derive from structural engineering, eco-design, bioclimatic and acoustic performance. The resulting vast number of alternatives enhances the need for computer-aided architecture in order to increase the possibility of arriving at a more preferable solution. Therefore, the incorporation of smart, automatic tools in the design process, able to further guide the designer's intuition, becomes even more indispensable. The principal aim of this study is to present possibilities to integrate automatic computational techniques related to topology optimization in the intuition phase of civil structures as part of computer aided architectural design. In this direction, different aspects of a new computer aided architectural era related to the interpretation of the optimized designs, difficulties resulting from the increased computational effort, and 3D printing capabilities are covered herein.
RELAP-7 Code Assessment Plan and Requirement Traceability Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.
2016-10-01
RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods and physical models over the last decades. Recently, INL has also been putting effort into establishing the code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements, etc. To this end, we first survey the literature (i.e., international/domestic reports, research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently-developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7 which can be used to systematically evaluate and identify the code development process and its present capability.
Carasik, Lane B.; Shaver, Dillon R.; Haefner, Jonah B.; ...
2017-08-21
We report that the development of molten salt cooled reactors (MSRs) and fluoride-salt cooled high temperature reactors (FHRs) requires the use of advanced design tools for the primary heat exchanger design. Due to geometric and flow characteristics, compact (pitch to diameter ratios equal to or less than 1.25) heat exchangers with a crossflow flow arrangement can become desirable for these reactors. Unfortunately, the available experimental data is limited for compact tube bundles or banks in crossflow. Computational Fluid Dynamics can be used to alleviate the lack of experimental data in these tube banks. Previous computational efforts have been primarily focused on large S/D ratios (larger than 1.4) using unsteady Reynolds averaged Navier-Stokes and Large Eddy Simulation frameworks. These approaches are useful, but have large computational requirements that make comprehensive design studies impractical. A CFD study was conducted with steady RANS in an effort to provide a starting point for future design work. The study was performed for an in-line tube bank geometry with FLiBe (LiF-BeF2), a frequently selected molten salt, as the working fluid. Based on the estimated pressure drops, the pressure and velocity distributions in the domain, an appropriate meshing strategy was determined and presented. Periodic boundaries in the spanwise direction, transverse to the flow, were determined to be an appropriate boundary condition for reduced computational domains. The domain size was investigated and a minimum of two flow channels per domain is recommended to ensure the behavior is accounted for. Finally, the standard low Re κ-ε (Lien) turbulence model was determined to be the most appropriate for steady RANS of this case at the time of writing.
NASA Astrophysics Data System (ADS)
Rueda, Antonio J.; Noguera, José M.; Luque, Adrián
2016-02-01
In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code virtually only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We advance that although OpenACC cannot match the performance of a CUDA-optimized implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.
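The abstract references the classic D8 algorithm of O'Callaghan and Mark; for orientation, here is a short serial reference sketch of the flow-accumulation step that the OpenACC and CUDA versions accelerate. It is a hypothetical baseline, not the authors' code, and the tiny test DEM is invented.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Serial reference sketch of D8 flow accumulation (O'Callaghan & Mark).

    Each cell drains to its lowest of the 8 neighbors; cells are visited from
    highest to lowest elevation so every upstream contribution is already
    accumulated before it is passed downstream.
    """
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=np.int64)     # every cell contributes itself
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    order = np.argsort(dem, axis=None)[::-1]    # highest elevation first
    for flat in order:
        r, c = divmod(flat, cols)
        best, target = dem[r, c], None
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dem[nr, nc] < best:
                best, target = dem[nr, nc], (nr, nc)
        if target is not None:                  # pits and flats drain nowhere here
            acc[target] += acc[r, c]
    return acc

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(d8_flow_accumulation(dem))
```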
Computational modeling of brain tumors: discrete, continuum or hybrid?
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Deisboeck, Thomas S.
2008-04-01
In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.
Ontology-based tools to expedite predictive model construction.
Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey
2014-01-01
Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.
Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center
NASA Astrophysics Data System (ADS)
Berger, Thomas
2016-07-01
The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations and with the validation activities required. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost needed to transition it and to run it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop the operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve over time the operational capabilities. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.
Image-based ranging and guidance for rotorcraft
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
This report documents the research carried out under NASA Cooperative Agreement No. NCC2-575 during the period Oct. 1988 - Dec. 1991. The primary emphasis of this effort was on the development of vision-based navigation methods for the rotorcraft nap-of-the-earth flight regime. A family of field-based ranging algorithms was developed during this research period. These ranging schemes are capable of handling both stereo and motion image sequences, and permit both translational and rotational camera motion. The algorithms require minimal computational effort and appear to be implementable in real time. A series of papers were presented on these ranging schemes, some of which are included in this report. A small part of the research effort was expended on synthesizing a rotorcraft guidance law that directly uses the vision-based ranging data. This work is discussed in the last section.
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
NASA Astrophysics Data System (ADS)
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
HGML: a hypertext guideline markup language.
Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.
2000-01-01
Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898
NASA's Climate Data Services Initiative
NASA Astrophysics Data System (ADS)
McInerney, M.; Duffy, D.; Schnase, J. L.; Webster, W. P.
2013-12-01
Our understanding of the Earth's processes is based on a combination of observational data records and mathematical models. The size of NASA's space-based observational data sets is growing dramatically as new missions come online. However, a potentially bigger data challenge is posed by the work of climate scientists, whose models are regularly producing data sets of hundreds of terabytes or more. It is important to understand that the 'Big Data' challenge of climate science cannot be solved with a single technological approach or an ad hoc assemblage of technologies. It will require a multi-faceted, well-integrated suite of capabilities that include cloud computing, large-scale compute-storage systems, high-performance analytics, scalable data management, and advanced deployment mechanisms in addition to the existing, well-established array of mature information technologies. It will also require a coherent organizational effort that is able to focus on the specific and sometimes unique requirements of climate science. Given that it is the knowledge that is gained from data that is of ultimate benefit to society, data publication and data analytics will play a particularly important role. In an effort to accelerate scientific discovery and innovation through broader use of climate data, NASA Goddard Space Flight Center's Office of Computational and Information Sciences and Technology has embarked on a determined effort to build a comprehensive, integrated data publication and analysis capability for climate science. The Climate Data Services (CDS) Initiative integrates people, expertise, and technology into a highly-focused, next-generation, one-stop climate science information service. The CDS Initiative is providing the organizational framework, processes, and protocols needed to deploy existing information technologies quickly using a combination of enterprise-level services and an expanding array of cloud services. Crucial to its effectiveness, the CDS Initiative is developing the technical expertise to move new information technologies from R&D into operational use. This combination enables full, end-to-end support for climate data publishing and data analytics, and affords the flexibility required to meet future and unanticipated needs. Current science efforts being supported by the CDS Initiative include IPCC, OBS4MIP, ANA4MIPS, MERRA II, National Climate Assessment, the Ocean Data Assimilation project, NASA Earth Exchange (NEX), and the RECOVER Burned Area Emergency Response decision support system. Service offerings include an integrated suite of classic technologies (FTP, LAS, THREDDS, ESGF, GRaD-DODS, OPeNDAP, WMS, ArcGIS Server), emerging technologies (iRODS, UVCDAT), and advanced technologies (MERRA Analytic Services, MapReduce, Ontology Services, and the CDS API). This poster will describe the CDS Initiative, provide details about the Initiative's advanced offerings, and lay out the CDS Initiative's deployment roadmap.
Using Computational Toxicology to Enable Risk-Based ...
Slide presentation at Drug Safety Gordon Research Conference 2016 on research efforts in NCCT to enable Computational Toxicology to support risk assessment.
EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark Daniel
This SAND report provides the technical progress through October 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and include significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and require coupling to ever increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org. Acknowledgment: We want to gratefully acknowledge the contributions of the GTL Project Team as follows: Grant S. Heffelfinger1*, Anthony Martino2, Andrey Gorin3, Ying Xu10,3, Mark D. Rintoul1, Al Geist3, Matthew Ennis1, Hashimi Al-Hashimi8, Nikita Arnold3, Andrei Borziak3, Bianca Brahamsha6, Andrea Belgrano12, Praveen Chandramohan3, Xin Chen9, Pan Chongle3, Paul Crozier1, PguongAn Dam10, George S. Davidson1, Robert Day3, Jean Loup Faulon2, Damian Gessler12, Arlene Gonzalez2, David Haaland1, William Hart1, Victor Havin3, Tao Jiang9, Howland Jones1, David Jung3, Ramya Krishnamurthy3, Yooli Light2, Shawn Martin1, Rajesh Munavalli3, Vijaya Natarajan3, Victor Olman10, Frank Olken4, Brian Palenik6, Byung Park3, Steven Plimpton1, Diana Roe2, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Chris Stork1, Charlie Strauss5, Zhengchang Su10, Edward Thomas1, Jerilyn A. Timlin1, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Gong-Xin Yu3, Grover Yip8, Zhaoduo Zhang2, Erik Zuiderweg8 *Author to whom correspondence should be addressed (gsheffe@sandia.gov) 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Genomes to Life Project Quarterly Report April 2005.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark Daniel
2006-02-01
This SAND report provides the technical progress through April 2005 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomics:GTL Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and include significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and require coupling to ever increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project can be found at www.genomes-to-life.org. Acknowledgment: We want to gratefully acknowledge the contributions of: Grant Heffelfinger1*, Anthony Martino2, Brian Palenik6, Andrey Gorin3, Ying Xu10,3, Mark Daniel Rintoul1, Al Geist3, Matthew Ennis1, with Pratul Agrawal3, Hashim Al-Hashimi8, Andrea Belgrano12, Mike Brown1, Xin Chen9, Paul Crozier1, PguongAn Dam10, Jean-Loup Faulon2, Damian Gessler12, David Haaland1, Victor Havin4, C.F. Huang5, Tao Jiang9, Howland Jones1, David Jung3, Katherine Kang14, Michael Langston15, Shawn Martin1, Shawn Means1, Vijaya Natarajan4, Roy Nielson5, Frank Olken4, Victor Olman10, Ian Paulsen14, Steve Plimpton1, Andreas Reichsteiner5, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Charlie Strauss5, Zhengchang Su10, Ed Thomas1, Jerilyn Timlin1, Wim Vermaas13, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Grover Yip8, Erik Zuiderweg8 *Author to whom correspondence should be addressed (gsheffe@sandia.gov) 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM 13. Arizona State University 14. The Institute for Genomic Research 15. University of Tennessee. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals
NASA Technical Reports Server (NTRS)
Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.
1994-01-01
Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids, and control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinate (BFC) systems, high-order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, a numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.
Computational crystallization.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H
2016-07-15
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed.
Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2017-10-01
Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.
NASA Technical Reports Server (NTRS)
Easton, John W.; Struk, Peter M.; Rotella, Anthony
2008-01-01
As a part of efforts to develop an electronics repair capability for long duration space missions, techniques and materials for soldering components on a circuit board in reduced gravity must be developed. This paper presents results from testing solder joint formation in low gravity on a NASA Reduced Gravity Research Aircraft. The results presented include joints formed using eutectic tin-lead solder and one of the following fluxes: (1) a no-clean flux core, (2) a rosin flux core, and (3) a solid solder wire with external liquid no-clean flux. The solder joints are analyzed with a computed tomography (CT) technique which imaged the interior of the entire solder joint. This replaced an earlier technique that required the solder joint to be destructively ground down revealing a single plane which was subsequently analyzed. The CT analysis technique is described and results presented with implications for future testing as well as implications for the overall electronics repair effort discussed.
Automated Performance Prediction of Message-Passing Parallel Programs
NASA Technical Reports Server (NTRS)
Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The NIK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
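As a rough illustration of the kind of closed-form execution-time model such tools derive, the sketch below combines a compute term that divides ideally across p processors with latency and bandwidth terms for communication. The cost form and every parameter value are invented for this example; they are not outputs of the toolkit described above.

```python
# Toy analytic model of message-passing execution time (illustrative only).
def predicted_time(p, work=1.0e9, rate=1.0e8, n_msgs=100,
                   latency=50e-6, volume=1.0e6, bandwidth=100e6):
    """Estimate execution time on p processors.

    work      -- total floating-point operations
    rate      -- per-processor compute rate (flop/s)
    n_msgs    -- messages sent per processor per run
    latency   -- per-message startup cost (s)
    volume    -- bytes communicated per processor
    bandwidth -- per-processor network bandwidth (bytes/s)
    """
    compute = work / (rate * p)                      # ideally divided work
    communicate = n_msgs * latency + volume / bandwidth
    return compute + communicate

if __name__ == "__main__":
    t1 = predicted_time(1)
    for p in (1, 4, 16, 64, 256):
        t = predicted_time(p)
        print(f"p={p:4d}  time={t:8.4f} s  speedup={t1 / t:6.1f}")
```

Scalability metrics such as parallel efficiency follow directly by evaluating an expression of this form over a range of processor counts.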
Irregular Applications: Architectures & Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feo, John T.; Villa, Oreste; Tumeo, Antonino
Irregular applications are characterized by irregular data structures, control, and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists, and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
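A minimal sketch of the idea behind a variable-time-step QSTS loop is shown below: take long steps while bus voltages barely move, and re-solve an interval at 1-second resolution when they change appreciably. The step-control rule, thresholds, and the fake power-flow function are assumptions made for illustration and are not the solver described in the paper.

```python
# Illustrative variable-time-step quasi-static time-series (QSTS) driver.
def run_qsts(solve_powerflow, t_end, fine_step=1, coarse_step=300, tol=0.005):
    """solve_powerflow(t) -> dict of bus voltages (p.u.) at time t (seconds).

    Coarse steps are kept while no voltage changes by more than `tol` p.u.;
    otherwise the interval is re-solved at the fine resolution.
    Assumes t_end and coarse_step are multiples of fine_step.
    """
    t = 0
    prev = solve_powerflow(t)
    results = [(t, prev)]
    while t < t_end:
        trial = solve_powerflow(t + coarse_step)
        if max(abs(trial[b] - prev[b]) for b in trial) > tol:
            # Significant change: walk this interval at the fine step.
            for tf in range(t + fine_step, t + coarse_step + 1, fine_step):
                prev = solve_powerflow(tf)
                results.append((tf, prev))
        else:
            prev = trial
            results.append((t + coarse_step, prev))
        t += coarse_step
    return results

if __name__ == "__main__":
    import math
    # Fake single-bus power flow: a short voltage dip around noon.
    def fake_solver(t):
        return {"bus1": 1.0 - 0.05 * math.exp(-((t - 43200) / 600.0) ** 2)}
    out = run_qsts(fake_solver, t_end=86400)
    print(f"{len(out)} solutions stored for one simulated day")
```

The saving comes from spending fine steps only around events such as the dip; a fixed 1-second solver would require 86,400 power-flow solutions for the same simulated day.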
NASA Astrophysics Data System (ADS)
Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.
2003-10-01
We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. ** Also University of Colorado.
Neutron skyshine calculations for the PDX tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, F.J.; Nigg, D.W.
1979-01-01
The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce air scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gharibyan, N.
In order to fully characterize the NIF neutron spectrum, the SAND-II-SNL software was requested and received from the Radiation Safety Information Computational Center. The software is designed to determine the neutron energy spectrum through analysis of experimental activation data. However, given that the source code was developed on a Sparcstation 10, it is not compatible with current versions of FORTRAN. Accounts have been established through the Lawrence Livermore National Laboratory’s High Performance Computing resources in order to access different FORTRAN compilers (e.g., pgf77, pgf90). Additionally, several of the subroutines included in the SAND-II-SNL package have required debugging efforts to allow for proper compiling of the code.
Nonlinear Unsteady Aerodynamic Modeling Using Wind Tunnel and Computational Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.
2016-01-01
Extensions to conventional aircraft aerodynamic models are required to adequately predict responses when nonlinear unsteady flight regimes are encountered, especially at high incidence angles and under maneuvering conditions. For a number of reasons, such as loss of control, both military and civilian aircraft may extend beyond normal and benign aerodynamic flight conditions. In addition, military applications may require controlled flight beyond the normal envelope, and civilian flight may require adequate recovery or prevention methods from these adverse conditions. These requirements have led to the development of more general aerodynamic modeling methods and provided impetus for researchers to improve both techniques and the degree of collaboration between analytical and experimental research efforts. In addition to more general mathematical model structures, dynamic test methods have been designed to provide sufficient information to allow model identification. This paper summarizes research to develop a modeling methodology appropriate for modeling aircraft aerodynamics that include nonlinear unsteady behaviors using both experimental and computational test methods. This work was done at Langley Research Center, primarily under the NASA Aviation Safety Program, to address aircraft loss of control, prevention, and recovery aerodynamics.
NASA Technical Reports Server (NTRS)
Renaud, John E.; Batill, Stephen M.; Brockman, Jay B.
1998-01-01
This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. Three Principal Investigators, Drs. Renaud, Brockman and Batill, directed this effort. During the four and a half year grant period, six Aerospace and Mechanical Engineering Ph.D. students and one Masters student received full or partial support, while four Computer Science and Engineering Ph.D. students and one Masters student were supported. During each of the summers up to four undergraduate students were involved in related research activities. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving and exchange of information between these individual experts is central to the design task and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs) - resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within an MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements. The methods developed as part of this research effort include: extensions to a sensitivity based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudo-inverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than that of previously described facet-searching methods which increase in proportion to the square of the number of controls.
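For context, the pseudo-inverse and cascaded generalized inverse baselines mentioned above can be sketched in a few lines of numpy. This is a generic illustration of those baselines under an assumed effectiveness matrix and assumed limits, not the patented method itself.

```python
import numpy as np

def cgi_allocation(B, d, u_min, u_max):
    """Cascaded generalized-inverse style allocation: solve B @ u ~= d with the
    pseudo-inverse, freeze any effector that hits a limit, and re-solve for the
    remaining effectors against the residual command."""
    n = B.shape[1]
    u = np.zeros(n)
    free = np.ones(n, dtype=bool)
    residual = np.asarray(d, dtype=float)
    for _ in range(n):
        u[free] = np.linalg.pinv(B[:, free]) @ residual
        clipped = np.clip(u, u_min, u_max)
        saturated = free & (clipped != u)
        u = clipped
        if not saturated.any():
            break
        frozen = ~free | saturated
        residual = d - B[:, frozen] @ u[frozen]   # command left for free effectors
        free &= ~saturated
        if not free.any():
            break
    return u

# Example: 3 objectives, 5 effectors with unit deflection limits (made up).
B = np.array([[1.0, 0.5, -0.5, 0.2, 0.0],
              [0.0, 1.0,  1.0, 0.1, 0.3],
              [0.2, 0.0,  0.4, 1.0, 0.8]])
d = np.array([0.8, -0.6, 0.5])
u = cgi_allocation(B, d, u_min=-np.ones(5), u_max=np.ones(5))
print("u =", u.round(3), " achieved =", (B @ u).round(3))
```

The minimum-norm pseudo-inverse alone corresponds to stopping after the first pass; per the abstract, the disclosed method yields far smaller errors relative to the true optimum than either of these baselines.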
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrates the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is much needed. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline HTSFinder (high-throughput signature finder) with a corresponding k-mer generator GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. A considerable number of detected unique and common DNA signatures of the target database bring opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
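A minimal serial illustration of the underlying k-mer bookkeeping follows; HTSFinder distributes this work with Hadoop/MapReduce over whole genome databases, whereas the function names, the value of k, and the toy sequences below are invented for the example.

```python
def kmers(seq, k):
    """Yield all overlapping k-mers of a DNA sequence."""
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def unique_signatures(target_seqs, nontarget_seqs, k=18):
    """Return k-mers shared by every target sequence and absent from all
    non-target sequences: a toy stand-in for 'unique DNA signatures'."""
    common = None
    for seq in target_seqs:
        ks = set(kmers(seq, k))
        common = ks if common is None else common & ks
    background = set()
    for seq in nontarget_seqs:
        background |= set(kmers(seq, k))
    return (common or set()) - background

# Tiny made-up example (real use would stream genomes from disk).
targets = ["ACGTACGTACGTTTACGGATCCATG", "ACGTACGTACGTTTACGGATCAATG"]
others = ["TTTTACGGATCCATGACGTACGAAA"]
print(sorted(unique_signatures(targets, others, k=8)))
```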
Thrombosis in Cerebral Aneurysms and the Computational Modeling Thereof: A Review
Ngoepe, Malebogo N.; Frangi, Alejandro F.; Byrne, James V.; Ventikos, Yiannis
2018-01-01
Thrombosis is a condition closely related to cerebral aneurysms and controlled thrombosis is the main purpose of endovascular embolization treatment. The mechanisms governing thrombus initiation and evolution in cerebral aneurysms have not been fully elucidated and this presents challenges for interventional planning. Significant effort has been directed towards developing computational methods aimed at streamlining the interventional planning process for unruptured cerebral aneurysm treatment. Included in these methods are computational models of thrombus development following endovascular device placement. The main challenge with developing computational models for thrombosis in disease cases is that there exists a wide body of literature that addresses various aspects of the clotting process, but it may not be obvious what information is of direct consequence for what modeling purpose (e.g., for understanding the effect of endovascular therapies). The aim of this review is to present the information so it will be of benefit to the community attempting to model cerebral aneurysm thrombosis for interventional planning purposes, in a simplified yet appropriate manner. The paper begins by explaining current understanding of physiological coagulation and highlights the documented distinctions between the physiological process and cerebral aneurysm thrombosis. Clinical observations of thrombosis following endovascular device placement are then presented. This is followed by a section detailing the demands placed on computational models developed for interventional planning. Finally, existing computational models of thrombosis are presented. This last section begins with description and discussion of physiological computational clotting models, as they are of immense value in understanding how to construct a general computational model of clotting. This is then followed by a review of computational models of clotting in cerebral aneurysms, specifically. Even though some progress has been made towards computational predictions of thrombosis following device placement in cerebral aneurysms, many gaps still remain. Answering the key questions will require the combined efforts of the clinical, experimental and computational communities. PMID:29670533
Multi-level Hierarchical Poly Tree computer architectures
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug
1990-01-01
Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.
A Roadmap for HEP Software and Computing R&D for the 2020s
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, Antonio Augusto, Jr; et al.
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
History of the numerical aerodynamic simulation program
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Ballhaus, William F., Jr.
1987-01-01
The Numerical Aerodynamic Simulation (NAS) program has reached a milestone with the completion of the initial operating configuration of the NAS Processing System Network. This achievement is the first major milestone in the continuing effort to provide a state-of-the-art supercomputer facility for the national aerospace community and to serve as a pathfinder for the development and use of future supercomputer systems. The underlying factors that motivated the initiation of the program are first identified and then discussed. These include the emergence and evolution of computational aerodynamics as a powerful new capability in aerodynamics research and development, the computer power required for advances in the discipline, the complementary nature of computation and wind tunnel testing, and the need for the government to play a pathfinding role in the development and use of large-scale scientific computing systems. Finally, the history of the NAS program is traced from its inception in 1975 to the present time.
Detonation product EOS studies: Using ISLS to refine CHEETAH
NASA Astrophysics Data System (ADS)
Zaug, Joseph; Fried, Larry; Hansen, Donald
2001-06-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Tu, Samson W; Hrabak, Karen M; Campbell, James R; Glasgow, Julie; Nyman, Mark A; McClure, Robert; McClay, James; Abarbanel, Robert; Mansfield, James G; Martins, Susana M; Goldstein, Mary K; Musen, Mark A
2006-01-01
Developing computer-interpretable clinical practice guidelines (CPGs) to provide decision support for guideline-based care is an extremely labor-intensive task. In the EON/ATHENA and SAGE projects, we formulated substantial portions of CPGs as computable statements that express declarative relationships between patient conditions and possible interventions. We developed query and expression languages that allow a decision-support system (DSS) to evaluate these statements in specific patient situations. A DSS can use these guideline statements in multiple ways, including: (1) as inputs for determining preferred alternatives in decision-making, and (2) as a way to provide targeted commentaries in the clinical information system. The use of these declarative statements significantly reduces the modeling expertise and effort required to create and maintain computer-interpretable knowledge bases for decision-support purposes. We discuss possible implications for sharing of such knowledge bases.
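The flavor of such declarative statements can be conveyed with a toy rule set in which each statement pairs a patient-condition predicate with a possible intervention. This is a generic sketch; it is not the EON/ATHENA or SAGE expression languages, and the thresholds and drug classes are illustrative only.

```python
# Toy declarative guideline statements evaluated against a patient record.
RULES = [
    {"when": lambda p: p["systolic_bp"] >= 140 and not p["on_ace_inhibitor"],
     "suggest": "Consider starting an ACE inhibitor",
     "source": "illustrative hypertension rule"},
    {"when": lambda p: p["ldl"] > 100 and p["has_diabetes"],
     "suggest": "Consider statin therapy",
     "source": "illustrative lipid-management rule"},
]

def recommendations(patient):
    """Return the suggestions whose conditions hold for this patient."""
    return [(r["suggest"], r["source"]) for r in RULES if r["when"](patient)]

patient = {"systolic_bp": 152, "on_ace_inhibitor": False,
           "ldl": 96, "has_diabetes": True}
for advice, source in recommendations(patient):
    print(f"{advice}  [{source}]")
```

A decision-support system can surface such suggestions either as ranked alternatives or as targeted commentaries attached to the relevant fields of the clinical information system, which is the dual use described above.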
Exploring Effective Decision Making through Human-Centered and Computational Intelligence Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Kyungsik; Cook, Kristin A.; Shih, Patrick C.
Decision-making has long been studied to understand the psychological, cognitive, and social process of selecting an effective choice from alternative options. Such studies have been extended from the personal level to the group and collaborative level, and many computer-aided decision-making systems have been developed to help people make sound decisions. There has been significant research growth in computational aspects of decision-making systems, yet comparatively little effort has gone into identifying and articulating user needs and requirements in assessing system outputs and the extent to which human judgments could be utilized for making accurate and reliable decisions. Our research focus is decision-making through human-centered and computational intelligence methods in a collaborative environment, and the objectives of this position paper are to bring our research ideas to the workshop and to share and discuss them.
González-Nilo, Fernando; Pérez-Acle, Tomás; Guínez-Molinos, Sergio; Geraldo, Daniela A; Sandoval, Claudia; Yévenes, Alejandro; Santos, Leonardo S; Laurie, V Felipe; Mendoza, Hegaly; Cachau, Raúl E
2011-01-01
After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area as seen from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale level implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnologies has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storage, processing and integrating physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promises in this convergent field. In this work, we will review some applications of nanobiotechnology to life sciences in generating new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.
Science and Observation Recommendations for Future NASA Carbon Cycle Research
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Collatz, G. J.; Kawa, S. R.; Gregg, W. W.; Gervin, J. C.; Abshire, J. B.; Andrews, A. E.; Behrenfeld, M. J.; Demaio, L. D.; Knox, R. G.
2002-01-01
Between October 2000 and June 2001, an Agency-wide planning effort was organized by elements of NASA Goddard Space Flight Center (GSFC) to define future research and technology development activities. This planning effort was conducted at the request of the Associate Administrator of the Office of Earth Science (Code Y), Dr. Ghassem Asrar, at NASA Headquarters (HQ). The primary points of contact were Dr. Mary Cleave, Deputy Associate Administrator for Advanced Planning at NASA HQ, and Dr. Charles McClain of the Office of Global Carbon Studies (Code 970.2) at GSFC. During this period, GSFC hosted three workshops to define the science requirements and objectives, the observational and modeling requirements to meet the science objectives, the technology development requirements, and a cost plan for both the science program and new flight projects that will be needed for new observations beyond the present or currently planned. The plan definition process was very intensive as HQ required the final presentation package by mid-June 2001. This deadline was met and the recommendations were ultimately refined and folded into a broader program plan, which also included climate modeling, aerosol observations, and science computing technology development, for contributing to the President's Climate Change Research Initiative. This technical memorandum outlines the process and recommendations made for cross-cutting carbon cycle research as presented in June. A separate NASA document outlines the budget profiles or cost analyses conducted as part of the planning effort.
Modeling and Analysis of Power Processing Systems (MAPPS), initial phase 2
NASA Technical Reports Server (NTRS)
Yu, Y.; Lee, F. C.; Wangenheim, H.; Warren, D.
1977-01-01
The overall objective of the program is to provide the engineering tools to reduce the analysis, design, and development effort, and thus the cost, in achieving the required performances for switching regulators and dc-dc converter systems. The program was both tutorial and application oriented. Various analytical methods were described in detail and supplemented with examples, and those with standardization appeals were reduced into computer-based subprograms. Major program efforts included those concerning small and large signal control-dependent performance analysis and simulation, control circuit design, power circuit design and optimization, system configuration study, and system performance simulation. Techniques including discrete time domain, conventional frequency domain, Lagrange multiplier, nonlinear programming, and control design synthesis were employed in these efforts. To enhance interactive conversation between the modeling and analysis subprograms and the user, a working prototype of the Data Management Program was also developed to facilitate expansion as future subprogram capabilities increase.
Multicore job scheduling in the Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.
2015-12-01
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originated from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.
The SGI/CRAY T3E: Experiences and Insights
NASA Technical Reports Server (NTRS)
Bernard, Lisa Hamet
1999-01-01
The focus of the HPCC Earth and Space Sciences (ESS) Project is capability computing - pushing highly scalable computing testbeds to their performance limits. The drivers of this focus are the Grand Challenge problems in Earth and space science: those that could not be addressed in a capacity computing environment where large jobs must continually compete for resources. These Grand Challenge codes require a high degree of communication, large memory, and very large I/O (throughout the duration of the processing, not just in loading initial conditions and saving final results). This set of parameters led to the selection of an SGI/Cray T3E as the current ESS Computing Testbed. The T3E at the Goddard Space Flight Center is a unique computational resource within NASA. As such, it must be managed to effectively support the diverse research efforts across the NASA research community yet still enable the ESS Grand Challenge Investigator teams to achieve their performance milestones, for which the system was intended. To date, all Grand Challenge Investigator teams have achieved the 10 GFLOPS milestone, eight of nine have achieved the 50 GFLOPS milestone, and three have achieved the 100 GFLOPS milestone. In addition, many technical papers have been published highlighting results achieved on the NASA T3E, including some at this Workshop. The successes enabled by the NASA T3E computing environment are best illustrated by the 512 PE upgrade funded by the NASA Earth Science Enterprise earlier this year. Never before has an HPCC computing testbed been so well received by the general NASA science community that it was deemed critical to the success of a core NASA science effort. NASA looks forward to many more success stories before the conclusion of the NASA-SGI/Cray cooperative agreement in June 1999.
Latest Sensors and Data Acquisition Development Efforts at KSC
NASA Technical Reports Server (NTRS)
Perotti, Jose M.
2002-01-01
This viewgraph presentation summarizes the characteristics required on sensors by consumers desiring access to space, a long term plan developed at KSC (Kennedy Space Center) to identify promising technologies for NASA's own future sensor needs, and the characteristics of several smart sensors already developed. Also addressed are the computer hardware and architecture used to operate sensors, and generic testing capabilities. Consumers desire sensors which are lightweight, inexpensive, intelligent, and easy to use.
Approximate Confidence Limit Procedures for Complex Systems
1991-09-01
Master's thesis by Kah-Chee Yee, submitted in partial fulfillment of the requirements for the degree of Master of Science in Operations Research, Naval Postgraduate School, September 1991 (thesis advisor: W. M. Woods; second reader: R. R. Read). Readers are cautioned that the computer programs developed in this research may not have been exercised for all cases of interest, although every effort has been made to verify them.
Hints for an extension of the early exercise premium formula for American options
NASA Astrophysics Data System (ADS)
Bermin, Hans-Peter; Kohatsu-Higa, Arturo; Perelló, Josep
2005-09-01
There exists a non-closed formula for the American put option price and non-trivial computations are required to solve it. Strong efforts have been made to propose efficient numerical techniques but few have strong mathematical reasoning to ascertain why they work well. We present an extension of the American put price aiming to catch weaknesses of the numerical methods based on their non-fulfillment of the smooth pasting condition.
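For reference, one common statement of the non-closed formula alluded to above is the early exercise premium representation of Kim, Jacka, and Carr, Jarrow and Myneni. Under standard Black-Scholes assumptions with no dividends it reads:

```latex
P_{\mathrm{Am}}(S,t) = p_{\mathrm{Eu}}(S,t)
  + \int_t^T r K e^{-r(u-t)}\,\Phi\!\bigl(-d_2(S, B_u, u-t)\bigr)\,du,
\qquad
d_2(S,B,\tau) = \frac{\ln(S/B) + \bigl(r - \tfrac{1}{2}\sigma^2\bigr)\tau}{\sigma\sqrt{\tau}},
```

where p_Eu is the European put, Phi the standard normal distribution function, K the strike, and B_u the a priori unknown optimal exercise boundary. The smooth pasting condition referred to in the abstract requires the slope of P_Am in S to equal -1 at S = B_t, and it is the numerical non-fulfillment of this condition that the proposed extension is intended to expose.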
Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method
NASA Astrophysics Data System (ADS)
Adam, Gh.; Adam, S.
2001-04-01
The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised to the theoretical 100% rate of success, under error estimate sharpening, provided a number of natural validating conditions are imposed. The self-validating scheme of the local error estimates, which is easy to implement and adds little supplementary computing effort, strengthens considerably the correctness of the decisions within the automatic adaptive quadrature.
The NASA aircraft icing research program
NASA Technical Reports Server (NTRS)
Shaw, Robert J.; Reinmann, John J.
1990-01-01
The objective of the NASA aircraft icing research program is to develop and make available to industry icing technology to support the needs and requirements for all-weather aircraft designs. Research is being done for both fixed wing and rotary wing applications. The NASA program emphasizes technology development in two areas, advanced ice protection concepts and icing simulation. Reviewed here are the computer code development/validation, icing wind tunnel testing, and icing flight testing efforts.
Improving Mobile Infrastructure for Pervasive Personal Computing
2007-11-01
Master's thesis by Ajay Surie (copyright 2007), submitted in partial fulfillment of the requirements for the degree of Master of Science. The research was supported by the National Science Foundation (NSF) under grant number CNS-0509004 and by the Army Research Office (ARO) through grant number DAAD19-02-1-0389 (“Perpetually Available and Secure...”).
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
S-Cube: Enabling the Next Generation of Software Services
NASA Astrophysics Data System (ADS)
Metzger, Andreas; Pohl, Klaus
The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
NASA Astrophysics Data System (ADS)
Zerkle, Ronald D.; Prakash, Chander
1995-03-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
NASA Astrophysics Data System (ADS)
Park, Jin-Young; Lee, Dong-Eun; Kim, Byung-Soo
2017-10-01
Due to the increasing concern about climate change, efforts to reduce environmental load are continuously being made in the construction industry, and LCA (life cycle assessment) is presented as an effective method to assess environmental load. Because LCA requires information on the construction quantities used for environmental load estimation, however, it is rarely utilized in environmental reviews during the early design phase, where such information is difficult to obtain. In this study, a construction-quantity computation system based on the standard cross section of road drainage facilities was developed so that the quantities required for LCA can be computed using only information available in the early design phase, and a model that performs environmental load estimation from these quantities was developed and verified. The results showed that the model is effective for use in the early design phase, with a 13.39% mean absolute error rate.
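As a schematic of the estimation step, the sketch below multiplies each computed construction quantity by a unit emission factor and sums the result. The quantity formulas, units, and emission factors are placeholders invented for the example; they are not values from the study.

```python
# Schematic early-design LCA estimate: CO2-equivalent load from construction
# quantities derived from a standard cross section. All numbers are placeholders.
EMISSION_FACTORS = {          # kg CO2-eq per unit quantity (illustrative)
    "concrete_m3": 250.0,
    "rebar_ton": 850.0,
    "excavation_m3": 3.0,
}

def drainage_quantities(length_m, section_area_m2=0.35,
                        rebar_ratio_ton_per_m3=0.08, excavation_factor=1.6):
    """Rough quantities for a drainage channel from a standard cross section."""
    concrete = section_area_m2 * length_m
    return {
        "concrete_m3": concrete,
        "rebar_ton": concrete * rebar_ratio_ton_per_m3,
        "excavation_m3": concrete * excavation_factor,
    }

def environmental_load(quantities):
    """Sum quantity * emission factor over all items."""
    return sum(EMISSION_FACTORS[k] * v for k, v in quantities.items())

q = drainage_quantities(length_m=1200.0)
print(f"Estimated load: {environmental_load(q):,.0f} kg CO2-eq")
```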
Resource Balancing Control Allocation
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc
2010-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
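A minimal sketch of the min-max allocation as a linear program follows, using the construction described above: introduce a slack variable s that bounds every deflection scaled by its limit and minimize s subject to the control effectiveness equation. The effectiveness matrix, limits, and command are made-up numbers, and the sketch assumes the commanded vector is attainable so that the tracking-error part of the paper's algorithm is not exercised.

```python
import numpy as np
from scipy.optimize import linprog

def minmax_allocation(B, d, u_max):
    """Solve  min s  subject to  B u = d  and  |u_i| <= s * u_max[i].

    Returns the effector vector u attaining the command d while minimizing the
    largest deflection expressed as a fraction of its limit.
    """
    u_max = np.asarray(u_max, dtype=float)
    m, n = B.shape
    c = np.r_[np.zeros(n), 1.0]                       # minimize the slack s
    A_eq = np.c_[B, np.zeros(m)]                      # B u = d
    # |u_i| <= s * u_max[i] written as two one-sided inequalities.
    A_ub = np.r_[np.c_[ np.eye(n), -u_max.reshape(-1, 1)],
                 np.c_[-np.eye(n), -u_max.reshape(-1, 1)]]
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
                  bounds=bounds, method="highs")
    if not res.success:
        raise ValueError("command not attainable: " + res.message)
    return res.x[:n], res.x[-1]

# Example: 3 moment objectives, 5 effectors with differing limits (made up).
B = np.array([[1.0, 0.6, -0.4, 0.2, 0.0],
              [0.0, 1.0,  0.9, 0.1, 0.3],
              [0.3, 0.0,  0.5, 1.0, 0.7]])
d = np.array([0.6, -0.4, 0.5])
u_max = np.array([0.5, 1.0, 1.0, 0.8, 0.6])
u, s = minmax_allocation(B, d, u_max)
print("u =", u.round(3), " max scaled deflection =", round(s, 3))
```

A returned s greater than one would indicate that the command exceeds the collective capability of the effectors, which is roughly where the tracking-error minimization of the full algorithm becomes relevant.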
Proposed Directions for Research in Computer-Based Education.
ERIC Educational Resources Information Center
Waugh, Michael L.
Several directions for potential research efforts in the field of computer-based education (CBE) are discussed. (For the purposes of this paper, CBE is defined as any use of computers to promote learning with no intended inference as to the specific nature or organization of the educational application under discussion.) Efforts should be directed…
Lindberg, D A; Humphreys, B L
1995-01-01
The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116
Predictive Capability Maturity Model for computational modeling and simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorologic forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24 degree (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
Biomechanical Analyses of the Efficacy of Patterns of Maternal Effort on Second-Stage Progress
Lien, Kuo-Cheng; DeLancey, John O.L.; Ashton-Miller, James A.
2009-01-01
OBJECTIVE To develop and use a biomechanical computer model to simulate the effect of varying the timing of voluntary maternal pushes during uterine contraction on second-stage labor duration. METHODS Published initial pelvic floor geometry was imported into technical computing software to build a simplified 3-D biomechanical model with six representative viscoelastic levator muscle bands interconnected by a hyperelastic iliococcygeal raphé. An incompressible sphere simulated the molded fetal head. Forces from uterine contraction and voluntary expulsive efforts were summed to push the model fetal head along the Curve of Carus opposed by the resistance of the pelvic floor structures to stretch. Holding uterine maximal contraction force and push strength constant, pushes were timed before (“Pre”), at (“Peak”), and after (“Post”) maximal uterine contraction force. The effect of different combinations of pushes on second stage duration and the number of pushes required for delivery were evaluated. RESULTS Calculated second stage durations ranged from 57.5 minutes (“triple” or Pre-Peak-Post pattern) to 75.8 minutes (“pre-push” and “post-push” patterns). Delivery with the “triple push” pattern required 59 voluntary pushes, while the “peak push” pattern required 23 voluntary pushes, a 61% reduction. The corresponding reduction for the “pre-and-peak push” pattern was 29%, the “peak-and-post push” pattern was 30%, the “pre-push” pattern was 54%, and the “post-push” pattern was 56%. CONCLUSION Although the “triple push” pattern resulted in a 16% shorter second stage, this came at the energetic expense of a 61% increase in the number of pushes required. PMID:19305333
Finite Element Models for Electron Beam Freeform Fabrication Process
NASA Technical Reports Server (NTRS)
Chandra, Umesh
2012-01-01
Electron beam freeform fabrication (EBF3) is a member of an emerging class of direct manufacturing processes known as solid freeform fabrication (SFF); another member of the class is the laser deposition process. Successful application of the EBF3 process requires precise control of a number of process parameters such as the EB power, speed, and metal feed rate in order to ensure thermal management; good fusion between the substrate and the first layer and between successive layers; minimize part distortion and residual stresses; and control the microstructure of the finished product. This is the only effort thus far that has addressed computer simulation of the EBF3 process. The models developed in this effort can assist in reducing the number of trials in the laboratory or on the shop floor while making high-quality parts. With some modifications, their use can be further extended to the simulation of laser, TIG (tungsten inert gas), and other deposition processes. A solid mechanics-based finite element code, ABAQUS, was chosen as the primary engine in developing these models whereas a computational fluid dynamics (CFD) code, Fluent, was used in a support role. Several innovative concepts were developed, some of which are highlighted below. These concepts were implemented in a number of new computer models either in the form of stand-alone programs or as user subroutines for ABAQUS and Fluent codes. A database of thermo-physical, mechanical, fluid, and metallurgical properties of stainless steel 304 was developed. Computing models for Gaussian and raster modes of the electron beam heat input were developed. Also, new schemes were devised to account for the heat sink effect during the deposition process. These innovations, and others, lead to improved models for thermal management and prediction of transient/residual stresses and distortions. Two approaches for the prediction of microstructure were pursued. The first was an empirical approach involving the computation of thermal gradient, solidification rate, and velocity (G,R,V) coupled with the use of a solidification map that should be known a priori. The second approach relies completely on computer simulation. For this purpose a criterion for the prediction of morphology was proposed, which was combined with three alternative models for the prediction of microstructure; one based on solidification kinetics, the second on phase diagram, and the third on differential scanning calorimetry data. The last was found to be the simplest and the most versatile; it can be used with multicomponent alloys and rapid solidification without any additional difficulty. For the purpose of (limited) experimental validation, finite element models developed in this effort were applied to three different shapes made of stainless steel 304 material, designed expressly for this effort with an increasing level of complexity. These finite element models require large computation time, especially when applied to deposits with multiple adjacent beads and layers. This problem can be overcome, to some extent, by the use of fast, multi-core computers. Also, due to their numerical nature coupled with the fact that solid mechanics- based models are being used to represent the material behavior in liquid and vapor phases as well, the models have some inherent approximations that become more pronounced when dealing with multi-bead and multi-layer deposits.
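The Gaussian beam heat-input model mentioned above can be illustrated with a short sketch; the beam power, absorption efficiency, and distribution radius below are illustrative assumptions, not values taken from the report.

```python
# Hedged sketch: a Gaussian surface heat-flux distribution of the kind used to
# model the electron-beam heat input. Power, efficiency, and radius are
# illustrative assumptions, not values from the report.
import numpy as np

def gaussian_flux(x, y, x0, y0, power, efficiency, radius):
    """Heat flux q(x, y) [W/m^2] for a beam centered at (x0, y0).

    The flux integrates to efficiency * power over the whole plane.
    """
    sigma2 = radius ** 2
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return efficiency * power / (2.0 * np.pi * sigma2) * np.exp(-r2 / (2.0 * sigma2))

# Sample the flux on a small patch and check that it sums back to ~eta * P.
xs = np.linspace(-5e-3, 5e-3, 201)                      # 10 mm square patch
X, Y = np.meshgrid(xs, xs)
q = gaussian_flux(X, Y, 0.0, 0.0, power=1500.0, efficiency=0.9, radius=0.5e-3)
cell = (xs[1] - xs[0]) ** 2
print("absorbed power recovered from the grid [W]:", round(float(q.sum() * cell), 1))
```

In a finite element setting such a flux would be applied as a moving surface load (or a volumetric equivalent for deeper penetration), which is the role played by the heat-input user subroutines described above.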
Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 1: Overview and summary
NASA Technical Reports Server (NTRS)
1989-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned Marshall Space Flight Center (MSFC) Payload Training Complex (PTC) required to meet this need will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs. This study was performed August 1988 to October 1989. Thus, the results are based on the SSFP August 1989 baseline, i.e., pre-Langley configuration/budget review (C/BR) baseline. Some terms, e.g., combined trainer, are being redefined. An overview of the study activities and a summary of study results are given here.
Computationally efficient stochastic optimization using multiple realizations
NASA Astrophysics Data System (ADS)
Bayer, P.; Bürger, C. M.; Finkel, M.
2008-02-01
The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water supply well field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
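A minimal sketch of the stack-ordering idea, under simplifying assumptions (a toy one-parameter design and a trivial per-realization success test standing in for the well-field model): evaluate a candidate over the stack, abort as soon as the reliability target can no longer be met, and promote the realizations that caused failures so later candidates hit the critical ones first.

```python
# Hedged toy sketch of "stack ordering" for reliability-based optimization.
import random

def run_model(design, realization):
    """Toy stand-in for one model run: True if the design 'succeeds' here."""
    return design >= realization

def check_design(design, stack, target=0.9):
    """Return (meets_target, runs_used). Reorders `stack` in place so that
    realizations which caused failures are examined first next time."""
    allowed_failures = int((1.0 - target) * len(stack))
    failures, fail_idx, runs = 0, [], 0
    for i, realization in enumerate(stack):
        runs += 1
        if not run_model(design, realization):
            failures += 1
            fail_idx.append(i)
            if failures > allowed_failures:
                break                      # early exit: target can no longer be met
    critical = set(fail_idx)
    stack[:] = [stack[i] for i in fail_idx] + [r for i, r in enumerate(stack) if i not in critical]
    return failures <= allowed_failures, runs

random.seed(1)
stack = [random.gauss(0.0, 1.0) for _ in range(500)]    # 500 equally probable realizations
total_runs, candidates, feasible_design = 0, 0, None
for design in (d / 10.0 for d in range(0, 41)):         # crude search, smallest design first
    candidates += 1
    ok, runs = check_design(design, stack)
    total_runs += runs
    if ok:
        feasible_design = design
        break
print("smallest feasible design:", feasible_design)
print("model runs used:", total_runs, "vs.", candidates * len(stack), "with the full stack each time")
```

Because infeasible candidates are rejected after only a few runs once the critical realizations sit at the top of the stack, most of the savings reported in the abstract come from exactly this early-exit behavior.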
Using OSG Computing Resources with (iLC)Dirac
NASA Astrophysics Data System (ADS)
Sailer, A.; Petric, M.; CLICdp Collaboration
2017-10-01
CPU cycles for small experiments and projects can be scarce; thus, making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called ‘SiteDirectors’, which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments on the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows them to be accessed within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG Grid Sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the obstacles encountered and the solutions developed, and describe how the linear collider community uses resources in the OSG.
Chalmers, Eric; Luczak, Artur; Gruber, Aaron J.
2016-01-01
The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL) to guide “goal-directed” behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, “forward sweeps” through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks. PMID:28018203
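A toy sketch, not the authors' hippocampus-inspired framework, of why a hierarchical abstraction of space reduces planning effort: plan a coarse route over regions first, then search at the cell level only inside the resulting corridor, and compare the number of state expansions against a flat search.

```python
# Hedged toy illustration of hierarchical spatial planning on an open grid.
from collections import deque

SIZE, REGION = 32, 8                       # 32x32 grid split into 4x4 regions
NREG = SIZE // REGION

def bfs(start, goal, allowed, size):
    """4-connected BFS restricted to `allowed`; returns (path, expansions)."""
    frontier, parent, expanded = deque([start]), {start: None}, 0
    while frontier:
        cell = frontier.popleft()
        expanded += 1
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1], expanded
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if all(0 <= v < size for v in nxt) and nxt in allowed and nxt not in parent:
                parent[nxt] = cell
                frontier.append(nxt)
    return None, expanded

all_cells = {(x, y) for x in range(SIZE) for y in range(SIZE)}
all_regions = {(x, y) for x in range(NREG) for y in range(NREG)}
region_of = lambda c: (c[0] // REGION, c[1] // REGION)
start, goal = (0, 0), (SIZE - 1, SIZE - 1)

# Flat planning over every cell.
flat_path, flat_exp = bfs(start, goal, all_cells, SIZE)

# Hierarchical planning: coarse route over regions, then cells inside that corridor.
coarse_path, coarse_exp = bfs(region_of(start), region_of(goal), all_regions, NREG)
corridor = {c for c in all_cells if region_of(c) in set(coarse_path)}
fine_path, fine_exp = bfs(start, goal, corridor, SIZE)

print("flat:         path", len(flat_path) - 1, "steps,", flat_exp, "expansions")
print("hierarchical: path", len(fine_path) - 1, "steps,", coarse_exp + fine_exp, "expansions")
```

The coarse plan also localizes adaptation: a new obstacle only forces replanning inside the affected regions, which echoes the abstract's point about abstract knowledge guiding adaptation.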
A new look at the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1994-01-01
The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As an optimizer, the code NPSOL was used, which is based on a sequential quadratic programming (SQP) algorithm. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses although convergence became slower for the 72-bar truss. When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
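For reference, the Kreisselmeier-Steinhauser aggregate used above can be sketched in a few lines; the draw-down parameter rho and the constraint values are illustrative.

```python
# Hedged sketch of the Kreisselmeier-Steinhauser (KS) aggregation that collapses
# many constraints g_i(x) <= 0 into a single smooth constraint. The shift by
# g_max is the usual numerically stable form; rho is an illustrative draw-down
# parameter, not a value from the study.
import numpy as np

def ks(g, rho=50.0):
    """Smooth, conservative envelope of max(g): max(g) <= KS(g) <= max(g) + ln(len(g))/rho."""
    g = np.asarray(g, dtype=float)
    g_max = g.max()
    return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

g = np.array([-0.30, -0.05, -0.31, -0.02])     # example constraint values (all feasible)
for rho in (5.0, 50.0, 500.0):
    print(f"rho={rho:6.1f}  KS={ks(g, rho): .4f}  max(g)={g.max(): .4f}")
```

Larger rho makes the aggregate hug the most critical constraint more closely but also makes its gradients stiffer, which is one plausible source of the reduced robustness noted in the abstract.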
Changes and challenges in the Software Engineering Laboratory
NASA Technical Reports Server (NTRS)
Pajerski, Rose
1994-01-01
Since 1976, the Software Engineering Laboratory (SEL) has been dedicated to understanding and improving the way in which one NASA organization, the Flight Dynamics Division (FDD), develops, maintains, and manages complex flight dynamics systems. The SEL is composed of three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation. During the past 18 years, the SEL's overall goal has remained the same: to improve the FDD's software products and processes in a measured manner. This requires that each development and maintenance effort be viewed, in part, as a SEL experiment which examines a specific technology or builds a model of interest for use on subsequent efforts. The SEL has undertaken many technology studies while developing operational support systems for numerous NASA spacecraft missions.
A review of evaluative studies of computer-based learning in nursing education.
Lewis, M J; Davies, R; Jenkins, D; Tait, M I
2001-01-01
Although there have been numerous attempts to evaluate the learning benefits of computer-based learning (CBL) packages in nursing education, the results obtained have been equivocal. A literature search conducted for this review found 25 reports of the evaluation of nursing CBL packages since 1966. Detailed analysis of the evaluation methods used in these reports revealed that most had significant design flaws, including the use of too small a sample group, the lack of a control group, etc. Because of this, the conclusions reached were not always valid. More effort is required in the design of future evaluation studies of nursing CBL packages. Copyright 2001 Harcourt Publishers Ltd.
NASA Technical Reports Server (NTRS)
Cake, J. E.; Regetz, J. D., Jr.
1975-01-01
A method is presented for open loop guidance of a solar electric propulsion spacecraft to geosynchronous orbit. The method consists of determining the thrust vector profiles on the ground with an optimization computer program, and performing updates based on the difference between the actual trajectory and that predicted with a precision simulation computer program. The motivation for performing the guidance analysis during the mission planning phase is discussed, and a spacecraft design option that employs attitude orientation constraints is presented. The improvements required in both the optimization program and simulation program are set forth, together with the efforts to integrate the programs into the ground support software for the guidance system.
A Demons algorithm for image registration with locally adaptive regularization.
Cahill, Nathan D; Noble, J Alison; Hawkes, David J
2009-01-01
Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
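A minimal 2D sketch of the classic Demons loop (force estimation followed by Gaussian smoothing of the displacement field); the locally adaptive regularization proposed in the article would replace the uniform Gaussian below with a spatially varying one. The images and parameters are synthetic.

```python
# Hedged sketch of a basic Demons-style registration loop in 2D.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(fixed, moving, iterations=50, sigma=1.5):
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    gy, gx = np.gradient(fixed)
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(iterations):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        diff = warped - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        uy -= diff * gy / denom           # Thirion-style force (fixed-image gradient form)
        ux -= diff * gx / denom
        uy = gaussian_filter(uy, sigma)   # diffusion-like regularization (uniform here)
        ux = gaussian_filter(ux, sigma)
    return uy, ux

# Toy example: a Gaussian blob shifted by a couple of pixels.
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-(((y - 32) ** 2 + (x - 32) ** 2) / 50.0))
moving = np.exp(-(((y - 30) ** 2 + (x - 33) ** 2) / 50.0))
uy, ux = demons_register(fixed, moving)
print("mean recovered displacement in the blob region:",
      round(float(uy[fixed > 0.5].mean()), 2), round(float(ux[fixed > 0.5].mean()), 2))
```

Making sigma a function of local image content (small near strong edges, large in homogeneous regions) is the spirit of the image-driven adaptive regularization described in the abstract.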
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Woods, David D.; Potter, Scott S.; Johannesen, Leila; Holloway, Matthew; Forbus, Kenneth D.
1991-01-01
Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real time fault management capabilities. Intelligent fault management systems within the NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real time fault management in aerospace domains; (2) recommendations and examples for improving intelligent systems design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.
The Man computer Interactive Data Access System: 25 Years of Interactive Processing.
NASA Astrophysics Data System (ADS)
Lazzara, Matthew A.; Benson, John M.; Fox, Robert J.; Laitsch, Denise J.; Rueden, Joseph P.; Santek, David A.; Wade, Delores M.; Whittaker, Thomas M.; Young, J. T.
1999-02-01
12 October 1998 marked the 25th anniversary of the Man computer Interactive Data Access System (McIDAS). On that date in 1973, McIDAS was first used operationally by scientists as a tool for data analysis. Over the last 25 years, McIDAS has undergone numerous architectural changes in an effort to keep pace with changing technology. In its early years, significant technological breakthroughs were required to achieve the functionality needed by atmospheric scientists. Today McIDAS is challenged by new Internet-based approaches to data access and data display. The history and impact of McIDAS, along with some of the lessons learned, are presented here.
Optimization of a Monte Carlo Model of the Transient Reactor Test Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kristin; DeHart, Mark; Goluoglu, Sedat
2017-03-01
The ultimate goal of modeling and simulation is to obtain reasonable answers, with minimal computational resources, to problems that lack representations which can be easily evaluated. With the advances in large-scale computing centers during the last twenty years, researchers have had the ability to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.
A community effort to protect genomic data sharing, collaboration and outsourcing.
Wang, Shuang; Jiang, Xiaoqian; Tang, Haixu; Wang, Xiaofeng; Bu, Diyue; Carey, Knox; Dyke, Stephanie Om; Fox, Dov; Jiang, Chao; Lauter, Kristin; Malin, Bradley; Sofia, Heidi; Telenti, Amalio; Wang, Lei; Wang, Wenhao; Ohno-Machado, Lucila
2017-01-01
The human genome can reveal sensitive information and is potentially re-identifiable, which raises privacy and security concerns about sharing such data on wide scales. In 2016, we organized the third Critical Assessment of Data Privacy and Protection competition as a community effort to bring together biomedical informaticists, computer privacy and security researchers, and scholars in ethical, legal, and social implications (ELSI) to assess the latest advances on privacy-preserving techniques for protecting human genomic data. Teams were asked to develop novel protection methods for emerging genome privacy challenges in three scenarios: Track (1) data sharing through the Beacon service of the Global Alliance for Genomics and Health. Track (2) collaborative discovery of similar genomes between two institutions; and Track (3) data outsourcing to public cloud services. The latter two tracks represent continuing themes from our 2015 competition, while the former was new and a response to a recently established vulnerability. The winning strategy for Track 1 mitigated the privacy risk by hiding approximately 11% of the variation in the database while permitting around 160,000 queries, a significant improvement over the baseline. The winning strategies in Tracks 2 and 3 showed significant progress over the previous competition by achieving multiple orders of magnitude performance improvement in terms of computational runtime and memory requirements. The outcomes suggest that applying highly optimized privacy-preserving and secure computation techniques to safeguard genomic data sharing and analysis is useful. However, the results also indicate that further efforts are needed to refine these techniques into practical solutions.
ERIC Educational Resources Information Center
Ashcraft, Catherine
2015-01-01
To date, girls and women are significantly underrepresented in computer science and technology. Concerns about this underrepresentation have sparked a wealth of educational efforts to promote girls' participation in computing, but these programs have demonstrated limited impact on reversing current trends. This paper argues that this is, in part,…
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initiate the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications to which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaug, J M; Howard, W M; Fried, L E
2001-08-08
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.
Atomic Detail Visualization of Photosynthetic Membranes with GPU-Accelerated Ray Tracing
Vandivort, Kirby L.; Barragan, Angela; Singharoy, Abhishek; Teo, Ivan; Ribeiro, João V.; Isralewitz, Barry; Liu, Bo; Goh, Boon Chong; Phillips, James C.; MacGregor-Chatwin, Craig; Johnson, Matthew P.; Kourkoutis, Lena F.; Hunter, C. Neil
2016-01-01
The cellular process responsible for providing energy for most life on Earth, namely photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers. PMID:27274603
The Berlin Brain-Computer Interface: Progress Beyond Communication and Control
Blankertz, Benjamin; Acqualagna, Laura; Dähne, Sven; Haufe, Stefan; Schultze-Kraft, Matthias; Sturm, Irene; Ušćumlic, Marija; Wenzel, Markus A.; Curio, Gabriel; Müller, Klaus-Robert
2016-01-01
The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way away from integrating Brain-Computer Interface (BCI) technology in general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already now been obtained involving a BCI as research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world. PMID:27917107
NASA Technical Reports Server (NTRS)
Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.
1992-01-01
Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.
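As a schematic illustration of the time-domain field updates underlying such solvers, the sketch below advances a 1D FDTD (Yee) leapfrog scheme for Maxwell's equations in vacuum; it is a finite-difference toy, not the structured-grid finite-volume RCS code described above.

```python
# Hedged illustration: a 1D FDTD (Yee) leapfrog update in normalized units (c = 1).
import numpy as np

nx, nt = 400, 600
dx = 1.0
dt = 0.5 * dx                  # Courant number 0.5 for stability
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)          # H lives on the staggered half-grid

for n in range(nt):
    # Update H from the spatial difference of E (half time step behind E).
    Hy += dt / dx * (Ez[1:] - Ez[:-1])
    # Update interior E from the spatial difference of H (ends act as PEC walls).
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])
    # Soft source: a differentiated-Gaussian pulse injected near the left end.
    Ez[20] += np.exp(-((n - 60) / 20.0) ** 2) * (n - 60) / 20.0

print("peak |Ez| after propagation:", round(float(np.abs(Ez).max()), 3))
```

Material treatment (radar absorbing coatings, penetrable regions) enters such schemes through position-dependent update coefficients, which is where the modeling techniques mentioned in the abstract come in.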
Eruptive event generator based on the Gibson-Low magnetic configuration
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.
2017-08-01
Coronal mass ejections (CMEs), a kind of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become increasingly accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.
ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)
NASA Technical Reports Server (NTRS)
Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)
1995-01-01
The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation -- issues important to long time integration and long distance propagation in aeroacoustics. Also investigated was the effect of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.
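The dispersion assessment mentioned above can be illustrated by comparing the modified wavenumber of standard central-difference first derivatives with the exact wavenumber; the stencils below are generic textbook schemes, not the particular schemes contributed to the Workshop.

```python
# Hedged sketch: modified wavenumber k*dx of central-difference first derivatives
# versus the exact k*dx. Deviation from the exact value measures dispersion error.
import numpy as np

kdx = np.linspace(0.01, np.pi, 7)
exact = kdx
second = np.sin(kdx)                                     # 2nd-order central difference
fourth = (8.0 * np.sin(kdx) - np.sin(2.0 * kdx)) / 6.0   # 4th-order central difference

print(" k*dx    exact   2nd-order  4th-order")
for k, e, s2, s4 in zip(kdx, exact, second, fourth):
    print(f"{k:6.3f}  {e:7.3f}   {s2:8.3f}  {s4:9.3f}")
```

Schemes tuned for aeroacoustics aim to keep the modified wavenumber close to the exact one over as wide a range of k*dx as possible, so that waves propagate long distances without spurious phase error.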
Vinogradova, Tatiana; Paul, Raja; Grimaldi, Ashley D.; Loncarek, Jadranka; Miller, Paul M.; Yampolsky, Dmitry; Magidson, Valentin; Khodjakov, Alexey; Mogilner, Alex; Kaverina, Irina
2012-01-01
Assembly of an integral Golgi complex is driven by microtubule (MT)-dependent transport. Conversely, the Golgi itself functions as an unconventional MT-organizing center (MTOC). This raises the question of whether Golgi assembly requires centrosomal MTs or can be self-organized, relying on its own MTOC activity. The computational model presented here predicts that each MT population is capable of gathering Golgi stacks but not of establishing Golgi complex integrity or polarity. In contrast, the concerted effort of two MT populations would assemble an integral, polarized Golgi complex. Indeed, while laser ablation of the centrosome did not alter already-formed Golgi complexes, acentrosomal cells fail to reassemble an integral complex upon nocodazole washout. Moreover, polarity of post-Golgi trafficking was compromised under these conditions, leading to strong deficiency in polarized cell migration. Our data indicate that centrosomal MTs complement Golgi self-organization for proper Golgi assembly and motile-cell polarization. PMID:22262454
The MST radar technique: Requirements for operational weather forecasting
NASA Technical Reports Server (NTRS)
Larsen, M. F.
1983-01-01
There is a feeling that the accuracy of mesoscale forecasts for spatial scales of less than 1000 km and time scales of less than 12 hours can be improved significantly if resources are applied to the problem in an intensive effort over the next decade. Since the most dangerous and damaging types of weather occur at these scales, there are major advantages to be gained if such a program is successful. The interest in improving short term forecasting is evident. The technology at the present time is sufficiently developed, both in terms of new observing systems and the computing power to handle the observations, to warrant an intensive effort to improve stormscale forecasting. An assessment of the extent to which the so-called MST radar technique fulfills the requirements for an operational mesoscale observing network is reviewed, and the extent to which improvements in various types of forecasting could be expected if such a network is put into operation is delineated.
DART -- Data acquisition for the next generation of Fermilab fixed target experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleynik, G.; Anderson, J.; Appleton, L.
1994-02-01
DART is the name of the data acquisition effort for Fermilab experiments taking data in the '94--'95 time frame and beyond. Its charge is to provide a common system of hardware and software, which can be easily configured and extended to meet the wide range of data acquisition requirements of the experiments. Its strategy is to provide incrementally functional data acquisition systems to the experiments at frequent intervals to support the ongoing DA activities of the experiments. DART is a collaborative development effort between the experimenters and the Fermilab Computing Division. Experiments collaborating in DART cover a range of requirements from 400 Kbytes/sec event readout using a single DA processor, to 200 Mbytes/sec event readout involving 10 parallel readout streams, 10 VME event building planes and greater than 1,000 MIPs of event filter processing. The authors describe the requirements, architecture, and plans for the project and report on its current status.
NASA Technical Reports Server (NTRS)
Renaud, John E.; Batill, Stephen M.; Brockman, Jay B.
1999-01-01
This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving, and exchange of information between these individual experts is central to the design task, and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs) - resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within an MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system, should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements. The methods developed as part of this research effort include: extensions to a sensitivity based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.
Functional structure and dynamics of the human nervous system
NASA Technical Reports Server (NTRS)
Lawrence, J. A.
1981-01-01
The status of an effort to define the directions that need to be taken in extending pilot models is reported. These models are needed to perform closed-loop (man-in-the-loop) feedback flight control system designs and to develop cockpit display requirements. The approach taken is to develop a hypothetical working model of the human nervous system by reviewing the current literature in neurology and psychology and to develop a computer model of this hypothetical working model.
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.
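A minimal sketch of the kind of weighted (recursive) least squares parameter identification such an explicit adaptive controller relies on; the scalar plant and the forgetting factor are illustrative, not taken from the study.

```python
# Hedged sketch: exponentially weighted recursive least squares (RLS) identification
# of a discrete-time plant y[k] = a*y[k-1] + b*u[k-1] + noise. The plant and the
# forgetting factor are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
a_true, b_true = 0.9, 0.5

theta = np.zeros(2)                       # parameter estimate [a, b]
P = np.eye(2) * 1000.0                    # covariance (large = little prior confidence)
lam = 0.98                                # forgetting factor: weights recent data more

y_prev, u_prev = 0.0, 0.0
for k in range(200):
    u = rng.normal()                      # persistently exciting input
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    phi = np.array([y_prev, u_prev])      # regressor
    # Standard RLS update with exponential forgetting.
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, phi @ P)) / lam
    y_prev, u_prev = y, u

print("estimated [a, b]:", np.round(theta, 3), " true:", [a_true, b_true])
```

In an explicit adaptive controller the current estimate theta would be fed to the control-law synthesis (for example, a regulator gain computation) at each sample.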
The Quantum Engineering Conundrum
NASA Astrophysics Data System (ADS)
Monroe, Christopher
2017-04-01
There is newfound rush and excitement in Quantum Information Science, as this field seems to be moving toward an industrial/engineering phase. However, this evolution will require that quantum science, long the domain of academics and other researchers, make the leap to sustained engineering efforts in order to fabricate practical devices. I will address the conundrum, that full-blooded engineering does not generally happen on campuses, while many in the professional engineering and computer science community do not believe in quantum physics!
Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...
2017-01-24
Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
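A toy sketch of the contrast between finite-difference and discrete-adjoint sensitivities, using a forward-Euler discretization of a scalar ODE in place of the power-system model; the point is that a single backward (adjoint) sweep yields the sensitivity with respect to every parameter at once.

```python
# Hedged toy sketch: finite-difference vs. discrete-adjoint sensitivities for a
# forward-Euler discretization of dx/dt = -p1*x + p2 with cost J = x(T).
import numpy as np

def simulate(p, x0=1.0, dt=0.01, steps=500):
    """Forward Euler; returns the full trajectory (needed by the adjoint sweep)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + dt * (-p[0] * x + p[1]))
    return np.array(xs)

def adjoint_gradient(p, dt=0.01, steps=500):
    xs = simulate(p, dt=dt, steps=steps)
    lam = 1.0                              # dJ/dx at the final time (J = x_N)
    grad = np.zeros(2)
    for k in range(steps - 1, -1, -1):
        # x_{k+1} = x_k + dt*(-p0*x_k + p1)  =>  df/dx = 1 - dt*p0,  df/dp = [-dt*x_k, dt]
        grad += lam * np.array([-dt * xs[k], dt])
        lam = lam * (1.0 - dt * p[0])      # backward (adjoint) recursion
    return grad

p = np.array([2.0, 0.5])
J = lambda q: simulate(q)[-1]
fd = np.array([(J(p + e) - J(p - e)) / (2e-6)
               for e in (np.array([1e-6, 0.0]), np.array([0.0, 1e-6]))])
print("adjoint dJ/dp:", np.round(adjoint_gradient(p), 6))
print("finite diff  :", np.round(fd, 6))
```

The finite-difference column requires one extra simulation per parameter, whereas the adjoint sweep reuses a single stored trajectory, which is the scaling advantage highlighted in the abstract.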
A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush
1997-01-01
Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation (a nonlinear, structured-grid partial differential equation boundary value problem) using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSC, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSC library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
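The affine global-to-local index mapping mentioned above can be sketched for a 1D block decomposition with ghost points; this is generic bookkeeping code, not PETSC's own mapping interface.

```python
# Hedged sketch: affine global-to-local index mapping for a 1D block decomposition
# with one layer of ghost points per side. Generic illustration, not the PETSC API.
def block_range(n_global, n_procs, rank):
    """Contiguous block of global indices owned by `rank` (remainder on low ranks)."""
    base, extra = divmod(n_global, n_procs)
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return start, start + size

def global_to_local(g, n_global, n_procs, rank, ghost=1):
    """Map a global index to this rank's local index (ghosts included); None if not held."""
    start, end = block_range(n_global, n_procs, rank)
    lo, hi = max(start - ghost, 0), min(end + ghost, n_global)
    return g - lo if lo <= g < hi else None

n_global, n_procs = 100, 4
for rank in range(n_procs):
    s, e = block_range(n_global, n_procs, rank)
    print(f"rank {rank}: owns [{s}, {e}),  global {s} -> local {global_to_local(s, n_global, n_procs, rank)}")
```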
Space Station Simulation Computer System (SCS) study for NASA/MSFC. Concept document
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Radiocardiography in clinical cardiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierson, R.N. Jr.; Alam, S.; Kemp, H.G.
1977-01-01
Quantitative radiocardiography provides a variety of noninvasive measurements of value in cardiology. A gamma camera and computer processing are required for most of these measurements. The advantages of ease, economy, and safety of these procedures are, in part, offset by the complexity of as yet unstandardized methods and incomplete validation of results. The expansion of these techniques will inevitably be rapid. Their careful performance requires, for the moment, a major and perhaps dedicated effort by at least one member of the professional team, if the pitfalls that lead to unrecognized error are to be avoided. We may anticipate more automated and reliable results with increased experience and validation.
NASA Technical Reports Server (NTRS)
Kiusalaas, J.; Reddy, G. B.
1977-01-01
A finite element program is presented for computer-automated, minimum weight design of elastic structures with constraints on stresses (including local instability criteria) and displacements. Volume 1 of the report contains the theoretical and user's manual of the program. Sample problems and the listing of the program are included in Volumes 2 and 3. The element subroutines are organized so as to facilitate additions and changes by the user. As a result, a relatively minor programming effort would be required to make DESAP 1 into a special purpose program to handle the user's specific design requirements and failure criteria.
NASA Technical Reports Server (NTRS)
Goss, Ernest Preston
1991-01-01
The objectives were to: (1) survey state-of-the-art computing architectures, tools, and technologies for implementing an Executive Information System (EIS); (2) review MSFC capabilities and efforts in developing an EIS for the Shuttle Projects Office and the Payloads Project Office; (3) review management reporting requirements for the NASA Accounting and Financial Information System (NAFIS) Project in the areas of cost, schedule, and technical performance, and ensure that the EIS fully supports these requirements; and (4) develop and implement a pilot concept for a NAFIS EIS. A summary of the findings of this work is presented.
COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL
NASA Technical Reports Server (NTRS)
Roush, G. B.
1994-01-01
The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
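For illustration, the Basic COCOMO relations that form one of COSTMODL's five algorithms can be sketched directly; the coefficients are the standard published Basic COCOMO values and the 32 KLOC project size is illustrative.

```python
# Hedged sketch of Basic COCOMO: effort = a*KLOC^b person-months and
# schedule = 2.5*effort^c months, with the standard published coefficients.
# The project size below is an illustrative example, not a COSTMODL case.
COEFF = {
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c = COEFF[mode]
    effort = a * kloc ** b            # person-months
    schedule = 2.5 * effort ** c      # calendar months
    staff = effort / schedule         # average headcount
    return effort, schedule, staff

for mode in COEFF:
    e, s, n = basic_cocomo(32.0, mode)
    print(f"{mode:12s}: {e:6.1f} person-months, {s:5.1f} months, ~{n:4.1f} people")
```

The Intermediate and Ada COCOMO variants mentioned above scale such nominal estimates by effort multipliers for product, personnel, and project attributes, which is where an organization's calibration data enters.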
Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.
2017-12-01
Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.
System-Level Virtualization Research at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J
2010-01-01
System-level virtualization, originally introduced as a technique to effectively share what were then considered large computing resources, subsequently faded from the spotlight as individual workstations gained in popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.
Application of infrared thermography in computer aided diagnosis
NASA Astrophysics Data System (ADS)
Faust, Oliver; Rajendra Acharya, U.; Ng, E. Y. K.; Hong, Tan Jen; Yu, Wenwei
2014-09-01
The invention of thermography, in the 1950s, posed a formidable problem to the research community: What is the relationship between disease and heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques and by the improved sensitivity and spatial resolution of thermal sensors. However, despite this progress, fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the changes in heat radiation and in radiation pattern which indicate disease are minute. On a technical level, this places high requirements on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability and, on an even more abstract level, to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography-based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that the diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and will pave the way for integrated health care systems which maximize the quality of patient care.
A semi-automatic annotation tool for cooking video
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe
2013-03-01
In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
Combining analysis with optimization at Langley Research Center. An evolutionary process
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1982-01-01
The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. The resulting software system is controlled with a special-purpose language, communicates with a data management system, and is easily modified for adding new programs and capabilities. A 337 degree-of-freedom finite element model is used in verifying the accuracy of this system.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
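The comparison step at the heart of relative debugging can be illustrated with a small sketch: values captured at matching checkpoints in the serial and parallel runs are compared within a tolerance, and the first divergence is reported. The checkpoint names, data layout, and tolerances below are illustrative assumptions, not part of the tool described in the report.

```python
import numpy as np

def first_divergence(serial_trace, parallel_trace, rtol=1e-10, atol=1e-12):
    """Compare dictionaries mapping checkpoint names to arrays captured in a
    serial and a parallel run; return the first checkpoint where they differ."""
    for name, serial_val in serial_trace.items():
        parallel_val = parallel_trace.get(name)
        if parallel_val is None:
            return name, "checkpoint missing in parallel run"
        if not np.allclose(serial_val, parallel_val, rtol=rtol, atol=atol):
            worst = np.max(np.abs(serial_val - parallel_val))
            return name, f"max abs difference {worst:.3e}"
    return None, "no divergence detected"

# Illustrative use: the parallel run drifts at the second checkpoint.
serial = {"after_init": np.ones(4), "after_step_1": np.full(4, 2.0)}
parallel = {"after_init": np.ones(4), "after_step_1": np.full(4, 2.0 + 1e-3)}
print(first_divergence(serial, parallel))
```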
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
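For readers unfamiliar with attitude estimation from vector observations, the sketch below shows the core of Wahba's problem that QUEST-style algorithms solve: given unit vectors measured in the body frame and their known reference-frame counterparts, find the quaternion minimizing the weighted misfit. It uses Davenport's q-method (an eigenvector solution closely related to QUEST) and is an illustrative sketch under assumed inputs, not the flight algorithm described in the report.

```python
import numpy as np

def davenport_q_method(body_vecs, ref_vecs, weights):
    """Solve Wahba's problem: return the quaternion (x, y, z, w) that best
    rotates reference-frame unit vectors into the observed body-frame unit vectors."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)
    q = eigvecs[:, np.argmax(eigvals)]  # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)

# Tiny self-check with the identity attitude: body vectors equal reference vectors,
# so the recovered quaternion should be [0, 0, 0, 1] up to sign.
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(davenport_q_method(refs, refs, weights=[0.5, 0.5]))
```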
Integrated modeling of advanced optical systems
NASA Astrophysics Data System (ADS)
Briggs, Hugh C.; Needels, Laura; Levine, B. Martin
1993-02-01
This poster session paper describes an integrated modeling and analysis capability being developed at JPL under funding provided by the JPL Director's Discretionary Fund and the JPL Control/Structure Interaction Program (CSI). The posters briefly summarize the program capabilities and illustrate them with an example problem. The computer programs developed under this effort will provide an unprecedented capability for integrated modeling and design of high performance optical spacecraft. The engineering disciplines supported include structural dynamics, controls, optics and thermodynamics. Such tools are needed in order to evaluate the end-to-end system performance of spacecraft such as OSI, POINTS, and SMMM. This paper illustrates the proof-of-concept tools that have been developed to establish the technology requirements and demonstrate the new features of integrated modeling and design. The current program also includes implementation of a prototype tool based upon the CAESY environment being developed under the NASA Guidance and Control Research and Technology Computational Controls Program. This prototype will be available late in FY-92. The development plan proposes a major software production effort to fabricate, deliver, support and maintain a national-class tool from FY-93 through FY-95.
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Report on the Human Genome Initiative for the Office of Health and Environmental Research
DOE R&D Accomplishments Database
Tinoco, I.; Cahill, G.; Cantor, C.; Caskey, T.; Dulbecco, R.; Engelhardt, D. L.; Hood, L.; Lerman, L. S.; Mendelsohn, M. L.; Sinsheimer, R. L.; Smith, T.; Soll, D.; Stormo, G.; White, R. L.
1987-04-01
The report urges DOE and the Nation to commit to a large, multi-year, multidisciplinary, technological undertaking to order and sequence the human genome. This effort will first require significant innovation in general capability to manipulate DNA, major new analytical methods for ordering and sequencing, theoretical developments in computer science and mathematical biology, and great expansions in our ability to store and manipulate the information and to interface it with other large and diverse genetic databases. The actual ordering and sequencing involves the coordinated processing of some 3 billion bases from a reference human genome. Science is poised on the rudimentary edge of being able to read and understand human genes. A concerted, broadly based, scientific effort to provide new methods of sufficient power and scale should transform this activity from an inefficient one-gene-at-a-time, single laboratory effort into a coordinated, worldwide, comprehensive reading of "the book of man". The effort will be extraordinary in scope and magnitude, but so will be the benefit to biological understanding, new technology and the diagnosis and treatment of human disease.
Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report
NASA Technical Reports Server (NTRS)
Salazar, George A.; Steele, Glen F.
2013-01-01
Large round trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based, mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before used in a space mission, such as augmented reality for in-flight maintenance or telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopp, H.J.; Mortensen, G.A.
1978-04-01
Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM and CDC developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.
42 CFR 441.182 - Maintenance of effort: Computation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...
NASA Astrophysics Data System (ADS)
Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca
2016-10-01
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
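As an illustration of how RGB and VIS-NIR channels can be fused at the input of a ConvNet for per-pixel labeling, the sketch below stacks all 28 spectral channels (3 RGB + 25 VIS-NIR) into a single input tensor and predicts one of 8 classes per pixel. The layer sizes are arbitrary placeholder choices and do not reproduce the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class MultispectralSegNet(nn.Module):
    """Tiny fully convolutional network for per-pixel scene labeling.
    Input: 28-channel images (3 RGB + 25 VIS-NIR); output: 8 class scores per pixel."""
    def __init__(self, in_channels=28, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# One forward pass on a dummy batch: 2 images, 28 channels, 64x64 pixels.
model = MultispectralSegNet()
scores = model(torch.randn(2, 28, 64, 64))  # shape: (2, 8, 64, 64)
labels = scores.argmax(dim=1)               # per-pixel class index
```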
London, Nir; Ambroggio, Xavier
2014-02-01
Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration. Copyright © 2013 Elsevier Inc. All rights reserved.
Fechner, Hanna B; Schooler, Lael J; Pachur, Thorsten
2018-01-01
Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort, or cognitive costs, associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and foregoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded. Copyright © 2017 Elsevier B.V. All rights reserved.
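For readers unfamiliar with the two heuristics being compared, the sketch below gives minimal implementations of take-the-best (search cues in order of validity and decide on the first cue that discriminates) and tallying (count the positive cues for each option and pick the larger count). The binary cue encoding and the example cue profiles are illustrative assumptions, not the task materials used in the study.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Decide between options A and B using cues searched in order of validity.
    Cues are 1 (positive) or 0 (negative); the first discriminating cue decides."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"  # no cue discriminates

def tallying(cues_a, cues_b):
    """Decide by comparing the unweighted sum of positive cues for each option."""
    score_a, score_b = sum(cues_a), sum(cues_b)
    if score_a == score_b:
        return "guess"
    return "A" if score_a > score_b else "B"

# Example with three binary cues, searched in validity order 0, 1, 2.
a, b = [1, 0, 1], [1, 1, 0]
print(take_the_best(a, b, validity_order=[0, 1, 2]))  # cue 1 discriminates -> "B"
print(tallying(a, b))                                 # 2 vs 2 -> "guess"
```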
IITET and shadow TT: an innovative approach to training at the point of need
NASA Astrophysics Data System (ADS)
Gross, Andrew; Lopez, Favio; Dirkse, James; Anderson, Darran; Berglie, Stephen; May, Christopher; Harkrider, Susan
2014-06-01
The Image Intensification and Thermal Equipment Training (IITET) project is a joint effort between Night Vision and Electronics Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) and the Army Research Institute (ARI) Fort Benning Research Unit. The IITET effort develops a reusable and extensible training architecture that supports the Army Learning Model and trains Manned-Unmanned Teaming (MUM-T) concepts to Shadow Unmanned Aerial Systems (UAS) payload operators. The training challenge of MUM-T during aviation operations is that UAS payload operators traditionally learn few of the scout-reconnaissance skills and coordination appropriate to MUM-T at the schoolhouse. The IITET effort leveraged the simulation experience and capabilities at NVESD and ARI's research to develop a novel payload operator training approach consistent with the Army Learning Model. Based on the training and system requirements, the team researched and identified candidate capabilities in several distinct technology areas. The training capability will support a variety of training missions as well as a full campaign. Data from these missions will be captured in a fully integrated AAR capability, which will provide objective feedback to the user in near-real-time. IITET will be delivered via a combination of browser and video streaming technologies, eliminating the requirement for a client download and reducing user computer system requirements. The result is a novel UAS Payload Operator training capability, nested within an architecture capable of supporting a wide variety of training needs for air and ground tactical platforms and sensors, and potentially several other areas requiring vignette-based serious games training.
The DYNES Instrument: A Description and Overview
NASA Astrophysics Data System (ADS)
Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi
2012-12-01
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and leads to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high capacity network flows, as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predicable delivery patterns. This paper presents the DYNES instrument, an NSF funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on University Campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
Lessons Learned in Deploying the World s Largest Scale Lustre File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Wang, Feiyi
2010-01-01
The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x - 240 GB/sec, and 17x - 10 Petabytes, respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.
Desjardins, Jamie L
2016-01-01
Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study worn commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self-reported ratings of listening effort showed no significant relation. Directional microphone processing effectively reduced the cognitive load of listening to speech in background noise. This is significant because it is likely that listeners with hearing impairment will frequently encounter noisy speech in their everyday communications. American Academy of Audiology.
High performance transcription factor-DNA docking with GPU computing
2012-01-01
Background: Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is very computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that the docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the computation of the docking algorithm, not only to reduce computing time, but also to improve docking quality. Methods: In an attempt to accelerate the sampling process and to improve the docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte-Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve the computation efficiency and scalability on GPU-based high performance computing systems. Results: The effectiveness of our approach is tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrated that the GPU-based docking algorithm can significantly accelerate the simulation process and thereby improve the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research would require efforts from two integral aspects: improvement in computation efficiency and energy function design. Conclusions: We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the chance of finding more near-native structures. To the best of our knowledge, this is the first ad hoc effort of applying GPU or GPU clusters to the protein-DNA docking problem. PMID:22759575
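The search procedure described above combines Metropolis Monte-Carlo sampling with simulated annealing: candidate moves are accepted or rejected according to the energy change and a temperature that is gradually lowered. The sketch below shows that control loop with a stand-in energy function and move generator; the docking algorithm's potential-based energy terms and GPU parallelization are not reproduced here.

```python
import math
import random

def simulated_annealing_search(energy, propose_move, initial_pose,
                               t_start=10.0, t_end=0.1, cooling=0.95, steps_per_t=100):
    """Generic Metropolis Monte-Carlo search with a geometric cooling schedule.
    `energy(pose)` and `propose_move(pose)` are problem-specific stand-ins."""
    pose, e = initial_pose, energy(initial_pose)
    best_pose, best_e = pose, e
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = propose_move(pose)
            delta = energy(candidate) - e
            # Accept downhill moves always, uphill moves with Boltzmann probability.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                pose, e = candidate, e + delta
                if e < best_e:
                    best_pose, best_e = pose, e
        t *= cooling
    return best_pose, best_e

# Toy illustration: the "pose" is a single translation along one axis,
# and the energy is a quadratic well with its minimum at 3.0.
toy_energy = lambda x: (x - 3.0) ** 2
toy_move = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing_search(toy_energy, toy_move, initial_pose=0.0))
```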
Pilot Program Evaluating Personal Tablet Device Use Across Campus
2012-01-01
that we had to individually activate the device through iTunes and manually load the apps to the devices. As discussed below, the technology for...beginning of the effort, though, individual iTunes accounts were created for each of the students and tied to a domain email address provided by...required a USB connection to a computer in order to activate prior to use. We initially had a bank of laptops with iTunes installed that students
A Pseudo-Reversing Theorem for Rotation and its Application to Orientation Theory
2012-03-01
approach to the task of constructing the appropriate course a ship must steer in order for the wind to appear to come from some given direction with some...axes, although the theorem doesn’t actually require such axes. The Pseudo-Reversing Theorem can often be invoked to give a different pedagogical basis to...of validity will quickly become obvious when it’s implemented on a computer. It does not seem to me that a great deal of pedagogical effort has found
Software for visualization, analysis, and manipulation of laser scan images
NASA Astrophysics Data System (ADS)
Burnsides, Dennis B.
1997-03-01
The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from on-going efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.
Behavior-based multi-robot collaboration for autonomous construction tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.
Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. For adaptability, the system is designed as a behavior-based architecture. For applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.
Ant colony system algorithm for the optimization of beer fermentation control.
Xiao, Jie; Zhou, Ze-Kui; Zhang, Guang-Xin
2004-12-01
Beer fermentation is a dynamic process that must be guided along a temperature profile to obtain the desired results. The ant colony system algorithm was applied to optimize the kinetic model of this process. For a fixed period of fermentation time, a series of different temperature profiles of the mixture were constructed and an optimal one was chosen. The optimal temperature profile maximized the final ethanol production and minimized the byproduct concentrations and spoilage risk. The satisfactory results obtained did not require much computational effort.
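To illustrate how an ant-colony-style search can look for a temperature profile, the simplified sketch below divides the fermentation time into segments, lets each ant pick one candidate temperature per segment in proportion to pheromone levels, scores the resulting profile, and reinforces the pheromone along the best profile found. The objective function is a simple stand-in; the fermentation kinetic model used in the paper, its constraints, and the full ACS update rules (e.g. local pheromone updates) are not reproduced here.

```python
import random

SEGMENTS = 6                              # fermentation time slices (assumed)
LEVELS = [9.0, 11.0, 13.0, 15.0]          # candidate temperatures in degrees C (assumed)
EVAPORATION, ANTS, ITERATIONS = 0.1, 20, 200

def profile_score(profile):
    """Stand-in objective: prefer a gentle profile near a moderate temperature.
    A real application would evaluate the fermentation kinetic model here."""
    return -sum((t - 12.0) ** 2 for t in profile) - sum(
        abs(a - b) for a, b in zip(profile, profile[1:]))

pheromone = [[1.0] * len(LEVELS) for _ in range(SEGMENTS)]
best_profile, best_score, best_choices = None, float("-inf"), None

for _ in range(ITERATIONS):
    for _ in range(ANTS):
        # Each ant builds a profile segment by segment, sampling in proportion to pheromone.
        choices = [random.choices(range(len(LEVELS)), weights=pheromone[s])[0]
                   for s in range(SEGMENTS)]
        profile = [LEVELS[c] for c in choices]
        score = profile_score(profile)
        if score > best_score:
            best_profile, best_score, best_choices = profile, score, choices
    # Global pheromone update: evaporate everywhere, reinforce the best-so-far profile.
    for s in range(SEGMENTS):
        for j in range(len(LEVELS)):
            pheromone[s][j] *= (1.0 - EVAPORATION)
        pheromone[s][best_choices[s]] += EVAPORATION

print(best_profile, best_score)
```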
1996-12-01
This includes an exemption from publishing the opportunity in the Commerce Business Daily (CBD) and elimination of the requirement to hold the...of assigned programs. In discharging this responsibility, the 990 coordinates his efforts with other ASN(RDMA) offices, b. TEFlO..M-VAL OPERATIONS...Communications, Computers and Information Systems CA Civil Affairs CAIV Cost as an Independent Variable CBD Commerce Business Daily CBPL Capabilities
NASA Technical Reports Server (NTRS)
1974-01-01
A computer printout is presented of the mission requirements for the TERSSE missions and their associated user tasks. The data included in the data base represent a broad-based attempt to define the amount, extent, and type of information needed for an earth resources management program in the era of the space shuttle. An effort was made to consider all aspects of remote sensing and resource management; because of its broad scope, it is not intended that the data be used without verification for in-depth studies of particular missions and/or users. The data base represents the quantitative structure necessary to define the TERSSE architecture and requirements, and to provide an overall integrated view of the earth resources technology requirements of the 1980's.
The Fox and the Grapes-How Physical Constraints Affect Value Based Decision Making.
Gross, Jörg; Woelbert, Eva; Strobel, Martin
2015-01-01
One fundamental question in decision making research is how humans compute the values that guide their decisions. Recent studies showed that people assign higher value to goods that are closer to them, even when physical proximity should be irrelevant for the decision from a normative perspective. This phenomenon, however, seems reasonable from an evolutionary perspective. Most foraging decisions of animals involve the trade-off between the value that can be obtained and the associated effort of obtaining. Anticipated effort for physically obtaining a good could therefore affect the subjective value of this good. In this experiment, we test this hypothesis by letting participants state their subjective value for snack food while the effort that would be incurred when reaching for it was manipulated. Even though reaching was not required in the experiment, we find that willingness to pay was significantly lower when subjects wore heavy wristbands on their arms. Thus, when reaching was more difficult, items were perceived as less valuable. Importantly, this was only the case when items were physically in front of the participants but not when items were presented as text on a computer screen. Our results suggest automatic interactions of motor and valuation processes which are unexplored to this date and may account for irrational decisions that occur when reward is particularly easy to reach.
High level cognitive information processing in neural networks
NASA Technical Reports Server (NTRS)
Barnden, John A.; Fields, Christopher A.
1992-01-01
Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically-realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.
The Fox and the Grapes—How Physical Constraints Affect Value Based Decision Making
Strobel, Martin
2015-01-01
One fundamental question in decision making research is how humans compute the values that guide their decisions. Recent studies showed that people assign higher value to goods that are closer to them, even when physical proximity should be irrelevant for the decision from a normative perspective. This phenomenon, however, seems reasonable from an evolutionary perspective. Most foraging decisions of animals involve the trade-off between the value that can be obtained and the associated effort of obtaining. Anticipated effort for physically obtaining a good could therefore affect the subjective value of this good. In this experiment, we test this hypothesis by letting participants state their subjective value for snack food while the effort that would be incurred when reaching for it was manipulated. Even though reaching was not required in the experiment, we find that willingness to pay was significantly lower when subjects wore heavy wristbands on their arms. Thus, when reaching was more difficult, items were perceived as less valuable. Importantly, this was only the case when items were physically in front of the participants but not when items were presented as text on a computer screen. Our results suggest automatic interactions of motor and valuation processes which are unexplored to this date and may account for irrational decisions that occur when reward is particularly easy to reach. PMID:26061087
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; McDougal, Matthew; Russell, Sam
2012-01-01
Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
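A minimal example of the kind of data-parallel, per-pixel operation that maps well to a GPU is shown below: reducing a stack of thermographic frames to per-pixel statistics. It uses CuPy when available and falls back to NumPy otherwise; the frame stack and the particular statistics are illustrative assumptions, not the application described in the abstract.

```python
try:
    import cupy as xp   # runs on the GPU when CuPy and a CUDA device are available
except ImportError:
    import numpy as xp  # identical array API on the CPU

def per_pixel_statistics(frames):
    """Reduce a (num_frames, height, width) stack of thermographic frames to
    per-pixel mean, peak, and time-of-peak; each pixel is independent (data parallel)."""
    mean_img = frames.mean(axis=0)
    peak_img = frames.max(axis=0)
    time_of_peak = frames.argmax(axis=0)
    return mean_img, peak_img, time_of_peak

# Synthetic stack: 200 frames of 512x512 pixels.
frames = xp.random.random((200, 512, 512)).astype(xp.float32)
mean_img, peak_img, time_of_peak = per_pixel_statistics(frames)
print(mean_img.shape, peak_img.dtype, time_of_peak.max())
```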
High-performance computing with quantum processing units
Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...
2017-03-01
The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.
Foley, Finbar; Rajagopalan, Srinivasan; Raghunath, Sushravya M; Boland, Jennifer M; Karwoski, Ronald A; Maldonado, Fabien; Bartholmai, Brian J; Peikert, Tobias
2016-01-01
Increased clinical use of chest high-resolution computed tomography results in increased identification of lung adenocarcinomas and persistent subsolid opacities. However, these lesions range from very indolent to extremely aggressive tumors. Clinically relevant diagnostic tools to noninvasively risk stratify and guide individualized management of these lesions are lacking. Research efforts investigating semiquantitative measures to decrease interrater and intrarater variability are emerging, and in some cases steps have been taken to automate this process. However, many such methods currently are still suboptimal, require validation and are not yet clinically applicable. The computer-aided nodule assessment and risk yield software application represents a validated, automated, quantitative, and noninvasive tool for risk stratification of adenocarcinoma lung nodules. Computer-aided nodule assessment and risk yield correlates well with consensus histology and postsurgical patient outcomes, and therefore may help to guide individualized patient management, for example, in identification of nodules amenable to radiological surveillance, or in need of adjunctive therapy. Copyright © 2016 Elsevier Inc. All rights reserved.
Role of computational fluid dynamics in unsteady aerodynamics for aeroelasticity
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Goorjian, Peter M.
1989-01-01
In the last two decades there have been extensive developments in computational unsteady transonic aerodynamics. Such developments are essential since the transonic regime plays an important role in the design of modern aircraft. Therefore, there has been a large effort to develop computational tools with which to accurately perform flutter analysis at transonic speeds. In the area of Computational Fluid Dynamics (CFD), unsteady transonic aerodynamics are characterized by the feature of modeling the motion of shock waves over aerodynamic bodies, such as wings. This modeling requires the solution of nonlinear partial differential equations. Most advanced codes such as XTRAN3S use the transonic small perturbation equation. Currently, XTRAN3S is being used for generic research in unsteady aerodynamics and aeroelasticity of almost full aircraft configurations. Use of Euler/Navier Stokes equations for simple typical sections has just begun. A brief history of the development of CFD for aeroelastic applications is summarized. The development of unsteady transonic aerodynamics and aeroelasticity are also summarized.
High-performance computing with quantum processing units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.
The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, the computational cost is expensive, especially when Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require substantial computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving schemes, whose computational costs are affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.
The Guide to Better Hospital Computer Decisions
Dorenfest, Sheldon I.
1981-01-01
A soon-to-be-published major study of hospital computer use entitled “The Guide to Better Hospital Computer Decisions” was conducted by my firm over the past 2½ years. The study required over twenty (20) man years of effort at a cost of over $300,000, and the six (6) volume final report provides more than 1,000 pages of data about how hospitals are and will be using computerized medical and business information systems. It describes the current status and future expectations for computer use in major application areas, such as, but not limited to, finance, admitting, pharmacy, laboratory, data collection and hospital or medical information systems. It also includes profiles of over 100 companies and other types of organizations providing data processing products and services to hospitals. In this paper, we discuss the need for the study, the specific objectives of the study, the methodology and approach taken to complete the study and a few major conclusions.
Montague, P. Read; Dolan, Raymond J.; Friston, Karl J.; Dayan, Peter
2013-01-01
Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects. PMID:22177032
NASA Astrophysics Data System (ADS)
Stout, Jane G.; Blaney, Jennifer M.
2017-10-01
Research suggests growth mindset, or the belief that knowledge is acquired through effort, may enhance women's sense of belonging in male-dominated disciplines, like computing. However, other research indicates women who spend a great deal of time and energy in technical fields experience a low sense of belonging. The current study assessed the benefits of a growth mindset on women's (and men's) sense of intellectual belonging in computing, accounting for the amount of time and effort dedicated to academics. We define "intellectual belonging" as the sense that one is believed to be a competent member of the community. Whereas a stronger growth mindset was associated with stronger intellectual belonging for men, a growth mindset only boosted women's intellectual belonging when they did not work hard on academics. Our findings suggest, paradoxically, women may not benefit from a growth mindset in computing when they exert a lot of effort.
NASA Technical Reports Server (NTRS)
Rodriguez, Juan Jared
2014-01-01
The purpose of this report is to detail the tasks accomplished as a NASA NIFS intern for the summer 2014 session. The goal of this internship was to develop an issue-tracker Ruby on Rails web application to improve the communication of developmental anomalies between the Support Software Computer Software Configuration Item (CSCI) teams, System Build and Information Architecture. As many may know, software development is an arduous, time-consuming, collaborative effort. It involves nearly as much work designing, planning, collaborating, discussing, and resolving issues as effort expended in actual development. This internship opportunity was put in place to help alleviate the amount of time spent discussing issues such as bugs, missing tests, new requirements, and usability concerns that arise during development and throughout the life cycle of software applications once in production.
Current Grid operation and future role of the Grid
NASA Astrophysics Data System (ADS)
Smirnova, O.
2012-12-01
Grid-like technologies and approaches have become an integral part of HEP experiments, and some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid ultimately reaches the status of a generic public computing and storage service provider, with permanent national and international Grid infrastructures established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place, the Grid will remain limited to HEP; if, however, the current multitude of Grid-like systems converges to a generic, modular and extensible solution, the Grid will become true to its name.
Computer Based Training: Field Deployable Trainer and Shared Virtual Reality
NASA Technical Reports Server (NTRS)
Mullen, Terence J.
1997-01-01
Astronaut training has traditionally been conducted at specific sites with specialized facilities. Because of its size and nature, the training equipment is generally not portable. Efforts are now under way to develop training tools that can be taken to remote locations, including into orbit. Two of these efforts are the Field Deployable Trainer and Shared Virtual Reality projects. Field Deployable Trainer: NASA used the recent shuttle mission by astronaut Shannon Lucid to the Russian space station, Mir, as an opportunity to develop and test a prototype of an on-orbit computer training system. A laptop computer with a customized user interface, a set of specially prepared CDs, and videotapes were taken to the Mir by Ms. Lucid. Based upon the feedback following the launch of the Lucid flight, our team prepared materials for the next Mir visitor. Astronaut John Blaha will fly on NASA/MIR Long Duration Mission 3, set to launch in mid-September. He will take with him a customized hard disk drive and a package of compact disks containing training videos, references and maps. The FDT team continues to explore and develop new and innovative ways to conduct offsite astronaut training using personal computers. Shared Virtual Reality Training: NASA's Space Flight Training Division has been investigating the use of virtual reality environments for astronaut training. Recent efforts have focused on activities requiring interaction by two or more people, called shared VR. Dr. Bowen Loftin, from the University of Houston, directs a virtual reality laboratory that conducts much of the NASA-sponsored research. I worked on a project involving the development of a virtual environment that can be used to train astronauts and others to operate a science unit called a Biological Technology Facility (BTF). Facilities like this will be used to house and control microgravity experiments on the space station. It is hoped that astronauts and instructors will ultimately be able to share common virtual environments and, using telephone links, conduct interactive training from separate locations.
Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, John E.; Sener, Melih; Vandivort, Kirby L.
The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
NASA Astrophysics Data System (ADS)
Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.
2002-07-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Investigations into the shape-preserving interpolants using symbolic computation
NASA Technical Reports Server (NTRS)
Lam, Maria
1988-01-01
Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they match monotone or convex data. Most methods of attacking this problem use quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting the given data points by a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
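To make the shape-preservation issue concrete, here is a minimal sketch (not the quadratic-spline scheme of the study) showing how a derivative-limiting Hermite interpolant such as SciPy's PCHIP keeps monotone data monotone, while an unconstrained cubic spline through the same points may not; the data values are invented for illustration.

```python
# Hypothetical illustration: shape-preserving vs. standard interpolation of
# monotone data. PCHIP limits the derivative values assigned at the data
# points so the piecewise-cubic Hermite interpolant stays monotone; an
# unconstrained cubic spline matching the same data may overshoot.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 2.0, 2.1])   # monotone increasing data

pchip  = PchipInterpolator(x, y)   # derivatives chosen to preserve monotonicity
spline = CubicSpline(x, y)         # no shape constraint

xs = np.linspace(0.0, 4.0, 401)
print("PCHIP monotone: ", bool(np.all(np.diff(pchip(xs)) >= -1e-12)))
print("Spline monotone:", bool(np.all(np.diff(spline(xs)) >= -1e-12)))
```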
An Exact Dual Adjoint Solution Method for Turbulent Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Lu, James; Park, Michael A.; Darmofal, David L.
2003-01-01
An algorithm for solving the discrete adjoint system based on an unstructured-grid discretization of the Navier-Stokes equations is presented. The method is constructed such that an adjoint solution exactly dual to a direct differentiation approach is recovered at each time step, yielding a convergence rate which is asymptotically equivalent to that of the primal system. The new approach is implemented within a three-dimensional unstructured-grid framework and results are presented for inviscid, laminar, and turbulent flows. Improvements to the baseline solution algorithm, such as line-implicit relaxation and a tight coupling of the turbulence model, are also presented. By storing nearest-neighbor terms in the residual computation, the dual scheme is computationally efficient, while requiring twice the memory of the flow solution. The scheme is expected to have a broad impact on computational problems related to design optimization as well as error estimation and grid adaptation efforts.
Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, John E.; Sener, Melih; Vandivort, Kirby L.
The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing
Stone, John E.; Sener, Melih; Vandivort, Kirby L.; ...
2015-12-12
The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
Navier-Stokes calculations of scramjet-nozzle-afterbody flowfields
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1991-01-01
A comprehensive computational fluid dynamics effort was conducted from 1987 to 1990 to properly design the nozzle and lower aft end of a generic hypersonic vehicle powered by a scramjet engine. The interference of the exhaust on the control surfaces of the vehicle can have adverse effects on its stability. Two-dimensional Navier-Stokes computations were performed in which the exhaust gas was assumed to be air behaving as a perfect gas. The exhaust was then simulated by a mixture of Freon-12 and argon, which required solving the Navier-Stokes equations for four species (nitrogen, oxygen, Freon-12, and argon). This allowed gamma to be a field variable during the mixing of the multispecies gases. Two different mixing models were used, and comparisons between them, as well as with the perfect-gas air calculations, were made to assess their relative merits. Finally, three-dimensional Navier-Stokes computations were made for the full-span scramjet nozzle-afterbody module.
A database to enable discovery and design of piezoelectric materials
de Jong, Maarten; Chen, Wei; Geerlings, Henry; Asta, Mark; Persson, Kristin Aslaug
2015-01-01
Piezoelectric materials are used in numerous applications requiring a coupling between electric fields and mechanical strain. Despite the technological importance of this class of materials, piezoelectricity has been characterized experimentally or computationally for only a small fraction of the inorganic compounds that display compatible crystallographic symmetry. In this work we employ first-principles calculations based on density functional perturbation theory to compute the piezoelectric tensors for nearly a thousand compounds, thereby increasing the available data for this property by more than an order of magnitude. The results are compared to select experimental data to establish the accuracy of the calculated properties. The details of the calculations are also presented, along with a description of the format of the database developed to make these computational results publicly available. In addition, the ways in which the database can be accessed and applied in materials development efforts are described. PMID:26451252
Navier-Stokes calculations of scramjet-nozzle-afterbody flowfields
NASA Astrophysics Data System (ADS)
Baysal, Oktay
1991-07-01
A comprehensive computational fluid dynamics effort was conducted from 1987 to 1990 to properly design the nozzle and lower aft end of a generic hypersonic vehicle powered by a scramjet engine. The interference of the exhaust on the control surfaces of the vehicle can have adverse effects on its stability. Two-dimensional Navier-Stokes computations were performed in which the exhaust gas was assumed to be air behaving as a perfect gas. The exhaust was then simulated by a mixture of Freon-12 and argon, which required solving the Navier-Stokes equations for four species (nitrogen, oxygen, Freon-12, and argon). This allowed gamma to be a field variable during the mixing of the multispecies gases. Two different mixing models were used, and comparisons between them, as well as with the perfect-gas air calculations, were made to assess their relative merits. Finally, three-dimensional Navier-Stokes computations were made for the full-span scramjet nozzle-afterbody module.
Method and tool for network vulnerability analysis
Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM
2006-03-14
A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."
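As a rough illustration of the attack-graph idea described above (not the patented tool itself), the sketch below builds a small weighted digraph of attacker states and finds a lowest-effort path with a shortest-path search; the node names and effort weights are hypothetical.

```python
# Minimal, hypothetical attack-graph sketch: nodes are attacker states,
# directed edges are single attacker actions weighted by estimated effort,
# and low-effort attack paths are found with Dijkstra's algorithm.
import networkx as nx

G = nx.DiGraph()
G.add_edge("outside", "dmz_shell", weight=3.0)      # exploit public web server
G.add_edge("dmz_shell", "intranet", weight=5.0)     # pivot through firewall gap
G.add_edge("outside", "phished_user", weight=2.0)   # phishing e-mail
G.add_edge("phished_user", "intranet", weight=4.0)  # reuse stolen credentials
G.add_edge("intranet", "db_admin", weight=6.0)      # escalate to database admin

path = nx.dijkstra_path(G, "outside", "db_admin", weight="weight")
cost = nx.dijkstra_path_length(G, "outside", "db_admin", weight="weight")
print("lowest-effort attack path:", " -> ".join(path), "| total effort:", cost)
```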
TADS: A CFD-based turbomachinery and analysis design system with GUI. Volume 2: User's manual
NASA Technical Reports Server (NTRS)
Myers, R. A.; Topp, D. A.; Delaney, R. A.
1995-01-01
The primary objective of this study was the development of a computational fluid dynamics (CFD) based turbomachinery airfoil analysis and design system, controlled by a graphical user interface (GUI). The computer codes resulting from this effort are referred to as the Turbomachinery Analysis and Design System (TADS). This document is intended to serve as a user's manual for the computer programs which comprise the TADS system. TADS couples a throughflow solver (ADPAC) with a quasi-3D blade-to-blade solver (RVCQ3D) in an interactive package. Throughflow analysis capability was developed in ADPAC through the addition of blade force and blockage terms to the governing equations. A GUI was developed to simplify user input and automate the many tasks required to perform turbomachinery analysis and design. The coupling of various programs was done in a way that alternative solvers or grid generators could be easily incorporated into the TADS framework.
Statistical Methodologies to Integrate Experimental and Computational Research
NASA Technical Reports Server (NTRS)
Parker, P. A.; Johnson, R. T.; Montgomery, D. C.
2008-01-01
Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods for strategically and efficiently conducting experiments and computational model refinement, and the integration of experimental and computational research efforts is emphasized. From a statistical engineering perspective, scientific and engineering expertise is combined with the statistical sciences to gain deeper insight into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
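As a toy illustration of the response surface methodology mentioned above (not the coaxial free jet study itself), the sketch below fits a second-order response surface to invented data from a small two-factor design by least squares.

```python
# Hypothetical sketch: fit a quadratic response surface to data from a small
# two-factor designed experiment. The design points and responses are invented.
import numpy as np

# Face-centred design points in coded units (x1, x2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([8.2, 9.5, 7.9, 10.8, 8.5, 10.0, 8.8, 9.1, 9.3])  # measured response

x1, x2 = X[:, 0], X[:, 1]
# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted response-surface coefficients:", np.round(coef, 3))
```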
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters.
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Serrano, Alejandro; Godoy, Jorge; Martínez-Álvarez, Antonio; Villagra, Jorge
2017-11-11
Grid-based perception techniques, which fuse information from different sensors to obtain robust perceptions of the environment, are proliferating in the automotive industry. However, one of their main drawbacks is the prohibitively high computing performance traditionally required, which is difficult to provide in embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter: one for a General Purpose Graphics Processing Unit (GPGPU) and the other for a Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle.
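For context, a generic occupancy-grid update in log-odds form is sketched below; it illustrates the per-cell Bayesian computation that grid-based perception repeats over the whole grid every frame, but it is not the Bayesian Occupancy Filter itself (which additionally propagates cell velocities), and the sensor-model values are invented.

```python
# Minimal, generic occupancy-grid update in log-odds form.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

grid = np.zeros((200, 200))                 # log-odds, 0 => P(occupied) = 0.5
l_occ, l_free = logodds(0.7), logodds(0.3)  # illustrative inverse sensor model

def update_cell(grid, i, j, hit):
    """Fuse one measurement into cell (i, j) by adding its log-odds evidence."""
    grid[i, j] += l_occ if hit else l_free

# Example: one scan reports an obstacle at (50, 80) and free space at (50, 79)
update_cell(grid, 50, 80, hit=True)
update_cell(grid, 50, 79, hit=False)

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))     # convert log-odds back to probability
print("P(occupied) at (50, 80):", round(float(prob[50, 80]), 3))
```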
Dynamics of microtubules: highlights of recent computational and experimental investigations
NASA Astrophysics Data System (ADS)
Barsegov, Valeri; Ross, Jennifer L.; Dima, Ruxandra I.
2017-11-01
Microtubules are found in most eukaryotic cells, with homologs in eubacteria and archaea, and they have functional roles in mitosis, cell motility, intracellular transport, and the maintenance of cell shape. Numerous efforts have been expended over the last two decades to characterize the interactions between microtubules and the wide variety of microtubule-associated proteins that control their dynamic behavior in cells, resulting in microtubules being assembled and disassembled where and when the cell requires them. We present the main findings regarding microtubule polymerization and depolymerization and review recent work on the molecular motors that modulate microtubule dynamics by inducing either microtubule depolymerization or severing. We also discuss the main experimental and computational approaches used to quantify the thermodynamics and mechanics of microtubule filaments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high-performance computing platform capabilities on which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
Exploiting volatile opportunistic computing resources with Lobster
NASA Astrophysics Data System (ADS)
Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2015-12-01
Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
NASA Technical Reports Server (NTRS)
Slater, John W.; Liou, Meng-Sing; Hindman, Richard G.
1994-01-01
An approach is presented for the generation of two-dimensional, structured, dynamic grids. The grid motion may be due to the motion of the boundaries of the computational domain or to the adaptation of the grid to the transient, physical solution. A time-dependent grid is computed through the time integration of the grid speeds which are computed from a system of grid speed equations. The grid speed equations are derived from the time-differentiation of the grid equations so as to ensure that the dynamic grid maintains the desired qualities of the static grid. The grid equations are the Euler-Lagrange equations derived from a variational statement for the grid. The dynamic grid method is demonstrated for a model problem involving boundary motion, an inviscid flow in a converging-diverging nozzle during startup, and a viscous flow over a flat plate with an impinging shock wave. It is shown that the approach is more accurate for transient flows than an approach in which the grid speeds are computed using a finite difference with respect to time of the grid. However, the approach requires significantly more computational effort.
Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R
2000-10-01
In the 1980's, the visual display terminal (VDT) was introduced in workplaces of many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat panel display or notebook personal computer (PC) became the most remarkable feature in modern workplaces with VDTs and even in homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design, lighting conditions, among others, should be different from the CRT-based computers. The Japan Ergonomics Society (JES) technical committee came up with a set of guidelines for notebook PC use following exploratory discussions that dwelt on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.
Quantized Average Consensus on Gossip Digraphs with Reduced Computation
NASA Astrophysics Data System (ADS)
Cai, Kai; Ishii, Hideaki
The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm whose primary feature is a reduction in both computation and communication effort: each node needs to update fewer local variables and can transmit its surplus using only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. This strong-connectivity condition is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the network to have the special structure known as balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
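For orientation, the following minimal sketch simulates the classical real-valued pairwise gossip averaging baseline on a ring network; the surplus-based algorithm described above extends this idea to integer-valued states on directed graphs, which this sketch does not attempt to reproduce. Network size and initial states are arbitrary.

```python
# Classical randomized pairwise gossip averaging on an undirected ring:
# each step averages two neighbouring states, preserving the global sum,
# so all states converge to the true average.
import random

n = 8
state = [float(i) for i in range(n)]           # initial node values 0..7
target = sum(state) / n                        # true average = 3.5

random.seed(0)
for _ in range(2000):
    i = random.randrange(n)
    j = (i + 1) % n                            # gossip with a ring neighbour
    avg = (state[i] + state[j]) / 2.0
    state[i] = state[j] = avg                  # pairwise averaging step

print("target:", target)
print("max deviation after gossip:", max(abs(x - target) for x in state))
```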
NASA Technical Reports Server (NTRS)
Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol
2003-01-01
The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from component models - atmosphere, ocean, ice, land, chemistry, solid earth, and so on - merged together through a coupling program responsible for the exchange of data among the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in OSSE experiments to evaluate proposed observing systems. The computing and storage requirements for the ESM appear daunting. However, the theoretical computing capability of the Japanese Earth Simulator is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.
A digital waveguide-based approach for Clavinet modeling and synthesis
NASA Astrophysics Data System (ADS)
Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan
2013-12-01
The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and especially memory requirements (compared, e.g., to sample playback systems). Pickups and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. A model has been implemented on a real-time software platform, Pure Data, capable of a 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted using the current model are compared to previous tests showing slightly improved results.
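As a minimal illustration of the delay-line-plus-loop-filter structure underlying digital waveguide string models, the sketch below implements a basic Karplus-Strong plucked string; it is not the Clavinet model of the paper, and the sample rate, pitch, and damping constant are arbitrary.

```python
# Minimal plucked-string synthesis in the digital-waveguide spirit
# (Karplus-Strong): a delay line initialised with a noise burst is
# recirculated through a damping loop filter.
import numpy as np

fs, f0, dur = 44100, 220.0, 1.0            # sample rate, pitch (Hz), seconds
N = int(round(fs / f0))                    # delay-line length sets the pitch

rng = np.random.default_rng(0)
delay = rng.uniform(-1.0, 1.0, N)          # noise burst models the excitation
out = np.empty(int(fs * dur))

for n in range(out.size):
    out[n] = delay[0]
    # loop filter: averaging two samples damps high frequencies each pass
    new_sample = 0.996 * 0.5 * (delay[0] + delay[1])
    delay = np.roll(delay, -1)             # advance the delay line
    delay[-1] = new_sample

print("synthesised", out.size, "samples at", fs, "Hz, pitch ~", fs / N, "Hz")
```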
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Guthrie, Michael A.
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
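As a hypothetical illustration of the reliability index itself, the sketch below evaluates the Hasofer-Lind index for the simplest case of a linear limit state g = C - D with independent normal capacity and demand, and checks it against Monte Carlo sampling; the means and standard deviations are invented, with the energy-based quantities of the report playing the roles of C and D.

```python
# Hasofer-Lind reliability index for a linear limit state g = C - D with
# independent normal variables, plus a Monte Carlo check of the failure
# probability. All numbers are illustrative.
import numpy as np
from scipy.stats import norm

mu_C, sd_C = 10.0, 1.5      # capacity, e.g. modal strain energy at failure
mu_D, sd_D = 6.0, 2.0       # demand, e.g. peak modal strain energy

beta = (mu_C - mu_D) / np.hypot(sd_C, sd_D)   # closed form for the linear case
pf_form = norm.cdf(-beta)                     # first-order failure probability

rng = np.random.default_rng(1)
n = 200_000
g = rng.normal(mu_C, sd_C, n) - rng.normal(mu_D, sd_D, n)
pf_mc = np.mean(g < 0.0)

print(f"beta = {beta:.3f}, Pf(FORM) = {pf_form:.4f}, Pf(MC) = {pf_mc:.4f}")
```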
Continuum and discrete approach in modeling biofilm development and structure: a review.
Mattei, M R; Frunzo, L; D'Acunto, B; Pechaud, Y; Pirozzi, F; Esposito, G
2018-03-01
The scientific community has recognized that almost 99% of the microbial life on earth is represented by biofilms. Considering the impacts of their sessile lifestyle on both natural and human activities, extensive experimental activity has been carried out to understand how biofilms grow and interact with the environment. Many mathematical models have also been developed to simulate and elucidate the main processes characterizing the biofilm growth. Two main mathematical approaches for biomass representation can be distinguished: continuum and discrete. This review is aimed at exploring the main characteristics of each approach. Continuum models can simulate the biofilm processes in a quantitative and deterministic way. However, they require a multidimensional formulation to take into account the biofilm spatial heterogeneity, which makes the models quite complicated, requiring significant computational effort. Discrete models are more recent and can represent the typical multidimensional structural heterogeneity of biofilm reflecting the experimental expectations, but they generate computational results including elements of randomness and introduce stochastic effects into the solutions.
Microprocessor control and networking for the amps breadboard
NASA Technical Reports Server (NTRS)
Floyd, Stephen A.
1987-01-01
Future space missions will require more sophisticated power systems, implying higher costs and more extensive crew and ground support involvement. To decrease this human involvement, as well as to protect and most efficiently utilize this important resource, NASA has undertaken major efforts to promote progress in the design and development of autonomously managed power systems. Two areas being actively pursued are autonomous power system (APS) breadboards and knowledge-based expert system (KBES) applications. The former are viewed as a requirement for the timely development of the latter. Not only will they serve as final testbeds for the various KBES applications, but will play a major role in the knowledge engineering phase of their development. The current power system breadboard designs are of a distributed microprocessor nature. The distributed nature, plus the need to connect various external computer capabilities (i.e., conventional host computers and symbolic processors), places major emphasis on effective networking. The communications and networking technologies for the first power system breadboard/test facility are described.
UWGSP6: a diagnostic radiology workstation of the future
NASA Astrophysics Data System (ADS)
Milton, Stuart W.; Han, Sang; Choi, Hyung-Sik; Kim, Yongmin
1993-06-01
The Univ. of Washington's Image Computing Systems Lab. (ICSL) has been involved in research into the development of a series of PACS workstations since the middle 1980's. The most recent research, a joint UW-IBM project, attempted to create a diagnostic radiology workstation using an IBM RISC System 6000 (RS6000) computer workstation and the X-Window system. While the results are encouraging, there are inherent limitations in the workstation hardware which prevent it from providing an acceptable level of functionality for diagnostic radiology. Realizing the RS6000 workstation's limitations, a parallel effort was initiated to design a workstation, UWGSP6 (Univ. of Washington Graphics System Processor #6), that provides the required functionality. This paper documents the design of UWGSP6, which not only addresses the requirements for a diagnostic radiology workstation in terms of display resolution, response time, etc., but also includes the processing performance necessary to support key functions needed in the implementation of algorithms for computer-aided diagnosis. The paper includes a description of the workstation architecture, and specifically its image processing subsystem. Verification of the design through hardware simulation is then discussed, and finally, performance of selected algorithms based on detailed simulation is provided.
Landázuri, Andrea C.; Sáez, A. Eduardo; Anthony, T. Renée
2016-01-01
This work presents fluid flow and particle trajectory simulation studies to determine the aspiration efficiency of a horizontally oriented occupational air sampler using computational fluid dynamics (CFD). Grid adaption and manual scaling of the grids were applied to two sampler prototypes based on a 37-mm cassette. The standard k–ε model was used to simulate the turbulent air flow, and a second-order streamline-upwind discretization scheme was used to stabilize the convective terms of the Navier–Stokes equations. Successively scaled grids for each configuration were created manually and by means of grid adaption using the velocity gradient in the main flow direction. Solutions were verified to assess iterative convergence, grid independence and monotonic convergence. Particle aspiration efficiencies determined for the two prototype samplers were indistinguishable, indicating that the porous filter does not play a noticeable role in particle aspiration. The results show that grid adaption is a powerful tool that allows regions requiring fine detail to be selectively refined, thereby better resolving the flow. It was verified that adaptive grids provided a higher number of locations with monotonic convergence than the manual grids and required the least computational effort. PMID:26949268
León-Vargas, Fabian; Calm, Remei; Bondia, Jorge; Vehí, Josep
2012-01-01
Objective: Set-inversion-based prandial insulin delivery is a new model-based bolus advisor for postprandial glucose control in type 1 diabetes mellitus (T1DM). It automatically coordinates the values of basal–bolus insulin to be infused during the postprandial period so as to achieve some predefined control objectives. However, the method requires an excessive computation time to compute the solution set of feasible insulin profiles, which impedes its integration into an insulin pump. In this work, a new algorithm is presented, which reduces computation time significantly and enables the integration of this new bolus advisor into current processing features of smart insulin pumps. Methods: A new strategy was implemented that focused on finding the combined basal–bolus solution of interest rather than an extensive search of the feasible set of solutions. Analysis of interval simulations, inclusion of physiological assumptions, and search domain contractions were used. Data from six real patients with T1DM were used to compare the performance between the optimized and the conventional computations. Results: In all cases, the optimized version yielded the basal–bolus combination recommended by the conventional method and in only 0.032% of the computation time. Simulations show that the mean number of iterations for the optimized computation requires approximately 3.59 s at 20 MHz processing power, in line with current features of smart pumps. Conclusions: A computationally efficient method for basal–bolus coordination in postprandial glucose control has been presented and tested. The results indicate that an embedded algorithm within smart insulin pumps is now feasible. Nonetheless, we acknowledge that a clinical trial will be needed in order to justify this claim. PMID:23294789
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie
Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.
Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie; ...
2016-11-01
Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time-consuming, inconvenient, and prone to human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to solve for the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Compared in theory with direct-coupled methods, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time-delays.
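To illustrate the direct method in its simplest setting (an ODE rather than a DDE, with the Jacobian terms written by hand rather than generated by automatic differentiation), the sketch below integrates a state equation together with its parameter sensitivity and compares the result with the analytic derivative.

```python
# Direct-method sensitivity for dx/dt = -p*x: augment the state with the
# sensitivity s = dx/dp, whose equation ds/dt = -x - p*s follows from
# differentiating the right-hand side with respect to p, and integrate both.
import numpy as np
from scipy.integrate import solve_ivp

p, x0 = 0.8, 2.0

def rhs(t, y):
    x, s = y
    return [-p * x, -x - p * s]        # state equation and sensitivity equation

sol = solve_ivp(rhs, (0.0, 5.0), [x0, 0.0], rtol=1e-8, atol=1e-10)
t_end = sol.t[-1]
s_numeric = sol.y[1, -1]
s_exact = -t_end * x0 * np.exp(-p * t_end)   # analytic dx/dp for comparison

print(f"dx/dp at t={t_end:.1f}: numeric {s_numeric:.6f}, exact {s_exact:.6f}")
```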
Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele
QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a historymore » of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (\\url{muq.mit.edu}).« less
NiftyNet: a deep-learning platform for medical imaging.
Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom
2018-05-01
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Leuca, Maxim
CFD (Computational Fluid Dynamics) is a computational tool for studying flow in science and technology. The aerospace industry increasingly uses CFD in the modeling and design phases of aircraft, so the precision with which boundary-layer phenomena are simulated is very important. Research efforts are focused on optimizing the aerodynamic performance of airfoils to predict drag and delay the laminar-turbulent transition. CFD codes must be fast and efficient to model complex geometries for aerodynamic flows. Resolving the boundary-layer equations for viscous flows requires a large amount of computing resources: the CFD codes commonly used to simulate aerodynamic flows require extremely fine meshes normal to the wall, and consequently the calculations are very expensive. This thesis proposes a new approach to solve the boundary-layer equations for laminar and turbulent flows based on the finite difference method. Integrated into a panel code, this concept allows airfoils to be solved without resorting to iterative algorithms, which are usually costly in computing time and often suffer from convergence problems. The main advantages of panel methods are their simplicity and their ability to obtain, with minimal computational effort, solutions in complex flow conditions for relatively complicated configurations. To verify and validate the developed program, experimental data are used as references when available. The Xfoil code is used to provide pseudo-reference data; it is called a pseudo-reference because, in the absence of experimental data, two codes cannot truly be compared against each other. Xfoil is a program that has proven to be accurate and inexpensive in computing resources. Developed by Drela (1985), it uses an integral boundary-layer method to design and analyze low-speed wing profiles (Drela and Youngren, 2014; Drela, 2003). The NACA 0012, NACA 4412, and ATR-42 airfoils were used for this study. For the NACA 0012 and NACA 4412 airfoils, the calculations are made at Mach number M = 0.17 and Reynolds number Re = 6x10^6, conditions for which experimental results are available. For the ATR-42 airfoil, the calculations are made at Mach number M = 0.1 and Reynolds number Re = 536450, as it was analysed in LARCASE's Price-Paidoussis wind tunnel. Keywords: boundary layer, direct method, displacement thickness, finite differences, Xfoil code.
NASA Astrophysics Data System (ADS)
Lhamon, Michael Earl
A pattern recognition system that uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider mapping the algorithms to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which in turn suggest new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern, whose orientation is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems; such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulty implementing full complex filter structures. Typically, optical systems (like 4f correlators) are limited to phase-only implementations with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible and has the advantage of time-averaging the entire filter bank at real-time rates; time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.
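As a toy sketch of the inner-product idea (not the authors' vector inner product operators or Super Images), the code below represents a few rotated versions of a template as a filter bank and classifies an input by computing one inner product per filter, with no Fourier transforms; the template and rotations are synthetic.

```python
# Filter-bank classification by inner products: each candidate distortion
# (here, a 90-degree rotation of a small template) is one filter, and the
# estimated distortion is the filter with the largest inner product.
import numpy as np

rng = np.random.default_rng(2)
template = rng.standard_normal((8, 8))

# Build the bank: the template at four rotations, flattened and normalised.
bank = []
for k in range(4):
    f = np.rot90(template, k).ravel()
    bank.append(f / np.linalg.norm(f))
bank = np.array(bank)                       # shape (4, 64)

true_rotation = 3
scene = np.rot90(template, true_rotation).ravel()
scene = scene / np.linalg.norm(scene)

scores = bank @ scene                       # one inner product per filter
print("scores per rotation:", np.round(scores, 3))
print("estimated rotation index:", int(np.argmax(scores)))
```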
Integrated modeling tool for performance engineering of complex computer systems
NASA Technical Reports Server (NTRS)
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Simulation in pediatric anesthesiology.
Fehr, James J; Honkanen, Anita; Murray, David J
2012-10-01
Simulation-based training, research and quality initiatives are expanding in pediatric anesthesiology just as in other medical specialties. Various modalities are available, from task trainers to standardized patients, and from computer-based simulations to mannequins. Computer-controlled mannequins can simulate pediatric vital signs with reasonable reliability; however the fidelity of skin temperature and color change, airway reflexes and breath and heart sounds remains rudimentary. Current pediatric mannequins are utilized in simulation centers, throughout hospitals in-situ, at national meetings for continuing medical education and in research into individual and team performance. Ongoing efforts by pediatric anesthesiologists dedicated to using simulation to improve patient care and educational delivery will result in further dissemination of this technology. Health care professionals who provide complex, subspecialty care to children require a curriculum supported by an active learning environment where skills directly relevant to pediatric care can be developed. The approach is not only the most effective method to educate adult learners, but meets calls for education reform and offers the potential to guide efforts toward evaluating competence. Simulation addresses patient safety imperatives by providing a method for trainees to develop skills and experience in various management strategies, without risk to the health and life of a child. A curriculum that provides pediatric anesthesiologists with the range of skills required in clinical practice settings must include a relatively broad range of task-training devises and electromechanical mannequins. Challenges remain in defining the best integration of this modality into training and clinical practice to meet the needs of pediatric patients. © 2012 Blackwell Publishing Ltd.
Measurement Requirements for Improved Modeling of Arcjet Facility Flows
NASA Technical Reports Server (NTRS)
Fletcher, Douglas G.
2000-01-01
Current efforts to develop new reusable launch vehicles and to pursue low-cost robotic planetary missions have led to a renewed interest in understanding arc-jet flows. Part of this renewed interest is concerned with improving the understanding of arc-jet test results and the potential use of available computational-fluid-dynamic (CFD) codes to aid in this effort. These CFD codes have been extensively developed and tested for application to nonequilibrium, hypersonic flow modeling. It is envisioned, perhaps naively, that the application of these CFD codes to the simulation of arc-jet flows would serve two purposes: first, the codes would help to characterize the nonequilibrium nature of the arc-jet flows; and second, arc-jet experiments could potentially be used to validate the flow models. These two objectives are, to some extent, mutually exclusive. However, the purpose of the present discussion is to address what role CFD codes can play in the current arc-jet flow characterization effort, and whether or not the simulation of arc-jet facility tests can be used to evaluate some of the modeling that is used to formulate these codes. This presentation is organized into several sections. In the introductory section, the development of large-scale, constricted-arc test facilities within NASA is reviewed, and the current state of flow diagnostics using conventional instrumentation is summarized. The motivation for using CFD to simulate arc-jet flows is addressed in the next section, and the basic requirements for CFD models that would be used for these simulations are briefly discussed. This section is followed by a more detailed description of experimental measurements that are needed to initiate credible simulations and to evaluate their fidelity in the different flow regions of an arc-jet facility. Observations from a recent combined computational and experimental investigation of shock-layer flows in a large-scale arc-jet facility are then used to illustrate the current state of development of diagnostic instrumentation, CFD simulations, and general knowledge in the field of arc-jet characterization. Finally, the main points are summarized and recommendations for future efforts are given.
Computer-assisted learning and simulation systems in dentistry--a challenge to society.
Welk, A; Splieth, Ch; Wierinck, E; Gilpatrick, R O; Meyer, G
2006-07-01
Computer technology is increasingly used in practical training at universities. However, in spite of their potential, computer-assisted learning (CAL) and computer-assisted simulation (CAS) systems still appear to be underutilized in dental education. Advantages, challenges, problems, and solutions of computer-assisted learning and simulation in dentistry are discussed by means of MEDLINE, open Internet platform searches, and key results of a study among German dental schools. The advantages of computer-assisted learning are seen, for example, in self-paced and self-directed learning and increased motivation. It is useful for both objective theoretical and practical tests and for training students to handle complex cases. CAL can lead to more structured learning and can support training in evidence-based decision-making. The reasons for the still relatively rare implementation of CAL/CAS systems in dental education include an inability to finance, a lack of studies of CAL/CAS, and too much effort required to integrate CAL/CAS systems into the curriculum. To overcome the reasons for the relatively low degree of computer technology use, we should strive for multicenter research and development projects monitored by the appropriate national and international scientific societies, so that the potential of computer technology can be fully realized in graduate, postgraduate, and continuing dental education.
Zénon, Alexandre; Duclos, Yann; Carron, Romain; Witjas, Tatiana; Baunez, Christelle; Régis, Jean; Azulay, Jean-Philippe; Brown, Peter; Eusebio, Alexandre
2016-06-01
Adaptive behaviour entails the capacity to select actions as a function of their energy cost and expected value and the disruption of this faculty is now viewed as a possible cause of the symptoms of Parkinson's disease. Indirect evidence points to the involvement of the subthalamic nucleus-the most common target for deep brain stimulation in Parkinson's disease-in cost-benefit computation. However, this putative function appears at odds with the current view that the subthalamic nucleus is important for adjusting behaviour to conflict. Here we tested these contrasting hypotheses by recording the neuronal activity of the subthalamic nucleus of patients with Parkinson's disease during an effort-based decision task. Local field potentials were recorded from the subthalamic nucleus of 12 patients with advanced Parkinson's disease (mean age 63.8 years ± 6.8; mean disease duration 9.4 years ± 2.5) both OFF and ON levodopa while they had to decide whether to engage in an effort task based on the level of effort required and the value of the reward promised in return. The data were analysed using generalized linear mixed models and cluster-based permutation methods. Behaviourally, the probability of trial acceptance increased with the reward value and decreased with the required effort level. Dopamine replacement therapy increased the rate of acceptance for efforts associated with low rewards. When recording the subthalamic nucleus activity, we found a clear neural response to both reward and effort cues in the 1-10 Hz range. In addition these responses were informative of the subjective value of reward and level of effort rather than their actual quantities, such that they were predictive of the participant's decisions. OFF levodopa, this link with acceptance was weakened. Finally, we found that these responses did not index conflict, as they did not vary as a function of the distance from indifference in the acceptance decision. These findings show that low-frequency neuronal activity in the subthalamic nucleus may encode the information required to make cost-benefit comparisons, rather than signal conflict. The link between these neural responses and behaviour was stronger under dopamine replacement therapy. Our findings are consistent with the view that Parkinson's disease symptoms may be caused by a disruption of the processes involved in balancing the value of actions with their associated effort cost. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Zénon, Alexandre; Duclos, Yann; Carron, Romain; Witjas, Tatiana; Baunez, Christelle; Régis, Jean; Azulay, Jean-Philippe; Brown, Peter; Eusebio, Alexandre
2016-01-01
Adaptive behaviour entails the capacity to select actions as a function of their energy cost and expected value and the disruption of this faculty is now viewed as a possible cause of the symptoms of Parkinson’s disease. Indirect evidence points to the involvement of the subthalamic nucleus—the most common target for deep brain stimulation in Parkinson’s disease—in cost-benefit computation. However, this putative function appears at odds with the current view that the subthalamic nucleus is important for adjusting behaviour to conflict. Here we tested these contrasting hypotheses by recording the neuronal activity of the subthalamic nucleus of patients with Parkinson’s disease during an effort-based decision task. Local field potentials were recorded from the subthalamic nucleus of 12 patients with advanced Parkinson’s disease (mean age 63.8 years ± 6.8; mean disease duration 9.4 years ± 2.5) both OFF and ON levodopa while they had to decide whether to engage in an effort task based on the level of effort required and the value of the reward promised in return. The data were analysed using generalized linear mixed models and cluster-based permutation methods. Behaviourally, the probability of trial acceptance increased with the reward value and decreased with the required effort level. Dopamine replacement therapy increased the rate of acceptance for efforts associated with low rewards. When recording the subthalamic nucleus activity, we found a clear neural response to both reward and effort cues in the 1–10 Hz range. In addition these responses were informative of the subjective value of reward and level of effort rather than their actual quantities, such that they were predictive of the participant’s decisions. OFF levodopa, this link with acceptance was weakened. Finally, we found that these responses did not index conflict, as they did not vary as a function of the distance from indifference in the acceptance decision. These findings show that low-frequency neuronal activity in the subthalamic nucleus may encode the information required to make cost-benefit comparisons, rather than signal conflict. The link between these neural responses and behaviour was stronger under dopamine replacement therapy. Our findings are consistent with the view that Parkinson’s disease symptoms may be caused by a disruption of the processes involved in balancing the value of actions with their associated effort cost. PMID:27190012
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
Bacteria as computers making computers
Danchin, Antoine
2009-01-01
Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. The systems biology must include the dynamics of information creation in its future developments. PMID:19016882
Bacteria as computers making computers.
Danchin, Antoine
2009-01-01
Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. The systems biology must include the dynamics of information creation in its future developments.
Computational Methods for Stability and Control (COMSAC): The Time Has Come
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.
2005-01-01
Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts are summarized, along with examples of current applications.
Central American information system for energy planning (in English; Spanish)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonseca, M.G.; Lyon, P.C.; Heskett, J.C.
1991-04-01
SICAPE (Sistema de Información Centroamericano para Planificación Energética) is an expandable information system designed for energy planning. Its objective is to satisfy ongoing information requirements by means of a menu-driven operating environment. SICAPE is as easily used by novice computer users as by those with more experience. Moreover, the system is capable of evolving concurrently with the future requirements of the individual country. The expansion is accomplished by menu restructuring as data and user requirements change. The new menu configurations require no programming effort. The use and modification of SICAPE are separate menu-driven processes that allow for rapid data query, minimal training, and effortless continued growth. SICAPE's data are organized by country or region. Information is available in the following areas: energy balance, macroeconomics, electricity generation capacity, and electricity and petroleum product pricing. (JF)
Thick Galactic Cosmic Radiation Shielding Using Atmospheric Data
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Nurge, Mark A.; Starr, Stanley O.; Koontz, Steven L.
2013-01-01
NASA is concerned with protecting astronauts from the effects of galactic cosmic radiation and has expended substantial effort in the development of computer models to predict the shielding obtained from various materials. However, these models were only developed for shields up to about 120 g/cm2 in thickness and have predicted that shields of this thickness are insufficient to provide adequate protection for extended deep space flights. Consequently, effort is underway to extend the range of these models to thicker shields, and experimental data are required to help confirm the resulting code. In this paper, empirically obtained effective dose measurements from aircraft flights in the atmosphere are used to obtain the radiation shielding function of the earth's atmosphere, a very thick shield. Obtaining this result required solving an inverse problem, and the method for solving it is presented. The results are shown to be in agreement with current code in the ranges where they overlap. These results are then checked and used to predict the radiation dosage under thick shields such as planetary regolith and the atmosphere of Venus.
Model-based verification and validation of the SMAP uplink processes
NASA Astrophysics Data System (ADS)
Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.
Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.
Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.
Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L
2017-01-01
A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction, using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategies are 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
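To make the shortcut concrete, the sketch below shows the standard single-fit leave-one-out trick for a ridge-type (SNP-BLUP-like) model: the LOO prediction error of each observation is obtained from the full-data residual and the corresponding diagonal element of the hat matrix. This is an illustrative reading of the general idea, not the authors' exact derivation; the function name, data sizes, and penalty value are hypothetical.

```python
# Illustrative sketch: LOO residuals for a ridge/SNP-BLUP-type model from one fit,
# avoiding n separate training runs (assumed generic approach, not the paper's code).
import numpy as np

def loo_residuals_ridge(X, y, lam):
    """LOO prediction errors for y ~ X b with ridge penalty lam (hypothetical helper)."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    H = X @ np.linalg.solve(G, X.T)        # hat matrix H = X (X'X + lam I)^{-1} X'
    resid = y - H @ y                      # residuals from the single full-data fit
    h = np.diag(H)                         # leverages
    return resid / (1.0 - h)               # classic shortcut: e_i / (1 - h_ii)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))            # 1,000 observations, 100 markers (toy sizes)
y = 0.5 * X[:, 0] + rng.normal(size=1000)
press = np.sum(loo_residuals_ridge(X, y, lam=10.0) ** 2)   # predictive-ability proxy
print("PRESS:", press)
```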
The limits of intelligence in design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papamichael, K.; Protzen, J.P.
1993-05-01
A new, comprehensive design theory is presented, applicable to all design domains such as engineering and industrial design, architecture, city and regional planning, and, in general, any goal-oriented activity that involves decision making. The design process is analyzed into fundamental activities that are characterized with respect to the nature of knowledge requirements and the degree to which they can be specified and delegated to others, in general, and to computers in particular. Despite the characterization of design problems as "wicked" or "ill-defined," design has traditionally been understood as a rational activity, that is, "thinking before acting." The new theory presented in this paper suggests that design is "thinking and feeling while acting," supporting the position that design is only partially rational. Intelligence, "natural" or "artificial," is only one of two requirements for design, the other being emotions. Design decisions are only partially inferred, that is, they are not entirely the product of reasoning. Rather, design decisions are based on judgment that requires the notion of "good" and "bad," which is attributed to feelings, rather than thoughts. The presentation of the design theory extends to the implications associated with the limits of intelligence in design, which, in turn, become constraints on the potential role of computers in design. Many of the current development efforts in computer-aided design violate these constraints, especially in the implementation of expert systems and multi-criterion evaluation models. These violations are identified and discussed in detail. Finally, specific areas for further research and development in computer-aided design are presented and discussed.
A compendium of computational fluid dynamics at the Langley Research Center
NASA Technical Reports Server (NTRS)
1980-01-01
Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley is identified. These summaries will help inform CFD researchers and line management at Langley of the overall effort. In addition to the in-house efforts, out-of-house CFD work supported by Langley through industrial contracts and university grants is included. Researchers were encouraged to include summaries of work in preliminary and tentative stages of development as well as current research approaching definitive results.
Askay, Shelley Wiechman; Patterson, David R.; Sharar, Sam R.
2010-01-01
Scientific evidence for the viability of hypnosis as a treatment for pain has flourished over the past two decades (Rainville, Duncan, Price, Carrier and Bushnell, 1997; Montgomery, DuHamel and Redd, 2000; Lang and Rosen, 2002; Patterson and Jensen, 2003). However its widespread use has been limited by factors such as the advanced expertise, time and effort required by clinicians to provide hypnosis, and the cognitive effort required by patients to engage in hypnosis. The theory in developing virtual reality hypnosis was to apply three-dimensional, immersive, virtual reality technology to guide the patient through the same steps used when hypnosis is induced through an interpersonal process. Virtual reality replaces many of the stimuli that the patients have to struggle to imagine via verbal cueing from the therapist. The purpose of this paper is to explore how virtual reality may be useful in delivering hypnosis, and to summarize the scientific literature to date. We will also explore various theoretical and methodological issues that can guide future research. In spite of the encouraging scientific and clinical findings, hypnosis for analgesia is not universally used in medical centres. One reason for the slow acceptance is the extensive provider training required in order for hypnosis to be an effective pain management modality. Training in hypnosis is not commonly offered in medical schools or even psychology graduate curricula. Another reason is that hypnosis requires far more time and effort to administer than an analgesic pill or injection. Hypnosis requires training, skill and patience to deliver in medical centres that are often fast-paced and highly demanding of clinician time. Finally, the attention and cognitive effort required for hypnosis may be more than patients in an acute care setting, who may be under the influence of opiates and benzodiazepines, are able to impart. It is a challenge to make hypnosis a standard part of care in this environment. Over the past 25 years, researchers have been investigating ways to make hypnosis more standardized and accessible. There have been a handful of studies that have looked at the efficacy of using audiotapes to provide the hypnotic intervention (Johnson and Wiese, 1979; Hart, 1980; Block, Ghoneim, Sum Ping and Ali, 1991; Enqvist, Bjorklund, Engman and Jakobsson, 1997; Eberhart, Doring, Holzrichter, Roscher and Seeling, 1998; Perugini, Kirsch, Allen, et al., 1998; Forbes, MacAuley, Chiotakakou-Faliakou, 2000; Ghoneim, Block, Sarasin, Davis and Marchman, 2000). These studies have yielded mixed results. Generally, we can conclude that audio-taped hypnosis is more effective than no treatment at all, but less effective than the presence of a live hypnotherapist. Grant and Nash (1995) were the first to use computer-assisted hypnosis as a behavioural measure to assess hypnotizability. They used a digitized voice that guided subjects through a procedure and tailored software according to the subject’s unique responses and reactions. However, it utilized conventional two-dimensional screen technology that required patients to focus their attention on a computer screen, making them vulnerable to any type of distraction that might enter the environment. Further, the two-dimensional technology did not present compelling visual stimuli for capturing the user’s attention. PMID:20737029
Askay, Shelley Wiechman; Patterson, David R; Sharar, Sam R
2009-03-01
Scientific evidence for the viability of hypnosis as a treatment for pain has flourished over the past two decades (Rainville, Duncan, Price, Carrier and Bushnell, 1997; Montgomery, DuHamel and Redd, 2000; Lang and Rosen, 2002; Patterson and Jensen, 2003). However its widespread use has been limited by factors such as the advanced expertise, time and effort required by clinicians to provide hypnosis, and the cognitive effort required by patients to engage in hypnosis. The theory in developing virtual reality hypnosis was to apply three-dimensional, immersive, virtual reality technology to guide the patient through the same steps used when hypnosis is induced through an interpersonal process. Virtual reality replaces many of the stimuli that the patients have to struggle to imagine via verbal cueing from the therapist. The purpose of this paper is to explore how virtual reality may be useful in delivering hypnosis, and to summarize the scientific literature to date. We will also explore various theoretical and methodological issues that can guide future research. In spite of the encouraging scientific and clinical findings, hypnosis for analgesia is not universally used in medical centres. One reason for the slow acceptance is the extensive provider training required in order for hypnosis to be an effective pain management modality. Training in hypnosis is not commonly offered in medical schools or even psychology graduate curricula. Another reason is that hypnosis requires far more time and effort to administer than an analgesic pill or injection. Hypnosis requires training, skill and patience to deliver in medical centres that are often fast-paced and highly demanding of clinician time. Finally, the attention and cognitive effort required for hypnosis may be more than patients in an acute care setting, who may be under the influence of opiates and benzodiazepines, are able to impart. It is a challenge to make hypnosis a standard part of care in this environment. Over the past 25 years, researchers have been investigating ways to make hypnosis more standardized and accessible. There have been a handful of studies that have looked at the efficacy of using audiotapes to provide the hypnotic intervention (Johnson and Wiese, 1979; Hart, 1980; Block, Ghoneim, Sum Ping and Ali, 1991; Enqvist, Bjorklund, Engman and Jakobsson, 1997; Eberhart, Doring, Holzrichter, Roscher and Seeling, 1998; Perugini, Kirsch, Allen, et al., 1998; Forbes, MacAuley, Chiotakakou-Faliakou, 2000; Ghoneim, Block, Sarasin, Davis and Marchman, 2000). These studies have yielded mixed results. Generally, we can conclude that audio-taped hypnosis is more effective than no treatment at all, but less effective than the presence of a live hypnotherapist. Grant and Nash (1995) were the first to use computer-assisted hypnosis as a behavioural measure to assess hypnotizability. They used a digitized voice that guided subjects through a procedure and tailored software according to the subject's unique responses and reactions. However, it utilized conventional two-dimensional screen technology that required patients to focus their attention on a computer screen, making them vulnerable to any type of distraction that might enter the environment. Further, the two-dimensional technology did not present compelling visual stimuli for capturing the user's attention.
Motivational Beliefs, Student Effort, and Feedback Behaviour in Computer-Based Formative Assessment
ERIC Educational Resources Information Center
Timmers, Caroline F.; Braber-van den Broek, Jannie; van den Berg, Stephanie M.
2013-01-01
Feedback can only be effective when students seek feedback and process it. This study examines the relations between students' motivational beliefs, effort invested in a computer-based formative assessment, and feedback behaviour. Feedback behaviour is represented by whether a student seeks feedback and the time a student spends studying the…
Establishing a K-12 Circuit Design Program
ERIC Educational Resources Information Center
Inceoglu, Mustafa M.
2010-01-01
Outreach, as defined by Wikipedia, is an effort by an organization or group to connect its ideas or practices to the efforts of other organizations, groups, specific audiences, or the general public. This paper describes a computer engineering outreach project of the Department of Computer Engineering at Ege University, Izmir, Turkey, to a local…
ERIC Educational Resources Information Center
Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.
2009-01-01
Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…
Digital data collection in paleoanthropology.
Reed, Denné; Barr, W Andrew; Mcpherron, Shannon P; Bobe, René; Geraads, Denis; Wynn, Jonathan G; Alemseged, Zeresenay
2015-01-01
Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e-science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open-source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open-source geospatial software and mobile computing. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian
2016-05-01
Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three dimensional (3D).
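As a rough illustration of the patch shapes being compared, the snippet below slices a hyperspectral cube into 1D spectra (per spatial pixel), 2D row slices, and small 3D sub-cubes. The cube dimensions and patch sizes are placeholders, not the actual CS-MUSI parameters, and the compressive reconstruction step itself is omitted.

```python
# Sketch of 1D / 2D / 3D patch extraction from a (rows x cols x bands) cube; sizes are
# illustrative only, and each patch would be reconstructed independently in practice.
import numpy as np

cube = np.zeros((64, 64, 391))                       # rows x cols x spectral bands (example)

patches_1d = cube.reshape(-1, cube.shape[2])         # one spectrum per spatial pixel
patches_2d = [cube[r, :, :] for r in range(cube.shape[0])]   # one spatial row at a time
patches_3d = [cube[r:r + 8, c:c + 8, :]              # 8 x 8 x bands sub-cubes
              for r in range(0, cube.shape[0], 8)
              for c in range(0, cube.shape[1], 8)]
print(len(patches_1d), len(patches_2d), len(patches_3d))
```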
Accelerating deep neural network training with inconsistent stochastic gradient descent.
Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat
2017-09-01
Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once in an epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, renders different training dynamics on batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort with respect to the loss. ISGD models the training as a stochastic process that gradually reduces the mean of the batch loss, and it utilizes a dynamic upper control limit to identify a large-loss batch on the fly. ISGD stays on the identified batch to accelerate the training with additional gradient updates, and it also has a constraint to penalize drastic parameter changes. ISGD is straightforward, computationally efficient, and requires no auxiliary memory. A series of empirical evaluations on real-world datasets and networks demonstrate the promising performance of inconsistent training. Copyright © 2017 Elsevier Ltd. All rights reserved.
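The following toy loop sketches one plausible reading of the inconsistent-training idea: keep running statistics of the batch loss, derive an upper control limit from them, and spend extra gradient updates on batches whose loss exceeds that limit. The model, learning rate, control-limit constant, and cap on extra updates are illustrative choices, not the paper's settings.

```python
# Toy illustration of inconsistent training on a linear least-squares model (assumed
# reading of the abstract, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(5)                                        # toy linear model parameters

def batch_loss_and_grad(w, X, y):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

mean_l, var_l, n_seen, k_sigma, max_extra, lr = 0.0, 0.0, 0, 3.0, 5, 0.05
for step in range(200):
    X = rng.normal(size=(32, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=32)
    loss, g = batch_loss_and_grad(w, X, y)
    n_seen += 1                                        # Welford update of loss statistics
    d = loss - mean_l
    mean_l += d / n_seen
    var_l += d * (loss - mean_l)
    ucl = mean_l + k_sigma * np.sqrt(var_l / max(n_seen - 1, 1))   # upper control limit
    w -= lr * g                                        # the usual single update
    extra = 0
    while loss > ucl and extra < max_extra:            # extra ("inconsistent") updates
        loss, g = batch_loss_and_grad(w, X, y)         # on the identified hard batch
        w -= lr * g
        extra += 1
```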
Couto, Francisco M; Pinto, H Sofia
2013-10-01
There is a prominent trend to augment and improve the formality of biomedical ontologies. For example, this is shown by the current effort on adding description logic axioms, such as disjointness. One of the key ontology applications that can take advantage of this effort is conceptual (functional) similarity measurement. The presence of description logic axioms in biomedical ontologies makes the current structural or extensional approaches weaker and further away from providing sound semantics-based similarity measures. Although beneficial in small ontologies, the exploration of description logic axioms by semantics-based similarity measures is computationally expensive. This limitation is critical for biomedical ontologies, which normally contain thousands of concepts. Thus, in the process of gaining their rightful place, biomedical functional similarity measures have to take the journey of finding how this rich and powerful knowledge can be fully explored while keeping computational costs feasible. This manuscript aims at promoting and guiding the development of compelling tools that deliver what the biomedical community will require in the near future: a next generation of biomedical similarity measures that efficiently and fully explore the semantics present in biomedical ontologies.
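For contrast, the toy example below implements a classic extensional (information-content) similarity of the kind the passage argues is weakened by richer description-logic axioms: Resnik similarity, taken as the information content of the most informative common ancestor. The miniature ontology and usage counts are invented for illustration only.

```python
# Toy Resnik similarity over an invented mini-ontology (illustrative of extensional
# measures in general, not of any specific biomedical ontology or tool).
import math

parents = {"catalysis": ["function"], "kinase": ["catalysis"], "phosphatase": ["catalysis"],
           "binding": ["function"], "function": []}
usage = {"function": 100, "catalysis": 40, "kinase": 10, "phosphatase": 8, "binding": 50}

def ancestors(term):
    out = {term}
    for p in parents[term]:
        out |= ancestors(p)
    return out

def ic(term):                                  # information content from annotation counts
    return -math.log(usage[term] / usage["function"])

def resnik(t1, t2):                            # IC of the most informative common ancestor
    return max(ic(t) for t in ancestors(t1) & ancestors(t2))

print(resnik("kinase", "phosphatase"))         # shared ancestor "catalysis"
```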
NASA Technical Reports Server (NTRS)
Boyalakuntla, Kishore; Soni, Bharat K.; Thornburg, Hugh J.; Yu, Robert
1996-01-01
During the past decade, computational simulation of fluid flow around complex configurations has progressed significantly and many notable successes have been reported; however, unsteady time-dependent solutions are not easily obtainable. The present effort involves unsteady time-dependent simulation of temporally deforming geometries. Grid generation for a complex configuration can be a time-consuming process, and temporally varying geometries necessitate the regeneration of such grids for every time step. Traditional grid generation techniques have been tried and demonstrated to be inadequate for such simulations. Non-Uniform Rational B-splines (NURBS) based techniques provide a compact and accurate representation of the geometry. This definition can be coupled with a distribution mesh for a user-defined spacing. The present method greatly reduces CPU requirements for time-dependent remeshing, facilitating the simulation of more complex unsteady problems. A thrust vectoring nozzle has been chosen to demonstrate the capability, as it is of current interest in the aerospace industry for better maneuverability of fighter aircraft in close combat and in post-stall regimes. This current effort is the first step towards multidisciplinary design optimization, which involves coupling aerodynamic, heat transfer, and structural analysis techniques. Applications include simulation of temporally deforming bodies and aeroelastic problems.
Python in the NERSC Exascale Science Applications Program for Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack
We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower-level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
H-P adaptive methods for finite element analysis of aerothermal loads in high-speed flows
NASA Technical Reports Server (NTRS)
Chang, H. J.; Bass, J. M.; Tworzydlo, W.; Oden, J. T.
1993-01-01
The commitment to develop the National Aerospace Plane and Maneuvering Reentry Vehicles has generated resurgent interest in the technology required to design structures for hypersonic flight. The principal objective of this research and development effort has been to formulate and implement a new class of computational methodologies for accurately predicting fine scale phenomena associated with this class of problems. The initial focus of this effort was to develop optimal h-refinement and p-enrichment adaptive finite element methods which utilize a-posteriori estimates of the local errors to drive the adaptive methodology. Over the past year this work has specifically focused on two issues which are related to overall performance of a flow solver. These issues include the formulation and implementation (in two dimensions) of an implicit/explicit flow solver compatible with the hp-adaptive methodology, and the design and implementation of computational algorithm for automatically selecting optimal directions in which to enrich the mesh. These concepts and algorithms have been implemented in a two-dimensional finite element code and used to solve three hypersonic flow benchmark problems (Holden Mach 14.1, Edney shock on shock interaction Mach 8.03, and the viscous backstep Mach 4.08).
Accessible Earth: Enhancing diversity in the Geosciences through accessible course design
NASA Astrophysics Data System (ADS)
Bennett, R. A.; Lamb, D. A.
2017-12-01
The tradition of field-based instruction in the geoscience curriculum, which culminates in a capstone geological field camp, presents an insurmountable barrier to many disabled students who might otherwise choose to pursue geoscience careers. There is a widespread perception that success as a practicing geoscientist requires direct access to outcrops and vantage points available only to those able to traverse inaccessible terrain. Yet many modern geoscience activities are based on remotely sensed geophysical data, data analysis, and computation that take place entirely from within the laboratory. To challenge the perception of geoscience as a career option only for the non-disabled, we have created the capstone Accessible Earth Study Abroad Program, an alternative to geologic field camp for all students, with a focus on modern geophysical observation systems, computational thinking, data science, and professional development. In this presentation, we will review common pedagogical approaches in geosciences and current efforts to make the field more inclusive. We will review curricular access and inclusivity relative to a wide range of learners and provide examples of accessible course design based on our experiences in teaching a study abroad course in central Italy, and our plans for ongoing assessment, refinement, and dissemination of the effectiveness of our efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James
2012-01-01
Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
NASA Technical Reports Server (NTRS)
Berman, A. L.
1976-01-01
In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
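A minimal numerical sketch of the general dry/wet split and elevation mapping is given below, using textbook Saastamoinen-style zenith-delay coefficients and a crude 1/sin(elevation) mapping. These values are standard approximations and are not the specific JPL day/night wet model described in the abstract.

```python
# Illustrative dry/wet zenith range correction mapped to an arbitrary elevation angle
# (standard textbook coefficients; assumptions, not the Berman/JPL model itself).
import math

def zenith_dry_delay_m(pressure_hpa):
    return 0.002277 * pressure_hpa                      # ~2.3 m at sea level

def zenith_wet_delay_m(water_vapor_hpa, temp_k):
    return 0.002277 * (1255.0 / temp_k + 0.05) * water_vapor_hpa

def mapped_delay_m(elev_deg, p_hpa=1013.25, e_hpa=10.0, t_k=293.0):
    m = 1.0 / math.sin(math.radians(elev_deg))          # crude mapping function
    return (zenith_dry_delay_m(p_hpa) + zenith_wet_delay_m(e_hpa, t_k)) * m

print(mapped_delay_m(90.0), mapped_delay_m(10.0))       # zenith vs. low-elevation delay
```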
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers are making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
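One common way to ground the choice of fault-injection sample size, sketched below, is to size the campaign so that the estimated probability of output corruption has a confidence interval of a chosen half-width. This is a generic statistical rule of thumb and may differ from XRay's actual selection procedure.

```python
# Generic campaign-sizing sketch for fault injection (illustrative assumption, not
# necessarily XRay's stopping rule): normal-approximation interval for a proportion.
import math

def required_injections(half_width=0.02, confidence=0.95, p_guess=0.5):
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    return math.ceil(z ** 2 * p_guess * (1 - p_guess) / half_width ** 2)

print(required_injections())    # ~2401 injections for +/-2% at 95% confidence
```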
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.
1986-01-01
Efforts to demonstrate that the dendritic web technology is ready for commercial use by the end of 1986 continue. A commercial readiness goal involves improvements to crystal growth furnace throughput to demonstrate an area growth rate of greater than 15 sq cm/min while simultaneously growing 10 meters or more of ribbon under conditions of continuous melt replenishment. Continuous means that the silicon melt is being replenished at the same rate that it is being consumed by ribbon growth, so that the melt level remains constant. Efforts continue on the computer thermal modeling required to define high-speed, low-stress, continuous-growth configurations; on the study of convective effects in the molten silicon and growth furnace cover gas; on furnace component modifications; on web quality assessments; and on experimental growth activities.
Terminology development towards harmonizing multiple clinical neuroimaging research repositories.
Turner, Jessica A; Pasquerello, Danielle; Turner, Matthew D; Keator, David B; Alpert, Kathryn; King, Margaret; Landis, Drew; Calhoun, Vince D; Potkin, Steven G; Tallis, Marcelo; Ambite, Jose Luis; Wang, Lei
2015-07-01
Data sharing and mediation across disparate neuroimaging repositories requires extensive effort to ensure that the different domains of data types are referred to by commonly agreed upon terms. Within the SchizConnect project, which enables querying across decentralized databases of neuroimaging, clinical, and cognitive data from various studies of schizophrenia, we developed a model for each data domain, identified common usable terms that could be agreed upon across the repositories, and linked them to standard ontological terms where possible. We had the goal of facilitating both the current user experience in querying and future automated computations and reasoning regarding the data. We found that existing terminologies are incomplete for these purposes, even with the history of neuroimaging data sharing in the field; and we provide a model for efforts focused on querying multiple clinical neuroimaging repositories.
Terminology development towards harmonizing multiple clinical neuroimaging research repositories
Turner, Jessica A.; Pasquerello, Danielle; Turner, Matthew D.; Keator, David B.; Alpert, Kathryn; King, Margaret; Landis, Drew; Calhoun, Vince D.; Potkin, Steven G.; Tallis, Marcelo; Ambite, Jose Luis; Wang, Lei
2015-01-01
Data sharing and mediation across disparate neuroimaging repositories requires extensive effort to ensure that the different domains of data types are referred to by commonly agreed upon terms. Within the SchizConnect project, which enables querying across decentralized databases of neuroimaging, clinical, and cognitive data from various studies of schizophrenia, we developed a model for each data domain, identified common usable terms that could be agreed upon across the repositories, and linked them to standard ontological terms where possible. We had the goal of facilitating both the current user experience in querying and future automated computations and reasoning regarding the data. We found that existing terminologies are incomplete for these purposes, even with the history of neuroimaging data sharing in the field; and we provide a model for efforts focused on querying multiple clinical neuroimaging repositories. PMID:26688838
Missed deadline notification in best-effort schedulers
NASA Astrophysics Data System (ADS)
Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.
2003-12-01
It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.
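A sketch of the application-side usage pattern is shown below: after each periodic unit of work, the application checks whether its deadline has passed and, if so, issues the no-argument notification. The missed_deadline_notify wrapper is a hypothetical stand-in for the actual MDN system call, and the period and workload are placeholders.

```python
# Application-side sketch of the missed-deadline-notification pattern (hypothetical
# wrapper; a real implementation would invoke the scheduler's kernel interface).
import time

def missed_deadline_notify():
    # Stand-in for the single no-argument MDN system call described above.
    pass

PERIOD = 1.0 / 30.0                                  # e.g. a 30 Hz soft real-time loop
deadline = time.monotonic() + PERIOD
for frame in range(30):
    time.sleep(0.01)                                 # placeholder for the frame's real work
    if time.monotonic() > deadline:                  # deadline missed for this frame
        missed_deadline_notify()                     # ask the best-effort scheduler for help
    else:
        time.sleep(max(0.0, deadline - time.monotonic()))   # idle until the period ends
    deadline += PERIOD
```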
Robotic disaster recovery efforts with ad-hoc deployable cloud computing
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.
2013-06-01
Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, further complicated by the dynamic disaster-recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capabilities during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.
Development and verification of local/global analysis techniques for laminated composites
NASA Technical Reports Server (NTRS)
Griffin, O. Hayden, Jr.
1989-01-01
Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique to global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as is required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.
Update on radiation-hardened microcomputers for robotics and teleoperated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sias, F.R. Jr.; Tulenko, J.S.
1993-12-31
Since many programs sponsored by the Department of Defense are being canceled, it is important to select radiation-hardened microprocessors carefully for projects that will mature (or will require continued support) several years in the future. At the present time there are seven candidate 32-bit processors that should be considered in long-range planning for high-performance radiation-hardened computer systems. For Department of Energy applications it is also important to consider efforts at standardization that require the use of the VxWorks operating system and hardware based on the VMEbus. Of the seven processors, one has been delivered and is operating, and other systems are scheduled to be delivered late in 1993 or early in 1994. At the present time the Honeywell-developed RH32, the Harris RH-3000 and the Harris RHC-3000 are leading contenders for meeting DOE requirements for a radiation-hardened advanced 32-bit microprocessor. These are all either compatible with or derivatives of the MIPS R3000 Reduced Instruction Set Computer. It is anticipated that as few as two of the seven radiation-hardened processors will be supported by the space program in the long run.
SU-E-T-419: Workflow and FMEA in a New Proton Therapy (PT) Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, C; Wessels, B; Hamilton, H
2014-06-01
Purpose: Workflow is an important component in the operational planning of a new proton facility. By integrating the concept of failure mode and effect analysis (FMEA) and traditional QA requirements, a workflow for a proton therapy treatment course is set up. This workflow serves as the blueprint for the planning of computer hardware/software requirements and network flow. A slight modification of the workflow generates a process map (PM) for FMEA and the planning of the QA program in PT. Methods: A flowchart is first developed outlining the sequence of processes involved in a PT treatment course. Each process consists of a number of sub-processes to encompass a broad scope of treatment and QA procedures. For each sub-process, the personnel involved, the equipment needed and the computer hardware/software as well as network requirements are defined by a team of clinical staff, administrators and IT personnel. Results: Eleven intermediate processes with a total of 70 sub-processes involved in a PT treatment course are identified. The number of sub-processes varies, ranging from 2 to 12. The sub-processes within each process are used for the operational planning. For example, in the CT-Sim process, there are 12 sub-processes: three involve data entry/retrieval from a record-and-verify system, two are controlled by the CT computer, two require the department/hospital network, and the other five are setup procedures. IT then decides the number of computers needed and the software and network requirements. By removing the traditional QA procedures from the workflow, a PM is generated for FMEA analysis to design a QA program for PT. Conclusion: Significant efforts are involved in the development of the workflow in a PT treatment course. Our hybrid model of combining FMEA and a traditional QA program serves a dual purpose of efficient operational planning and designing of a QA program in PT.
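For readers unfamiliar with FMEA scoring, the snippet below shows the conventional risk-priority-number calculation that such an analysis typically applies to each sub-process in the process map. The failure modes and ratings are invented examples, not results from this abstract.

```python
# Conventional FMEA risk-priority-number (RPN) ranking; failure modes and scores below
# are hypothetical illustrations, not data from the proton therapy study.
failure_modes = [
    # (sub-process failure mode, severity 1-10, occurrence 1-10, detectability 1-10)
    ("wrong patient data pulled from record-and-verify", 9, 2, 3),
    ("CT-Sim setup uses wrong immobilization device",    7, 3, 4),
    ("network outage delays plan transfer",              4, 4, 2),
]
ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes), reverse=True)
for rpn, name in ranked:
    print(f"RPN {rpn:4d}  {name}")        # higher RPN = higher priority for the QA program
```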
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
A Comparison of FPGA and GPGPU Designs for Bayesian Occupancy Filters
Medina, Luis; Diez-Ochoa, Miguel; Correal, Raul; Cuenca-Asensi, Sergio; Godoy, Jorge; Martínez-Álvarez, Antonio
2017-01-01
Grid-based perception techniques in the automotive sector based on fusing information from different sensors and their robust perceptions of the environment are proliferating in the industry. However, one of the main drawbacks of these techniques is the traditionally prohibitive, high computing performance that is required for embedded automotive systems. In this work, the capabilities of new computing architectures that embed these algorithms are assessed in a real car. The paper compares two ad hoc optimized designs of the Bayesian Occupancy Filter; one for General Purpose Graphics Processing Unit (GPGPU) and the other for Field-Programmable Gate Array (FPGA). The resulting implementations are compared in terms of development effort, accuracy and performance, using datasets from a realistic simulator and from a real automated vehicle. PMID:29137137
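The paper compares hardware implementations rather than publishing source code; as a hedged, heavily simplified sketch of the grid-based idea (the full Bayesian Occupancy Filter also estimates per-cell velocities), the fragment below performs a log-odds Bayesian update of a static occupancy grid with an assumed inverse sensor model.

```python
import numpy as np

# Simplified static occupancy-grid update (log-odds form). This is NOT the
# full Bayesian Occupancy Filter, which also tracks per-cell velocities; it
# only illustrates the per-cell Bayesian update that grid methods rely on.
P_HIT, P_MISS = 0.7, 0.4                 # assumed inverse sensor model
L_HIT = np.log(P_HIT / (1 - P_HIT))
L_MISS = np.log(P_MISS / (1 - P_MISS))

def update(logodds, hits, misses):
    """hits/misses are boolean masks over the grid for one sensor scan."""
    return logodds + hits * L_HIT + misses * L_MISS

grid = np.zeros((4, 4))                  # prior P = 0.5 everywhere
hits = np.zeros((4, 4), bool); hits[1, 2] = True
misses = np.zeros((4, 4), bool); misses[0, :] = True
grid = update(grid, hits, misses)
prob = 1.0 / (1.0 + np.exp(-grid))       # back to probabilities
print(np.round(prob, 2))
```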
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
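The conditional-factoring algorithm itself is not reproduced here; as a hedged illustration of the computation it accelerates, the sketch below computes the exact shortest-path length distribution of a tiny acyclic network by complete enumeration, which is the brute-force baseline the factoring approach is designed to avoid. The network and arc-length distributions are invented.

```python
import itertools

# Brute-force baseline: exact shortest-path length distribution for a tiny
# acyclic network with independent discrete arc lengths. The conditional
# factoring algorithm in the paper avoids this full enumeration.
arcs = {            # arc: list of (length, probability)
    ("s", "a"): [(1, 0.5), (3, 0.5)],
    ("s", "b"): [(2, 0.7), (5, 0.3)],
    ("a", "t"): [(2, 0.6), (4, 0.4)],
    ("b", "t"): [(1, 0.8), (6, 0.2)],
}
paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]

dist = {}
names = list(arcs)
for combo in itertools.product(*(arcs[a] for a in names)):
    lengths = {a: l for a, (l, _) in zip(names, combo)}
    prob = 1.0
    for _, p in combo:
        prob *= p
    shortest = min(sum(lengths[a] for a in path) for path in paths)
    dist[shortest] = dist.get(shortest, 0.0) + prob

for length in sorted(dist):
    print(f"P(shortest path length = {length}) = {dist[length]:.3f}")
```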
Cryptography and the Internet: lessons and challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCurley, K.S.
1996-12-31
The popularization of the Internet has brought fundamental changes to the world, because it allows a universal method of communication between computers. This carries enormous benefits with it, but also raises many security considerations. Cryptography is a fundamental technology used to provide security of computer networks, and there is currently a widespread engineering effort to incorporate cryptography into various aspects of the Internet. The system-level engineering required to provide security services for the Internet carries some important lessons for researchers whose study is focused on narrowly defined problems. It also offers challenges to the cryptographic research community by raising new questions not adequately addressed by the existing body of knowledge. This paper attempts to summarize some of these lessons and challenges for the cryptographic research community.
A Simple XML Producer-Consumer Protocol
NASA Technical Reports Server (NTRS)
Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)
2001-01-01
There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
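The actual schema of the proposed protocol is defined in the paper's appendix and is not reproduced here; the hedged sketch below only illustrates the producer side of the general pattern, serializing an event as XML and optionally shipping it over TCP. The element and attribute names are invented.

```python
import socket
import xml.etree.ElementTree as ET

# Hypothetical event message in the spirit of an XML producer-consumer
# protocol. Element and attribute names are invented; the real schema is
# defined in the paper's appendix.
def make_event(source, name, value, timestamp):
    ev = ET.Element("event", {"source": source, "timestamp": str(timestamp)})
    ET.SubElement(ev, "metric", {"name": name}).text = str(value)
    return ET.tostring(ev, encoding="unicode")

def send_event(host, port, xml_text):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(xml_text.encode("utf-8"))

msg = make_event("node42.cluster", "cpu_load", 0.83, 993945600)
print(msg)
# send_event("consumer.example.org", 9000, msg)  # requires a listening consumer
```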
1984-06-29
effort that requires hard copy documentation. As a result, there are generally numerous delays in providing current quality information. In the FoF...process have had fixed controls or were based on "hard-coded" information. A template, for example, is hard-coded information defining the shape of a...represents soft-coded control information. (Although manual handling of punch tapes still possesses some of the limitations of "hard-coded" controls
Challenges & Roadmap for Beyond CMOS Computing Simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Arun F.; Frank, Michael P.
Simulating HPC systems is a difficult task and the emergence of “Beyond CMOS” architectures and execution models will increase that difficulty. This document presents a “tutorial” on some of the simulation challenges faced by conventional and non-conventional architectures (Section 1) and goals and requirements for simulating Beyond CMOS systems (Section 2). These provide background for proposed short- and long-term roadmaps for simulation efforts at Sandia (Sections 3 and 4). Additionally, a brief explanation of a proof-of-concept integration of a Beyond CMOS architectural simulator is presented (Section 2.3).
NASA Technical Reports Server (NTRS)
Hua, Chongyu; Volakis, John L.
1990-01-01
AUTOMESH-2D is a computer program specifically designed as a preprocessor for the scattering analysis of two-dimensional bodies by the finite element method. This program was developed due to a need to reduce the effort required to define and check the geometry data, element topology, and material properties. There are six modules in the program: (1) Parameter Specification; (2) Data Input; (3) Node Generation; (4) Element Generation; (5) Mesh Smoothing; and (6) Data File Generation.
Shannon information, LMC complexity and Rényi entropies: a straightforward approach.
López-Ruiz, Ricardo
2005-04-01
The LMC complexity, an indicator of complexity based on a probabilistic description, is revisited. A straightforward approach allows us to establish the time evolution of this indicator in a near-equilibrium situation and gives us new insight into interpreting the LMC complexity for a general non-equilibrium system. Its relationship with the Rényi entropies is also explained. One of the advantages of this indicator is that its calculation does not require a considerable computational effort in many cases of physical and biological interest.
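As a hedged illustration using the standard definitions (the paper's normalization conventions may differ), the sketch below computes the Shannon information H, the disequilibrium D, the LMC complexity C = H·D, and Rényi entropies of order q for a discrete distribution.

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def lmc_complexity(p):
    """LMC complexity C = H * D with the textbook definitions; the paper's
    normalization choices may differ."""
    n = len(p)
    H = shannon(p) / np.log(n)            # normalized Shannon information
    D = np.sum((p - 1.0 / n) ** 2)        # disequilibrium
    return H * D

def renyi(p, q):
    if q == 1:
        return shannon(p)
    return np.log(np.sum(p ** q)) / (1.0 - q)

p = np.array([0.5, 0.25, 0.15, 0.10])
print(lmc_complexity(p), renyi(p, 2), renyi(p, 0.5))
```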
Automated CPX support system preliminary design phase
NASA Technical Reports Server (NTRS)
Bordeaux, T. A.; Carson, E. T.; Hepburn, C. D.; Shinnick, F. M.
1984-01-01
The development of the Distributed Command and Control System (DCCS) is discussed. The development of an automated C2 system stimulated the development of an automated command post exercise (CPX) support system to provide a more realistic stimulus to DCCS than could be achieved with the existing manual system. An automated CPX system to support corps-level exercises was designed. The effort comprised four tasks: (1) collecting and documenting user requirements; (2) developing a preliminary system design; (3) defining a program plan; and (4) evaluating the suitability of the TRASANA FOURCE computer model.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
1981-05-01
hypersonic missiles will require high-lift, low-drag configurations with good control effectiveness. Non-circular airframe configurations that...substantial effort to devise cost-effective wind tunnel systems. The result has been a number of good systems coupling hardware and computer systems...lating a center bay, comparison is not good as shown in Figure 12. The effect of the dispenser angle of attack on the submissile aerodynamics is
Study for application of a sounding rocket experiment to spacelab/shuttle mission
NASA Technical Reports Server (NTRS)
Code, A. D.
1975-01-01
An inexpensive adaptation of rocket-size packages to Spacelab/Shuttle use was studied. A two-flight project extending over two years was baselined, requiring 80 man-months of effort. It was concluded that testing should be held to a minimum since rocket packages seem to be able to tolerate shuttle vibration and noise levels. A standard, flexible control and data collection language such as FORTH should be used rather than a computation language such as FORTRAN in order to hold programming costs to a minimum.
Spacelab data analysis and interactive control study
NASA Technical Reports Server (NTRS)
Tarbell, T. D.; Drake, J. F.
1980-01-01
The study consisted of two main tasks, a series of interviews of Spacelab users and a survey of data processing and display equipment. Findings from the user interviews on questions of interactive control, downlink data formats, and Spacelab computer software development are presented. Equipment for quick look processing and display of scientific data in the Spacelab Payload Operations Control Center (POCC) was surveyed. Results of this survey effort are discussed in detail, along with recommendations for NASA development of several specific display systems which meet common requirements of many Spacelab experiments.
Impact of remote sensing upon the planning, management, and development of water resources
NASA Technical Reports Server (NTRS)
Loats, H. L.; Fowler, T. R.; Frech, S. L.
1974-01-01
A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.
Computing pKa Values in Different Solvents by Electrostatic Transformation.
Rossini, Emanuele; Netz, Roland R; Knapp, Ernst-Walter
2016-07-12
We introduce a method that requires only moderate computational effort to compute pKa values of small molecules in different solvents with an average accuracy of better than 0.7 pH units. With a known pKa value in one solvent, the electrostatic transform method computes the pKa value in any other solvent if the proton solvation energy is known in both considered solvents. To apply the electrostatic transform method to a molecule, the electrostatic solvation energies of the protonated and deprotonated molecular species are computed in the two considered solvents using a dielectric continuum to describe the solvent. This is demonstrated for 30 molecules belonging to 10 different molecular families by considering 77 measured pKa values in 4 different solvents: water, acetonitrile, dimethyl sulfoxide, and methanol. The electrostatic transform method can be applied to any other solvent if the proton solvation energy is known. It is exclusively based on physicochemical principles, not using any empirical fudge factors or explicit solvent molecules, to obtain agreement with measured pKa values and is therefore ready to be generalized to other solute molecules and solvents. From the computed pKa values, we obtained relative proton solvation energies, which agree very well with the proton solvation energies computed recently by ab initio methods, and used these energies in the present study.
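The abstract does not spell out the working equation; the sketch below assumes the thermodynamic-cycle form one would write down from the quantities named above (solvation free energies of the protonated and deprotonated species plus proton solvation energies in both solvents). The sign conventions, units, and any additional terms used in the paper may differ, and the numbers are invented.

```python
import math

R = 8.314462618e-3   # kJ/(mol K)
T = 298.15

def pka_transform(pka_s1, dG_prot, dG_deprot, dG_proton):
    """Hedged sketch of an 'electrostatic transform' of a pKa between two
    solvents S1 and S2. Each dG_* argument is a (S1, S2) pair of solvation
    free energies in kJ/mol for the protonated species, the deprotonated
    species, and the proton. The paper's exact working equation and sign
    conventions may differ."""
    ddG = ((dG_deprot[1] - dG_deprot[0])
           - (dG_prot[1] - dG_prot[0])
           + (dG_proton[1] - dG_proton[0]))
    return pka_s1 + ddG / (R * T * math.log(10))

# Invented numbers, for illustration only:
print(pka_transform(4.76, dG_prot=(-30.0, -25.0),
                    dG_deprot=(-300.0, -260.0),
                    dG_proton=(-1105.0, -1090.0)))
```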
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high-resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten-year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten-year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7*7 grid of pilot points was defined over the RSA that defined a spatially variable horizontal hydraulic conductivity or recharge field. A 7*7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot point multipliers were implemented so a higher resolution of spatial variability could be obtained where there was a higher density of observation data. Five geologic boundaries were modeled with a specified flux boundary condition and the transfer rate was used as an adjustable parameter for each of these boundaries. This parameterization resulted in 448 parameters for calibration. In the project planning stage it was estimated that the calibration might require as much as 15,000 hours (1.7 years) of computing. In an effort to complete the calibration in a timely manner, the inversion was parallelized and implemented on as many as 250 computing nodes located on Amazon's EC2 servers. The results of the calibration provided a better fit to the data than previous efforts with homogeneous parameters, and the highly parameterized approach facilitated subspace Monte Carlo analysis for predictive uncertainty. This scale of cloud computing is relatively new for the hydrogeology community and at the time of implementation it was believed to be the first implementation of a FEFLOW model at this scale. While the experience provided several challenges, the implementation was successful and provides some valuable learning for future efforts.
An opportunity cost model of subjective effort and task performance
Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus
2013-01-01
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775
An Improved Neutron Transport Algorithm for HZETRN2006
NASA Astrophysics Data System (ADS)
Slaba, Tony
NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.
NASA Technical Reports Server (NTRS)
1981-01-01
The National Science Foundation (NSF) initiated a new phase of exploration last year, a 10-year effort jointly funded by NSF and several major oil companies, known as the Ocean Margin Drilling Program (OMDP). The OMDP requires a ship with capabilities beyond existing drill ships; it must drill in 13,000 feet of water to a depth 20,000 feet below the ocean floor. To meet requirements, NSF is considering the conversion of the government-owned mining ship Glomar Explorer to a deep ocean drilling and coring vessel. A feasibility study performed by Donhaiser Marine, Inc. (DMI) analyzed the ship's characteristics for suitability and evaluated conversion requirements. DMI utilized COSMIC's Ship Motion and Sea Load Computer program to perform analysis which could not be accomplished by other means. If approved for conversion, the Glomar Explorer is expected to begin operations as a drillship in 1984.
An investigation of networking techniques for the ASRM facility
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II; Smith, Wayne D.; Thompson, Dale R.
1992-01-01
This report is based on the early design concepts for a communications network for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, MS. The investigators have participated in the early design concepts and in the evaluation of the initial concepts. The continuing system design effort and any modification of the plan will require a careful evaluation of the required bandwidth of the network, the capabilities of the protocol, and the requirements of the controllers and computers on the network. The overall network, which is heterogeneous in protocol and bandwidth, is being modeled, analyzed, simulated, and tested to obtain some degree of confidence in its performance capabilities and in its performance under nominal and heavy loads. The results of the proposed work should have an impact on the design and operation of the ASRM facility.
Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.
Vassena, Eliana; Holroyd, Clay B; Alexander, William H
2017-01-01
In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors
NASA Astrophysics Data System (ADS)
Sourouri, Mohammed; Birger Raknes, Espen
2017-04-01
In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI) the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. Thus, it is essential to optimize the solution time for solving the EWE as this will have a major impact on the total computational cost in running RTM or FWI. From a computational point of view applications implementing EWEs are associated with two major challenges. The first challenge is the amount of memory-bound computations involved, while the second challenge is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite its architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory, which also requires explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor based on the Knights Landing (KNL) architecture promises the computational capabilities of an accelerator but requires the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (the number of cores, tiles and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required. However, like most accelerators, KNL sports a memory subsystem consisting of low-level caches and 16GB of high-bandwidth MCDRAM memory. For capacity computing, up to 400GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this product is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE-based application to reduce the time-to-solution for the following 3D model sizes in grid points: 128³, 256³ and 512³. We compare the results with an optimized version for multi-core CPUs running on a dual-socket Xeon E5 2680v3 system using OpenMP. Our initial naive implementation on the KNL is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement using the memkind library, we could achieve higher speedups. Additionally, by using the MCDRAM as cache for problem sizes that are smaller than 16 GB, further performance improvements were unlocked. Depending on the problem size, our overall results indicate that the KNL-based system is approximately 2.2x faster than the 24-core Xeon E5 2680v3 system, with only modest changes to the code.
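As a back-of-the-envelope check of why the 16 GB MCDRAM matters for these grid sizes, the short sketch below estimates the memory footprint of a 3D elastic-wave grid; the number of double-precision arrays per grid point is an assumption (the actual EWE implementation may store more or fewer fields).

```python
# Rough footprint estimate for a 3D elastic wave solver. The choice of
# 12 double-precision arrays per grid point (3 velocities + 6 stresses +
# 3 material parameters) is an assumption for illustration.
FIELDS = 12
BYTES = 8
MCDRAM = 16 * 1024**3

for n in (128, 256, 512):
    footprint = n**3 * FIELDS * BYTES
    fits = "fits in" if footprint <= MCDRAM else "exceeds"
    print(f"{n}^3 grid: {footprint / 1024**3:.2f} GiB ({fits} 16 GiB MCDRAM)")
```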
A Multifaceted Mathematical Approach for Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, F.; Anitescu, M.; Bell, J.
2012-03-07
Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significant impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.
Development of Sensors for Aerospace Applications
NASA Technical Reports Server (NTRS)
Medelius, Pedro
2005-01-01
Advances in technology have led to the availability of smaller and more accurate sensors. Computer power to process large amounts of data is no longer the prevailing issue; thus multiple and redundant sensors can be used to obtain more accurate and comprehensive measurements in a space vehicle. The successful integration and commercialization of micro- and nanotechnology for aerospace applications require that a close and interactive relationship be developed between the technology provider and the end user early in the project. Close coordination between the developers and the end users is critical since qualification for flight is time-consuming and expensive. The successful integration of micro- and nanotechnology into space vehicles requires a coordinated effort throughout the design, development, installation, and integration processes.
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
NASA Astrophysics Data System (ADS)
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.
2016-09-01
Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
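The study's descriptors and quantum-chemistry targets are not reproduced here; as a hedged sketch of the general "screen before you simulate" pattern it describes, the code below trains a cheap surrogate model on candidates that have already been simulated and uses it to rank untested candidates, so the expensive calculations go to likely good performers. Features and the target property are random placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Generic prioritization pattern: a cheap surrogate model, trained on
# candidates already run through quantum chemistry, ranks untested
# candidates. Features and target here are random placeholders, not the
# descriptors or properties used in the OLED study.
rng = np.random.default_rng(0)
X_done, y_done = rng.normal(size=(200, 16)), rng.normal(size=200)
X_untested = rng.normal(size=(5000, 16))

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_done, y_done)

scores = surrogate.predict(X_untested)
top_k = np.argsort(scores)[::-1][:100]     # best predicted candidates
print("send these candidate indices to quantum chemistry:", top_k[:10])
```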
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koniges, A.E.; Craddock, G.G.; Schnack, D.D.
The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computations areas, for discussion regarding the issues of dynamically adaptive gridding. There were three invited talks related to adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that uniformly disperses the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations involving diverted tokamak SOL modeling and MHD simulation problems related to the highest priority ITER-relevant issues. Individual papers are indexed separately on the energy databases.
Dragonfly: strengthening programming skills by building a game engine from scratch
NASA Astrophysics Data System (ADS)
Claypool, Mark
2013-06-01
Computer game development has been shown to be an effective hook for motivating students to learn both introductory and advanced computer science topics. While games can be made from scratch, to simplify the programming required game development often uses game engines that handle complicated or frequently used components of the game. These game engines present the opportunity to strengthen programming skills and expose students to a range of fundamental computer science topics. While educational efforts have been effective in using game engines to improve computer science education, there have been no published papers describing and evaluating students building a game engine from scratch as part of their course work. This paper presents the Dragonfly-approach in which students build a fully functional game engine from scratch and make a game using their engine as part of a junior-level course. Details on the programming projects are presented, as well as an evaluation of the results from two offerings that used Dragonfly. Student performance on the projects as well as student assessments demonstrates the efficacy of having students build a game engine from scratch in strengthening their programming skills.
The road to business process improvement--can you get there from here?
Gilberto, P A
1995-11-01
Historically, "improvements" within the organization have been frequently attained through automation by building and installing computer systems. Material requirements planning (MRP), manufacturing resource planning II (MRP II), just-in-time (JIT), computer aided design (CAD), computer aided manufacturing (CAM), electronic data interchange (EDI), and various other TLAs (three-letter acronyms) have been used as the methods to attain business objectives. But most companies have found that installing computer software, cleaning up their data, and providing every employee with training on how to best use the systems have not resulted in the level of business improvements needed. The software systems have simply made management around the problems easier but did little to solve the basic problems. The missing element in the efforts to improve the performance of the organization has been a shift in focus from individual department improvements to cross-organizational business process improvements. This article describes how the Electric Boat Division of General Dynamics Corporation, in conjunction with the Data Systems Division, moved its focus from one of vertical organizational processes to horizontal business processes. In other words, how we got rid of the dinosaurs.
Computational Systems Biology in Cancer: Modeling Methods and Applications
Materi, Wayne; Wishart, David S.
2007-01-01
In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081
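As a minimal, hedged example of the ordinary-differential-equation class of model mentioned in the review, the sketch below solves a logistic tumour-growth equation with SciPy; the parameter values are arbitrary and not taken from any model discussed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal example of the ODE modelling style mentioned in the review:
# logistic tumour growth dN/dt = r*N*(1 - N/K). Parameters are arbitrary.
r, K = 0.3, 1e9          # growth rate (1/day), carrying capacity (cells)

def growth(t, y):
    N = y[0]
    return [r * N * (1.0 - N / K)]

sol = solve_ivp(growth, (0, 60), [1e6], t_eval=np.linspace(0, 60, 7))
for t, N in zip(sol.t, sol.y[0]):
    print(f"day {t:4.0f}: {N:.3e} cells")
```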
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
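The paper's calibrated models are not reproduced here; as a hedged sketch of the general form a reduced-order algebraic stenosis model takes, the code below computes a pressure drop with a viscous and an expansion term, dP = Kv*Q + Kt*Q², and approximates FFR as the distal-to-aortic pressure ratio at hyperemic flow. The coefficient values are placeholders.

```python
# Hedged sketch of a reduced-order algebraic stenosis model of the general
# form dP = Kv*Q + Kt*Q**2 (viscous + expansion losses), with FFR taken as
# the distal-to-aortic pressure ratio at hyperemic flow. The coefficient
# values are placeholders, not calibrated values from the paper.
MMHG_PER_DYN_CM2 = 1.0 / 1333.22

def ffr_estimate(p_aortic_mmhg, q_hyperemic, kv, kt):
    dp_dyn = kv * q_hyperemic + kt * q_hyperemic**2   # dyn/cm^2
    dp_mmhg = dp_dyn * MMHG_PER_DYN_CM2
    return (p_aortic_mmhg - dp_mmhg) / p_aortic_mmhg

# Example: 90 mmHg aortic pressure, 3 mL/s hyperemic flow, made-up coefficients
print(round(ffr_estimate(90.0, 3.0, kv=2000.0, kt=1500.0), 2))
```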
Linshiz, Gregory; Goldberg, Alex; Konry, Tania; Hillson, Nathan J
2012-01-01
Synthetic biology is a nascent field that emerged in earnest only around the turn of the millennium. It aims to engineer new biological systems and impart new biological functionality, often through genetic modifications. The design and construction of new biological systems is a complex, multistep process, requiring multidisciplinary collaborative efforts from "fusion" scientists who have formal training in computer science or engineering, as well as hands-on biological expertise. The public has high expectations for synthetic biology and eagerly anticipates the development of solutions to the major challenges facing humanity. This article discusses laboratory practices and the conduct of research in synthetic biology. It argues that the fusion science approach, which integrates biology with computer science and engineering best practices, including standardization, process optimization, computer-aided design and laboratory automation, miniaturization, and systematic management, will increase the predictability and reproducibility of experiments and lead to breakthroughs in the construction of new biological systems. The article also discusses several successful fusion projects, including the development of software tools for DNA construction design automation, recursive DNA construction, and the development of integrated microfluidics systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
The objective of this effort is to establish a strategy and process for generation of a suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
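The abstract does not give the exact convergence formulation used; as a hedged illustration, the sketch below applies the textbook Roache-style Grid Convergence Index to a quantity computed on three grids with constant refinement ratio. The solution values are invented.

```python
import math

def gci_fine(f_fine, f_medium, f_coarse, r, Fs=1.25):
    """Textbook Roache-style Grid Convergence Index for the fine grid, given
    a solution quantity on three grids with constant refinement ratio r.
    The study may have used a different variant (it also evaluates a Least
    Squares method, which is not shown here)."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    rel_err = abs((f_medium - f_fine) / f_fine)
    return p, Fs * rel_err / (r**p - 1.0)

# Invented outlet-velocity values on fine/medium/coarse meshes, r = 2
p, gci = gci_fine(f_fine=3.025, f_medium=3.070, f_coarse=3.250, r=2.0)
print(f"observed order p = {p:.2f}, GCI_fine = {100*gci:.2f}%")
```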
MARC ES: a computer program for estimating medical information storage requirements.
Konoske, P J; Dobbins, R W; Gauker, E D
1998-01-01
During combat, documentation of medical treatment information is critical for maintaining continuity of patient care. However, knowledge of prior status and treatment of patients is limited to the information noted on a paper field medical card. The Multi-technology Automated Reader Card (MARC), a smart card, has been identified as a potential storage mechanism for casualty medical information. Focusing on data capture and storage technology, this effort developed a Windows program, MARC ES, to estimate storage requirements for the MARC. The program calculates storage requirements for a variety of scenarios using medical documentation requirements, casualty rates, and casualty flows and provides the user with a tool to estimate the space required to store medical data at each echelon of care for selected operational theaters. The program can also be used to identify the point at which data must be uploaded from the MARC if size constraints are imposed. Furthermore, this model can be readily extended to other systems that store or transmit medical information.
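As a hedged toy version of the storage-requirement arithmetic the program performs (bytes written to the card per casualty, summed over echelons of care, compared against card capacity), the sketch below uses entirely invented record sizes, rates, and capacity; the real tool draws on documented medical data requirements, casualty rates, and casualty flows.

```python
# Toy version of the storage-requirement calculation: bytes written to the
# card = sum over echelons of (records per casualty) x (bytes per record).
# All numbers are invented for illustration.
BYTES_PER_RECORD = {"echelon1": 220, "echelon2": 640, "echelon3": 1800}
RECORDS_PER_CASUALTY = {"echelon1": 1.0, "echelon2": 0.6, "echelon3": 0.25}
CARD_CAPACITY = 4 * 1024          # assumed usable card storage, bytes

per_casualty = sum(BYTES_PER_RECORD[e] * RECORDS_PER_CASUALTY[e]
                   for e in BYTES_PER_RECORD)
print(f"expected storage per casualty: {per_casualty:.0f} bytes")
print("upload needed before echelon 3" if per_casualty > CARD_CAPACITY
      else "fits on card through echelon 3")
```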
Reversibility and measurement in quantum computing
NASA Astrophysics Data System (ADS)
Leão, J. P.
1998-03-01
The relation between computation and measurement at a fundamental physical level is yet to be understood. Rolf Landauer was perhaps the first to stress the strong analogy between these two concepts. His early queries have regained pertinence with the recent efforts to develop realizable models of quantum computers. In this context the irreversibility of quantum measurement appears in conflict with the requirement of reversibility of the overall computation associated with the unitary dynamics of quantum evolution. The latter in turn is responsible for the features of superposition and entanglement which make some quantum algorithms superior to classical ones for the same task in speed and resource demand. In this article we advocate an approach to this question which relies on a model of computation designed to enforce the analogy between the two concepts instead of demarcating them, as has been the case so far. The model is introduced as a symmetrization of the classical Turing machine model and is then carried over to quantum mechanics, first as an abstract local interaction scheme (symbolic measurement) and finally in a nonlocal noninteractive implementation based on Aharonov-Bohm potentials and modular variables. It is suggested that this implementation leads to the most ubiquitous of quantum algorithms: the Discrete Fourier Transform.
Advances in target imaging of deep Earth structure
NASA Astrophysics Data System (ADS)
Masson, Y.; Romanowicz, B. A.; Clouzet, P.
2015-12-01
A new generation of global tomographic models (Lekić and Romanowicz, 2011; French et al, 2013, 2014) has emerged with the development of accurate numerical wavefield computations in a 3D earth combined with access to enhanced HPC capabilities. These models have sharpened up mantle images and unveiled relatively small scale structures that were blurred out in previous generation models. Fingerlike structures have been found at the base of the oceanic asthenosphere, and vertically oriented broad low velocity plume conduits extend throughout the lower mantle beneath those major hotspots that are located within the perimeter of the deep mantle large low shear velocity provinces (LLSVPs). While these models provide new insights into our understanding of mantle dynamics, resolving the detailed morphology of these features requires further efforts to obtain higher resolution images. The focus of our ongoing effort is to develop advanced tomographic methods to image remote regions of the Earth at fine scales. We have developed an approach in which distant sources (located outside of the target region) are replaced by an equivalent set of local sources located at the border of the computational domain (Masson et al., 2014). A limited number of global simulations in a reference 3D earth model is then required. These simulations are computed prior to the regional inversion, while iterations of the model need to be performed only within the region of interest, potentially allowing us to include shorter periods at limited additional computational cost. Until now, the application was limited to a distribution of receivers inside the target region. This is particularly suitable for studies of upper mantle structure in regions with dense arrays (e.g. see our companion presentation Clouzet et al., this Fall AGU). Here we present our latest development that can now include teleseismic data recorded outside the imaged region. This allows us to perform regional waveform tomography in the situation where neither earthquakes nor seismological stations are present within the region of interest, such as would be desirable for the study of a region in the deep mantle. We present benchmark tests showing how the uncertainties in the reference 3D model employed outside of the target region affect the quality of the regional tomographic images obtained.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive, but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
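One of the best-known techniques covered by such surveys is loop perforation: skip a fraction of loop iterations and accept a small output error in exchange for less work. A minimal, hedged sketch (not taken from the survey itself):

```python
import numpy as np

# Loop perforation, a classic approximate-computing technique: process only
# every k-th element and rescale, trading accuracy for roughly 1/k the work.
def exact_sum(x):
    return float(np.sum(x))

def perforated_sum(x, k=4):
    return float(np.sum(x[::k]) * k)

x = np.random.default_rng(1).random(1_000_000)
exact, approx = exact_sum(x), perforated_sum(x, k=4)
print(f"exact={exact:.1f} approx={approx:.1f} "
      f"relative error={abs(approx - exact) / exact:.2%}")
```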
2013-01-01
Background Information and communication technologies (ICTs) are often proposed as ‘technological fixes’ for problems facing healthcare. They promise to deliver services more quickly and cheaply. Yet research on the implementation of ICTs reveals a litany of delays, compromises and failures. Case studies have established that these technologies are difficult to embed in everyday healthcare. Methods We undertook an ethnographic comparative analysis of a single computer decision support system in three different settings to understand the implementation and everyday use of this technology which is designed to deal with calls to emergency and urgent care services. We examined the deployment of this technology in an established 999 ambulance call-handling service, a new single point of access for urgent care and an established general practice out-of-hours service. We used Normalization Process Theory as a framework to enable systematic cross-case analysis. Results Our data comprise nearly 500 hours of observation, interviews with 64 call-handlers, and stakeholders and documents about the technology and settings. The technology has been implemented and is used distinctively in each setting reflecting important differences between work and contexts. Using Normalisation Process Theory we show how the work (collective action) of implementing the system and maintaining its routine use was enabled by a range of actors who established coherence for the technology, secured buy-in (cognitive participation) and engaged in on-going appraisal and adjustment (reflexive monitoring). Conclusions Huge effort was expended and continues to be required to implement and keep this technology in use. This innovation must be understood both as a computer technology and as a set of practices related to that technology, kept in place by a network of actors in particular contexts. While technologies can be ‘made to work’ in different settings, successful implementation has been achieved, and will only be maintained, through the efforts of those involved in the specific settings and if the wider context continues to support the coherence, cognitive participation, and reflective monitoring processes that surround this collective action. Implementation is more than simply putting technologies in place – it requires new resources and considerable effort, perhaps on an on-going basis. PMID:23522021
Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists
Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco
2013-01-01
Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (graphics processing units) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphic card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphic card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior. PMID:23653617
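A minimal sketch of the kind of simulation discussed here: one restricted Boltzmann machine layer of a deep belief network trained with single-step contrastive divergence (CD-1), written with plain NumPy array operations. Swapping the import for a GPU-backed, NumPy-compatible array library such as CuPy is one common way to run the same code on a graphics card; the original paper relied on high-level MATLAB/Python GPU routines, and all sizes, data, and hyperparameters below are placeholders.

```python
import numpy as np  # a drop-in GPU array library (e.g. cupy) could replace this

rng = np.random.default_rng(0)
n_visible, n_hidden, batch, lr = 784, 500, 100, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step on a mini-batch of binary visible vectors."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                      # hidden activation probabilities
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                    # reconstruction of the visibles
    p_h1 = sigmoid(p_v1 @ W + b_h)
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch   # positive minus negative phase
    b_v += lr * np.mean(v0 - p_v1, axis=0)
    b_h += lr * np.mean(p_h0 - p_h1, axis=0)
    return np.mean((v0 - p_v1) ** 2)                  # reconstruction error

data = (rng.random((batch, n_visible)) < 0.1).astype(float)  # stand-in binary data
for epoch in range(10):
    print(f"epoch {epoch}: reconstruction error {cd1_update(data):.4f}")
```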
Demonstration of two-qubit algorithms with a superconducting quantum processor.
DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J
2009-07-09
Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact, such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to simultaneously meet conflicting requirements: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
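For readers unfamiliar with the algorithms demonstrated, the following state-vector sketch reproduces the two-qubit Grover search on a classical computer: one oracle call plus one diffusion step identifies the marked item among four with certainty. The gate matrices are the textbook ones, not the processor's native operations.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)   # Hadamard on both qubits

def grover_two_qubits(marked):
    """Return measurement probabilities after one Grover iteration."""
    state = np.zeros(4); state[0] = 1.0          # start in |00>
    state = HH @ state                           # uniform superposition
    oracle = np.eye(4); oracle[marked, marked] = -1
    state = oracle @ state                       # phase-flip the marked item
    s = np.full(4, 0.5)
    diffusion = 2 * np.outer(s, s) - np.eye(4)   # inversion about the mean
    state = diffusion @ state
    return np.abs(state) ** 2

for marked in range(4):
    probs = grover_two_qubits(marked)
    # With N = 4 items, a single iteration already yields the marked item
    # with probability 1 (up to rounding).
    print(f"marked item {marked:02b}: P = {np.round(probs, 3)}")
```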
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION, ADULT EDUCATION ... awarded for the year after the year of the waiver by comparing the amount spent for adult education from ... (Title 34, Vol. 3, revised as of 2012-07-01)
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION, ADULT EDUCATION ... awarded for the year after the year of the waiver by comparing the amount spent for adult education from ... (Title 34, Vol. 3, revised as of 2013-07-01)
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION, ADULT EDUCATION ... awarded for the year after the year of the waiver by comparing the amount spent for adult education from ... (Title 34, Vol. 3, revised as of 2011-07-01)
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION, ADULT EDUCATION ... awarded for the year after the year of the waiver by comparing the amount spent for adult education from ... (Title 34, Vol. 3, revised as of 2014-07-01)
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION, ADULT EDUCATION ... awarded for the year after the year of the waiver by comparing the amount spent for adult education from ... (Title 34, Vol. 3, revised as of 2010-07-01)
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition, our Robot Operating System (ROS)-based control system would not run comfortably on the multicore CPU of our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily optimized library functions is more difficult, and a much less efficient use of time.
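The profile-then-port workflow described here can be illustrated with standard Python tooling: the sketch below profiles a toy control loop with cProfile to rank candidate modules by cumulative time, which is the first step before deciding what to offload. The vision and planning stubs are hypothetical stand-ins; the paper's own toolchain (ROS plus the Bacon compiler) is not reproduced.

```python
import cProfile
import pstats
import numpy as np

def vision_stub(frame):
    # Per-pixel, data-parallel work: a typical candidate for GPU offload.
    return (frame.astype(np.float32) ** 2).mean(axis=2) > 0.5

def planning_stub(costmap):
    # Mostly sequential decision logic: usually a poor fit for the GPU.
    return int(costmap.sum() % 7)

def control_loop(n_frames=50):
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        frame = rng.random((480, 640, 3))   # stand-in camera frame
        planning_stub(vision_stub(frame))

profiler = cProfile.Profile()
profiler.enable()
control_loop()
profiler.disable()

# Rank functions by cumulative time; the top entries are the porting candidates.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(8)
```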
Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H
2010-11-01
In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane, and the solution state is then used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.
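As a rough stand-in for the surface solver discussed here, the sketch below integrates a single reaction-diffusion equation on a fixed circular "membrane" discretised as a periodic 1D grid with finite differences. The actual PFEM also evolves the curve and uses finite elements; the equation and parameters below are illustrative only.

```python
import numpy as np

n, dt, D = 200, 1e-3, 0.05
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
ds = 2 * np.pi / n                                   # arclength spacing on the unit circle
u = 0.5 * np.exp(-((theta - np.pi) ** 2) / 0.1)      # localised initial activator bump

def laplacian_periodic(f):
    # Second derivative along the closed curve with periodic wrap-around.
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / ds ** 2

for step in range(5000):
    reaction = u * (1 - u) * (u - 0.2)               # bistable reaction term
    u = u + dt * (D * laplacian_periodic(u) + reaction)

print("mean activator level:", float(u.mean()))
# In a full migration model, u along the boundary would now drive the outward
# velocity of the cell edge before the next diffusion step.
```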
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Rather than implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
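A stripped-down version of the phase-visualization idea can be written with NumPy's FFT: band-pass each frame with a one-sided (analytic) spatial filter, then display the frame-to-frame phase change of the complex response. The complex steerable pyramid used in the paper has oriented, localized sub-bands; the single global filter and synthetic video below are simplifications for illustration.

```python
import numpy as np

def synthetic_video(n_frames=16, size=128, amplitude=0.2):
    """A bright blob translating a sub-pixel distance per frame."""
    y, x = np.mgrid[0:size, 0:size]
    frames = []
    for t in range(n_frames):
        cx = size / 2 + amplitude * t                     # sub-pixel drift in x
        frames.append(np.exp(-((x - cx) ** 2 + (y - size / 2) ** 2) / 40.0))
    return np.stack(frames)

def bandpass_response(frame, lo=0.02, hi=0.25):
    """Complex response of a one-sided spatial band-pass filter."""
    F = np.fft.fft2(frame)
    fy = np.fft.fftfreq(frame.shape[0])[:, None]
    fx = np.fft.fftfreq(frame.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    # Keep only one half-plane so the response is complex (analytic in x).
    mask = (radius > lo) & (radius < hi) & (fx > 0)
    return np.fft.ifft2(F * mask)

video = synthetic_video()
responses = [bandpass_response(f) for f in video]

# Phase difference between consecutive frames; large |delta phase| marks motion.
phase_delta = np.angle(responses[1] * np.conj(responses[0]))
support = np.abs(responses[0]) > 0.1 * np.abs(responses[0]).max()
motion_map = np.abs(phase_delta) * support
print("peak phase change (radians):", float(motion_map.max()))
```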
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.
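The multigrid acceleration mentioned above can be illustrated on a much simpler problem: the sketch below runs recursive V-cycles with weighted-Jacobi smoothing on the 1D Poisson equation u'' = f. The flow solver in this abstract applies the same principle to the Euler/Navier-Stokes equations on embedded grids; everything here is a generic textbook example, not the author's code.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    # Weighted-Jacobi relaxation on interior points; boundaries stay fixed.
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto every second point.
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def v_cycle(u, f, h):
    u = smooth(u, f, h)
    if len(u) <= 3:                       # coarsest grid: smoothing suffices
        return u
    r_c = restrict(residual(u, f, h))
    e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)
    # Linear prolongation of the coarse-grid correction back to the fine grid.
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_c)
    return smooth(u, f, h)

n = 129                                   # 2**7 + 1 points so grids nest exactly
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.sin(np.pi * x) * np.pi ** 2       # exact solution of u'' = f is sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: max residual {np.abs(residual(u, f, h)).max():.2e}")
```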
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.
2008-05-04
This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.
Behavioral personal digital assistants: The seventh generation of computing
Stephens, Kenneth R.; Hutchison, William R.
1992-01-01
Skinner (1985) described two divergent approaches to developing computer systems that would behave with some approximation to intelligence. The first approach, which corresponds to the mainstream of artificial intelligence and expert systems, models intelligence as a set of production rules that incorporate knowledge and a set of heuristics for inference and symbol manipulation. The alternative is a system that models the behavioral repertoire as a network of associations between antecedent stimuli and operants, and adapts when supplied with reinforcement. The latter approach is consistent with developments in the field of “neural networks.” The authors describe how an existing adaptive network software system, based on behavior analysis and developed since 1983, can be extended to provide a new generation of software systems capable of acquiring verbal behavior. This effort will require the collaboration of the academic and commercial sectors of the behavioral community, but the end result will enable a generational change in computer systems and support for behavior analytic concepts. PMID:22477053
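The adaptive-network approach contrasted with production rules above can be caricatured in a few lines: the sketch below maintains association strengths between antecedent stimuli and operants and strengthens or weakens them according to reinforcement, in the spirit of a simple law-of-effect rule. The environment, learning rate, and update rule are hypothetical illustrations and are not the authors' 1983 system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_operants, lr = 4, 3, 0.1
W = np.zeros((n_stimuli, n_operants))            # association strengths

# Hypothetical environment: each stimulus has one operant that is reinforced.
correct_operant = np.array([2, 0, 1, 2])

def choose(stimulus):
    """Probability of emitting each operant grows with its association strength."""
    logits = W[stimulus]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(n_operants, p=probs)

for trial in range(5000):
    s = rng.integers(n_stimuli)                  # antecedent stimulus
    a = choose(s)                                # emitted operant
    reinforcement = 1.0 if a == correct_operant[s] else -0.2
    W[s, a] += lr * reinforcement                # strengthen or weaken the link

accuracy = np.mean([choose(s) == correct_operant[s]
                    for s in range(n_stimuli) for _ in range(100)])
print(f"post-training accuracy: {accuracy:.0%}")
```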